If it’s not an RTOS, you have no guarantees. The OS may preempt your function at any time to run a different thread.
Real-time processes on commodity OSs use buffers. When the buffer runs out you get dropped frames or whatever.
In general, dropped frames/effects that either take too long or get lost aren’t inherently bad, since there’s another one in just 10-20 ms.
It’s just that I want to make sure these functions aren’t taking literal seconds or longer.
But thanks, I already feared this was the case. I’ll take a look at an RTOS; I just haven’t read far into it and therefore don’t know how you would code against one.
Thanks anyways!
Writing your own FFT or something?
If your function gets called every 40ms, design it so that it normally completes in 20ms and use a buffer of a couple batches. Should work fine unless some other process monopolizes all the cores.
edit: I noticed you need data from external sources with high latency. You need to put the data read in its own thread so that processing current data doesn’t get held up by latency reading new data.
Nope, to be specific, my application is going to apply many effects onto light sources (DMX).
Those effects are going to be sine/cosine, PWM, triangle and more. Those make my head hurt most, especially since I cannot predict how long the calculation takes for the 8192 values (512 channels times 16 universes; this can/will expand to even more, e.g. 512 channels times 128 universes).
Those output frames need to be smooth, i.e. they should not lag (so a high refresh rate; the max allowed is around 40-45 Hz).
Currently I’m running in lockstep, i.e. a single thread decoupled from the parent, which first has to run the inputs (network input, etc.) and then has to apply effects (math operations) to many thousands of parameters.
While I only need 8-bit precision per channel (a channel is a single byte), some devices may take 2 channels for fine control (i.e. 16-bit), where my accuracy has to be higher.
I think that I can remove the inputs from that loop: I can just decouple them into another thread that updates a shared buffer, which can always be read regardless of how long the input method actually takes.
Btw, while technically there can be multiple effects running (e.g. a sine on channels 1-12, a triangle on 32-35), no channel will ever have multiple effects. So I’m always computing at most 8192 values (or however many universes times 512).
I cannot post code yet (still have to tidy up the codebase), but it will be open source later on.
Even a Real Time Operating System cannot guarantee serial/network input will arrive in time.
Is this for an open-source software project, and if so, can you tell us more about it?
If that's for a work or university project, you should share salary and/or credit with whoever is going to give you a solution.
Thanks, it’s not a university project, more of a home project trying to beat some other software :)
For more info, I just posted under @deegeese@sopuli.xyz’s comment on this post!
It will be open source later on, but I have to tidy everything up before pushing to GitHub.
There is no reason your IO and your math need to run in the same thread. I’m not talking about launching a new thread for every IO; I’m talking about 2 threads with appropriate synchronization. Also don’t forget about the auto-parallelize, auto-vectorize, OpenMP, and fast-math options you have. Especially for arrays, you probably want to make sure your compiler is using vectorization. Choosing data sizes will also affect the chunk size of the vector instructions; that is, float will probably process faster than double. Fast-math will probably accelerate things a lot too.
Keep in mind too that you don’t actually have to use threading to launch an IO operation and then go off and do computing, then come back later and wait for IO completion (or, for that matter, probably abort it too). There are IO calls that can run in the background; I forgot what they are called (async IO?), but they exist. Threading may be easier though, depending on your exact needs, or maybe use both so you have more IO control.
By the way, no expert on this, but you can give your process real-time priority. You could also schedule your code to run only on certain cores and maybe reserve those cores just for it; not sure exactly how code placement and affinity specifications work. You could also disable paging or lock your pages in RAM. Just trying to eliminate things that could interrupt your process. Making sure your hardware isn’t running a lot of other stuff and has a lot of extra resources will of course help too.
I know, for example, that on my Linux box I no longer use swap, simply because modern Linux seems to love to swap stuff out and use a lot of memory for cache. On Linux there is also a swappiness setting that has an effect, but these days, with large RAM, why bother.