I tried searching on Google, but didn't find anything that fits. I need to apply “effects”, meaning mathematical equations (especially sine waves), to around 8192 values. Ideally this should take only a few milliseconds, and it must not take longer than 10 ms.
Similarly, I need to read input for those 8192 values. This will most likely be a memcpy from one array into another, but it might take longer, since there can be multiple input sources (network/serial/USB).
In this case I have to guarantee that both steps together take no longer than around 25 ms before the output happens.
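To give an idea of the scale: one frame of such an effect pass, with a timing check around it, might look roughly like this (the sine formula here is just a placeholder, not the actual effect):

```c
#include <math.h>
#include <stdio.h>
#include <time.h>

#define N 8192  /* samples per frame, as described above */

/* Placeholder effect: modulate each sample with a sine wave. */
static void apply_effect(float *buf, size_t n, double phase)
{
    for (size_t i = 0; i < n; i++)
        buf[i] *= (float)sin(phase + (double)i * 0.001);
}

int main(void)
{
    static float buf[N];
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    apply_effect(buf, N, 0.0);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ms = (t1.tv_sec - t0.tv_sec) * 1e3
              + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("effect pass took %.3f ms (budget: 10 ms)\n", ms);
    return 0;
}
```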
Is there a way to reliably limit these functions to a time frame, so that they are killed or return early if they take too long?
I cannot spawn a new thread for each input/effect step, as creating threads takes some time. Both methods will be called in a loop, one directly after the other, so they will run approximately 30-45 times per second.
What's the way to limit these? I can reasonably guarantee the read time where I just copy a buffer, but running the math operations seems like something that could potentially take too long.
Real-time operating systems can somehow guarantee such deadlines, so what is the approach here?
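For clarity, the only thing I can come up with myself (without extra threads) is cooperative checking: splitting the work into chunks and testing a monotonic clock in between, roughly like this sketch (names, chunk size and the placeholder math are made up):

```c
#include <stdbool.h>
#include <stddef.h>
#include <time.h>

/* Milliseconds elapsed since 'start' on the monotonic clock. */
static double elapsed_ms(const struct timespec *start)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return (now.tv_sec - start->tv_sec) * 1e3
         + (now.tv_nsec - start->tv_nsec) / 1e6;
}

/* Process the buffer in chunks and give up once the deadline is exceeded.
 * Returns true if the whole buffer was processed in time. */
static bool process_with_deadline(float *buf, size_t n, double deadline_ms)
{
    struct timespec start;
    clock_gettime(CLOCK_MONOTONIC, &start);

    const size_t chunk = 512;            /* arbitrary chunk size */
    for (size_t i = 0; i < n; i += chunk) {
        size_t end = (i + chunk < n) ? i + chunk : n;
        for (size_t j = i; j < end; j++)
            buf[j] *= 0.5f;              /* placeholder for the real effect */
        if (elapsed_ms(&start) > deadline_ms)
            return false;                /* too slow: caller keeps the old frame */
    }
    return true;
}
```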
(BTW, this is all going to run on a minimal Debian Linux system, with the GUI decoupled from the actual mechanism.)
One of my ideas would be to pre-calculate those values in another thread, say the next 100 operations for a given value, and then just replay them, so that I have some buffer.
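Roughly what I have in mind (just a sketch; buffer sizes, names and the per-sample math are placeholders, and the busy-wait would need to be replaced by a proper condition variable):

```c
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stddef.h>
#include <string.h>

#define N      8192   /* samples per frame */
#define FRAMES 100    /* frames precomputed ahead of time */

static float ring[FRAMES][N];
static atomic_size_t produced = 0;   /* frames written so far */
static atomic_size_t consumed = 0;   /* frames read so far */

/* Long-lived producer thread: keeps the ring filled ahead of the consumer.
 * Started once at program startup with pthread_create(). */
static void *precompute_thread(void *arg)
{
    (void)arg;
    for (;;) {
        while (produced - consumed >= FRAMES)   /* ring full: wait */
            sched_yield();
        float *frame = ring[produced % FRAMES];
        for (size_t i = 0; i < N; i++)
            frame[i] = (float)i;                /* placeholder for the real math */
        atomic_fetch_add(&produced, 1);
    }
    return NULL;
}

/* Called from the 30-45 Hz loop: just copies an already-computed frame. */
static int fetch_frame(float *out)
{
    if (produced == consumed)
        return -1;                              /* underrun: nothing ready */
    memcpy(out, ring[consumed % FRAMES], sizeof(float) * N);
    atomic_fetch_add(&consumed, 1);
    return 0;
}
```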
Other ideas?
By the way, I'm no expert on this, but you can give your process real-time priority. You could also schedule your code to run only on certain cores and possibly reserve those cores just for your code; I'm not sure exactly how core placement and affinity settings work. You could also disable paging or lock your pages in RAM. The idea is just to eliminate things that could interrupt your process. Making sure your hardware is not running a lot of other stuff and has plenty of spare resources will of course help too.
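I don't remember the exact details, but the Linux calls I'm thinking of look roughly like this (the priority value and the core number are just guesses, and SCHED_FIFO needs root or CAP_SYS_NICE):

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* Real-time scheduling: SCHED_FIFO with a fixed priority. */
    struct sched_param sp = { .sched_priority = 80 };   /* 1-99, pick to taste */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler");

    /* Pin the process to core 2; combine with isolcpus=2 on the kernel
     * command line if that core should otherwise be left alone. */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(2, &set);
    if (sched_setaffinity(0, sizeof(set), &set) != 0)
        perror("sched_setaffinity");

    /* Lock current and future pages into RAM so they are never paged out. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        perror("mlockall");

    /* ... the real-time loop would go here ... */
    return 0;
}
```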
I know, for example, that on my Linux box I no longer use swap, simply because modern Linux seems to love swapping things out and using a lot of memory for cache. On Linux there is also a swappiness setting that has an effect, but these days, with large amounts of RAM, why bother.