If you want to see the effect of the real-time kernel, build and run the cyclictest utility from the Linux Foundation: https://wiki.linuxfoundation.org/realtime/documentation/howt...

It measures and displays the interrupt latency for each CPU core. Without the real-time patch, worst-case latency can be double-digit milliseconds. With the real-time patch, worst case drops to single-digit microseconds. (To get consistently low latency you will also have to turn off any power-saving states, as a transition between sleep states can hog the CPU despite the RT kernel.)

Cyclictest is an important tool if you're doing real-time with Linux. As an example, if you're doing processing for software-defined radio, it's the difference between the system occasionally having "blips" and the system having rock-solid performance, doing what it is supposed to every time. With the real-time kernel in place, I find I can do acid-test things, like running GNOME and LibreOffice on the same laptop as an SDR, and the SDR doesn't skip a beat. Without the real-time kernel it would be dropping packets all over the place.
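
One way to act on the power-saving caveat above is the kernel's PM QoS interface. Here is a minimal sketch, assuming the kernel exposes /dev/cpu_dma_latency; the request only holds while the file descriptor stays open:

```c
/* Minimal PM QoS sketch: ask the kernel to keep CPUs out of deep
 * C-states by holding /dev/cpu_dma_latency open with a target of 0 us.
 * The request is dropped as soon as the fd is closed. Needs root. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/cpu_dma_latency", O_WRONLY);
    if (fd < 0) {
        perror("open /dev/cpu_dma_latency");
        return 1;
    }

    int32_t target_us = 0;  /* 0 = stay in the shallowest idle state */
    if (write(fd, &target_us, sizeof target_us) != sizeof target_us) {
        perror("write");
        return 1;
    }

    pause();  /* keep the fd, and thus the latency request, alive */
    return 0;
}
```

Left running in the background while cyclictest measures, this should remove the large sleep-state transition spikes from the worst-case numbers.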

Interestingly, whenever I touch my touchpad, the worst-case latency shoots up 20x, even with the RT patch. What could be causing this? And it is always on core 5.

It can literally sound better (objectively).

Suppose your audio server attempts fancy resampling, but falls back to a crude approximation after the first xrun.
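
A hypothetical sketch of that failure mode; every function name here is a stand-in invented for illustration, not the API of any real audio server:

```c
/* Hypothetical: a process callback that permanently drops to a cheap
 * resampler after the first xrun. The resample_* and xrun_occurred
 * stubs stand in for a real audio server's internals. */
#include <stdbool.h>
#include <stddef.h>

static void resample_sinc(const float *in, float *out, size_t n)   { while (n--) *out++ = *in++; }
static void resample_linear(const float *in, float *out, size_t n) { while (n--) *out++ = *in++; }
static bool xrun_occurred(void) { return false; }  /* would be reported by the server */

static bool degraded = false;

void process_block(const float *in, float *out, size_t n)
{
    if (xrun_occurred())
        degraded = true;              /* one xrun and quality never comes back */

    if (degraded)
        resample_linear(in, out, n);  /* crude approximation, audibly worse */
    else
        resample_sinc(in, out, n);    /* expensive, high quality */
}

int main(void)
{
    float in[64] = {0}, out[64];
    process_block(in, out, 64);
    return 0;
}
```

An RT kernel makes the xrun, and therefore the silent one-way downgrade, far less likely to happen in the first place.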

Wow, we got a No True Scotsman right here. On a more serious note, why would there be (more) microjitter? Isn't the default reaction to jitter to automatically increase the buffer size, as stated above?

Out of curiosity, what music do you compose? How would you judge the Linux experience of doing so, outside the RT topic?

Do you have any published music you'd be willing to share? Thanks!

GPU-bound stuff is largely unaffected; CPU-bound definitely takes a hit (although there's no noticeable additional latency on non-RT tasks), but that's kinda to be expected.

I would not expect lower FPS, because the amount of available CPU does not materially change. I would expect higher latency, because RT threads would more often be scheduled ahead of other threads.
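
For concreteness, a minimal sketch of how a thread ends up in that scheduled-ahead class, using the standard pthreads API (priority 80 is an arbitrary example value):

```c
/* Sketch: create one SCHED_FIFO thread. While runnable, it is
 * scheduled ahead of every ordinary SCHED_OTHER thread, which is
 * where the extra latency for non-RT tasks comes from. */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

static void *rt_work(void *arg)
{
    (void)arg;
    /* ... latency-sensitive loop would run here ... */
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    struct sched_param sp = { .sched_priority = 80 };

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &sp);

    pthread_t t;
    int err = pthread_create(&t, &attr, rt_work, NULL);
    if (err != 0) {  /* typically EPERM without root or an rtprio rlimit */
        fprintf(stderr, "pthread_create: %s\n", strerror(err));
        return 1;
    }
    pthread_join(t, NULL);
    return 0;
}
```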

> If you have more than one core, can they introduce jitter or slowdown to each other accessing memory?

DMA and fancy peripherals like UART, SPI, etc. could be name-dropped in this regard, too.

Well, maybe not for debugging the kernel itself, but it is very useful for finding failing hardware, missing/crashing drivers, and so on as a user. Call it external debugging if you will.

A “real” parallel port provides interrupts on each individual data line of the port, _much_ lower latency than a USB dongle can provide. Microseconds vs. milliseconds.

I used the parallel port extensively. I had the IBM PC AT Technical Reference, which had a complete description of the parallel port, and I read it many times.

But alas, it was decades ago, so it's possible I'm wrong ;) This is the closest reference I can find: https://www.sfu.ca/phys/430/datasheets/parport.html

The card does have an interrupt, but only the ACK signal can trigger it, not the data lines. ACK makes sense, since it would be part of the printing protocol: you'd send another byte on each interrupt.

hmm, i think what matters for hard-real-time performance is the worst-case number though, the wcet, not the best or average case number. not the worst-case number for some other system that is using power management, of course, but the worst-case number for the actual system that you're using. it sounds like you're saying it's hard to guarantee a number below a microsecond, but that a microsecond is still within reach?

osamagirl69 (⸘‽) seems to be saying in https://news.ycombinator.com/item?id=41596304 that they couldn't get better than 10μs, which is an order of magnitude worse.

Very cool! How is this "turned on"? Compile-time/boot-time option? Or just a matter of having processes running in the system that have requested timeslice/latency guarantees?
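
(It is a compile-time choice: the CONFIG_PREEMPT_RT option in the kernel config, per the pull request linked further down the thread.) A small sketch of how a program might check what it is running on; both indicators, the "PREEMPT_RT" tag in the uname version string and the /sys/kernel/realtime flag file, are assumptions about what the kernel exposes:

```c
/* Sketch: detect a PREEMPT_RT kernel at runtime via two (assumed)
 * indicators: the uname version string and /sys/kernel/realtime. */
#include <stdio.h>
#include <string.h>
#include <sys/utsname.h>

int main(void)
{
    struct utsname u;
    if (uname(&u) == 0 && strstr(u.version, "PREEMPT_RT"))
        puts("uname -v reports a PREEMPT_RT kernel");

    FILE *f = fopen("/sys/kernel/realtime", "r");
    if (f) {
        printf("/sys/kernel/realtime says: %s\n",
               fgetc(f) == '1' ? "realtime" : "not realtime");
        fclose(f);
    }
    return 0;
}
```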

Sounds exciting. Can anyone recommend a good place to read about the nuances of these patches? Is the ZDNet link about the best there is, at the moment?

The only time I have used real-time Linux was for CNC control through LinuxCNC (formerly EMC2): https://linuxcnc.org/

It works great, and with a bit of tuning and the right hardware it could achieve ~1us worst-case jitter (tested by setting a 1ms timer and measuring how long it actually takes, using the LinuxCNC internal tooling). Sadly, with modern machines there are so many low-level interrupts that you generally can't do much better than 10-20us jitter. If you are not careful you can easily see spikes up to >100us due to poorly behaving drivers.
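
A bare-bones sketch of that measurement: sleep to an absolute 1 ms deadline under SCHED_FIFO and record how late each wakeup is (the same idea as cyclictest; this is not LinuxCNC's actual tooling):

```c
/* Sketch: the worst observed overshoot of a 1 ms absolute timer over
 * many cycles approximates the worst-case jitter of the machine. */
#include <sched.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 90 };
    sched_setscheduler(0, SCHED_FIFO, &sp);  /* needs root or CAP_SYS_NICE */
    mlockall(MCL_CURRENT | MCL_FUTURE);      /* avoid page-fault stalls */

    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);
    int64_t worst_ns = 0;

    for (int i = 0; i < 100000; i++) {
        next.tv_nsec += 1000000;             /* advance the deadline by 1 ms */
        if (next.tv_nsec >= 1000000000) {
            next.tv_sec++;
            next.tv_nsec -= 1000000000;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);

        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        int64_t late_ns = (int64_t)(now.tv_sec - next.tv_sec) * 1000000000
                        + (now.tv_nsec - next.tv_nsec);
        if (late_ns > worst_ns)
            worst_ns = late_ns;
    }
    printf("worst-case jitter over 100000 cycles: %lld ns\n",
           (long long)worst_ns);
    return 0;
}
```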

Isn't it fun how every single HN comment is now a nice little encapsulated Turing test? Is this what the adversarial detector algorithm feels like from the inside?

Here are a few links to see how the work is done behind the scenes. Sadly, arstechnica has only funny links and doesn't provide the actual source (why LinkedIn?).
Most of the work was done by Thomas Gleixner and team. He founded Linutronix, now (I believe) owned by Intel.
Pull request for the last printk bits: https://marc.info/?l=linux-kernel&m=172623896125062&w=2
Pull request for PREEMPT_RT in the kernel config: https://marc.info/?l=linux-kernel&m=172679265718247&w=2
This is the log of the RT patches on top of kernel v6.11: https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-...
I think there are still a few things you need on top of a vanilla kernel. For example, the new printk infrastructure still needs to be adopted by the actual drivers (UART consoles and so on). But the RT patchset is already much, much smaller than before. And being configurable out of the box is, of course, a big sign of confidence from Linus.
Congrats to the team!