One big difference that I’ve noticed between Windows and Linux is that Windows does a much better job ensuring that the system stays responsive even under heavy load.

For instance, I often need to compile Rust code. Anyone who writes Rust knows that the Rust compiler is very good at using all your cores and all the CPU time it can get its hands on (which is good, you want it to compile as fast as possible after all). But that means that while my Rust code is compiling, all my CPU cores are maxed out at 100% usage.

When this happens on Windows, I’ve never really noticed. I can use my web browser or my code editor just fine while the code compiles, so I’ve never really thought about it.

However, on Linux when all my cores reach 100%, I start to notice it. It seems like every window I have open starts to lag and I get stuttering as the programs struggle to get a little bit of CPU that’s left. My web browser starts lagging with whole seconds of no response and my editor behaves the same. Even my KDE Plasma desktop environment starts lagging.

I suppose Windows must be doing something clever to somehow prioritize user-facing GUI applications even in the face of extreme CPU starvation, while Linux doesn’t seem to do a similar thing (or doesn’t do it as well).

Is this an inherent problem of Linux at the moment or can I do something to improve this? I’m on Kubuntu 24.04 if it matters. Also, I don’t believe it is a memory or I/O problem as my memory is sitting at around 60% usage when it happens with 0% swap usage, while my CPU sits at basically 100% on all cores. I’ve also tried disabling swap and it doesn’t seem to make a difference.

EDIT: Tried nice -n +19, still lags my other programs.

EDIT 2: Tried installing the Liquorix kernel, which is supposedly better for this kinda thing. I dunno if it’s placebo but stuff feels a bit snappier now? My mouse feels more responsive. Again, dunno if it’s placebo. But anyways, I tried compiling again and it still lags my other stuff.

  •  scsi   ( @scsi@lemm.ee ) 

    The Linux kernel uses the CPU default scheduler, CFS, which tries to be fair to all processes at the same time - both foreground and background - for high throughput. Abstractly, think “they never know what you intend to do”, so it’s sort of middle of the road as a default - every process gets a fair tick of CPU work unless it’s been intentionally nice’d or whatnot. People who need realtime work (the classic use is audio engineers who need near-zero latency on hardware inputs like a MIDI sequencer, but embedded hardware also uses realtime a lot) reconfigure their system(s) to that need; for desktop-priority users there are ways to alter the CFS scheduler to help maintain desktop responsiveness.
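
    To make the policy distinction concrete, here's a minimal sketch using chrt from util-linux (the commands below are standard, but the realtime classes generally require root):

    ```shell
    # Show the scheduling policy and priority of the current shell
    # (expect SCHED_OTHER, the default time-sharing policy)
    chrt -p $$

    # Run a command under an explicit policy; realtime policies such as
    # SCHED_FIFO (-f) normally require root, while SCHED_OTHER (-o) does not
    chrt --other 0 echo "time-sharing as usual"
    ```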

    Have a look at GitHub projects such as this one to learn how and what to tweak - not that you necessarily need to use this, but it’s a good starting point for understanding how the mojo works and what you can do on your own with a few sysctl tweaks to get a better desktop experience while your Rust code compiles in the background: https://github.com/igo95862/cfs-zen-tweaks (in this project, look at the set-cfs-zen-tweaks.sh file and what it tweaks in /proc to get hints on where your research should lead - most of these can be set with a sysctl).

    There’s a lot to learn about this so I hope this gets you started down the right path on searches for more information to get the exact solution/recipe which works for you.
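
    As one concrete example of this family of tweaks, autogrouping is a well-documented knob (often already enabled by default on desktop distros); a minimal sysctl fragment might look like this - the file path is illustrative, and knob names vary across kernel versions:

    ```shell
    # /etc/sysctl.d/99-desktop.conf (illustrative path)
    # Group tasks by session, so a parallel compile competes with your
    # desktop apps as a single scheduler entity rather than one per job
    kernel.sched_autogroup_enabled = 1
    ```

    Apply it with `sudo sysctl --system`, or try it at runtime with `sudo sysctl -w kernel.sched_autogroup_enabled=1`.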

      •  scsi   ( @scsi@lemm.ee ) 

        I would agree, and would bring awareness of ionice into the conversation for the readers - it can help control I/O priority to your block devices in the case of write-heavy workloads, possibly compiler artifacts etc.
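
        A sketch of how the two combine (ionice is from util-linux; the echoed message is a stand-in for a real build command such as cargo build):

        ```shell
        # Run a disk- and CPU-heavy job with idle I/O class (-c 3) and
        # minimum CPU priority (+19); it then only gets disk/CPU time
        # when nothing interactive wants it
        ionice -c 3 nice -n 19 sh -c 'echo build running at low priority'
        ```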

    • “they never know what you intend to do”

      I feel like if Linux wants to be a serious desktop OS contender, this needs to “just work” without having to look into all these custom solutions. If there is a desktop environment with windows and such, that obviously is intended to always stay responsive. Assuming no intentions makes more sense for a server environment.

        • Totally agree. I’ve been in the situation where a remote host is 100%-ing and, when I want to ssh into it to figure out why and possibly fix it, I can’t, because ssh is unresponsive! That leaves only one way out: a hard reboot, hoping I didn’t lose data.

          This is a fundamental issue in Linux, it needs a scheduler from this century.

      •  witx   ( @witx@lemmy.sdf.org ) 

        What do you even mean by serious contender? I’ve been using Linux for almost 15 years without a CPU issue like this, and I’ve used it almost exclusively on very modest machines. I feel we’re not getting your whole story here.

        On the other hand, whenever I had to do something IO-intensive on Windows, it would always crawl on those machines.

        • You are getting the whole story - not sure what it is you think is missing. But I mean a serious desktop contender has to take UX seriously and have things “just work” without any custom configuration or tweaking or hacking around. Currently when I compile on Windows my browser and other programs “just works” while on Linux, the other stuff is choppy and laggy.

    • Wasn’t CFS replaced in 6.6 with EEVDF?

      I have 6.6 on my desktop, and compilations don’t seem to freeze my media anymore, though I have little experience with it as of now - it needs more testing.

    •  agilob   ( @agilob@programming.dev ) 

      The Linux kernel uses the CPU default scheduler, CFS,

      Linux 6.6 (which recently landed on Debian) changed the scheduler to EEVDF, which is pretty widely criticized for poor tuning. A CPU that is 100% busy means the scheduler is doing a good job. If the CPU were idle and compilation was slow, then we would look into task scheduling and the scheduling of blocking operations.

  •  Lupec   ( @lupec@lemm.ee ) 

    Responsiveness for typical everyday usage is one of the main scenarios kernels like Zen/Liquorix and their out of the box scheduler configurations are meant to improve, and in my experience they help a lot. Maybe give them a go sometime!

    Edit: For added context, I remember Zen significantly improving responsiveness under heavy loads such as the one OP is experiencing back when I was experimenting with some particularly computationally intensive tasks

    • https://github.com/zen-kernel/zen-kernel/wiki/Detailed-Feature-List

      That’s the reason I installed Zen too and use it as the default. While Zen is meant to improve responsiveness of interactive usage on the system, it comes at a price: overall performance might decrease, and it may require more power. But if someone needs to solve the OP’s problem (needing to work on the computer while it’s under heavy load), then Zen is probably the right tool. Some distributions have the Zen kernel in their repository, and the install process is straightforward.

      • Very good points, it’s all trade-offs at the end of the day. I’ve always found them more than worth it myself for non server workloads, but as always YMMV.

  • nice -n 5 cargo build

    nice is a program that sets a process’s priority for the CPU scheduler. The default is 0, and the range goes from -20, which is max prio, to +19, which is min prio.

    This way other programs will get CPU time before cargo/rustc.
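
    And if the build is already running, renice can lower its priority after the fact - a sketch using a sleep process as a stand-in for the compiler:

    ```shell
    # Start a background job at normal priority (stand-in for rustc)
    sleep 60 &
    pid=$!

    # Drop it to minimum priority; note that unprivileged users can only
    # raise a process's nice value, never lower it back down
    renice -n 19 -p "$pid"

    # The NI column should now show 19
    ps -o pid,ni,comm -p "$pid"
    kill "$pid"
    ```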

  •  Amanda   ( @amanda@aggregatet.org ) 

    Lots of bad answers here. Obviously the kernel should schedule the UI to be responsive even under high load. That’s doable; just prioritise running those over batch jobs. That’s a perfectly valid demand to have on your system.

    This is one of the cases where Linux shows its history as a large shared unix system and its focus as a server OS; if the desktop is just a program like any other, who’s to say it should have more priority than Rust?

    I’ve also run into this problem. I never found a solution for this, but I think one of those fancy new schedulers might work, or at least is worth a shot. I’d appreciate hearing about it if it does work for you!

    Hopefully in a while there are separate desktop-oriented schedulers for the desktop distros (and ideally also better OOM handlers), but that seems to be a few years away maybe.

    In the short term you may have some success in adjusting the priority of Rust with nice, an incomprehensibly named tool to adjust the priority of your processes. High numbers = low priority (the task is “nicer” to the system). You run it like this: nice -n5 cargo build.

    • Obviously the kernel should schedule the UI to be responsive even under high load.

      Obviously… to you.

      This is one of the cases where Linux shows its history as a large shared unix system and its focus as a server OS; if the desktop is just a program like any other,

      Exactly.

      • Obviously… to you.

        No. I’m sorry but if you are logged in with a desktop environment, obviously the UI of that desktop needs to stay responsive at all times, also under heavy load. If you don’t care about such a basic requirement, you could run the system without a desktop or you could tweak it yourself. But the default should be that a desktop is prioritized and input from users is responded to as quickly as possible.

        This whole “Linux shouldn’t assume anything”-attitude is not helpful. It harms Linux’s potential as a replacement for Windows and macOS and also just harms its UX. Linux cannot ever truly replace Windows and macOS if it doesn’t start thinking about these basic UX guarantees, like a responsive desktop.

        This is one of the cases where Linux shows its history as a large shared unix system and its focus as a server OS; if the desktop is just a program like any other,

        Exactly.

        You say that like it’s a good thing; it is not. The desktop is not a program like any other, it is much more important that the desktop keeps being responsive than most other programs in the general case. Of course, you should have the ability to customize that but for the default and the general case, desktop responsiveness needs to be prioritized.

        • Even for a server, the UI should always be priority. If you’re not using the desktop/UI, what’s the harm?

          When you do need to remote into a box, it’s often when shit’s already sideways, and having an unresponsive UI (or even a sluggish shell) gets old.

          A person interacting with a system needs priority.

    • Is your browser Firefox?
      What kind of storage devices do you have? NVMe?
      Did you check with tools like iotop to see if something is going on IO wise?

      You assumed that the problem is caused by the CPU being utilized at 100%.
      This may not be the case.

      A lot of us don’t run a DE at all. I myself use Awesome WM.
      For non-tilers, Openbox with some toolbar would be the ideal setup.

      I mention this because we (non-DE users) would have no experience with some funky stuff like a possible KDE indexer running in the background killing IO performance and thrashing buffered/cached memory.

      Also, some of us run firefox with eatmydata because we hate fsync 🤨

      Neither KDE nor GNOME is the peak desktop Linux experience.
      Ubuntu and its flavors are not the peak distro experience either.

      If you want to try Desktop Linux for real, you will need to dip your toes a little bit deeper.

        Yes Firefox, yes NVMe. No, there is no IO happening, and again, memory usage is sitting relatively low. I was not running anything other than the compiler, my editor and Firefox. I’m fairly confident the CPU usage is the culprit, as memory usage is not severely affected and disk usage by the compiler should be pretty minimal (and I don’t see how disk usage would make Firefox slow if there’s still plenty of RAM available).

        Neither KDE nor GNOME is the peak desktop Linux experience. Ubuntu and its flavors are not the peak distro experience either.

        If you want to try Desktop Linux for real, you will need to dip your toes a little bit deeper.

        I’ve heard much of the opposite - KDE is touted as an easy-to-use desktop and Ubuntu is largely a popular “just works” distro. And honestly that has been my primary experience. Mostly everything works, but there are some hiccups here and there like the problem I posted about in this thread.

        What alternative would you suggest?

        • What alternative would you suggest?

          A rolling-release-first distro (e.g. Arch or Void) with no DE installed.
          But you’re probably not ready for that.
          For me, a terminal and Firefox are the only GUI apps really needed; mpv too, if it counts.
          But I’m someone who has been running Arch+AwesomeWM for ~15 years (and I’ve been using Arch for even longer). So I probably can’t meaningfully put myself in new users’ shoes.

    • I wonder if Linux should also provide server and desktop variants like Windows does, with different scheduler settings and such. The use cases are quite different after all, it’s kinda weird they use the same settings.

  • I face a similar issue when updating Steam games, although I think that’s related to disk reads/writes.

    But either way, issues like these are gonna need to be addressed before we finally hit the year of the Linux desktop lol

  • While I ultimately think your solution is to use a different scheduler, and that the most useful responses you’ve gotten have been about that; and that I agree with your response that Linux distros should really be tuning the scheduler for the UI by default and let developers and server runners take the burden of tuning differently for their workloads… all that said, I can’t let this comment on your post go by:

    which is good, you want it to compile as fast as possible after all

    If fast compile times are your priority, you’re using the wrong programming language. One of Go’s fundamental principles is fast compile times; even with add-on caching tooling in other languages, Go remains one of the fastest-compiling statically compiled, strongly typed programming languages available. I will not install Haskell programs unless they’re precompiled bin packages, that’s a hard rule. I will only reluctantly install Rust packages, and will always choose bins if available. But I’ll pick a -git Go package without hesitation, because they build crazy fast.

    Anyway, I hope you find the scheduler of your dreams and live happily ever after.

        • That’s an interesting take - that Go code is more complex than Rust code - if I understood you correctly. I came across a learning-curve and cognitive-load readability comparison a while back, which I didn’t save and now can’t find. I hadn’t needed it before, because I think this is the first time I’ve heard anyone suggest that Rust code is less complex than Go.

          Your point about the tradeoff is right, but for different reasons. Go executables have a substantial runtime (with garbage collection, one of those things that make Go code less complex), making them much larger and measurably slower. And then there’s Rust’s vaunted safety, which Go - outside of the most basic compile-time type safety - lacks. Lots of places for Rust to claim superiority in the trade-offs, so it tickles me that you choose the one truly debatable argument, “complexity.”

          • Rust is simpler than Go or Python when a system scales.

            A program with 1000 lines will be simplest in Python because it’s just 1000 lines right? Doesn’t matter.

            A program with 1000000 lines will be much easier and simpler to work with in Rust than in Python or Go. The static analysis and the guarantees that the compiler provides suddenly apply to a much larger piece of code, making it more valuable.

            Python offloads type checking to the programmer, meaning that’s cognitive space you gotta use instead of the compiler. Go does the same with error handling, and for inexplicable reasons it uses the billion-dollar mistake (null) even though it’s a relatively modern language.

            It is in this way that Rust is simpler than Go and Python. Also, because a system is likely to grow to a larger size over time in a corporate setting, Rust should be preferred in your professional workplace rather than Python or Go. That’s my take on it.

            Honestly, Go is a weird language. It’s so… “basic”. It doesn’t really provide anything new that other languages haven’t done already, perhaps aside from fast static compilation. If it wasn’t because Google was pushing it, I don’t believe Go would ever have become as popular as it is.

          • You’re right that garbage collection makes Go simpler, and maybe other patterns do contribute to prevent complexity from piling up. I never worked with Go outside of silly examples to try it out, so I’m no authority about it.

            What I meant was more of a “general” rule that the simpler a language is, the more code is necessary to express the same thing and then the intent can become nebulous, or the person reading might miss something. Besides, when the language doesn’t offer feature X, it becomes the programmer’s job to manage it, and it creates an extra mental load that can add pesky bugs (ex: managing null safety with extra checks, tracking pointers and bounds checking in C and so on…).

            Also, there are studies showing that the number of bugs in a piece of software correlates with its lines of code - which can mean the software is simply doing more, but also that the more characters you have to read and write, the higher the chance of something going wrong.

            But yeah, this subject depends on too many variables and some may outweigh others.

  • This hasn’t been my experience when no swapping is involved (not a concern for me anymore with 32GiB physical RAM with 28GiB zram).

    And I’ve been Rusting since v1.0, and Linuxing for even longer.

    And my setup is boring (and stable), using Arch’s LTS kernel which is built with CONFIG_HZ=300. Long gone are the days of running linux-ck.

    Although I do use the Cranelift backend day to day now, so compiles don’t take too long anyway.

    • P.S. Since it wasn’t mentioned already, look up cgroups.

      Back when I had a humble laptop (pre-Rust), using nice and co. didn’t help much. Custom schedulers come with their own stability and worst-case-scenario baggage. cgroups should give you supported and well-tested tunable kernel-level resource usage control.
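
      For instance (a read-only demonstration; the systemd-run line is left as a comment since it needs a running systemd session, and CPUWeight=20 is just an illustrative value):

      ```shell
      # Show which cgroup the current shell lives in; on cgroup v2 this
      # is a single line of the form "0::/<path>"
      cat /proc/self/cgroup

      # Sketch: with systemd, a one-off build can get its own cgroup with
      # a low CPU weight (default 100), so it yields whenever interactive
      # apps want the CPU:
      #   systemd-run --user --scope -p CPUWeight=20 cargo build
      ```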