I’ve seen people talking about it and experienced it myself with a server, but why does Linux run so well on ARM (especially compared to Windows)?

  • Windows’s Achilles’ heel is arguably its chief benefit: legacy compatibility and being the de facto platform for applications.

    Back when I had a Surface RT, I thought it was awfully neat: the ARM-compiled versions of Office, IE, and the Windows 8.x bits ran well, and it was fanless with fine battery life. (Although I surely sound weird, I had a Windows Phone back then too, and the syncing with IE on both was a nice feature.) It’s just that they were pushing the Store then, and even if you jailbroke it, ARM-compiled applications were rare.

    Apple is a pro at architecture transitions and can steer the whole ship. MS can put Windows on ARM all they want, but OEMs will be reluctant, since it’s a relatively big risk to sell a “Windows, buuut…” computer, and the popular closed-source applications probably won’t bother with ARM for a while.

    • Apple can be much more heavy-handed than Microsoft can. Apple controls their hardware ecosystem. They make everybody buy the architecture they want. They make the OS stop working on older hardware. They set minimum OS requirements for application writers. So all the software (from everybody, not just Apple) gets moved to the new architecture quickly. It does not take long before being on the new architecture is all that makes sense.

      Microsoft, on the other hand, does not control the hardware. They are trying: they make their own now, so they can at least seed the new ecosystem. However, most Windows users buy their Windows hardware from somebody other than Microsoft. It makes sense for most hardware to target the larger application audience, and that will be the old architecture. It makes sense for application devs to target the older architecture. For a long, long time, the older arch makes the most sense for almost everybody in the ecosystem. Only early adopters make the switch (both users and application sellers). In practice, that means moving to the new arch means not having native apps in many cases, which means the new arch will, in practice, be worse than the old one.

  •  nyan ( @nyan@lemmy.cafe )

    Linux, and much of the open-source software that goes with it, has been multi-architecture for a long time. If you take something that already runs pretty decently on x86, x86_64, PA-RISC, Motorola 68000, PowerPC, MIPS, SPARC, and Intel Itanium CPUs, porting it to yet another architecture is, while not trivial, at least mostly a known problem.

    Windows, by contrast, was built for descendants of the Intel 8088, period. It’s unsurprising that porting it is a hard problem and that results aren’t always satisfactory.

    (Apple built on top of a modified BSD kernel, and BSD has also been ported around quite a bit, so they also have a ports-are-a-known-problem advantage.)

      • NT is not the majority of Windows code, though; for Windows to be multi-architecture, all of Windows needs to work with the new architecture: NT, drivers, and userspace.

        For Linux, if an existing userspace application doesn’t work on aarch64, somebody somewhere will build a port. For Windows, so much of the stack is proprietary that Microsoft are the only ones able to build that port.

        Not because “Windows bad”, just a consequence of such a locked-down system, which doesn’t have anything open source to inherit.

    • Hi, perhaps a stupid question, but what exactly is required to port an OS to a different architecture? OK, there is the boot process and low-level language compilers… but what else?

      How much code actually has to be rewritten, and how much just needs a “make” to be recompiled?

      Kr.

      •  nyan ( @nyan@lemmy.cafe )

        Not my area, but since OSs are really low-level (obviously), they can be affected by details of the host architecture that we don’t often think about. Endianness, for instance.
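
        As a small illustration of the endianness point (my own sketch, not anything from the kernel), here’s a C program showing how the same 32-bit value reads back differently byte by byte depending on the CPU:

        ```c
        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            uint32_t word = 0x01020304;
            uint8_t *bytes = (uint8_t *)&word;   /* view the same word byte by byte */

            /* On a little-endian CPU (x86, most ARM configurations) the
             * lowest-addressed byte is 0x04; on a big-endian CPU it is 0x01.
             * Code that writes structs straight to disk or the network bakes
             * this difference in, which is why low-level code has to care. */
            printf("first byte: 0x%02x (%s-endian)\n",
                   bytes[0], bytes[0] == 0x04 ? "little" : "big");
            return 0;
        }
        ```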

        I opened up the source package for the kernel I’m currently running (6.1.42) and looked at it. The smallest set of architecture-specific code is the ~2MB for sh (I assume that’s SuperH, a 32-bit RISC architecture from the early 1990s). 32-bit ARM takes up 27MB, although if you check the individual files, a fair amount of that is device trees and the like. So we’re talking about less than 50MB of arch-specific source code for most platforms, and probably less than 10 in many cases, but it depends on the design of the architecture and how many times it’s been extended.

        Looking at individual file names, topics addressed in the kernel’s arch-specific code files appear to include booting, low-level memory access, how to idle the CPU, crypto primitives, interrupts, suspending/hibernating the system and other power management, virtualization facilities if the CPU provides them, crash dumps and stack traces, and, yes, endianness.
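
        To give a feel for why even “how to idle the CPU” lives in arch-specific code, here’s a rough sketch (illustrative only, not real kernel source): each instruction set spells “wait for the next interrupt” differently, so the generic idle loop has to dispatch to per-architecture code.

        ```c
        /* Hypothetical sketch, not real kernel code: each ISA has its own
         * wait-for-interrupt instruction, so idling is per-architecture. */
        static inline void wait_for_interrupt(void)
        {
        #if defined(__aarch64__) || defined(__arm__)
                __asm__ volatile ("wfi");   /* ARM: Wait For Interrupt */
        #elif defined(__x86_64__) || defined(__i386__)
                __asm__ volatile ("hlt");   /* x86: halt until the next interrupt */
        #elif defined(__riscv)
                __asm__ volatile ("wfi");   /* RISC-V uses the same mnemonic */
        #else
                /* Unported architecture: nothing sensible to do yet. */
        #endif
        }
        ```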

        You may also need additional drivers for odd bits of hardware not used by other systems. Or not, but it’s a common sticking point with ARM SoCs and other small-format machines.

        That’s just the kernel. You’ll also need to establish a working cross-compiler before you can get your kernel onto the system. At that point, you can probably bootstrap much of the rest by running make and get to a working command-line system (GUI is going to be more of a crapshoot, requiring additional work on video acceleration and such in order to run well). And there may be odd warts in other pieces of software, each requiring a few lines of code that add up over time.
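
        One concrete example of such a wart (a classic one, shown here as a small self-contained C program rather than anything from a real package): plain char is signed on x86 Linux but unsigned on ARM Linux, so code that stores -1 in a char as a sentinel quietly changes behaviour when recompiled.

        ```c
        #include <stdio.h>

        /* The signedness of plain "char" is implementation-defined: signed on
         * x86 Linux, unsigned on ARM Linux.  Code that stores a negative
         * sentinel (like EOF) in a plain char therefore behaves differently
         * when simply recompiled for ARM. */
        int main(void)
        {
            char c = -1;

            if (c < 0)
                printf("plain char is signed here (typical on x86)\n");
            else
                printf("plain char is unsigned here (typical on ARM)\n");

            /* The fix is usually a few lines: use "signed char" or an int
             * wherever negative values are expected. */
            return 0;
        }
        ```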

      • Because it’s open source and most of the applications for it are open source. That means you can compile it and the applications specifically for the hardware you have.

        Windows does kind of support ARM on specific hardware, but it can’t easily be adapted to other hardware, and most applications have to be translated to work. Apple has done much of that work so their hardware runs well, including very good x86 translation, and because they leaned hard into the transition, developers were mostly forced to compile for ARM going forward. Microsoft hasn’t done the same, and ARM is a tiny target on Windows, so it doesn’t happen with any regularity there.

      • Because people have been doing so for a long time and have ironed out most of the quirks. The software is also generally quite simple, meaning there are just fewer quirks that need to be ironed out. And the ecosystem is largely open source, meaning everything can be recompiled to target the relevant architecture, so while translation layers are still useful, they’re not the essential tool they are in proprietary ecosystems. The main headaches that plague Windows on ARM mostly just don’t exist on the Linux side.

      • Because you can try compiling it on ARM, and if something doesn’t work you can report it or fix it yourself. That said, Windows worked fine on ARM years ago: many GPS, medical, and similar devices used to run Windows CE on ARM and MIPS. (Windows Phone too, on ARM.)

      • ARM (the company), as well as industry partners, contribute code and resources to the Linux kernel, so that would be one reason why Linux on ARM runs well.

        I’m unsure how we’re judging Microsoft’s ARM support as worse than Linux on ARM, though; what benchmarks did we see?

    • Since the OP specified server hardware, probably not. Red Hat said RHEL wasn’t going to support anything that didn’t use UEFI to boot, and Arm specified UEFI in their ServerReady hardware certification.

    •  jabjoe ( @jabjoe@feddit.uk )

      DeviceTree is a massive improvement compared to no discoverability AND no DeviceTree. Each device used to be a custom kernel build, with duplicated drivers and other code. It was madness. Linus lost his shit with the ARM kernel devs, saying it had to be sorted. DeviceTree was the solution.

      In the end, ARM will have discoverability. Buses like I2C and SPI will get some standard for discovering what the hell is on them. But today it’s chaos, and DeviceTree is the only source of order.
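
      For a feel of what that looks like from the software side, here’s a small userspace C example (a sketch of my own, assuming a board whose firmware hands the kernel a DeviceTree) that reads the board’s model string from the tree the kernel booted with:

      ```c
      #include <stdio.h>

      /* On ARM Linux systems booted with a DeviceTree, the kernel exposes it
       * under /proc/device-tree.  This just prints the board's "model"
       * property; on x86 (or ACPI-based ARM servers) the file won't exist. */
      int main(void)
      {
          char model[256] = {0};
          FILE *f = fopen("/proc/device-tree/model", "r");

          if (!f) {
              perror("no DeviceTree exposed on this system");
              return 1;
          }
          fread(model, 1, sizeof(model) - 1, f);
          fclose(f);

          printf("board model: %s\n", model);
          return 0;
      }
      ```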

        •  jabjoe ( @jabjoe@feddit.uk )

          Completely agree that discoverability is better, but that needs hardware support. It’s crazy this isn’t solved; it means ARM devices are basically made to be e-waste. Make phones and other ARM devices like PCs: Google could mandate that hardware must be discoverable and able to run a generic stock image in order to carry the Android brand.

  • I’ve run Linux on a Rockchip Chromebook, several Pi boards, and an M1 MacBook Pro, all with good results. I think it helps that Linux comes from a long lineage of highly portable operating systems. One of the early victories of Unix was its ease of portability to new types of processor, due (at least in part) to being programmed in C. The BSDs and Linux have always had developers who took joy in getting the operating system up and running on more than one type of architecture. Debian, for instance, has run on one sort of ARM chip or another since around 2000. Windows has a core business that thrives on x86-based chip designs, and they have had very little pressure to branch out over the years. Computer companies build around Microsoft’s operating system, rather than the other way around.

    • Just in general, Linux on ARM more often than not just… works. Compared to Windows on ARM, that’s an anomaly (yes, I know part of the reason is that Microsoft is just bad at making it, but there’s got to be more to the Linux side for it to be that good).

      • A key factor is that Linux has been available for ARM since nearly “the beginning”. Unlike Windows, which was basically Intel-only for well over a decade, Linux has had strong support for multiple architectures throughout its lifetime. As a result, software that grew up within that ecosystem tended to be more architecture-agnostic in design, which helps porting efforts.

  • Linux has a low footprint, as does ARM, so the two were naturally combined for low-footprint platforms like Android and Raspberry Pis.

    The open-source ecosystem also helped. If proprietary software is compiled only for x86, the best you can do is try to run it with a translation layer.
    With open source, you can compile it for ARM yourself. There’s no guarantee that will just work, but devs can contribute fixes, and eventually the software can be officially released with an ARM package.