I thought I’d make this thread for all of you out there who have questions but are afraid to ask them. This is your chance!

I’ll try my best to answer any questions here, but I hope others in the community will contribute too!

  • How do symlinks work from the point of view of software?

    Imagine I have a file in my downloads folder called movie.mp4, and I have a symlink to it in my home folder.

    Whenever I open the symlink, does the software (player) understand «oh, this file seems to be a symlink, I should go and open the original file», or is it filesystem-level stuff, and the software (player) basically has no idea whether the file I’m opening is a symlink or the original movie.mp4?

    Can I use sync software (like Dropbox, GDrive, or whatever) to sync symlinks? Can I use sync software to sync the actual files, but have only symlinks in my sync folder?

    Is there a rule of thumb to predict how software behaves when dealing with symlinks?

    I just don’t grok symbolic links.

    • A symlink works more like the first way you described it. The software opening a symlink has to actually follow it. It’s possible for software not to follow the symlink (either intentionally or not).

      So your sync software has to actually be able to follow symlinks. I’m not familiar with how GDrive and similar solutions work, but I know this is possible with something like rsync.
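
      For example, rsync lets you pick either behavior explicitly (a minimal sketch; the paths are made up, flags per the rsync man page):

        # Keep symlinks as symlinks on the destination (-a implies -l/--links):
        rsync -av ~/sync/ backup-host:backup/

        # Follow symlinks and copy the files they point to instead:
        rsync -avL ~/sync/ backup-host:backup/   # -L is --copy-links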

      • An application can know that a file is a soft link, but it doesn’t need to do anything differently to follow it. If the program just opens it, reads it, writes to it, etc., as though it were the original file, it will Just Work™ without it needing to do anything special.

        It is possible for software to intentionally not follow a symlink, yes (if it fails to follow one unintentionally, that might be a bug).

        As for hard links, I’m not as certain, but I think these need to be supported at the filesystem level (which is why they often have specific restrictions), and the application can’t tell the difference.

      • So I guess it’s something like pressing Ctrl+C: most software doesn’t specifically handle that hotkey, so in general it will interrupt a running process, but software can choose to handle it differently (in Vim, for instance, Ctrl+C does not interrupt it).

        Thanks.

        Fun fact: pressing X (the close button) on a window does not mean your app is closed; it just sends a signal that you wish to close it, and your app can choose what to do with that signal.
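
        The Ctrl+C case is easy to see for yourself (a minimal bash sketch; run it, then press Ctrl+C):

          #!/usr/bin/env bash
          # trap catches SIGINT (what Ctrl+C sends) instead of letting it kill the script
          trap 'echo "caught Ctrl+C, carrying on"' INT

          while true; do
              echo "still running..."
              sleep 2
          done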

    • A symlink is a file that contains a shortcut (a text string that is automatically interpreted and followed by the operating system) to another file or directory in the system. It’s more or less like a Windows shortcut.

      If a symlink is deleted, its target remains unaffected. If the target is deleted, the symlink continues to point to the now non-existent file/directory. Symlinks can point to files or directories regardless of volume/partition (hard links can’t).

      Different programs treat symlinks differently. The majority of software just treats them transparently and acts like it’s operating on the “real” file or directory. Sometimes this has unexpected results when a program tries to determine what the previous or current directory is.

      There’s also software that needs to be “symlink aware” (like shells) and identify and manipulate them directly.

      You can upload a symlink to Dropbox/GDrive etc. and it’ll appear as a normal file (probably with a very small file size), but it loses the ability to act like a shortcut. This is sometimes annoying if you use a cloud service for backups, as it can create filename conflicts, and you need to make sure the link is preserved as a symlink when restored. Most backup software is “symlink aware”.

    • Software opens a symlink the same way as a regular file. The kernel reads the path stored in the symlink and then opens the file at that path (or returns an error if it can’t for some reason). But if a program needs to perform specific actions on symlinks, it can check the file type and resolve the symlink path itself.

      To find out how a specific piece of software handles symlinks, read its documentation. It may have settings like “follow symlinks” or “don’t follow symlinks”.
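
      The whole round trip is easy to try from a shell (a throwaway sketch; the paths are made up):

        ln -s ~/Downloads/movie.mp4 ~/movie.mp4   # create the symlink
        ls -l ~/movie.mp4      # shows “movie.mp4 -> /home/you/Downloads/movie.mp4”
        readlink ~/movie.mp4   # prints the stored target path without following it
        stat ~/movie.mp4       # reports on the link itself (file type: symbolic link)
        stat -L ~/movie.mp4    # -L dereferences: reports on the real movie.mp4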

    •  Ramin Honary   ( @Ramin_HAL9001@lemmy.ml )

      Whenever I open the symlink, does the software (player) understand «oh, this file seems to be a symlink, I should go and open the original file», or is it filesystem-level stuff, and the software (player) basically has no idea whether the file I’m opening is a symlink or the original movie.mp4?

      Others have answered well already; I will just say that symlinks work at the filesystem level, but the operating system is specially programmed to work with them. When a program asks the operating system to open a file at a given path, the OS will automatically “dereference” the link, meaning it will detect the symlink and jump to the place it points to.

      A program may choose to inspect whether a file is a symlink or not. By default, when a program opens a file, it simply lets the operating system dereference the file path for it.

      But some apps that work on directories and files together (like “find”, “tar”, “zip”, or “git”) do need to worry about symlinks, and will check whether a path is a symlink before deciding whether to dereference it. For example, you can ask the “find” command to list only symlinks, without dereferencing them: find -type l

    • It’s a pointer.

      E: Okay, so someone downvoted “it’s a pointer”. Here goes: both hard links and symbolic links are pointers.

      The hard link is a pointer to a spot on the block device, whereas the symbolic link is a pointer to a location in the filesystem’s list of shit.

      That location in the filesystem’s list of shit is also a pointer.

      So like, if you have /var/2girls1cup.mov and you click it, the OS looks in the filesystem, sees that /var/2girls1cup.mov means 0x123456EF, and looks there to start reading data.

      If you make a symlink to /var/2girls1cup.mov in /bin called “ls”, then when you type “ls”, the OS looks at the file /bin/ls, sees that it points to /var/2girls1cup.mov, looks in the filesystem, sees that it’s at 0x123456EF, and starts reading data there.

      If you made a hard link in /bin called ls, it would be a pointer straight to the location on the block device, 0x123456EF. You’d type “ls”, the OS would look in the filesystem for /bin/ls, see that /bin/ls means 0x123456EF, and start reading data from there.

      Okay but who fucking cares? This is stupid!

      If you made /bin/ls into /var/2girls1cup.mov with a symlink, then you could use normal tools to work with it: look at where it points, its attributes, etc., and either delete just the link or fully follow (dereference) the chain and delete every link in it, including the last one, which is the filesystem’s pointer to 0x123456EF called /var/2girls1cup.mov in our example.

      If you made /bin/ls into a hard link to 0x123456EF, then when you did stuff to it, the OS wouldn’t know it’s also called /var/2girls1cup.mov. When /bin/ls didn’t work as expected, you’d have to diff the output of mediainfo on both files to see that they’re the same thing, then look at where on the hard drive /var/2girls1cup.mov and /bin/ls point and compare them to see: oh, someone replaced my ls with a shock video using a hard link.

      When you delete the /bin/ls hard link, the OS deletes the entry in the filesystem pointing to 0x123456EF, and you are able to put the normal /bin/ls back again. Deleting the hard link wouldn’t actually remove the data that comprises the file from the drive, because “deleting” a “file” is just removing the filesystem’s record that there’s something there to be aware of.

      If, instead of deleting the /bin/ls hard link, you opened it up and replaced the video portion of its data with the music video for Never Gonna Give You Up, then when someone tried to open /var/2girls1cup.mov, they’d see that music video instead.

      That is, if the file wasn’t moved to another place on the block device when you changed it. Never Gonna Give You Up has a much longer running time than 2girls1cup, and without significant compression the OS is gonna end up putting /bin/ls in a different place on the block device that can accommodate the longer data stream. If the OS does that when you get done modifying your 2girls1cup /bin/ls into a rickroll, then /bin/ls will point to 0x654321EF or something, and only you will experience Astley’s dulcet tones when you use ls; the old 0x123456EF location will still contain the data that /var/2girls1cup.mov is meant to point to, and you will have played yourself.

      Okay, with all that said: how does the OS know what to do when one of its standard utilities encounters a symlink? They have a standard behavior! It’s usually to “follow” (dereference) the link. What the fuck good would a symbolic link be if it didn’t get treated normally? Sometimes, though, like with “ls” or “rm”, you might want to see more information or just delete the link. In those cases you gotta look at how the software you’re trying to use treats links.

      Or you can just make some directories and files with touch and try what you wanna do and see what happens, that’s what I do.
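
      If you do want to poke at the pointer business directly, here’s a throwaway sketch (run it in an empty directory):

        echo "hello" > original.txt
        ln original.txt hard.txt        # hard link: a second name for the same data
        ln -s original.txt soft.txt     # symlink: a file containing the path “original.txt”

        ls -li    # -i prints inode numbers: original.txt and hard.txt share one;
                  # soft.txt has its own and shows “soft.txt -> original.txt”

        rm original.txt
        cat hard.txt   # still prints “hello”: the data lives while any hard link remains
        cat soft.txt   # fails with “No such file or directory”: the symlink now dangles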

    •  wolf   ( @wolf@lemmy.zip )

      Symlinks are fully transparent to any software that just opens the file, etc.

      If the software really cares about this (like file managers do), it can simply ask the Linux kernel for additional information, like what type of file it is.

      • I mean, Wayland is still a hot topic, as are snaps and flatpaks. Years ago it was how the GTK 2 to GTK 3 upgrade messed up GNOME (not unlike the Python 2 to 3 upgrade), and some hardcore people still want to fight against systemd. Maybe it’s just “the loud detractors”, dunno.

        • Why would one be discouraged by the fact that people have options and opinions on them? That’s the part I’m not buying. I don’t disagree that people do in fact disagree and argue. I don’t know if I’d call it fighting. People being unreasonably aggressive about it are rare.

          I for one am glad that people argue. It helps me explore different options without going through the effort of trying every single one myself.

          • I’m using Wayland right now, but still use X11 sometimes. I love the discussion and different viewpoints. They are different protocols, with different strengths and weaknesses. People talking about it is a virtue, in my opinion.

            • I like the fact that I can exercise my difficulty with usage commitment by installing both and switching between them :D.

              Wayland is so buttery smooth it feels like I just upgraded my computer for free…but I still get some window Z-fighting and screen recording problems and other weirdness.

              I’m glad X11 is still there to fall back on, even if it really feels janky from an experience point of view now.

    • Linux users are often very passionate about the software they put on their computers, so they tend to argue about it. The customization and choice scare off some beginners, but I think the main reason is the lack of out-of-the-box compatibility with Windows software. People generally want to use the software they are used to.

    • Because you don’t have an in-person user group and only interact online, where the same person calling all Mandrake users fetal alcohol syndrome babies doesn’t turn around and help those exact people figure out their smb.conf, or trade Sopranos episodes with them at the LAN party.

    • It did take off, just not so much on the Desktop. I think those infights are really just opinions and part of further development. Having choices might be a great part of the overall success.

      • just not so much on the Desktop

        Unix already had a significant presence on server computers in the late 80s, so migrating to Linux wasn’t a big jump. Besides, the price of zero is a lot more attractive when the alternative option costs several thousand dollars.

        • the price of zero is a lot more attractive when the alternative option costs several thousand dollars

          Dang, I WISH. Places that constantly beg for donations like public libraries and schools will have Windows-everything infrastructure “because market share”. (This is what I was told when I was interviewing for a library IT position)

          They might have gotten “lucky” with a grant at some point, but having a bank of 30+ computers for test-taking that do nothing but run MS Access is a frivolous budget waste, and basically building your house on sand, when those resources could go to, I dunno… paying teachers, maybe?

          • Licensing is weird, especially in schools. It may very well be practically free for them to license. Or, for very small numbers of computers, they might come out ahead by only needing to hire tech staff competent with Windows, compared to the cost of staff competent with Linux. Put another way: in my IT degree program, every single person in my graduating class was very competent as a Windows admin, but only a handful of us were any good with Linux (and a couple actively avoided Linux for being different).

  • Is there a way to remove having to enter my password for everything?

    Wake computer from Screensaver? Password.
    Install something? Password.
    Updates (the biggest one; updates should in my opinion just work without a password, because being up to date is important for security)? Password.

    I understand sudo needs a password, but all the other stuff I just want off. The frequency is ridiculous. I don’t ever leave my house with my computer, and I don’t want to enter a password for my wife every time she wants to use it.

    • I understand sudo needs a password

      You can configure sudo to not need a password for certain commands. Unfortunately, the syntax and documentation for that are not easily readable. doas, which can be installed and used alongside sudo, is easier.

      For software updates you can go with unattended-upgrades, though if you turn off your computer while it is upgrading software, you may have to fix the broken pieces.
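
      For the sudo side, a minimal sketch of what that configuration can look like (always edit with “sudo visudo”, never directly; the username and command paths are examples):

        # Allow package upgrades without a password (Debian-style paths assumed):
        yourname ALL=(ALL) NOPASSWD: /usr/bin/apt update, /usr/bin/apt upgrade

        # Or the all-or-nothing version (convenient, but less safe):
        yourname ALL=(ALL) NOPASSWD: ALL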

    • For wake from screensaver/sleep, this should be configurable. Your window manager is locking your session, so you probably just need to turn that option off.

      For installations and updates, I suspect you’re used to Windows-style UAC where it just asks you Yes or No for admin access in a modal overlay. As I understand it, this is easier said than done on linux due to an insistence on never running GUI applications as admin, which makes sense given how responsibilities are divided and the security and technical challenges involved. I will say, I agree 100% that this is a serious area that’s lacking for linux, but I also (think I) understand why no one has implemented something similar to UAC. I’ll try to give the shortest version I can:

      All programs (on both Windows and Linux) are run as a user. It’s always possible for any program to have a bug in it that gives another program the opportunity to exploit that bug, hijack the program, and start executing arbitrary, malicious code as that user. For this reason, the philosophical stance on all OSes is: if it’s gonna happen, let’s not give them admin access to the whole machine if we can avoid it, so let’s run as much as possible as an unprivileged user.

      On Linux, the kernel-level processes and the admin (root-level) account are fundamentally detached from running anything graphical. This means it’s very hard to securely, and generically, pop up a window with just a Yes or No box to grant admin-level permissions. You can’t trust the window manager; it’s also unprivileged, and even if you could, it might be designed in a supremely insecure way that allows any app with a window to see and interact with any other app’s windows (Xorg). So it’s not safe to just pop up a simple Yes/No box, because any other unprivileged application could request root permissions and then click Yes itself before you even see the prompt. Polkit is possible because even if another app can press OK, you still need to enter the password (it’s not clear to me how you prevent other unprivileged apps from seeing the keystrokes typed into the polkit prompt).

      On Windows, since the admin/kernel-level stuff is so tightly tied to the specific GUI the user will be using, it can overlay its own GUI on top of all the other windows and securely pop in to say, “hey, this app wants to run as admin, is that cool?”, and no other app running in user mode even knows it’s happening, not even the window manager, which is also running unprivileged. The default setting of UAC is to just prompt Yes/No, but if you crank it to max security you get something like Linux (prompt for the password every time), and if you crank it to the lowest security you get something closer to what others are commenting (disable the prompt, run things as root, and cross your fingers that nothing sneaks in).

      I do think that this is a big deal when it comes to the adoption of linux over windows, so I would like to see someone come up with a kernel module or whatever is needed to make it happen. If someone who knows linux better than me can correct me where I’m wrong, I’d love to learn more, but that is how I understand it currently.

    • These are all valid reasons to request a password 🤔

      • Wake computer from Screensaver? Password.

      Check your screen saver settings. Dunno which desktop environment you’re using. KDE should allow you to not enter a password for this.

      • Install something? Password.
      • Updates (the biggest one; updates should in my opinion just work without a password, because being up to date is important for security)? Password.

      Installing stuff runs sudo in the background, hence the password prompt. Updates = installing stuff. Look up “passwordless sudo”. At this point, when do you even want a password prompt? If you don’t need a password at all, get rid of it entirely.


    •  Julian   ( @julianh@lemm.ee )

      Someone already gave an answer, but the reason it’s done that way is that on Linux, programs generally don’t install themselves; a package manager installs them. Windows (outside of the Windows Store) just trusts programs to install themselves and to include their own uninstaller.

        •  Ramin Honary   ( @Ramin_HAL9001@lemmy.ml )

          They do! /bin has the executables, and /usr/share has everything else.

          Apps and executables are similar but separate things. An “app” is a concept used in GUI desktop environments: a user-friendly front end to one or more executables in /usr/bin, presented by the desktop environment (or app launcher) as a single thing. On Linux these apps are usually defined in .desktop files. The apps installed by the Linux distribution’s package manager typically live in /usr/share/applications, and each one points to one of the executables in /usr/bin or /usr/libexec. You could even have two different “apps” launch a single executable, each using different CLI arguments to give the appearance of different apps.
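
          For illustration, a stripped-down .desktop file looks something like this (a made-up example, not one shipped by any particular package):

            [Desktop Entry]
            Type=Application
            Name=Example Player
            # The executable this “app” launches (%U expands to the files/URLs opened with it):
            Exec=/usr/bin/example-player %U
            Icon=example-player
            Categories=AudioVideo;Player;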

          The desktop environment you use might be reconfigured to display apps from multiple sources. You might also install apps from Flathub, Lutris, Nix, Guix, or any of several other package managers. This is analogous to how, in the CLI, you need to set the “PATH” environment variable. If everything is configured properly (and that is not always the case), your desktop environment will show apps from all of these sources collected in the app launcher. Sometimes you have the same app installed from multiple sources, and you might wonder, “why does GNOME Shell show me OpenTTD twice?”

          For end users who install apps from multiple sources besides the default app store, there is no easy solution, no single agreed-upon way to keep things organized. Windows, macOS, and Android all have the same problem. But I have always felt that Linux (especially Guix System) has the best solution: automated package management.

          • In /etc? Are you sure? /usr/share/applications has your system-wide .desktop files (while ~/.local/share/applications has user-level ones, kinda analogous to installing a program to AppData on Windows). And .desktop files could be interpreted, at a high level, as “apps”, even though they’re really just simple descriptions of how to advertise and launch an application from a GUI of some kind.

              • The actual executables shouldn’t ever go in that folder though.

                Typically packages installed through a package manager stick everything in their own folder in /usr/lib (for libs) and /usr/share (for any other data). Then they either put their executables directly in /usr/bin or symlink over to them.

                That last part is usually what results in things not living in one consistent place. A package might have something that qualifies as both an executable and a lib, so it stores the file in its lib folder but symlinks to it from bin. Or it might not have a lib folder at all, and just put everything in its share folder and symlink to it from bin.
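
                You can see this on most systems (harmless to run; output varies by distro):

                  ls -l /usr/bin | grep -- '->' | head   # executables that are really symlinks elsewhere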

    • Expanding on the other explanations: on Windows, it’s fairly common for applications to ship a copy of everything they use in the form of DLL files, so you end up with many copies of various versions of those.

      On Linux, the package manager manages all of that. So if, say, an app needs GTK, then the package manager makes sure GTK is also installed. And since your distribution’s package manager manages everything, mostly from source code, you get a version of the app specifically compiled against the version of GTK the distribution provides.

      So if we were to do it kind of the Windows way, it would very, very quickly become a mess, because it’s not just one big self-contained package you drop into C:\Program Files. Linux follows the FHS (Filesystem Hierarchy Standard), which roughly defines where things should be: binaries go to /usr/bin, libraries to /usr/lib, shared files to /usr/share. A bunch of those locations are somewhat special; for example, .desktop files in /usr/share/applications show up in the menu so you can launch them. That said, Linux does have a location for big standalone packages: that’s usually /opt.

      There are advantages and inconveniences to both methods. The Linux way has the advantage of being able to update libraries for all apps at once, and it reduces clutter; things are generally more organized. You can guess where an icon file will be located most of the time, because they all go to the same place, usually with a naming convention as well.

    • different strokes.

      windows comes from the personal computing world and retains a bunch of stuff from it to this very day for no good reason. in this case, there used to be no guarantee that a particular installation target would have the target directory mapped in a consistent way, so the installer would make a guess and give the user a chance to change it.

      if that sounds stupid, it is. no one writes in assembly anymore, they target the OS and nowadays the OS will have a consistent set of folders to install stuff to. we all know where the program “should” be installed to already.

      but it didn’t used to be like that in the PC world! used to be your computer wasn’t a fixed-purpose windows computer from the jump, never to be anything else. there were different OSes that people would use regularly, and even different DOS environments a person could run programs under. hard disks weren’t disks inside the machine, but big beige external boxes that you’d plug in, set beside the computer, and access after booting. in that setup, where a programmer targeted DOS (if they cared about the execution environment at all and didn’t just write for the processor), it made sense to ask where someone was gonna want to install their software, and to what extent they’d even want to start dirtying up the media they paid good money for with some knucklehead’s weird files from some goofy program on a stack of floppy disks.

      linux comes from the unix world, where the question of where something installs is easy and straightforward: it installs in $PATH. what is $PATH? it’s where the OS will look when you try to run something, to see if it can find a program by that name. if a program isn’t installed in $PATH, then when you type its name and hit enter the computer won’t know what the hell you’re talking about, and you’ll have to type out its whole-ass location and hit enter.

      Why didn’t unix systems that linux imitates ask you where to install stuff? because usually it wasn’t your choice! linux was unix for personal computers and unix was run on systems that took up whole rooms with all sorts of equipment. you might be the user of that system but never have access to the room with all the spinning disks and flashing lights, stuck on a terminal dialing in over a serial line.

      so the assumption was that you’d have a variable in your user environment that would say where things were installed but not that you’d have the ability to change it or even install things.

      so why in a linux environment would you ever install anything outside of $PATH or even want to be sure where something’s installed at all?

      even under linux it can be useful to do either. installing outside of $PATH keeps programs from being accidentally autocompleted or invoked. installing in a particular component of $PATH ($PATH can be many directories!) lets you put serious-business programs that demand maximum performance on faster media.

      so why the hell won’t linux systems give you the option of installing in a specific location or outside of $PATH altogether?

      they will, but unlike windows, they don’t ask you. unless you specifically request that unique and very abnormal operation, they just do the usual thing. when you want to install weirdly, you gotta dig into your package manager and packaging system. sometimes you unzip a package, change a line in a file, zip it back up, and install from your modified version.
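
      if you want to see all this on your own machine, a quick sketch:

        echo "$PATH"                  # the colon-separated list of directories searched for commands
        type ls                       # shows which file actually runs when you type “ls”
        export PATH="$HOME/bin:$PATH" # puts your own directory first in the search order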

    • You want a disk imager like Clonezilla or something. If you’re not ready for that, just show hidden files and copy your /home/your_username directory to a USB drive or something. That’s where all your files live.

    • I ran Linux in a VM and destroyed it about… 5 times. It allowed me to really get in and try everything. Once I ran a command that removed everything, and I remember watching icons disappear as the destruction unfolded in front of me. It was kind of fun.

      I have everything backed up and synced, so it’s all fine. Just lots of reinstalling Thunderbird and Firefox, re-logging into Firefox Sync, etc.

      Once I stopped destroying everything I did a proper install and haven’t looked back.

      This will be my 7th year on Linux now. And I have to say, it feels good to be free.

    • Install everything from the store and you should be fine. If a tutorial looks too complicated, it is probably not worth following. Set your search engine to the past year and see if there are better tutorials.

      You might also want to consider atomic distros; they are much harder to mess up, and much easier to restore.

    •  wolf   ( @wolf@lemmy.zip )

      Another perspective: your question implies you want to try things out with Debian. If this assumption is correct, I would highly recommend you just create a virtual machine with qemu/libvirt and learn within that environment/try things out there before doing stuff ‘on the metal’.

      Of course, backups are always a good idea, and once you’ve got your feet wet you might want to learn about ‘Infrastructure as Code’. Have fun!

      • That’s a fantastic suggestion and I’ve already been doing exactly this :) but, I’ve done it just enough to know that I’m really really good at breaking stuff, and I don’t want to wait to fully transition from windows. Hence the need for full system backups

  •  starman   ( @starman@programming.dev )

    On Android, when an app needs something like the camera or location or whatever, you have to give it permission. Why isn’t there something like this on the Linux desktop, at least by default, when you install something through the package manager?

    • Android apps are sandboxed by default, while packages on Linux run with the user’s permissions.

      There is already something like this with Flatpak, since it sandboxes every installed program and only grants requested permissions.
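
      You can inspect and tighten a Flatpak app’s permissions from the command line (a sketch; the app ID is just an example):

        flatpak info --show-permissions org.mozilla.firefox              # what it can currently access
        flatpak override --user --nofilesystem=home org.mozilla.firefox  # revoke home-directory access
        flatpak override --user --reset org.mozilla.firefox              # undo your overrides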

    •  Eugenia   ( @eugenia@lemmy.ml )

      Because it requires a very specific framework to be built from the ground up, and FDO (freedesktop.org) doesn’t specify these things. A lot of breakage would happen if we were to shoehorn such changes into Linux suddenly. Android has many layers of security that are fundamentally different from the unix philosophy. That’s why Android, even though it’s based on Linux, isn’t really considered “a distro”.

    • It is technically doable, but it would require a unified method for an app to call when it needs the camera, and that method would show the prompt.

      This would require developers to rewrite their apps for Linux, which is not happening anytime soon.

      Fortunately, PipeWire and xdg-desktop-portal are already doing this work. For example, when you screen-share on Zoom via PipeWire, a system prompt pops up asking which window or screen to share. Unlike on Windows, Zoom cannot see all your active windows when using this method, only the one you choose to share.

      Most application frameworks, including GTK and Electron, actively support PipeWire and the portals, so the future is bright.

      There is a lot of work going into improving the security and usability of the Linux sandbox, and it is already much better than on Windows (maybe also better than macOS?). I am confident that in 5 years the Linux sandbox stack (Flatpak, portals, PipeWire) will be as secure and usable as those on Android and iOS.

    • As others have said, you can use something like Flatpak to sandbox apps, but Android was designed to have its apps sandboxed from the start; Linux never was.

      There are efforts like SELinux, which locks down the OS and makes doing pretty much anything impossible without explicit permission. However, working with SELinux is typically very tedious and usually not worth it unless you’re working in a highly secure environment.
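
      If you’re curious, a couple of harmless commands show SELinux in action on distros that ship it (Fedora, RHEL, and friends):

        getenforce            # prints Enforcing, Permissive, or Disabled
        sestatus              # detailed status of the SELinux subsystem
        ls -Z /etc/passwd     # -Z shows the SELinux security context on a file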

    • There is no direct equivalent; system32 is just a collection of libraries, exes, and confs.

      Some of what others have said is accurate, but to explain a bit further:

      Longer explanation:

      system32 is just some folder name the MS engineers came up with back in the day.

      Linux, on the other hand, has many distros and many different contributors, and generally just encourages a… better… separation between types of files, imho.

      The linux filesystem is well defined, if you are inclined to research more about it.
      Understanding the core principles will make understanding virtually everything else about “linux” easier, imho.

      https://tldp.org/LDP/intro-linux/html/sect_03_01.html

      tl;dr; “On a UNIX system, everything is a file; if something is not a file, it is a process.”

      The basics:

      • /bin - base level executables, ls, mv, things like that
      • /sbin - super-level-only (root) executables, parted, reboot, etc
      • /lib - Somewhat self-explanatory, holds libraries, lots of things put their libs here, including linux kernel modules, /lib/modules/*, similar to system32’s function of holding critical libraries
      • /etc - Configuration lives here. Generally speaking, /etc/<program name> can point you in the right direction; typically requires super-user (root) access to edit
      • /usr - “User installed” software, which can be a murky definition in today’s world, but lots of stuff ends up here for installed software, manuals, icon files, executables

      Bonus:

      • /opt - A special location, generally third-party, bundled-style software likes to use this, Java for instance, but historically some admins use it as the “company location”, meaning internally developed software would live there.
      • /srv - Largely subjective, but myself and others I know use it for partitions that are outside the primary disk, for instance we use /srv/db for database volumes, /srv/www for web-data volumes, /srv/Media for large-file storage, etc, etc

      For completeness:

      • /home - You’ll find your user directories here, personally, this is my directory I backup, I don’t carry much more with me on most systems.
      • /var - “Variable data”, basically meaning any data that will likely grow over time, eg: /var/log
  •  jack   ( @jack@monero.town )

    Why are Debian-based systems still so popular for desktop usage? The lack of package updates creates a lot of unnecessary issues that were already fixed by the devs.

    Newer (not bleeding-edge) packages verifiably have fewer issues, e.g. when comparing the packages of a Debian and a Fedora distro.

    That’s why I don’t recommend Mint

    • This is where I see atomic distros like Silverblue becoming the new way to get reliable systems with up-to-date packages. Because the base system is standardised, there can be a lot more QA, as there is a lot less entropy in the installed system. Plus free rollbacks if something goes wrong. You don’t get that by default on Debian.

      Distrobox can be used to install other programs (including GUI apps). I currently run Steam in a Distrobox container on Silverblue, and VS Code with all of my development stuff in another one. And of course I use flatpaks from Flathub where I can; these are more stable than distro packages imo (when official), since the developers target a single platform with defined library versions, not whatever ancient version Debian has, or the latest, which appears on Arch very soon after release.

      I’ve tried Debian a couple of times, but it’s just too out of date. I like new stuff, and when developing I need new stuff; it’s not good enough to just install the development/unsupported versions of Debian. It’s probably great for servers, but I think atomic distros will eventually take over that space as well.

      • Distrobox can be used to install other programs (including GUI apps)

        I need to play around with that sometime. Is it a chroot, a privileged container, or a sandboxed container with limited access? How’s hardware acceleration in those?

        • It’s just a podman/docker container. I’m pretty sure it is unprivileged (you don’t need root). I’ve tried it on both NVIDIA (RTX 3050 Mobile) and AMD (Radeon RX Vega 56), and setting up the distrobox through BoxBuddy (a nice GUI app that makes management easy), I didn’t need to do anything to get the graphics drivers working. I only mention BoxBuddy because I haven’t set one up from the command line, so I don’t know whether it does any initial setup. I haven’t noticed any performance issues (yet).
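
          For reference, the command-line flow is roughly this (a sketch; the names are examples):

            distrobox create --name devbox --image fedora:40   # container that shares your $HOME
            distrobox enter devbox                             # drop inside; install things with dnf
            distrobox-export --app codium                      # run inside the box: adds a GUI app to the host menu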

      •  jack   ( @jack@monero.town )

        You should definitely check out Bazzite; it’s based on Fedora Atomic and has Steam on the base image. Image and Flatpak updates are applied automatically in the background, with no need to wait for the update on next boot. Media codecs and the necessary drivers are installed by default.

        The Bazzite image is also derived directly from the upstream Fedora Atomic image, just with quality-of-life changes added and gaming optimizations.

        • It looks pretty good; I’ve been planning on installing it on another computer for use as a media centre. I probably wouldn’t use it as my main image, as I’m not a huge fan of their customised GNOME experience (I quite like vanilla GNOME, with maybe a system-tray extension). But I must admit, watching some of the videos by the creator of Bazzite and uBlue got me interested in this atomic desktop thing again.

    • Noob question?

      You do seem confused though… Debian is both a distribution and a packaging system… the Debian Stable distribution takes a very conservative approach to updating packages, while Debian Sid (unstable) is more up to date but more likely to break. And while individual packages may be more stable when fully updated, other packages that depend on them generally lag and “break”, as they need updating to adapt to the underlying changes.

      But the whole reason Debian-based distros exist is that some people think they can strike a better balance between newness and stability. It turns out there is no optimal balance that satisfies everyone.

      Mint is a fine distro… but if you don’t like it, that’s fine too. The only objection I have to your objection is that you seem to be throwing the baby out with the bathwater… the Debian packaging system is very robust and not intrinsically unlikely to be updated.

      •  jack   ( @jack@monero.town )

        Noob question?

        Should I have made a new post instead?

        You do seem confused though… Debian is both a distribution and a packaging system…

        Yes, Debian is a popular distro built on Debian packages. My concern is about the update policy of the distro.

        But the whole reason Debian-based distros exist is that some people think they can strike a better balance between newness and stability.

        Debian is pure stability, not a balance between stability and newness. If you mean Debian-BASED distros in particular, trying to introduce more newness with custom repos, I don’t think that is a good strategy for achieving balance. The custom additional repos quickly become too outdated as well, and they can’t account for the outdatedness of every single Debian package.

        you seem to be throwing the baby out with the bathwater… the Debian packaging system is very robust and not intrinsically unlikely to be updated.

        Yes, I don’t understand/approve of the philosophy behind Debian’s update policy. It doesn’t make sense to me for desktop usage. The technology of the package system, however, is great, and apt is very fast.

    •  wolf   ( @wolf@lemmy.zip )

      Debian desktop user here, and I would happily switch to RHEL on the desktop.

      I fully agree; outdated packages can be very annoying (I’m running a netbook with WIFI sleep mode disabled right now, and no, a backported kernel/firmware doesn’t solve my problem).

      For some years, I used Fedora (and I still love the community and have high respect for it).

      Fedora simply does not work for me:

      • Updated packages can/did break compatibility with stuff I need to get things done. Fine if Linux is your hobby, not acceptable if you need to deliver something
      • In industry, it’s often not the most recent packages of development environments that are used (if you are lucky, you are only a few months or years behind), so having the most recent packages in Fedora helps me exactly zero
      • With Debian’s 2-year release cycle (and more years of support), I can upgrade to the next version when it is appropriate for me (= 1-2 days when there is a slow week and the worst bugs have already been found)
      • My setup/desktop is heavily customized and fully automated via IaC; no motivation to tweak this stuff constantly (rolling) or every 6-12 months (Fedora)
      • From time to time I have to use software packages from 3rd parties; with Fedora, I might be one update away from breaking those packages because of version incompatibilities (yes, I might pin a version of something to keep a 3rd-party package working, but that might break Fedora updates via direct and transitive dependencies)
      • I once had a cheap netbook for travel with an infamous chipset bug concerning sleep modes, which would be triggered by some kernels. You can imagine how it is to run Fedora, where you often get kernel updates and the bug may or may not be triggered after double-digit numbers of minutes of work.

      Of course, I could now start playing around with containerizing everything I need for work somehow and run something like Silverblue. Perhaps I will someday, but then I would again need to update my IaC every 6-12 months, and would have to take care of overlays AND containers, etc.

      When people go ‘rolling’ or ‘Fedora’, they simply choose a different set of problems. I am happy we have choice and that I can pick the trouble I have to live with.

      On a more positive note: this also shows how far Linux has come. I always play around with the latest/beta Fedora GNOME/KDE images in a VM, and I seriously don’t feel I am missing anything on Debian stable.

    • Unlike the other commenters, I agree with you. Debian-based systems are less suitable for desktop use, and imo this is one of the reasons newcomers run into frequent issues.

      When installing common applications, newcomers tend to follow the Windows way of downloading an installer or a standalone executable from the Internet. They often do not stick with the package manager. This can cause breakage, as Debian might expect you to have certain versions of programs that differ from what the installer from the Internet expects. A rolling-release distro is more likely to have the versions that Internet installers expect.

      To answer your question: I believe Debian-based distros are popular on the desktop because they were already popular for server use before the Linux desktop was significant.

    • Because people have the opposite experience and outlook from what you wrote.

      I’m one of those people.

      I’m surprised no one brought up the xz thing.

      Debian was specifically targeted by a complex and nuanced multi-pronged attack involving social engineering and very good obfuscation. It was defeated because stable (12 stable, mind you, not even 11, which is still in lots of use) moved so slowly that the attack was caught while still in unstable.

      • This is not a good argument, imo. It was a miracle that the xz vulnerability was found so fast, and that should not be assumed to be the norm. The developer had been contributing to the codebase for 2 years, and their code had already landed in Debian stable, iirc. There’s still no certainty that that code had no vulnerabilities. Some vulnerabilities in the past were caught decades after their introduction.

        • It’s not a miracle, it’s just probability. When you have enough eyes on something, you are bound to catch bugs and problems.

          Debian holds back because its primary goal is to be stable, reliable, and consistent. It has been around longer than pretty much everything else, and it can run for decades without issue. I read an article about a university that still had its original Debian install from the 90s. It was on newer hardware, but they just copied over the files.

          • Lots of eyes is not enough. As I mentioned earlier, there are many popular programs found on most machines, some of them actually user-facing (unlike xz), where vulnerabilities were caught months, years, and sometimes decades later. xz is the exception, not the rule.

        • I was running 12 stable on a machine that had been updated and upgraded between the time the backdoor was introduced and the time it was discovered. At no point did either a dpkg query or the program’s self-report show that the system had the affected 5.6.0(?) version.
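
          (The two checks I mean, for anyone who wants to run them:)

            dpkg -l xz-utils   # the version the package manager thinks is installed
            xz --version       # the version the binary reports about itself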

          Stable had versions of xz that contained commits from the attacker, and it was rolled back to before those were made, out of an abundance of caution.

          There are a lot of eyes on that software now, and I haven’t seen anyone report that the versions between the attacker gaining commit rights and the attacked version were compromised. As you said, though, that doesn’t mean they aren’t; vulnerabilities have existed for many years without being discovered.

          As to whether it’s a good argument: vulnerabilities generally have a short lifespan. Just hanging back and waiting a little while for something to crop up is usually enough to avoid them. If you don’t believe me, check the NIST database.

          I’m gonna sound like a goober here, but the easiest way to not trip is to slow down and look where you’re going.

      • If that is a good tradeoff for you, old/broken packages but more trust, then that’s okay. Btw, the xz backdoor was found so quickly it didn’t even ship to most distros in use, except for Debian Sid and Arch, I think.

        • I see it as a fantastic trade-off. There are some packages I use that need to be more up to date than the stable repos, and those I install from different repos or in a different way.

          And Arch never even had the whole backdoor, because they built from source and didn’t include the poisoned binary component from the attacker.

  •  eezeebee   ( @eezeebee@lemmy.ca )

    Considering switching to Linux, but I don’t know what to choose/what will work for my needs. I want to be able to play my Steam games, use the Discord desktop application, and use FL Studio. I need it to work with an audio interface and MIDI controller too. I am not interested in endless tweaking of settings; a simple install would be nice. What should I go for?

    •  Julian   ( @julianh@lemm.ee )

      Mint would probably work for you. Some stuff is outdated, but it has Flatpak, which is a package manager with more up-to-date apps. If you’re willing to put in the time, though, I’d recommend trying some of the more common distros (Mint, Debian, Ubuntu, Fedora). You can use a live USB to test them without installing.

      Steam is available everywhere, so that’s not a problem.

      Discord officially only ships a .deb package, so that’s only for Debian-based distros (Debian, Ubuntu, Mint). There are other options for almost all distros though; I personally use WebCord.

      FL Studio might be tricky; supposedly it runs through WINE, but you might have to do a bit of work. I’ve personally used REAPER and it works great.

    •  cobra89   ( @cobra89@beehaw.org )

      Your biggest hurdles will probably be getting FL Studio working properly and getting niche hardware like your audio interface and MIDI controller to work. It’s possible they’ll work under Linux, but it’s also possible their drivers and software are Windows-only.

  •  SagXD   ( @sag@lemm.ee )

    Why does software on Linux depend on a particular version of a library? Why not just say it depends on that library, regardless of version? It becomes a pain in the ass when you are using ancient software that requires an old version of a newer library, so you have to create symlinks for every library to match the old version.

    I know that sometimes a newer version of a library is not compatible with the software, but still. And what can we do as software developers to fix this problem? Or as end users?

    •  Eugenia   ( @eugenia@lemmy.ml )

      Because it’s not guaranteed that it’ll work. FOSS projects don’t run under strict managerial rules where they have to maintain compatibility in all their APIs, etc. They are developed freely. As such, you can’t really rely on full compatibility.

    •  wolf   ( @wolf@lemmy.zip )

      IMHO the answer is social, not technical:

      Backwards compatibility/legacy code is not fun, so unless you throw a lot of money at the problem (RHEL), people don’t do it in their free time.

      The best way to distribute a desktop app on Linux is to make it Win32 (and run it with WINE) … :-P (Perhaps Flatpak will change this.)

    •  cobra89   ( @cobra89@beehaw.org )

      what can we do as software developers to fix this problem?

      Update the software to ensure it works with the latest version of the library, or rewrite it using a different library that maintains backwards compatibility.

      Edit: To clarify a little further, Windows typically handles this by including the needed libraries with every application (DLL files), so you end up with many copies of the same library, because every app brings its own version.

      Linux traditionally doesn’t do this, but you can bundle your own libraries with your apps (including old, out-of-date libraries; the catch is that they’re typically insecure, since security fixes landed in later versions) by using one of the new all-in-one packaging systems like Flatpak or, God forbid, Snap.
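
      To make the mechanics concrete (a sketch; “libfoo” is made up, and the multiarch path is Debian-style): shared libraries carry a version in their “soname”, and programs resolve them through a chain of symlinks.

        ldd /usr/bin/ls    # lists the shared libraries a binary actually loads at run time
        ls -l /usr/lib/x86_64-linux-gnu/libfoo*
        # libfoo.so.1   -> libfoo.so.1.2.3   (the soname link programs load at run time)
        # libfoo.so     -> libfoo.so.1       (the dev link used only when building/linking)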

    • Software changes. Version 0.5 will not have the same features as version 0.9 most of the time. Features get added over time, features get removed over time, and the interface of a library might change over time too.

      As a software dev, the only thing you can do is keep the same API forever, but that is not always feasible.

      • To add some nuance: all features in v0.5.0 should still exist in v0.9.0 in the modern software landscape.

        If v0.5.0 has features A, B, and C, and one of them is then changed, then under semantic versioning (which most software follows these days) that is a breaking change, and the version would therefore be promoted to v1.0.0.

        If a new feature D were added while A, B, and C stayed unchanged, it would become v0.6.0 instead. This system, when stuck to, helps immensely when upgrading packages.