What opinion just makes you look like you aged 30 years

  • The idea is that you can have different apps that require different versions of dependency X; that could stop you with traditional package management, but it works fine with containers

    That’s what I mean by “static linking with extra steps”. This problem was already solved a very long time ago. You only get these version conflicts if your dependencies are dynamically linked, and you don’t have to dynamically link your dependencies.
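
    To make that concrete, here’s a minimal sketch of the idea in C; the library name and build commands are invented for illustration, assuming a POSIX toolchain. Each app links against its own static copy of the library, so both versions coexist on one host with no shared file to conflict over:

        /* app1.c -- links against hypothetical libfoo v1 */
        int foo_compute(int x);           /* provided by libfoo */
        int main(void) { return foo_compute(42); }

        /* Build each app against its own archive; the library code is
         * baked into the binary at link time:
         *   cc app1.c libfoo_v1.a -o app1   # app1 carries v1 inside it
         *   cc app2.c libfoo_v2.a -o app2   # app2 carries v2 inside it
         * Both binaries run side by side; neither looks up a shared
         * libfoo.so at runtime, so no version conflict is possible. */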

        • That’s why semver exists: major-version.minor-version.patch-version. Usually you don’t care about patches; they address the efficiency of things inside the library, with no API changes. Something breaking could land in a minor update, so you should check the changelog to see whether you need to do something about it. A major version bump will most likely break things. Once you understand this, you’ll find dynamic linking beneficial (no need to recompile on every library update), and containers will eliminate stability issues, because libraries won’t update to the next minor/major version without tests.
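
          As a minimal sketch of that compatibility rule (the type and function names here are invented for illustration): the same major version means the API is intact, a newer minor or patch is acceptable, a different major is not:

              #include <stdbool.h>

              /* Hypothetical semver triple and the compatibility check
               * described above. */
              typedef struct { int major, minor, patch; } semver;

              static bool semver_satisfies(semver wanted, semver installed) {
                  if (installed.major != wanted.major)
                      return false;               /* major bump: likely breaking */
                  if (installed.minor != wanted.minor)
                      return installed.minor > wanted.minor;  /* newer minor: OK,
                                                                 but read the changelog */
                  return installed.patch >= wanted.patch;     /* patches: internal fixes */
              }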

            • Still, it’s going to take some time, every time some dependency (of a dependency (of a dependency)) changes (because you don’t want to end up with a critical vulnerability). Also, if the app is going to execute some other binary with the same dependency X, dependency X will be in memory only once.
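
              The memory-sharing point can be demonstrated with a small sketch; `libfoo.so` and `foo_compute` are made-up names, and this assumes a Linux/glibc toolchain:

                  #include <dlfcn.h>
                  #include <stdio.h>

                  /* When two processes load the same shared object, the loader
                   * mmap()s its read-only code pages, so the kernel keeps a
                   * single physical copy that both processes share. */
                  int main(void) {
                      void *lib = dlopen("./libfoo.so", RTLD_NOW);
                      if (!lib) { fprintf(stderr, "%s\n", dlerror()); return 1; }

                      int (*foo_compute)(int) = (int (*)(int))dlsym(lib, "foo_compute");
                      if (foo_compute)
                          printf("%d\n", foo_compute(42));

                      dlclose(lib);
                      return 0;   /* build: cc main.c -ldl */
                  }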

              • Still, it’s going to take some time

                Compared to the downsides of using a container image (duplication of system files like libc, dynamic linking overhead, complexity, etc), this is not a compelling advantage.

                Also, if the app is going to execute some other binary with the same dependency X

                That seems like a questionable design choice.

                • That seems like a questionable design choice.

                  I mean, you could have a GUI for some CLI tool. Then you would need to run the GUI binary, and either run the CLI binary from the GUI or have it as a daemon. Also, if you’re going to make something that has more than one binary, you’ll get more space overhead from static linking than from containers

                  Compared to the downsides of using a container image (duplication of system files like libc, dynamic linking overhead, complexity, etc), this is not a compelling advantage.

                  Man, that’s underestimating compile times and how frequently the various libraries update, and overestimating the overhead of dynamic linking (it’s so small it’s measured in CPU cycles). Basically, dynamic linking reduces update overhead: with static linking you need to download the full binary on every update, even if the library is tiny, while with dynamic linking you only have to download the small library.

                  • I mean, you could have a GUI for some CLI tool.

                    Yes, I’ve seen that pattern before, but:

                    1. I wouldn’t expect them to have many libraries in common, other than platform libraries like libc, since they have completely different purposes.
                    2. I was under the impression that Docker is for server applications. Is it even possible to run a GUI app inside a Docker container?

                    Also, if you’re going to make something that has more than one binary

                    If they’re meant to run on the same machine and are bundled together in the same container image, I would call that a questionable design choice.

                    Man, that’s underestimating compile times and how frequently the various libraries update

                    Well, I have only my own experience to go on, but I am not usually bothered by compile times. I used to compile my own Linux kernels, for goodness’ sake. I would just leave it to do its thing and go do something else while I wait. Not a big deal.

                    Again, there are exceptions like Chromium, which take an obscenely long time to compile, but I assume we’re talking about something that takes minutes to compile, not hours or days.

                    and overestimating the overhead of dynamic linking (it’s so small it’s measured in CPU cycles).

                    No, I’m not. If you’re not using JIT compilation, the overhead of dynamic linking is severe, not because of how long it takes to call a dynamically-linked function (you’re right, that part is reasonably fast), but because inlining across a dynamic link is impossible, and inlining is, as matklad once put it, the mother of all other optimizations. Dynamic linking leaves potentially a lot of performance on the table.

                    This wasn’t the case before link-time optimization was a thing, mind you, but it is now.
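
                    A minimal sketch of what I mean, assuming a GCC/Clang toolchain (the file and library names are invented):

                        /* lib.c -- a tiny, frequently-called library function */
                        int add_one(int x) { return x + 1; }

                        /* main.c */
                        int add_one(int x);
                        int main(void) {
                            long sum = 0;
                            for (int i = 0; i < 1000000; i++)
                                sum += add_one(i);   /* hot call site */
                            return (int)(sum & 0xff);
                        }

                        /* Static link with LTO: the optimizer sees both sides,
                         * inlines add_one, and can then optimize the whole loop:
                         *   cc -O2 -flto -c lib.c main.c
                         *   cc -O2 -flto lib.o main.o -o app
                         *
                         * Dynamic link: the call goes through the PLT into
                         * libadd.so; the compiler can't see across that boundary,
                         * so inlining (and everything it enables) is off the table:
                         *   cc -O2 -fPIC -shared lib.c -o libadd.so
                         *   cc -O2 main.c -L. -ladd -o app */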

                    Basically, dynamic linking reduces update overhead: with static linking you need to download the full binary on every update, even if the library is tiny, while with dynamic linking you only have to download the small library.

                    Okay, but I’m much more concerned with execution speed and memory usage than with how long it takes to download or compile an executable.