I installed a few different distros, landed on Cinnamon Mint. I’m not a tech dummy, but I feel I’m in over my head.

I installed Docker in the terminal (two things I’m not familiar with) but I can’t find it anywhere. Googled some stuff, tried to run stuff, and… I dunno.

I’m TRYING to learn docker so I can set up audiobookshelf and Sonarr with Sabnzbd.

Once it’s installed in the terminal, how the hell do I find docker so I can start playing with it?

Is there a Linux for people who are deeply entrenched in how Windows works? I’m not above googling command lines that I can copy and paste, but I’ve spent HOURS trying to figure this out and have gotten nowhere…

Thanks! Sorry if this is the wrong place for this

EDIT : holy moly. I posted this and went to bed. Didn’t quite realize the hornet’s nest I was going to kick. THANK YOU to everyone who has commented and is about to comment. It tells you how much traction I usually get, because I usually answer every response on lemmy and the former site. For this one I don’t think I’ll be able to do it.

I’ve got a few little ones so time to sit and work on this is tough (thus 5h last night after they were in bed) but I’m going to start picking at all your suggestions (and anyone else who contributes as well)

Thank you so much everyone! I think windows has taught me to be very visually reliant, and yelling into the abyss that is the terminal is a whole different beast - but I’m willing to give it a go!

  • To be fair, you’re taking on a lot of new things at once. You can spin up docker containers on windows too, all while using a UI. I think it’s great you’re exposing yourself to self hosting, linux, command line interface, and containerization all at once, but don’t beat yourself up for it taking longer than expected. A lot of it takes time. I encourage you to keep trying and playing. Good luck!

    • There is docker desktop on Linux too.

      sudo apt install docker flatpak -y
      # add flathub if not already there
      flatpak install docker
      

      Edit: please use Podman. And if you’re thinking about Virtualbox, please use Virt-manager instead. Both are Red Hat products and they are pretty awesome. Podman is more secure and works well for your job; it is letter-for-letter compatible with docker. You can use podman-compose if you need it, but that requires running a daemon, which is also possible.

      You can use Podman with many container sources natively, while docker defaults to Docker Hub. That says enough.
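
      If you want to try it, the commands really are the same shape as Docker's. A rough sketch (the nginx image, container name, and port here are made-up examples, not anything from this thread):

      podman run -d --name web -p 8080:80 docker.io/library/nginx:alpine   # same flags as docker run
      podman ps                                                            # list running containers, like docker ps
      podman stop web && podman rm web                                     # stop and remove it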

            • I don’t know if you still need an external repo for docker; podman is in the system repos.

              When using containers it works the same. Yes, the systemd stuff may be manual; that’s what Podman Desktop is probably for.

              It’s more secure, more free, and if you’re learning it fresh anyway, why not use the better tool?

              • Podman is not really a replacement for docker. It is its own separate thing, and it has trade-offs compared with docker.

                The reason I use podman on my local machine and for Jellyfin is that it is darn fast. It makes docker look like an emulator by comparison. With that being said, the issue with podman is mostly permission-related. However, it also has some instability in cases where a container malfunctions. This often happens when you try to stop and start a container at the same time.

                Once that happens the runtime effectively locks up as the system is in a state that it doesn’t know how to handle.

                Some of the benefits of docker include its ability to recover from just about anything. If you need a container to always be available, docker can do that. It can also do on-the-fly patching and self-healing.

                Docker compose is very nice to have for larger software with multiple containers. I can write a docker compose file that builds and deploys my nodejs applications with a database back end, and it will just work without any issues. Deploy it and you are good.
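
                As a rough illustration of that pattern (the service names, image tag, and ./app build path are made up for the example, not taken from anything above):

                version: "3.7"
                services:
                  app:
                    build: ./app                        # builds the nodejs app from a local Dockerfile
                    ports:
                      - 3000:3000
                    environment:
                      - DATABASE_URL=postgres://app:app@db:5432/app
                    depends_on:
                      - db
                    restart: unless-stopped
                  db:
                    image: postgres:16
                    environment:
                      - POSTGRES_USER=app
                      - POSTGRES_PASSWORD=app
                      - POSTGRES_DB=app
                    volumes:
                      - ./db-data:/var/lib/postgresql/data
                    restart: unless-stopped

                One docker compose up -d and both containers come up together.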

  • Docker is one of the container technologies

    Containers vs Images

    This is a very simplified explanation, which hopefully clears things up for you. As with all simplifications, it isn’t entirely correct.

    Containers put processes, files, and networking into a space where they are secluded from the rest. Your main OS is called the host and the container is called the guest. You can selectively share resources with the guest. To use an analogy, if your house were the computer with linux, and you took a room, put tools and resources for those tools into it, put workers into it, got them to start working and locked the door, they’d be contained in the room, unable to break out. If you want to give the workers access to resources, you add either a window, a corridor, or even a door, depending on how much access you want to give them.

    Containers are created from an image. Think of it as the tools, resources, and configuration required every time you create a room in your house for workers to do a job. The woodworkers will need different tools and resources than say metalworkers.

    Most images are stored on Docker Hub. So when you do docker pull linuxserver/sonarr you download the image. When you do docker run linuxserver/sonarr you create a container from that image.

    Installation

    You’re on Cinnamon Mint, which is a linux distribution derived from Ubuntu (which in turn derives from another distribution called debian). You have to follow the installation instructions. Everything is there. If something doesn’t work, it’s most likely because you skipped a step. The most important ones are the post-installation steps:

    • Adding your user to the docker group
    • Logging out and back in (or simply restarting)

    Those are the most commonly missed steps. I’ve fallen for this trap too!
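
    On Mint that usually boils down to something like this (these are the commands from Docker's post-installation docs; the docker group may already exist on your system):

    sudo groupadd docker              # create the docker group if it doesn't exist yet
    sudo usermod -aG docker $USER     # add your user to it
    # log out and back in (or reboot), then docker should work without sudo:
    docker ps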

    Local help

    To use linux, you need to learn ways to help yourself or find help. On linux, most well-written programs print a help text. Simply running the command without any arguments most often outputs a help text --> running docker does so. If it doesn’t, then the --help flag often does --> docker --help. The shorthand is -h --> docker -h.

    Some commands have subcommands, e.g. docker run, docker image, docker ps, … . Those subcommands also take flags, of which -h and --help are available.

    The help output is often not extensive and programs often have a manual. To access it, the command is man --> man find will output the manual for the find command. Docker doesn’t have a local manual but an online one.

    For clarification: when running a command, there are different ways to interpret the text that comes after the command:

    Flags/Options

    These are named parameters to the command. Some do not take input, like -h and --help; these are called flags. Some do, like --file /etc/passwd, and are often called options.

    Arguments

    These are unnamed parameters and each command interprets them differently. echo "hello world" --> echo is the command and "hello world" is the argument. Some commands can take multiple arguments.

    Running containers

    Imperatively

    As described above, docker run linuxserver/sonarr runs an image in a container. However, it runs in the foreground (as opposed to the background, in what is most often called a “daemon”). Starting in the foreground is most likely not how you want to run things, as that means if you close your terminal, you end the process too. To run something in the background, you use docker run --detach linuxserver/sonarr.

    You can pass options like -v or --volume to make a file or folder from your host system available in the guest, e.g. -v /path/on/host:/tmp/path/in/guest. Or -p / --publish to forward a host port to a guest port, e.g. -p 8080:80. That means if you access port 8080 on your host, the traffic will be forwarded to port 80 in the guest.

    These are imperatives as in you command the computer to do a specific action. Run that docker image, stop that docker container, restart these containers, start a container with this port forward and that volume with this user …
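
    Putting those flags together, a run for the Sonarr image could look something like this (the port and host paths are only placeholders; adjust them for your setup):

    docker run --detach \
      --name sonarr \
      -p 8989:8989 \
      -v /path/on/host/config:/config \
      -v /path/on/host/tv:/tv \
      linuxserver/sonarr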

    Declaratively

    If you don’t want to keep typing the same commands, you can declare everything about your containers up front. Their volumes, ports, environment variables, which image is used, which network card/interface they have access to, which other network they share with other containers, and so on.

    This is done with docker-compose or docker compose for newer docker versions (not all operating systems have the new docker version).
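
    A minimal sketch of what that looks like for the Sonarr example from above (the host paths are placeholders you’d replace):

    version: "3.7"
    services:
      sonarr:
        image: linuxserver/sonarr
        ports:
          - 8989:8989
        volumes:
          - /path/on/host/config:/config
          - /path/on/host/tv:/tv
        restart: unless-stopped

    Saved as docker-compose.yaml, you start everything with docker compose up -d (or docker-compose up -d on older versions) from the same directory.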

    This is already a long text, so if you want to know more, the best resources are the docker compose manual and the compose file reference.


    Hopefully this helped with the basics and understanding what you’re doing. There are probably great video resources out there that explain it more didactically than I do with steps you can follow along.

    Good luck!

    CC BY-NC-SA 4.0

        • Can containers boot on their own? Then they are hosts; if not, they are guests.

          It depends what you mean by “boot”. Linux containers are by definition not running their own kernel, so Linux is never booting. They typically (though not always) have their own namespace for process IDs (among other things) and in some cases process ID 1 inside the container is actually another systemd (or another init system).

          However, more often PID 1 is actually just the application being run in the container. In either case, people do sometimes refer to starting a container as “booting” it; I think this makes the most sense when PID 1 in the container is systemd as the word “boot” has more relevance in that scenario. However, even in that case, nobody (or at least almost nobody I’ve ever seen) calls containers “guests”.

          As to calling containers “hosts”, I’d say it depends on if the container is in its own network namespace. For example, if you run podman run --rm -it --network host debian:bookworm bash you will have a container that is in the same network namespace as your host system, and it will thus have the same hostname. But if you omit --network host from that command then it will be in its own network namespace, with a different IP address, behind NAT, and it will have a randomly generated hostname. I think it makes sense to refer to the latter kind of container as a separate host in some contexts.

  • Keep in mind that you’re not just learning to use linux, but also learning to use docker, and docker is a complex tool by itself, which makes your journey significantly harder.

    I never used Sabnzbd so I wouldn’t be of much help. However, you could post some of the problems you run into, so that other people may help you.

  • Once it’s installed in the terminal, how the hell do I find docker so I can start playing with it?

    Type docker in the terminal; it’s a CLI application.

    But it sounds like you might want to install Docker Desktop, which does give you a GUI to use.
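
    For example, just to confirm it's installed and responding:

    docker --version    # prints the installed version
    docker ps           # lists running containers; may need sudo until your user is in the docker group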

  • Docker is professional software and because of that isn’t always the most intuitive thing to use.

    The first big thing to get your head around is that there is no GUI. Everything you do to manage docker is through the command line. If you really want to, there’s some third party GUI software for managing Docker, but I haven’t used it in the 2 years I’ve been using Docker.

    Once you’ve installed docker, there’s a little bit of setup required to make it run smoothly. The Docker Docs page on Linux post-installation steps has detailed instructions on how to do that and how to run a test container.
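
    The test container from those docs is the usual smoke test:

    sudo docker run hello-world    # pulls a tiny image that just prints a confirmation message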

  • I’m also pretty new to Linux, but I’ve finally gotten a bit of a grasp on it. I started learning Linux to set up a home server, so I also jumped straight into Docker. You have gotten some thorough replies, but I thought I’d share my chaotic journey with it that has ended in a decent ratio of success vs confusion. Note: I have used Ubuntu from the start.

    Don’t use docker desktop. It’s garbage. Also, don’t use the Snap image.

    $ sudo apt install docker.io

    $ sudo apt install docker-compose

    Those are both cli “programs”. They aren’t apps like you have on Windows. It seems VERY intimidating to talk into the void of the terminal, but you’ll build confidence. Docker commands work like any other commands, all in the same place.

    Now install Portainer CE. The instructions are very simple to follow. You can reach Portainer through your browser at the localhost address it gives you, which you type directly into the URL bar. I think it’s http://localhost:9000.
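
    The install boils down to roughly this (double-check Portainer's current docs, since ports and image tags change over time):

    sudo docker volume create portainer_data
    sudo docker run -d --name portainer --restart=always \
      -p 9000:9000 \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v portainer_data:/data \
      portainer/portainer-ce:latest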

    Portainer will give you an easy visual way to manage Docker. You can perform many tasks through Portainer instead of using the command line. Honestly, I’m pretty sure you could do everything on Portainer and not even touch the terminal. I don’t suggest that because you will have to have at least a basic understanding of how Linux and Docker work. You will be confused, and you will feel crazy. Eventually, you’ll get more comfortable living in that psychosis.

    On to Docker Compose!! This is my preferred way to run containers. I have a designated folder in /opt that I use for my compose files. This way, I know exactly how I set up my programs. My memory is awful and I tweak things so often that I’ll completely forget how I have even gotten to this point or where ANY of my files are. It’s pretty easy to find docker compose files online that you can copy and paste, and they instantly work!

    To make it simple, after I have saved my docker-compose.yaml file in the designated folder, I right click on the empty area and choose “open in terminal”.

    $ sudo docker-compose up -d

    The -d flag runs the containers detached, in the background, so they keep running even if you exit out of the terminal. At this point, your container will also show up in Portainer!

    I think that covers the basics. My biggest tip is to keep a notepad handy to write down commands that you have to search for. Your bookmarks will fill up very quickly otherwise. Expect to get stuck sometimes. Expect to spend hours trying to troubleshoot an issue, then have it suddenly work with no idea what you actually did to fix it. Accept the win and never touch it again.

    I have done fresh installs many times. Some because I’ve played with 10 different programs that I decided against and want the leftover files gone, some because I wanted to try different mixes of distros, and once because I legitimately broke the OS.

    Keep your important stuff on an external drive to avoid any loss and don’t be afraid to mess around with it!

    Btw, I’m a huge KDE plasma fan. It’s lighter than GNOME, but very user friendly. I’ve settled on Kubuntu as my distro of choice.

    • Don’t use docker-compose anymore; it’s been obsolete for a while now and won’t be getting new features.

      It’s best to add the docker official repo and install docker and docker-compose-plugin from there.

      The -plugin version acts as a docker subcommand (docker compose) and will be updated alongside docker going forward.
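
      Once the official repo is added (the Docker docs have the copy-paste commands for that part), the install is roughly:

      sudo apt install docker-ce docker-ce-cli containerd.io docker-compose-plugin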

  • There’s not a fantastic GUI for managing docker. There are a few like dockge (my favorite) or Portainer.

    I recommend spending some time learning docker run with exposed ports and bind volumes (map local folders from your drive to folders inside the container so you can access your files, configs, content, etc., and so you don’t lose them when you delete the container and pull a newer version).

    Once you’ve done that, check out the spec page for docker-compose.yaml. This is what you’ll eventually want to use to run your apps. It’s a single file that describes all the configuration and details required for multiple docker containers to run in the same environment, e.g. postgres version 4.2 with a volume and 1 exposed port, nginx latest version with 2 volumes, 4 mapped ports, a hostname, restart unless-stopped, and running as user 1000:1000, etc.

    I’ve been using docker for home and LIGHT business applications for 8 years now and docker-compose.yaml is really all you need until you start wanting high availability and cloud orchestration.

    Some quick tips though.

    • Search some-FOSS-app-name docker-compose and read through a dozen or so templates. Check the spec page to see what most of the terms mean. It’s the best way to learn how to structure your own compose files later.
    • Use other people’s compose.yaml files as templates to start from. Expect to change a few things for your own setup.
    • NEVER use restart: always. Never. Change it to restart: unless-stopped. Nothing is more annoying than stopping an app and having it keep doom spiraling. Especially at boot.
    • Take a minute to set the docker daemon or service to run at boot (see the sketch after this list). It takes one Google search and 30 seconds, but it’ll save you when you drunkenly decide to update your host OS right before bed.
    • Use mapped folders for everything. If you map /srv/dumb-app/data:/data then anything that container saves to the /data folder is accessible to you on your host machine (with whatever user:group is running inside the container, so check that). If you use the docker volumes like EVERYONE seems to like doing, it’s a pain to ever get that data back out if you want to use it outside of docker.
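
    For the run-at-boot tip above, on a systemd distro like Mint it's roughly:

    sudo systemctl enable --now docker    # start the daemon now and have it start on every boot
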
  • Linux is a slightly different way of thinking. There are any number of ways that you can solve any problem you have. In Windows there are usually only one or two that work. This is largely a result of the hacker mentality from which linux and Unix came. “If you don’t like how it works, rewrite it your way” and “Read the F***ing Manual” were frequent refrains when I started playing with linux.

    Mint is a fine distro which is based off of Ubuntu, if I remember correctly. Most documentation that applies to Ubuntu will also apply to you.

    Not sure what exactly you installed, but I’m guessing that you did something along the lines of sudo apt-get install docker.

    If you did that without doing anything ahead of time, what you probably got was a slightly out of date version of docker only from Mint’s repositories. Follow the instructions here to uninstall whatever you installed and install docker from docker’s own repositories.

    The Docker Desktop that you may be used to from Windows is available for linux, however it is not part of the default install usually. You might look at this documentation.

    I don’t use it, as I prefer ctop combined with docker-compose.

    Towards that end, here is my docker-compose.yaml for my instance of Audiobookshelf. I have it connected to my Tailscale tailnet, but if you comment out the tailscale service stuff and uncomment the port section in the audiobookshelf service, you can run it directly. Assuming you’re not making any changes:

    Create a directory somewhere,

    mkdir ~/docker

    mkdir ~/docker/audiobookshelf

    This creates a directory in your home directory called docker and then a directory within that one called audiobookshelf. Now we want to enter that directory.

    cd ~/docker/audiobookshelf

    Then create your docker compose file

    touch docker-compose.yaml

    You can edit this file with whatever text editor you like, but I prefer micro which you may not have installed.

    micro docker-compose.yaml

    and then paste the contents into the file and change whatever settings you need to for your system. At a minimum you will need to change the volumes section so that the podcast and audiobook paths point to the correct location on your system. It follows the format /path/on/host:/path/in/container.

    Once you’ve made all the needed changes, save and exit the editor and start the instance by typing

    sudo docker compose up -d

    Now, add the service directly to your tailnet by opening a shell in the tailscale container

    sudo docker exec -it audiobookshelf-tailscale /bin/sh

    and then typing

    tailscale up

    copy the link it gives you into your browser to authenticate the instance. Assuming that neither you nor I made any typos, you should now be able to access audiobookshelf at http://books (if you chose to comment out all the tailscale stuff, you would find it at http://localhost:13378 instead).

    docker-compose.yaml

    version: "3.7"
    services:
      tailscale:
        container_name: audiobookshelf-tailscale
        hostname: books                         # This will become the tailscale device name
        image: ghcr.io/tailscale/tailscale:latest
        volumes:
          - "./tailscale_var_lib:/var/lib"        # State data will be stored in this directory
          - "/dev/net/tun:/dev/net/tun"           # Required for tailscale to work
        cap_add:                                    # Required for tailscale to work
          - net_admin
          - sys_module
        command: tailscaled
        restart: unless-stopped
      audiobookshelf:
        container_name: audiobookshelf
        image: ghcr.io/advplyr/audiobookshelf:latest
        restart: unless-stopped
    #    ports:                                                                  # Not needed due to tailscale
    #      - 13378:80                                                                                                     
        volumes:
          - '/mnt/nas/old_media_server/media/books/Audio Books:/audiobooks'       # This line has quotes because there is a space that needed to be escaped.
          - /mnt/nas/old_media_server/media/podcasts:/podcasts                               # See, no quotes needed here, better to have them though.
          - /opt/audiobookshelf/config:/config                                       # I store my docker services in the /opt directory. You may want to change this to './config' and './metadata' while you're playing around
          - /opt/audiobookshelf/metadata:/metadata
        network_mode: service:tailscale                                  # This line tells the audiobookshelf container to send all traffic to tailscale container
    

    I’ve left my docker-compose file as-is so you can see how it works in my setup.

  • Is there a Linux for people who are deeply entrenched in how Windows works?

    Zorin is this, though your choice of Mint is good too. It will not help you understand docker though.

    If you’re trying to do Audiobookshelf on a home server, CasaOS made docker super easy for me.

  • I think it will be easier to use docker compose with a premade docker compose file.

    Create a new directory, cd into it, and then nano docker-compose.yaml. For instance, here is a docker compose I found on the audiobookshelf website:

    version: "3.7"
    services:
      audiobookshelf:
        image: ghcr.io/advplyr/audiobookshelf:latest
        ports:
          - 13378:80
        volumes:
          - /path/to/audiobooks:/audiobooks    # replace these host paths with your own folders
          - /path/to/podcasts:/podcasts
          - /path/to/config:/config
          - /path/to/metadata:/metadata
    

    https://www.audiobookshelf.org/docs/#docker-compose-install

  • Docker is a developer* tool, not really something you should be using without some technical knowledge, or at least some experience in the terminal. It’s purely a terminal application, so you just type “docker” in the terminal to use it. You can also type “man docker” to view the manual (which shows arguments and commands you can use) but again, that won’t help much without some prior knowledge.

    The things you’re trying to use look like self-hosted web servers, which is a lot to set up for someone who’s new to the terminal. I won’t stop you if you want, but be warned. I’d recommend using something simpler like cozy, which you should be able to find and download in the software store.

    *Edit: it’s not only a developer tool, it’s used for deployment as well. I lumped the two together. It’s still a tool made for people with more familiarity using the terminal though.