I’ve been setting up a new Proxmox server and messing around with VMs, and wanted to know what kind of useful commands I’m missing out on. Bonus points for a little explainer.

journalctl | grep -C 10 'foo' was useful for me when I needed to troubleshoot some fstab mount fuckery on boot. It pipes journalctl's output (the systemd journal, which includes boot logs) into grep to find 'foo', and prints 10 lines of context before and after each match.

  •  InFerNo   ( @InFerNo@lemmy.ml ) · 30 days ago

    I use $_ a lot; it lets you reuse the last argument of the previous command in your current command

    mkdir something && cd $_

    nano file
    chmod +x $_

    As a simple example.

    If you want to create nested folders, you can do it in one go by adding -p to mkdir

    mkdir -p bunch/of/nested/folders

    Good explanation here:
    https://koenwoortman.com/bash-mkdir-multiple-subdirectories/

    Sometimes starting a service takes a while and you’re sitting there waiting for the terminal to be available again. Just add --no-block to systemctl and it will do it on the background without keeping the terminal occupied.

    systemctl start --no-block myservice

  • The watch command is very useful. For those who don’t know, it re-runs whatever command you place after it in a loop, redrawing the output every two seconds by default.

    It allows you to actively monitor systems without having to manually re-run your command.

    So for instance, if you wanted to see all storage block devices and monitor what a new storage device shows up as when you plug it in, you could do:

    watch lsblk
    

    And watch the drive appear as soon as it’s detected. Technically not “real time”, since the default refresh is 2 seconds, but you can specify shorter or longer intervals.

    Obviously my example is kind of silly, but you can combine this with other commands or even whole bash scripts to do some cool stuff.
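    For instance, the interval and change-highlighting flags (the commands being watched here are just illustrations):

    ```shell
    # -n sets the refresh interval in seconds, -d highlights what changed between refreshes
    watch -d -n 0.5 'df -h /'

    # Quoting lets you watch a whole pipeline, not just a single command
    watch -n 5 'ps aux | sort -rk 3 | head -n 5'
    ```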

  •  HiddenLayer555   ( @HiddenLayer555@lemmy.ml ) · edited · 30 days ago

    parallel, easy multithreading right in the command line. I wish every programming language’s standard library included this: a dead-simple parallelization function that takes a collection, an operation to perform on each member, and optionally the max number of threads (should be the number of hardware threads available on the system by default), and just does it without needing to manually set up threads and handlers.
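    A minimal sketch of GNU parallel acting like that function (the .log filenames are hypothetical):

    ```shell
    # One gzip job per CPU core by default; ::: feeds the "collection"
    parallel gzip ::: *.log

    # {} is the per-item placeholder, like the loop variable
    parallel 'echo compressing {}' ::: a.log b.log c.log
    ```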

    inotifywait, for seeing what files are being accessed/modified.

    tail -F, for a live feed of a log file.

    script, for recording a terminal session complete with control and formatting characters and your inputs. You can then cat the generated file to get the exact output back in your terminal.

    screen, starts a terminal session that keeps running after you close the window or drop the SSH connection. Reattach to a detached session with screen -r, or use screen -x to share a session that’s still attached elsewhere.
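    Quick sketches of these (paths and session names are hypothetical):

    ```shell
    # Print events as files under /var/log are touched (needs inotify-tools)
    inotifywait -m -e create,modify,delete /var/log

    # Follow a log across rotation: -F reopens the file if it gets replaced
    tail -F /var/log/syslog

    # Record a session to typescript.log; `exit` stops recording
    script typescript.log

    # Named screen session; detach with Ctrl-a d, reattach by name
    screen -S maintenance
    screen -r maintenance
    ```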

    Finally, a more complex command I often find myself repeatedly hitting the up arrow to get:

    find . -type f -name '*' -print0 | parallel --null 'echo {}'

    Recursively lists every file in the current directory and uses parallel to perform some operation on them. The {} in the parallel string will be replaced with the path to a given file. The '*' part can be replaced with a more specific filter for the file name, like '*.txt'.

    • should be the number of hardware threads available on the system by default

      No, not at all. That is a terrible default. I do a lot of number-churning work and sometimes I have to test stuff on my own machine. Generally I tend to use a safe number such as 10, or if I need to do something very heavy I’ll go to one less than the actual number of cores on the machine. I’ve been burned too many times by starting a calculation and having my machine stall because that code is eating all the CPU, and all you can do is switch it off.
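      With GNU parallel specifically, the job cap is -j, which also accepts an offset from the core count (a sketch; the .log filenames are hypothetical):

      ```shell
      # At most 10 jobs at once
      parallel -j 10 gzip ::: *.log

      # Number of cores minus one, leaving a core free so the machine stays responsive
      parallel -j -1 gzip ::: *.log
      ```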

  •  mel ♀   ( @mat@jlai.lu ) · 1 month ago

    I’d say that journalctl is not only for boot: every service that runs on the computer has its logs collected through it, so you can search them with journalctl --grep="your regex". You can also add -k to check kernel logs, -b -n to check the nth previous boot, or -b n to check the absolute nth boot. There is a lot that you can check with it and it is quite nice :)
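    A few of those in practice (the unit name and pattern are hypothetical):

    ```shell
    journalctl -u nginx.service --grep='failed|error'   # one service's logs, filtered by regex
    journalctl -k                                       # kernel messages only
    journalctl -b -1                                    # the previous boot
    ```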

    Otherwise, I like eza as an ls replacement, and bat (installed as batcat on Debian/Ubuntu) as a human-friendly cat

  •  Ŝan • 𐑖ƨɤ   ( @Sxan@piefed.zip ) · 1 month ago

    ripgrep has mostly replaced grep for me, and I am extremely conservative about replacing core POSIX utilities - muscle memory is critical. I also tend to use fd, mainly because of its -x (run a command on each result), but its advantages over find are less stark than rg’s improvements over grep.

    nnn is really handy; I use it for everything but the most trivial renames, copies, and moves - anything involving more than one file. It’s especially handy when moving files between servers because of the built-in remote mounting.

      •  Ŝan • 𐑖ƨɤ   ( @Sxan@piefed.zip ) · 30 days ago

        No. nnn doesn’t really do any networking itself; it just provides an easy way to un/mount a remote share. nnn is just a TUI file manager.

        For transferring 5TB of media, I’d acquire a 5TB USB 3.2 drive, copy the data onto it, walk or drive it over to the other server, plug it in there, and copy it over. If I had to use the network to transfer 5TB, I’d probably resort to something like rsync, so that when something interrupts the transfer, you can resume with minimum fuss.
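        A resumable-transfer sketch with rsync (host and paths are hypothetical):

        ```shell
        # -a preserves metadata, -P shows progress and keeps partial files,
        # so re-running the same command after an interruption picks up where it left off
        rsync -aP /srv/media/ user@otherserver:/srv/media/
        ```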

          • I’m sure it’ll be fine. I’m no expert, but I use rsync to sync/clone my music storage with my music player. There are thousands of songs, lyrics files, and album art getting synced and backed up regularly in my case.

            Worst thing that happened to me happened when I was new to the tool and accidentally overwrote my source directory (luckily I had backups)

  • find /path/to/starting/dir -type f -regextype egrep -regex 'some[[:space:]]*regex[[:space:]]*(goes|here)' -exec mv {} /path/to/new/directory/ \;
    

    I routinely have to find a bunch of files that match a particular pattern and then do something with those files, and as a result, find with -exec is one of my top commands.

    If you’re someone who doesn’t know wtf that above command does, here’s a breakdown piece by piece:

    • find - cli tool to find files based on lots of different parameters
    • /path/to/starting/dir - the directory at which find will start looking for files recursively moving down the file tree
    • -type f - specifies I only want find to find files.
    • -regextype egrep - In this example I’m using regex to pattern match filenames, and this tells find what flavor of regex to use
    • -regex 'regex.here' - The regex to be used to pattern match against the filenames
    • -exec - a find action (not a shell feature): find itself runs the following command once for each matched file, substituting the file’s path into it.
    • mv {} /path/to/new/directory/ - mv is just an example, you can use almost any command here. The important bit is {}, which is the placeholder for the parameter coming from find, in this case, a full file path. So this would read when expanded, mv /full/path/of/file/that/matches/the/regex.file /path/to/new/directory/
    • \; - This terminates the command. The semi-colon is the actual termination, but it must be escaped so that the current shell doesn’t see it and try to use it as a command separator.
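    A related trick: terminating -exec with + instead of \; batches many matched files into a single invocation, which is much faster over thousands of files. With mv that requires GNU’s -t flag to name the destination first (paths hypothetical):

    ```shell
    # One mv call per batch of matches rather than one per file
    find /path/to/starting/dir -type f -name '*.txt' -exec mv -t /path/to/new/directory/ {} +
    ```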
  • nc is useful. For example: if you have a disk image downloaded on computer A but want to write it to an SD card on computer B, you can run something like

    user@B: nc -l 1234 | pv > /dev/$sdcard

    And

    user@A: nc B.local 1234 < /path/to/image.img

    (I may have syntax messed up–also don’t transfer sensitive information this way!)

    Similarly, no need to store a compressed file if you’re going to uncompress it as soon as you download it—just pipe wget or curl to tar or xz or whatever.
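    For example (the URL is hypothetical):

    ```shell
    # Stream the archive straight into tar; the compressed file never touches disk
    curl -L https://example.com/archive.tar.gz | tar -xz

    # The same with wget (-qO- writes the download to stdout)
    wget -qO- https://example.com/archive.tar.gz | tar -xz
    ```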

    I once burnt a CD of a Linux ISO by piping wget directly to cdrecord. It was actually kinda useful because it was on a laptop that was running out of HD space. Luckily the university Internet was fast and the CD was successfully burnt :)