Recently learned about this stuff on a Lemmy World post and I thought I’d move the conversation here since they’ve been fussy about DB0 in the past.

I’m really just a common seaman when it comes to the high seas. I just use Proton and qBit and whichever website is supposed to be safe and active nowadays (currently Torrent Galaxy?). I just grab the magnet link into qBit and save the download to my drive. I don’t know much about torrent streaming or ports or networks or anything IT might ask me to check beyond “plug it in”.

But for some shows I’ve only been able to find single episodes, not full seasons, so when I heard about something that compiles stuff for me, it seemed convenient. I’d be curious to learn more. Unfortunately the websites for these services don’t really offer any explanation to new users and laymen, so I got a bit lost. Thought I’d ask here rather than venture into their forums where they already don’t seem to welcome idiots like me.

So… what the heck is Sonarr and how do I use it?

  • So there are multiple technologies at play. One is an indexer program (Jackett/Prowlarr/etc.). These basically hook up to public trackers (1337x, TPB, etc.).

    Then you have Sonarr/Radarr, which are connected to the indexer. Sonarr and Radarr basically work from RSS feeds (an RSS feed is basically a list of content; podcast and YouTube apps use them to show you new episodes/videos).

    I think they use TMDB or something as their source of RSS feeds. They also let you select which shows to monitor, and they store that information in a database. So Sonarr will reach out every so often and request the latest RSS feed for each show in its database. If an episode that Sonarr is supposed to download is listed on the RSS feed, it sends a request to its indexer telling it what show, what season, what episode, etc.

    The indexer then searches each tracker it is connected to for that show, season, episode combo and returns a list of links to sonarr/radarr.
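    The indexer lookup described above typically happens over the Torznab convention that Jackett and Prowlarr expose. As a rough sketch (not Sonarr's actual code; the base URL, port, and API key below are placeholders), the search request Sonarr sends looks roughly like this:

```python
from urllib.parse import urlencode

def build_tvsearch_url(indexer_base, api_key, query, season, episode):
    # "t", "apikey", "q", "season", and "ep" are standard Torznab
    # query parameters; everything passed in is an example value.
    params = {
        "t": "tvsearch",   # Torznab search type for TV episodes
        "apikey": api_key,
        "q": query,        # show title
        "season": season,
        "ep": episode,
    }
    return f"{indexer_base}/api?{urlencode(params)}"

# Hypothetical Jackett "all indexers" endpoint (9117 is Jackett's default port):
url = build_tvsearch_url(
    "http://localhost:9117/api/v2.0/indexers/all/results/torznab",
    "MY_API_KEY", "Some Show", 2, 5)
print(url)
```

    The indexer answers with an XML feed of matching releases, which is the list of links the next paragraph talks about.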

    Sonarr then has a set of rules in its database to filter these links (i.e. minimum quality, language, etc.) and determine which one to pick. Finally, Sonarr/Radarr has a setting for the location where the files should be saved.

    Now, Sonarr/Radarr can't download anything themselves; instead they are also hooked up to a torrent client, for example qBittorrent, which has an API that allows you to programmatically download torrents (i.e. it has a command to download a torrent, and Sonarr/Radarr sends that command along with additional information like the link and where to save the files).
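    To make that concrete, here is a minimal sketch of the "add torrent" call against qBittorrent's Web API. The endpoint and field names (`/api/v2/torrents/add`, `urls`, `savepath`) are real qBittorrent Web API names; the URLs, paths, and category value are made-up examples:

```python
def build_qbittorrent_add(base_url, magnet_link, save_path):
    # In a real session Sonarr would first authenticate via
    # POST /api/v2/auth/login and reuse the session cookie.
    endpoint = f"{base_url}/api/v2/torrents/add"
    form_data = {
        "urls": magnet_link,    # magnet URI (or .torrent URL)
        "savepath": save_path,  # where qBittorrent should put the files
        "category": "sonarr",   # example tag so Sonarr can track its downloads
    }
    return endpoint, form_data

endpoint, data = build_qbittorrent_add(
    "http://localhost:8080",
    "magnet:?xt=urn:btih:EXAMPLEHASH",
    "/media/tv")
print(endpoint)
```

    qBittorrent receives this as an ordinary HTTP form POST, which is what "has an API" means in practice here.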

    This is the basic setup, but there are other tools sometimes used, like Unpackarr, which decompresses files that get downloaded. Unpackarr watches a folder for new files, and if it finds a file in a compressed format (7z, rar, zip, etc.) it will automatically decompress it so that a media program like Jellyfin can play it without you having to do it manually.

    Programs like Jellyfin are media servers where you specify folders for movies/TV shows/etc., and any playable file in those folders can be streamed in their app/web interface. These kinds of programs are really just easy-to-set-up graphical front ends built on top of more technical programs like ffmpeg, which does the transcoding and streaming.

    Then there are also programs like FlareSolverr. You would integrate this into your indexer because some trackers use Cloudflare to block bots (it requires you to click a checkbox and watches the movement of the cursor to see if it is robotic). FlareSolverr uses something called Selenium WebDriver, a program that can automate a web browser: you can program it to open web pages, click things, etc. I assume the code uses randomization to make Cloudflare think a person is moving the mouse to click the button, so you can access those trackers.
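    From the indexer's side, talking to FlareSolverr is just another HTTP call. A minimal sketch of the JSON body you'd POST to FlareSolverr's v1 endpoint (`cmd`, `url`, and `maxTimeout` are its documented field names; the target URL is a placeholder):

```python
import json

def build_flaresolverr_request(target_url, max_timeout_ms=60000):
    # This body would be POSTed to http://localhost:8191/v1
    # (8191 is FlareSolverr's default port); FlareSolverr then drives
    # a real browser to fetch the page and returns the solved HTML.
    return json.dumps({
        "cmd": "request.get",
        "url": target_url,
        "maxTimeout": max_timeout_ms,
    })

payload = build_flaresolverr_request("https://example-tracker.org/search")
print(payload)
```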

    In simple terms, that’s how it works. All these programs set up a web interface and API and send each other HTTP requests to communicate.

  • Hey @TimewornTraveler@lemm.ee, I was in your shoes a month ago. I am proud to announce I just completed my (first) setup of all *ARR tools on Synology Container Manager (Docker). I documented my progress. If you need help with setup, please don’t hesitate to reach out.

    EDIT: If the community would help me sanitise my files and get a GitHub repo going, I would happily help build an all-in-one download-and-run install package. Please let me know.

  •  PenguinCoder   ( @Penguincoder@beehaw.org ) 

    The *arrs don’t actually find content for you; they give you an interface to track and view content. You would still need some type of finding service to plug into Sonarr, be it Usenet or torrents. Whatever providers you use for finding content would stay the same if you just add Sonarr etc. Maybe this will help explain things.

  •  Naate   ( @Naate@beehaw.org ) 

    Sonarr (and the other 'arrs) is just a management tool. From the servarr wiki:

    Sonarr is a PVR for Usenet and BitTorrent users. It can monitor multiple RSS feeds for new episodes of your favorite shows and will grab, sort and rename them. It can also be configured to automatically upgrade the quality of files already downloaded when a better quality format becomes available.

    At a high level, you tell it where your current tv show episodes are saved, and add new shows as you want. It then automates the process of searching and downloading. But you still need to have an indexer and download client. If you’re not able to find shows searching your current tracker/indexer, Sonarr won’t have any better luck.

    Finding a good source of the media you want is the most important part. If you’re not comfortable with installing and managing your own server applications, the *arr stack could be overwhelming at first. The wiki I linked has a lot of good information to get you started.

  •  Freeman   ( @freeman@lemmy.pub ) 

    The ARR tools are basically a search engine website you host. They interact with a few other tools you have to have access to or pay for: namely an indexing service and (for some) a download service. They can use torrents, so you don’t HAVE to pay for downloading, but using something like newsgroups is really nice and adds reliability and security.

    The “ARRs” are basically just a fancy UI and scheduler: they search the indexing service, download the files you want, re-assemble them, and copy them to the location you want (often a file share that your media player, like Plex or Jellyfin, will use).

    You can set them to continually look for something too. So for Sonarr, it will auto-download new episodes as soon as they appear in the index. Or if you see a commercial for something upcoming, you can add it and monitor it and as soon as it starts showing up in the indexes it will download.

  •  maxprime   ( @maxprime@lemmy.ml ) 

    The *arr services are fantastic for helping you organize your content as you download them. They won’t find anything that you wouldn’t be able to find yourself, though. You feed it your indexers via Jackett or Prowlarr for torrents, or NewzNab (or equivalent) for Usenet. What shows up on my Sonarr searches is likely very different than somebody else’s since we likely have different indexers.

    If you are looking for content that is hard to find I would recommend getting into Usenet or joining a private tracker.

    If you want to get into building a large library and hosting it on a Plex/Jellyfin/Emby server, I would recommend getting into Sonarr and Radarr first, and then learning about the other *arrs.