All the recent dark net arrests seem to be pretty vague on how the big bad was caught (except the IM admin’s silly OPSEC errors). The article says he clicked on a honeypot link, but how was his IP or any other identifier obtained, and why didn’t Tor protect him?

Obviously the guy in question was a pedophile and an active danger, but a state in my country recently passed a law that can get you arrested for posting anything the government doesn’t like, so these tools are important and need to be bulletproof.

  • He most likely had bad OPSEC.

    Secondly, he took this imagery he had created and then “turned to AI chatbots to ensure these minor victims would be depicted as if they had engaged in the type of sexual contact he wanted to see.” In other words, he created fake AI CSAM—but using imagery of real kids.

    This probably didn’t help much either.

  • There are many ways your real IP can leak, even if you are using Tor. If I control the DNS infrastructure for a domain, I can create an arbitrary name in that domain, say artemis.phishinsite.org. Nobody in the world knows that name exists; the DNS service has never seen a query asking for its IP. Now I send you any link that includes that name. You click the link and your OS resolves the name through its network stack. If your network stack is not configured to handle DNS anonymously, that query leaks your real IP, or at least that of your DNS resolver, which might be your ISP.

    Going further, don’t publish an A record for that name at all. Serve only a AAAA record to force the client down an IPv6 path, revealing a potentially local address. (A sketch of the unique-hostname trick is at the end of this comment.)

    Just some thoughts. Not sure any of this was applicable to the case.

    There are many ways to set up something that could lead to information leakage and people are rarely prepared for it.
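
    A minimal sketch of that unique-hostname canary, assuming you control the authoritative DNS for a domain you own (phishinsite.org is just the example name from this comment; the helper below is hypothetical):

      import secrets

      def make_canary_link(recipient: str) -> str:
          # One never-before-seen hostname per recipient. The authoritative
          # server has never answered a query for it, so the first DNS query
          # that arrives can only have been triggered by that recipient.
          token = secrets.token_hex(8)
          return f"https://{recipient}-{token}.phishinsite.org/"

      # On the authoritative name server, watch the query log: the source
      # address of that first query is the victim's resolver -- their own
      # machine, their ISP, or their VPN provider. Publishing only a AAAA
      # record, as suggested above, additionally pushes the client onto an
      # IPv6 path that may bypass whatever is tunneling IPv4.
      print(make_canary_link("artemis"))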

        • Honestly, I believe there is no point in speculating about whether backdoors are installed in popular privacy and encryption apps; for all we know, the powers that be may already have a Digital Fortress-esque quantum computer decrypting everything from your Signal messages to onion sites in a matter of seconds.

          My personal headcanon is that there was probably a Manhattan Project-like top-secret research effort that has yielded some very fruitful results; now I guess we just have to wait for some whistleblower or a disgruntled employee to feed it a file that blows it up.

  • Compromised? Maybe, but this guy doesn’t provide any evidence one way or the other. He was using at least 7 other possible vectors (apparently Calculator Photo Vault just hides the gallery, no encryption, so it’s over right there), which is way too many for good OPSEC.

    With Tor, the question has always been compromised exit nodes, as I understand it.

  • I went one step further than OP and actually read the article.

    Web-based generative AI tools/chatbots

    he created fake AI CSAM—but using imagery of real kids.

    All the privacy apps in the world won’t save you if you’re uploading pics to a subscription cloud service.

  • All the crypto in the world won’t help if you do stupid stuff and have crap OPSEC.

    A big part of that is staying under the radar. If I were the NSA I’d be running a great many Tor nodes (both relays and exit nodes) in the hope of generating some correlations. Remember, you don’t need proof in order to raise suspicion.

    So, for example, if you run exit nodes, you can see that a request is CSAM-related. If you also run a bunch of intermediate nodes, and your exit nodes prefer routing traffic through your intermediate nodes (which in turn prefer your other intermediate nodes), then wherever the traffic goes after one or two relay hops through your nodes is a good guess at who requested it. (A toy version of this correlation math is at the end of this comment.)
    If you find a specific IP address frequently relaying CSAM traffic to the public Internet, that doesn’t actually prove anything, but it does give you a suspicion: ‘maybe the guy who owns that address likes kiddy porn, we should look into him’.

    Doing CSAM with AI tools on the public Internet is pretty stupid. Storing his stash on cell phones was even more stupid. Sharing any of it with anyone was monumentally stupid. All the hard crypto in the world won’t protect you if you do stupid stuff.


    So, speaking to OP: first, I’d encourage you to consider moving to a country that has better free-speech protections, or to advocate for change in your own country. It’s not always easy, though, because sadly it’s the unpopular speech that needs protecting; if you don’t protect the unpopular stuff you start down a very slippery slope. We figured that out in the USA, but we seem to be forgetting it lately (always in the name of ‘protecting kids’, of course).

    That said, OP, you should decide what exactly you want to accomplish. Chances are your nation’s shitty law is aimed at public-participation websites / social media. If it’s important for you to participate in those sites, you need to pull a sort of Ender’s Game strategy (from the beginning of the book): create an online-only persona, totally separate from your public identity. Only use it from devices you know are secure (and that are protected with a lot of crypto). Only connect via Tor or similar privacy techniques (although for merely unpopular political speech, a VPN exiting in a different country should suffice). NEVER use or allude to your real identity from the online persona. Give the persona details that differ from your own: what city you’re in, what your age and gender are, what your background is, etc. NEVER use any of your real contact info or identity info.
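
    A toy calculation of the usual traffic-correlation framing (made-up numbers, not a claim about what happened in this case): if an adversary controls some fraction of guard and exit bandwidth, it sees both ends of a given circuit with roughly the product of those fractions, and repeated circuits add up:

      # Toy model: chance an adversary running relays eventually observes
      # both ends of at least one of a user's circuits. Fractions invented.
      guard_fraction = 0.05   # assumed share of guard/entry bandwidth
      exit_fraction = 0.05    # assumed share of exit bandwidth

      per_circuit = guard_fraction * exit_fraction  # both ends on one circuit

      # Real Tor pins a long-lived guard, which changes this math, but the
      # qualitative point stands: a persistent adversary with a slice of the
      # network keeps getting correlation opportunities over time.
      for circuits in (1, 100, 1000, 10000):
          p_hit = 1 - (1 - per_circuit) ** circuits
          print(f"{circuits:>6} circuits: observed at least once ~ {p_hit:.3f}")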

    • Feasibility aside, the shitty laws in question attack content-hosting platforms first (safe-harbor laws). So no matter how many VPNs I hop through, the site would simply limit the visibility of my post in the region and go about its day.

      • Yes exactly. This is a big part of why some repressive countries are starting to require identity registration in order to participate in social media. Arresting people is unnecessary if you can simply stamp out non-preferred speech at the point of discussion.

  • Let’s see here…

    Potato Chat - This is the first I’ve heard of it so I can’t speak to it one way or another. A cursory glance suggests that it’s had no security reviews.

    Enigma - Same. The privacy policy talks about cloud storage, so there’s that. The following is also in their privacy policy:

    A super group can hold up to 100,000 people, and it is not technically suitable for end-to-end encryption. You will get this prompt when you set up a group chat. Our global communication with the server is based on TLS encryption, which prevents your chat data from being eavesdropped or tampered with by others… The server will index the chat data of the super large group so that you can use the complete message search function when the local message is incomplete, and it is only valid for chat participants… we will record the ID, mobile phone number, IP location information, login time and other information of the users we have processed.

    So, plaintext abounds. Definite OPSEC problem.

    nandbox - No idea, but the service offers a webapp client as a first class citizen to users. This makes me wonder about their security profile.

    Telegram - Lol. And I really wish they hadn’t mentioned that hidden API.

    Tor - No reason to re-litigate the argument that happens once a year, every year, since the very beginning. Suffice it to say that it has a threat model that defines what it can and cannot defend against, and that attacks which deanonymize users are well known, documented, and used by law enforcement.

    mega.nz - I don’t use it, I haven’t looked into it, so I’m not going to run my mouth (fingers? keyboard?) about it.

    Web-based generative AI tools/chatbots - Depending on which ones, there might be checks and traps for stuff like this that could have flagged him (a sketch of how such a check might look is at the end of this comment).

    This bit is doing a lot of heavy lifting in the article: “…created his own public Telegram group to store his CSAM.”

    Stop and think about that for a second.
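
    Purely as an illustration of the kind of upload check a hosted AI/cloud service might run (real deployments use perceptual-hash lists such as PhotoDNA matched against clearinghouse databases, not a plain SHA-256 set; every name below is made up):

      import hashlib

      # Hypothetical set of digests of known abuse imagery, standing in for
      # the hash lists that clearinghouses distribute to providers.
      KNOWN_BAD_HASHES = {
          "0" * 64,  # placeholder entry, not a real hash
      }

      def should_flag(upload: bytes) -> bool:
          # Real systems use perceptual hashes that survive resizing and
          # re-encoding; an exact digest is used here only to show the flow.
          return hashlib.sha256(upload).hexdigest() in KNOWN_BAD_HASHES

      # A provider that gets a hit can file a report that includes the
      # uploader's account, payment, and IP details -- no Tor break needed.
      print(should_flag(b"example upload bytes"))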