• That does not solve anything, since people are drawn to platforms with easily accessible content they enjoy. People will use those platforms. The problem is that the algorithms will recommend radicalizing content to you. And it’s an extremely complex problem to solve. I would have no idea where to begin, except better education - but that’s something that will take a generation to work out.

      • If the system is so fucked that it’d “take a generation to work out”, maybe the system isn’t worth saving in the first place. We’re talking social media websites here, not something like hospitals or schools that are required for a functioning society.

        • It took us a hundred thousand years to figure out that hitting children as a form of education is bad. We might not be the brightest.

          More seriously - sure, but it’s not as if getting rid of social media is easy. Sure, in China you can just ban them, but they’ve gone more the route of using them to spread state propaganda. And in most democracies, people won’t support a blanket ban - I wouldn’t.

          • Treat them like cigarettes. Systems designed to amplify “engagement” are rage-farms, and bad for your brain. Limit their marketing, de-platform the companies, and commit to public health measures that educate people on that fact.

    • Not using the platforms is a personal solution for any individual who wants to escape, not a general solution. For “don’t use the platforms” to work as a solution for the masses, so few people would have to be using the platforms that the platforms would cease to exist.

        • I’m genuinely failing to see the downside here of facebook, twitter, and the like ceasing to exist.

          Me neither. I should have been clearer. Despite all the things they do to make matters worse, the problem is not the existence of the platforms. The problem is people. I was alluding to the fact that if enough people had recognized the problems with these platforms and acted on that, those kinds of platforms would never have arisen in the first place.

          The various nasty types have always found ways to spread their messages, convert people to their cause, and convince others to do the actual dirty work.

          Throughout history, every time a technology was introduced to increase the speed and geographical distribution of a message, extremism founded on conspiracy theories, propaganda, disinformation, and misinformation has at least temporarily increased. There are really simple explanations for why that is. First, we have the problems of human cognition. Our brains are really lousy at identifying cause and effect, separating meaningful patterns from useless ones, and creating and maintaining accurate memories.

          Second, truth requires verification. Verification cannot happen without investigation and communication among investigators. This means that verification will always happen much slower than message distribution. That is why a lie can circle the globe before the truth can get out of the starting blocks.

          As bad as these platforms are, it’s important to remember that their problematic algorithms are little more than codification of the methods that propagandists have used for centuries. Rush Limbaugh brought these concepts to a peak before most people had ever heard of the internet. Usenet was filled with the same stuff we see on Facebook, and there were no algorithms or central systems, just people doing what people do.