•  Zworf   ( @Zworf@beehaw.org ) 
    1 year ago

    > Ideally, I think a social platform should lure radicalizing agents, then expose them to de-radicalizing ones, without exposing everyone else. Might be a hard task to achieve, but worth it.

    You really think this works? I don’t. I just see them souring the atmosphere for everyone and attracting more mainstream users to their views.

    We’ve seen in Holland how this worked out. The nazi party leader (who chanted “Fewer Moroccans”) won the elections by a landslide a month ago. There is a real danger of disenchanted mainstreamers being drawn to nazi propaganda in droves. We’re stuck with them now for 4 years (unless they manage to collapse on their own, which I do hope).

    • No, that’s why I said “Ideally”, meaning it as a goal.

      I don’t think we have the means to do it yet, or at least I don’t know of any platform working like that, but I have some ideas of how parts of it could be done. Back in the days of Digg, a few of us spitballed some ideas for social networks: among them a movie-ranking one (which turned out to be a flop because different people would categorize films differently), and a kind of PageRank for social networks, which back then was computationally impractical. But with modern LLMs running trillions of parameters, and further hardware advances, even O(n²) with n in the millions becomes feasible in near real time, and in practice it wouldn’t need to do nearly that much work. With the right tuning, and dynamic message visibility, I think something like that could create the exact echo chambers that would attract X people, let in de-X people, while keeping everyone else out and unbothered.
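      To make the “PageRank for social networks” idea concrete, here is a minimal sketch of classic PageRank via power iteration over a follow/boost graph. All names and the toy graph are hypothetical, and a real platform would rank interactions, not just follows; this only illustrates the kind of score the comment alludes to.

      ```python
      # Hypothetical sketch: PageRank-style influence scores on a small follow graph.
      # `links` maps each user to the users they follow/boost; damping 0.85 as in
      # the original PageRank formulation. Power iteration, O(edges) per pass.

      def pagerank(links, damping=0.85, iters=50):
          nodes = set(links) | {v for outs in links.values() for v in outs}
          n = len(nodes)
          rank = {u: 1.0 / n for u in nodes}
          for _ in range(iters):
              # teleport term: every node gets a small baseline share
              new = {u: (1.0 - damping) / n for u in nodes}
              for u, outs in links.items():
                  if outs:
                      share = damping * rank[u] / len(outs)
                      for v in outs:
                          new[v] += share
              # spread rank of dangling nodes (no outgoing links) evenly
              dangling = sum(rank[u] for u in nodes if not links.get(u))
              for u in nodes:
                  new[u] += damping * dangling / n
              rank = new
          return rank

      # Toy graph (hypothetical users): alice is boosted by both carol and dave,
      # so she ends up with the highest score; dave, boosted by no one, the lowest.
      graph = {"alice": ["bob"], "bob": ["carol"], "carol": ["alice"], "dave": ["alice"]}
      scores = pagerank(graph)
      ```

      The O(n²) worry in the comment applies to the dense all-pairs case; on a real follow graph the per-iteration cost is proportional to the number of edges, which is why web-scale PageRank was tractable even decades ago.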

      Of course there is a dark side, in that a platform could use the same strategy to mold the opinion of any group… and I wouldn’t be surprised to learn that Meta had been doing exactly that.