Americans can become more cynical about the state of society when they see harmful behavior online. Three studies of the American public (n = 1,090) revealed that people consistently and substantially overestimated how many social media users contribute to harmful behavior online. On average, they believed that 43% of all Reddit users have posted severely toxic comments and that 47% of all Facebook users have shared false news online. In reality, platform-level data show that most of these forms of harmful content are produced by small but highly active groups of users (3–7%). This misperception was robust to different thresholds of harmful content classification. An experiment revealed that overestimating the proportion of social media users who post harmful content makes people feel more negative emotion, perceive the United States to be in greater moral decline, and cultivate distorted perceptions of what others want to see on social media. However, these effects can be mitigated through a targeted educational intervention that corrects this misperception. Together, our findings highlight a mechanism that helps explain how people’s perceptions of and interactions with social media may undermine social cohesion.

  •  TehPers ( @TehPers@beehaw.org ) · 21 hours ago

    these effects can be mitigated through a targeted educational intervention that corrects this misperception.

    Let me make sure I understand this. The solution to “users who post harmful content” on moderated platforms is to educate others that those users are the minority?

    Sure, education would be good here, but I’m sure someone out there can think of another solution to this problem.

  • I think we gotta shift social media to be wildly hostile to every and any company. We gotta bully companies such that it’s risky business to advertise on social media.

    Social media can never truly belong to the people as long as advertisers wanna use it for their own means. We gotta get mean and indiscriminate.

    Every corporation’s posts and pages should be filled with hatred and disdain, toxicity heretofore unseen online.

    •  hallettj ( @hallettj@leminal.space ) · edited · 16 hours ago

      I think publicly operated social media could be a good thing. If you want to remove profit motive, you need an operator that is not motivated by profit. Take a look at how effective public broadcasting is.

      Lemmy and Mastodon do much better than the big corporate apps. But currently these are small-scale operations. A state or federal government program would be better suited to scale up, and better able to resist capitalist takeover.

      • I thought about that, and while there’s probably some value in diversifying the approaches, I’d worry about the potential backlash of people thinking anyone trashing a company is a bot.

        I think the key to success is making it memetically hilarious to be incredibly toxic towards any company.

        •  Tyrq ( @Tyrq@lemmy.dbzer0.com ) · 18 hours ago

          It’s interesting: language models are typically good enough to fool most people in an offhand way. That seems to have worked well for the propaganda machine, insofar as it might encourage actual people to think an opinion is more popular than it is.

          And this all runs into the idea of forcing people to have verified online identities to limit the harm the dead internet can do to actual people. Not that I like that, but that’s the genie I see being out of the bottle with or without this route.

          Either way, you’re probably right about countering the toxicity of these corps with our own toxicity, so I guess it’s still fighting fire with fire. Like any meme event, it just needs to catch on with the right few people.

    Great link! The anti-humanist tilt that the toxicity of right-wing-owned corporate social media has caused in people is alarming. People don’t listen when you point out structural reasons for the toxicity, which are the results of active choices by the platform owners; they are too wrapped up in emotional reactions to toxic users to focus on the broader system causing it. It scares me because it is making people turn even further toward a cynical view of the future and isolate in private bubbles that are only more vulnerable to the structural forces of toxicity trying to make us feel cornered and alone.

    I don’t think it is any coincidence this is happening; it just makes me deeply sad. When people pull back from seeing positive potential in social media, they surrender the capacity to hear narratives that challenge the structures of power around them, because the only way they will then hear about the broader world is through established corporate news channels. Those channels are no less toxic than social media, even though people ignore this fact.

    To give one specific example: if people weren’t on social media, they wouldn’t know about the Palestinian Genocide. It was completely and thoroughly swept under the rug in the channels that people have retreated to from social media because “social media = bad,” and if that doesn’t scare you, you are an idiot.

    This article is super helpful as a reality check for people lost in cynical reductive narratives about humans on social media, thank you!

    • I certainly don’t doubt the top-line trends in this study. However, I wonder how the fediverse might differ. Anyone can set up a Lemmy or Mastodon instance, regardless of their technical aptitude or their desire to secure the instance against toxic content. It’s also inherently more anonymous. A more direct comparison might be 4chan, not Reddit.

      Both of the platforms they studied have more sophisticated methods to identify bad actors because of their dominance. Particularly Facebook, where a profile is supposed to be mappable to a single, real identity.

      That being said, there’s a very real concern about how algorithms end up placing these “loud mouths” in other people’s feeds. After all, outrage is still something that algorithms prefer. So those 3 to 7% of users creating the toxic content might represent an outsized proportion of views.
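      A toy back-of-the-envelope sketch of that amplification effect (the specific numbers here are assumptions for illustration, not figures from the study): even a small toxic minority, posting more often and boosted by an engagement-weighted feed, can end up with a much larger share of impressions than its share of users.

      ```python
      # Toy illustration (assumed numbers, not from the study):
      # a 5% toxic minority that posts more often, with an
      # engagement-weighted feed boosting outrage.

      users = 1000
      toxic_user_share = 0.05        # assumed: 5% of users post toxic content
      posts_per_toxic_user = 10      # assumed: highly active minority
      posts_per_normal_user = 2
      outrage_boost = 3.0            # assumed: feed weights outrage 3x

      toxic_posts = int(users * toxic_user_share) * posts_per_toxic_user
      normal_posts = int(users * (1 - toxic_user_share)) * posts_per_normal_user

      # Impressions proportional to each post's engagement weight
      toxic_impressions = toxic_posts * outrage_boost
      normal_impressions = normal_posts * 1.0
      impression_share = toxic_impressions / (toxic_impressions + normal_impressions)

      print(f"{toxic_user_share:.0%} of users produce "
            f"{toxic_posts / (toxic_posts + normal_posts):.0%} of posts")
      print(f"...and receive {impression_share:.0%} of impressions")
      ```

      With these made-up parameters, 5% of users produce about 21% of posts but collect roughly 44% of impressions, which is one way a small group could dominate how toxic a platform *feels*.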

      It’s good to know the reality on these platforms is that most people are reasonable. I guess the bigger question is why people come to the opposite conclusion. And I think that algorithms overly indexing on outrage are part of that.

      • I guess the bigger question is why people come to the opposite conclusion.

        It has to do with the rise of right-wing “populism,” which is founded on the morality story that people are inherently toxic and bad and must be violently oppressed by righteous force to create society.

        This story is being firehosed at people by the rich in a million ways, and people are largely accepting it uncritically. If you want to understand it, look at how people in the US have become convinced that society is becoming more violent, that people are becoming less trustworthy, and that crime is increasing year over year. The evidence points in the opposite direction, except for a brief spike in crime during Covid. But who cares about reality? The story of everybody becoming more depraved and scary is a good one, and it keeps us engaged; why talk about hard numbers and policy?

        •  t3rmit3 ( @t3rmit3@beehaw.org ) · edited · 4 hours ago

          Crime going down and society becoming more violent are different things. “Crime” is only a measure of illegal violence.

          I think right-wing ethno-nationalism has made society more violent. Violence by agents of the state, like ICE, also isn’t included in those stats: if we are counting honestly, we’ve had more assaults and abductions this year than in decades, thanks to ICE.