Hey all,

Moderation philosophy posts started out as a personal exercise to put down some of the thoughts on running communities that I’d learned over the years. As they continued, I started to involve the other admins more heavily in the writing and brainstorming. This most recent post involved a lot of moderator voices as well, which is super exciting! This is a community, and we want voices at all levels to represent the community and how it’s run.

This is probably the first of several posts on moderation philosophy and how we make decisions, and an exercise in bringing additional transparency to how we operate.

  • Most misinformation is poorly veiled hate speech, and as such it should be removed. Downvotes don’t change how visible it is or how far it spreads. You deal with misinformation by removing it and banning repeat offenders/spreaders.

    • I would argue that only a subset of misinformation is veiled hate speech. The rest, and the majority, comes from misinformed individuals repeating/regurgitating misinformation they’ve inherited.

      There is definitely some hate speech veiled as misinformation; I’m not arguing against that. My argument is that it’s not the majority. There are severity scales of misinformation, with hate speech near the top and mundane, everyday, transient factual incorrectness in conversation near the bottom.

      Between those two extremes lies a range of unacceptable misinformation that should be considered.

      A consequence of not considering or recognizing it is a lack of respect for the problem, which leaves the problem to exist unopposed.

      I don’t have a solution here, since this is a broad and sticky problem, and moderating misinformation is an incredibly difficult thing to do. Identifying and categorizing the levels you care about, along with the potential methods to mitigate them (whether you can actually employ those is another problem), should, in my opinion, be on the radar.

      • If you’re volunteering to take it on, feel free to put together a plan. Until then, you’ll have to trust that we’re trying to moderate within the scope of the tools we have and the size of our platform, but we’re still human and don’t catch everything. Please report any misinformation you see.

        • Maybe my edit was too late! I didn’t communicate my objective clearly, and edited my comment to reflect that.


          I’m not proposing that you solve misinformation, but rather that you recognize it as more than what you stated, and respect the problem. That’s the first step.

          This is not something I can do; it is something only the admins can do, in concert, as a first step. I am doing my part by trying to convince you of that.

          Only after that has been achieved can solutions be theorized/probed. That’s something I would happily be part of and do footwork towards (though I’m sure there are experts in the community; it’s a matter of surfacing them). It’s a long-term project that takes a considerable amount of research and time; doing it without first gaining traction on the problem space would be a fool’s errand.

          At the risk of sounding abrasive (I intend no disrespect, I’m just not sure how else to ask this at the moment), is that understood/clear?


          Edit: I want to note that I am actually impressed by the level of engagement the community founders have had. It’s appreciated.

          • Yes, it’s one of many problems with modern social media; no, I don’t have time right now to elaborate a plan for tackling it. Something on this subject will likely come much later, but right now I’m focused mostly on creating the docs necessary for people to understand our ethos, when I’m not busy living my life.

      • An excellent example of very sneaky misinformation was an article in the Guardian the other day that kept talking about 700,000 immigrants. Since 350,000 of those are foreign students, that is a blatant lie. Foreign students aren’t immigrants.