Hey all,

Moderation philosophy posts started out as an exercise of my own to put down some of the thoughts on running communities that I’d learned over the years. As they continued, I started to involve the other admins more heavily in the writing and brainstorming. This most recent post involved a lot of moderator voices as well, which is super exciting! This is a community, and we want voices at all levels to represent the community and how it’s run.

This is probably the first of several posts on moderation philosophy and how we make decisions, as an exercise in bringing additional transparency to how we operate.

  • It makes it seem like a platform free of hate speech isn’t a worthy or desirable goal.

    I don’t think that’s a fair characterization of what’s happening or of what is explained in the philosophy post. In fact, we explicitly state that sanitized spaces are both desirable and needed in the world, but that’s not what we’re trying to accomplish here. The relevant quote is about a third of the way down, and copied here for posterity:

    To be clear: a sanitized space has its place. We are not disputing the overall utility of said spaces, and it’s fine to want one. For our purposes however this is not possible or desirable - we do not wish Beehaw to be a sanitized space.

    We don’t spend a lot of time talking about the why, but that’s also explained in a footnote. Unfortunately, we are busy running this website and moderating the content, which is a LOT of work: there are often hundreds or thousands of messages a day in the moderator channels discussing which content should be left up and which should be taken down. There isn’t a ton of time to spend on posts like this one, which I made sure we prioritized so that people could have better transparency into everything happening behind the scenes.

    Moreover, other comments in this thread seem to suggest that there in fact have been harmful comments not removed, which to me is indefensible.

    To be clear, nearly all harmful material is removed. This post was about the content that falls into a gray area, which we did our best to explain. If you have specific examples of speech that you take issue with and are looking for more detail on why it remained up, feel free to reach out with those examples and we can do our best to explain how we arrived at a consensus on whether to leave it up or take it down. Most of the cases where content isn’t removed involve individuals who are learning, and some of these cases actually end with the original poster editing their post, explaining that they learned something tangible that day, and apologizing for causing issues.

    If you see offending content, it should be removed.

    This is also addressed in the post: what is offensive to one user isn’t necessarily offensive to another, and the post explains how we do our best to accommodate that. Earlier this week, an individual who had been traumatized by men was offended by the very presence of cis men on this instance. If we were to accommodate this individual, we’d have to remove all cis men from the instance, and that action would be offensive in and of itself to every cis man here. There’s no way to accommodate both sides of this issue, and that’s what the spirit of this post is about.