The social media platform Bluesky recently had an incident where a user created an account with a racial slur as the handle. The Bluesky team quickly removed the account but realized they should have had automated filters in place to prevent such issues. They are now implementing a two-step automated filtering and flagging system for user handles while still involving human moderators. The team acknowledges they were too slow to communicate with the community about the incident and are working to improve their Trust and Safety team and communication processes going forward. They are committed to learning from this mistake and building a safer and more resilient social media platform over time.
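As a rough illustration of what a two-step check like that might look like, here is a minimal sketch in Python. The term lists, normalization approach, and function names are assumptions for illustration only, not Bluesky's actual implementation:

```python
# Minimal sketch of a two-step handle filter: step 1 auto-rejects handles that
# clearly contain blocked terms; step 2 flags borderline handles for a human
# moderator. The term lists and normalization here are illustrative placeholders.
import re
import unicodedata

BLOCKLIST = {"blockedslur"}       # handles containing these are rejected outright
FLAGLIST = {"ambiguousterm"}      # handles containing these go to human review

def normalize(handle: str) -> str:
    """Lowercase and strip accents/separators so obfuscated spellings collapse."""
    text = unicodedata.normalize("NFKD", handle).lower()
    return re.sub(r"[^a-z0-9]", "", text)

def check_handle(handle: str) -> str:
    """Return 'reject', 'flag_for_review', or 'allow' for a proposed handle."""
    norm = normalize(handle)
    if any(term in norm for term in BLOCKLIST):
        return "reject"            # step 1: automated block
    if any(term in norm for term in FLAGLIST):
        return "flag_for_review"   # step 2: queue for a human moderator
    return "allow"

print(check_handle("Friendly.User"))  # -> allow
```

The reason for the second, human step is that pattern matching alone produces false positives (the classic "Scunthorpe problem"), so ambiguous matches get routed to a moderator instead of being blocked automatically.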


Previous post about this topic: https://beehaw.org/post/2152596

Bluesky allowed people to include the n-word in their usernames | Engadget

Bluesky, a decentralized social network, allowed users to register usernames containing the n-word. When reports surfaced about a user with the racial slur in their name, Bluesky took 40 minutes to remove the account but did not publicly apologize. A LinkedIn post criticized Bluesky for failing to filter offensive terms from the start and for not addressing its anti-blackness problem. Bluesky later said it had invested in moderation systems, but the oversight highlighted ongoing issues, especially given that Twitter co-founder Jack Dorsey backs the startup. The fact that Bluesky allowed such an obvious racial slur shows it was unprepared to moderate a social network effectively.

  • We don’t know that they didn’t design and implement it. It happens all the time: you implement a feature, it works, there’s a regression, and you have no clue. 40 minutes to resolve means it was already there; there’s no way you’re building that from scratch, testing it, and pushing it in that timeframe.

    I could be wrong.

    • It says in the post:

      “realized they should have had automated filters in place to prevent such issues. They are now implementing a two-step automated filtering and flagging system for user handles while still involving human moderators.”

      They wouldn’t need to implement a system they had already implemented but that wasn’t working properly. They’d just be fixing it.