•  Zworf   ( @Zworf@beehaw.org ) 
    4 months ago

    It’s mostly actual people. I know some of them across different platforms (for some reason this city has become a bit of a moderation hub). Most of these companies take moderation very seriously, and where AI is involved, it’s so far only in an advisory capacity. Twitter being the exception because… well, Elon.

    But their work is strictly regulated internally by a myriad of policies (most of which are not made public, specifically to prevent bad actors from working around them). There usually isn’t much to discuss with a user, nor could it really go anywhere. Before a ban is issued, the case has already been reviewed by at least two people, and their ‘accuracy’ is constantly monitored by QA staff.

    Most are also very strict with their employees: no remote work, no phones on the work floor, strong oversight, etc. This is to make sure cases are handled personally and employees don’t share screenshots of private data.

    And most of them have a psychologist on site 24/7. It’s not much fun watching the stuff these people have to deal with on a daily basis. I don’t envy them.