Here’s a brief summary, although you miss something if you don’t read the study (trigger warning: stats):

  • The researchers propose a novel incentive structure that significantly reduced the spread of misinformation, and they provide insights into the cognitive mechanisms that make it work. This structure can be adopted by social media platforms at no cost.

  • The key was to offer reaction buttons that participants were likely to use in a way that discerned between true and false information. Users who found themselves in such an environment shared more true than false posts.

  • In particular, the new reactions were ‘trust’ and ‘distrust’ buttons, which, in contrast to ‘likes’ and ‘dislikes’, are by definition associated with veracity. For example, the study authors say, a person may dislike a post about Joe Biden winning the US presidential election; however, this does not necessarily mean that they think it is untrue.

  • Study participants used ‘distrust’ and ‘trust’ reaction buttons in a more discerning manner than ‘dislike’ and ‘like’ reaction buttons. This created an environment in which the number of social rewards and punishments in the form of clicks was strongly associated with the veracity of the information shared.

  • The findings also held across a wide range of different topics (e.g., politics, health, science, etc.) and a diverse sample of participants, suggesting that the intervention is not limited to a set group of topics or users, but instead relies more broadly on the underlying mechanism of associating veracity and social rewards.

  • The researchers conclude that the new structure reduces the spread of misinformation and may help in correcting false beliefs. It does so without drastically diverging from the existing incentive structure of social media networks, because it still relies on user engagement. Thus, this intervention may be a powerful addition to existing interventions, such as educating users on how to detect misinformation.

  • I feel like the downvote button in particular should / could be multidimensional. People downvote content for multiple reasons: “this is incorrect”, “this is really dumb”, “this is off-topic”, “the poster is a jerk”, and so on.

    IMO this would combo really well with the experimental study in the OP.

      • My idea is partially inspired by the Slashdot system, but I suggest doing it for downvotes instead of upvotes, for two reasons:

        1. Bad content usually has a single blatant flaw, but good content often has multiple qualities.
        2. People take negative feedback more seriously than positive feedback.

        As consequences of both points:

        • It’s easier for the user to choose the type of downvote than the type of upvote.
        • If you’re including multiple categories of an action, you’ll likely do it through multiple clicks. If downvotes require two clicks while upvotes require only one, you’re mildly encouraging users to upvote often and downvote less often (as it takes more effort).
        • Negative feedback needs to be a bit more informative.
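
        To make the idea concrete, here’s a minimal sketch of what a categorized-downvote data model could look like. Everything here (the category names, the `Post` class, the two-click-downvote comment) is hypothetical, just illustrating the proposal, not anything from the study:

        ```python
        from collections import Counter
        from enum import Enum


        class DownvoteReason(Enum):
            """Hypothetical downvote categories, mirroring the examples above."""
            INCORRECT = "this is incorrect"
            LOW_QUALITY = "this is really dumb"
            OFF_TOPIC = "this is off-topic"
            HOSTILE = "the poster is a jerk"


        class Post:
            def __init__(self):
                self.upvotes = 0            # one-click positive feedback
                self.downvotes = Counter()  # categorized negative feedback

            def upvote(self):
                # Upvoting stays a single action (one click).
                self.upvotes += 1

            def downvote(self, reason: DownvoteReason):
                # Downvoting requires picking a category (a second click),
                # making it slightly more effortful and more informative.
                self.downvotes[reason] += 1

            def total_downvotes(self):
                return sum(self.downvotes.values())
        ```

        The per-category tally is the payoff: moderators (or a ranking algorithm) could treat “this is incorrect” very differently from “this is off-topic”, while the plain upvote count stays as cheap and frictionless as before.
        
        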