Researchers want the public to test themselves: https://yourmist.streamlit.app/. Marking each of 20 headlines as true or false gives the user a set of scores and a “resilience” ranking that compares them to the wider U.S. population. It takes less than two minutes to complete.
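For a rough sense of what such a ranking involves, here is a minimal sketch of a score-to-percentile comparison. The app’s actual scoring and reference sample are not described here, so the function name, method, and numbers below are purely illustrative:

```python
# Hypothetical sketch of a "resilience" percentile; the app's real method
# and U.S. reference distribution are assumptions, not taken from the site.
from bisect import bisect_left

def resilience_percentile(score, population_scores):
    """Share of the reference population scoring strictly below `score`."""
    ranked = sorted(population_scores)
    return 100.0 * bisect_left(ranked, score) / len(ranked)

population = [8, 11, 12, 13, 14, 15, 15, 16, 17, 19]  # made-up reference scores
print(f"{resilience_percentile(16, population):.0f}th percentile")  # -> 70th percentile
```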

The paper

Edit: the article might be misrepresenting the study and its findings, so it’s worth checking the paper itself. (See @realChem’s comment in the thread.)

  • I assume the idea is to include some pointless headlines (such as this) in order to provide some sort of baseline. The researcher probably extracts several dimensions from the variables, and I assume this headline would feed into a “general scepticism” variable that measures the likelihood that the respondent will lean towards things being fake rather than real.

    Still, I’m not at all convinced about this research design.
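
    To make that concrete, a “general scepticism” variable could be as simple as the share of items a respondent marks as fake, truth aside. This is my guess at the construct, not the paper’s definition:

    ```python
    # Assumed illustration of a "general scepticism" variable: how often a
    # respondent leans "fake" across all items, regardless of ground truth.
    def scepticism_index(answers):
        """Fraction of items marked fake; answers are "real" or "fake"."""
        return sum(1 for ans in answers if ans == "fake") / len(answers)

    print(f"{scepticism_index(['fake', 'fake', 'real', 'fake', 'fake']):.0%}")  # 80%
    ```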

    • I suspect that where you place yourself on the extremely-liberal-to-extremely-conservative spectrum correlates with which fake news headlines you fall for. What sounds like obvious propaganda to you may read to someone else like an ordinary article from a sensationalist, less reliable outlet, especially to those predisposed to conspiracy theories.

      • sab ( @sab@kbin.social )

        Of course, there are a few people out there who won’t even identify headlines like “Ebola Virus ‘Caused by US Nuclear Weapons Testing’, New Study Says”, “Government Officials Have Illegally Manipulated the Weather to Cause Devastating Storms”, and “Left-Wing Extremism Causes ‘More Damage’ to World Than Terrorism, Says UN Report” as fabricated even when filling out a survey about fake news. But at that point they’re not testing susceptibility to fake news, they’re testing whether you’ve already fallen down the conspiracy rabbit hole and hit your head hard enough on the way down to render you incapable of even slight scepticism.

        A better study would be, in my opinion, to present screenshots of actual content from social media (Facebook, Reddit, Twitter, wherever), and have users rate how much they trust each post on a scale from 1 to 7 (not at all <----> completely). That way you can observe sources, content, how many “likes” a post has, and other dimensions that are more valid indicators of how people might (mis)judge content as being true or false.
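
        Something like this is what I have in mind for the stimuli and responses (all field names are placeholders I made up, not anything from the study):

        ```python
        # Rough sketch of the proposed design: real screenshots rated on a
        # 1-7 trust scale, with the context a participant actually sees
        # recorded as separate dimensions. Field names are assumptions.
        from dataclasses import dataclass

        @dataclass
        class Stimulus:
            screenshot: str  # path to the screenshot shown to the participant
            platform: str    # "Facebook", "Reddit", "Twitter", ...
            source: str      # outlet or account visible in the screenshot
            likes: int       # engagement count visible in the screenshot
            is_true: bool    # ground truth established by the researchers

        @dataclass
        class Rating:
            stimulus: Stimulus
            trust: int       # 1 = "not at all" ... 7 = "completely"
        ```

        With responses structured like that, you could relate trust to source, platform, and engagement separately, instead of reducing everything to one binary real/fake call.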

    • I took the survey and it gives you two measures: one for correctly identifying true stories and one for correctly identifying fake ones. If you mark everything as fake, the results say you’re too skeptical because you discount real stories as fake. So anything that doesn’t sound hyper-partisan should be marked as real, even if you could imagine how it could be fake.
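
      A quick sketch of why answering “fake” on everything backfires under that scoring. I’m assuming simple correct-counts here, which is just my reading of the results page:

      ```python
      # Assumed two-score tally: one point per real story kept, one per fake caught.
      def two_scores(truths, calls):
          """truths[i]: item i is real; calls[i]: respondent says item i is real."""
          real_ok = sum(t and c for t, c in zip(truths, calls))              # real stories kept
          fake_ok = sum((not t) and (not c) for t, c in zip(truths, calls))  # fakes caught
          return real_ok, fake_ok

      truths = [True, True, False, False]
      print(two_scores(truths, [False] * 4))  # (0, 2): every fake caught, every real story discarded
      ```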

      • So they’re just casually pretending misinformation isn’t being spread about literally anything these days. To me at least, the AI-esque phrasing of the headlines made me distrust even information I rationally know to be true.