As far as I understand this, they seem to think that AI models trained on data from a set of affluent Westerners with unknown biases can be told to “act like [demographic] and answer these questions.”
It sounds completely bonkers, and not only from a moral perspective: scientifically and statistically this is basically just making up data and hoping everyone is too impressed by how complicated the data faking is to care.
Most toothpastes in the US have fluoride - it’s the ones that don’t which likely cost more (ones with “natural” ingredients, ones with hydroxyapatite…).
It amuses me greatly to think that companies trying to sell shit to people will be fooled by “infinitely richer” feedback. Real people give “bland” feedback because they just don’t care that much about a product, but I guess people would rather live in a fantasy where their widget is the next big thing.
Overall, though, this horrifies me. Psychological research already has plenty of issues with replication and with methodologies and/or metrics changing mid-study, and now they’re trying out “AI” participants? Even if they’re just used to create and test surveys that eventually go out to humans, it seems ripe for bias.
I’ll take an example close to home: studies on CFS/ME. A lot of people on the internet (including doctors) think CFS/ME is hypochondria, or malingering, or due to “false illness beliefs” - so how is an “AI” trained on the internet and tasked with thinking like a CFS/ME patient going to answer questions?
As patients we know what to look for when it comes to insincere/leading questions. “Do you feel anxious before exercise?” - the answer may be yes, because we know we’ll crash, but a question like this usually means researchers think resistance to activity is an irrational anxiety response that should be overcome. An “AI” would simply answer yes with no qualms or concerns, because it literally can’t think or feel (or withdraw from a study entirely).