Copilot then listed a string of crimes Bernklau had supposedly committed: that he was an abusive undertaker exploiting widows, a child abuser, and an escaped criminal mental patient. [SWR, in German]
These were stories Bernklau had reported on. Copilot produced text as if he were the subject. Then Copilot returned Bernklau’s phone number and address!
and there’s fucking nothing in place to prevent this utterly obvious failure case, other than, if you complain, Microsoft will just lazily regex for your name in the result and refuse to return anything if it appears
it helps that they did it to someone with contacts, and that it was on prime-time news telly
god, so this is actually the best the AI researchers can do with the tools they’ve shit out into the world without giving any thought to failure cases or legal liability (beyond their manager on ~~Slack~~ Teams claiming it’s been taken care of). so fuck it, let’s make the defamation machine a non-optional component of windows. we’ll just make it a P0 when someone who could actually get us in legal trouble complains! everyone else is a P2 that never gets assigned.
llms are (approximately) advanced versions of predictive text; any censorship will make them worse.
worse at what, exactly?
Predicting words.
How do you measure good/bad at predicting words? What’s the metric? Cause it doesn’t seem to be “the words make factual sense” if you’re defending this.
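For the record, the metric is cross-entropy loss on held-out text, usually reported as perplexity: how much probability the model assigns to each actual next token, given the tokens before it. Nothing in that number rewards being factually true. A minimal sketch, assuming the Hugging Face transformers library and the public gpt2 checkpoint:

```python
# Perplexity: how well does the model predict held-out text?
# Assumes `pip install torch transformers` and the public "gpt2" checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "The first man on the moon was Neil Armstrong."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Passing labels makes the model return the mean cross-entropy of
    # predicting each token from the ones before it (shifted internally).
    loss = model(**inputs, labels=inputs["input_ids"]).loss

print(f"cross-entropy {loss.item():.3f}, perplexity {torch.exp(loss).item():.1f}")
# A false sentence in common phrasing can score better (lower) than a true
# sentence in rare phrasing: the metric measures likelihood, not truth.
```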
like fuck, all you or I want out of these wandering AI jackasses is something vaguely resembling a technical problem statement or the faintest outline of an algorithm. normal engineering shit.
but nah, every time they just bullshit and say shit that doesn’t mean a damn thing, as if we can’t tell. and when they get called out, every time it’s the “well you ¡haters! just don’t understand LLMs” line, as if we weren’t expecting a technical answer that just never came (cause all of them are only just cosplaying as technically skilled people, and it fucking shows)
No. Predicting words is barely related to facts. I’ll defend AI as an occasionally useful tool, but nothing it ever says should be taken as fact without confirmation. Sometimes that confirmation can be experimental — does this recipe taste good? Sometimes you need expert supervision to say this part was translated wrong or this code won’t work because of xyz. Sometimes you have to go out and look it up.
I like AI, but there is a real problem with treating the output like it means anything. It might give you a direction to look closer at, but it can never be the endpoint. We’d be better off not trying to censor it, but understanding that it will bullshit you without blinking.
I summarize all of that by saying AI is a useful tool, but a terrible product.
lazily regex
I have a sneaking suspicion that this is what they do for all the viral ‘the LLM famously says something wrong’ problems, as I don’t think they can actually reliably train the error out of the model.
That’s the most straightforward fix. You can’t actually fix the model, so you have to run something over the output. You can have it scanned by another AI, but that costs money and is also fallible. Regex-and-delete is the most reliable way to censor.
Yes, and then the problem is that this doesn’t really scale well, especially as it’s always hard to regexp all the variants correctly without false positives and negatives. Time to regexp html ;).
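For illustration, the lazy fix presumably looks something like this: a post-hoc filter over the model’s output with a per-complaint blocklist. The pattern and refusal string here are made up, not anything from Microsoft’s actual code:

```python
import re

# Hypothetical blocklist: one pattern per person who complained loudly enough.
BLOCKLIST = [r"\bmartin\s+bernklau\b"]
PATTERNS = [re.compile(p, re.IGNORECASE) for p in BLOCKLIST]

def filter_output(text: str) -> str:
    """Refuse the whole response if any blocked name appears in it."""
    if any(p.search(text) for p in PATTERNS):
        return "Sorry, I can't help with that."  # made-up refusal string
    return text

# Both failure modes from the thread:
print(filter_output("Martin Bernklau is a respected court reporter."))
# -> refused: a false positive, since the sentence is harmless
print(filter_output("M4rtin Bernk1au was convicted of..."))
# -> passes straight through: misspellings defeat the censor
```

Which is exactly the HTML-with-regex problem: the blocklist can’t enumerate every variant, and every variant it does catch also blocks legitimate mentions.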
Yeah, and you can really see this in image generation. There are often blocks on using the names of celebrities in prompts, but if you misspell the names enough you can bypass the censor, and the image generator still understands it.
Microsoft published, using their software and servers, a libelous claim* to potentially millions of people.
The details of how the software was programmed should be legally irrelevant.

* a GDPR violation, in Germany
The details of how the software was programmed should be legally irrelevant.
Why? Programmers should be legally liable for what they program.
Why? Programmers should be legally liable for what they program.
Too many degrees of separation between a programmer and the final product and how it’s used, usually.
Additionally, deploying an incomplete product, or one with known flaws, is an administrative decision, not a programming one.
Very chill and ethical behaviour, daddy Microsoft
Does Copilot have Disney+?