- cross-posted to:
- technology
Microsoft’s LinkedIn will update its User Agreement next month with a warning that it may show users generative AI content that’s inaccurate or misleading.
LinkedIn thus takes after its parent, which recently revised its Service Agreement to make clear that its Assistive AI should not be relied upon.
LinkedIn, however, has taken its denial of responsibility a step further: it will hold users responsible for sharing any policy-violating misinformation created by its own AI tools.
The relevant passage takes effect on November 20, 2024.
In short, LinkedIn will provide features that can produce automated content, but that content may be inaccurate. Users are expected to review and correct any false information before sharing it, because LinkedIn won’t be held responsible for the consequences.
- supersquirrel ( @supersquirrel@sopuli.xyz ) English21•2 months ago
I would rather clean up dog vomit than use linkedin.
- Possibly linux ( @possiblylinux127@lemmy.zip ) English4•2 months ago
Well that’s not saying much as most dog owners have cleaned up vomit at least once.
- driving_crooner ( @driving_crooner@lemmy.eco.br ) English4•2 months ago
My hope is to get a government job so I can delete that shit from my life.
- supersquirrel ( @supersquirrel@sopuli.xyz ) English2•2 months ago
My dream is to build a secret laser big enough that I can deathstar linkedin out of existence in one zap, my hope is however much the same as yours.
- bluGill ( @bluGill@fedia.io ) 12•2 months ago
The real question is whether this will hold up in court. Judges are likely to frown on this type of thing. Sure, the EULA that they know nobody reads says that, but their tools are giving advice in an authoritative tone. My company has gotten in trouble in court because an advertisement appeared to show our tools being used in ways the warning label says not to.
- FiveMacs ( @Fiivemacs@lemmy.ca ) English5•2 months ago
Lol