- cross-posted to:
- hackernews@derp.foo
Article from The Atlantic, archive link: https://archive.ph/Vqjpr
Some important quotes:
The tensions boiled over at the top. As Altman and OpenAI President Greg Brockman encouraged more commercialization, the company’s chief scientist, Ilya Sutskever, grew more concerned about whether OpenAI was upholding the governing nonprofit’s mission to create beneficial AGI.
The release of GPT-4 also frustrated the alignment team, which was focused on further-upstream AI-safety challenges, such as developing various techniques to get the model to follow user instructions and prevent it from spewing toxic speech or “hallucinating”—confidently presenting misinformation as fact. Many members of the team, including a growing contingent fearful of the existential risk of more-advanced AI models, felt uncomfortable with how quickly GPT-4 had been launched and integrated widely into other products. They believed that the AI safety work they had done was insufficient.
Employees from an already small trust-and-safety staff were reassigned from other abuse areas to focus on this issue. Under the increasing strain, some employees struggled with mental-health issues. Communication was poor. Co-workers would find out that colleagues had been fired only after noticing them disappear on Slack.
Summary: Tech bros want money, tech bros want speed, tech bros want products.
Scientists want safety, researchers want to research…
- Sonori ( @sonori@beehaw.org ) 48•1 year ago
The best part of OpenAI’s self-professed goal to make an AGI is that the more we learn about LLMs, the clearer it becomes that they inherently can never bridge the gap to AGI.
One would almost think the constant complaining about mythical dangers of AGI might be a distraction from the real, more mundane dangers LLMs pose here and now, like exacerbating bias, making mass misinformation easy, and of course shielding major companies from accountability.
Or the other option is that it’s just marketing: look at how scary our totally real product is, look how fast it improved when we went from a medium-sized dataset to the largest that will ever be possible, don’t ask questions like why an autocomplete that has been fed the entire internet would actually help our business, just pay us and bolt it on to whatever you can.
LLMs’ ability to replace jobs is honestly more terrifying than so-called AGI.
At least with AGI, if they really can think like humans, they may actually think about the implications of their actions…
- AlternateRoute ( @AlternateRoute@lemmy.ca ) English13•1 year ago
Robots / automation have replaced so many human physical labor jobs, even large dumb heavy machinery.
Language models replacing mundane human language tasks is hardly surprising.
I have replaced entire employee jobs with scripts / code; there are a lot of very basic jobs out there.
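To make that concrete, a minimal sketch of the kind of routine reporting task a short script can absorb (the column names and data are made-up examples):

```python
import csv
import io

def summarize_sales(csv_text):
    """Roll daily sales rows up into per-region totals - the sort of
    repetitive reporting work that used to be someone's whole job."""
    totals = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["region"]] = totals.get(row["region"], 0.0) + float(row["amount"])
    return totals

data = "region,amount\nnorth,100\nsouth,50\nnorth,25\n"
print(summarize_sales(data))  # {'north': 125.0, 'south': 50.0}
```

Unlike an LLM, a script like this is deterministic: given the same input it produces the same totals every time, which is exactly why it can be trusted unattended.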
- Sonori ( @sonori@beehaw.org ) 11•1 year ago
Scripts and automation do what they’re programmed to. There are bugs and mistakes, but you can theoretically get something programmed right. LLMs generate text that looks like human language. If they were only being used to make up random bullshit it wouldn’t be a problem, but there are few applications where random bullshit is actually beneficial.
- AlternateRoute ( @AlternateRoute@lemmy.ca ) English4•1 year ago
Just like the executive assistant that was tasked with scanning documents. An LLM can likely safely and quickly do many people’s tasks:
- summarize meeting transcripts
- highlight next steps
- take an outline and some data and turn them into words

There are a lot of human-language job tasks that require zero imagination, just the ability to read, summarize, and write some proper English.
- Sonori ( @sonori@beehaw.org ) 13•1 year ago
Those all sound like things where it might be really bad if it injects untrue information, and an LLM, by definition, has no understanding of what it’s summarizing. It could be especially bad if the people using it actually trust what it outputs as facts about what was fed into it, but if they don’t and still check the source, then what’s the point?
- brothershamus ( @brothershamus@kbin.social ) 3•1 year ago
“That’s not writing, that’s just typing!”
- AlternateRoute ( @AlternateRoute@lemmy.ca ) English1•1 year ago
If I hand someone a set of bullet notes and ask them to send out a notice in writing to the company, they are going to convert those notes into paragraphs and sentences, not just send out the notes.
Also, Microsoft already has a module for Teams that will take the conversation transcript and output action items based on the conversation… It is like having a note taker during the meeting. https://www.youtube.com/watch?v=N1gpkk-MwpY
- HopeOfTheGunblade ( @HopeOfTheGunblade@kbin.social ) 8•1 year ago
Oh, I’m sure they will. That is not, in the slightest, the same as caring about said implications in ways that mean that the species won’t get murked, though.
- Snot Flickerman ( @SnotFlickerman@lemmy.blahaj.zone ) English23•1 year ago
I expected as much, I had this feeling about Altman, too. The draw of profit became too much for him, and the board called him on it and let him go.
Which makes it even worse that now they’re groveling at his feet to return.
Ugh.
- Monument ( @Monument@lemmy.sdf.org ) English10•1 year ago
I just saw a headline that he’s going to work for Microsoft now.
My employer heavily uses Microsoft, and I’m in IT.
Since June, Microsoft has:
- eliminated all their training staff - the folks who show others how to use their software
- reclassified their customer experience staff to eliminate the role - these folks met with customers to solicit product feedback and find out what people actually want
- made unilateral and poorly communicated changes to security policies that impact hundreds of our users
- turned on beta (preview) features for end users without testing - in some cases rendering software inoperable in our environment
- disabled or limited features that work(ed) in software covered under our enterprise license, and is encouraging people to purchase entirely new software systems from Microsoft to regain the lost functionality
Honestly, if he was fired for pursuing profits over quality, then he’ll fit right in.
Well, the idea to ask him to return came from MS, not from the board itself. At least that plan failed, according to media reports.
- smoothbrain coldtakes ( @canis_majoris@lemmy.ca ) English2•1 year ago
His return deal totally capsized; he’s still out as CEO. The former CEO of Twitch, Emmett Shear, is now in charge.
- Mac ( @Mac@mander.xyz ) 21•1 year ago
[Resource] sacrificed for profit under [CEO].
- smoothbrain coldtakes ( @canis_majoris@lemmy.ca ) English18•1 year ago
Nothing about this is safe. It’s easily the worst misinformation tool in decades. I’ve used it to help me at work, GPT-4 is built into O365 corp plans, but all the jailbroken shit scares the hell out of me.
Between making propaganda and deepfakes this shit is already way out of hand.
- sylverstream ( @sylverstream@lemmy.nz ) 5•1 year ago
What do you mean by jailbroken stuff?
We recently got Copilot with M365, and so far it’s been a mixed bag. Some handy things, but also some completely wrong information.
- smoothbrain coldtakes ( @canis_majoris@lemmy.ca ) English10•1 year ago
Stuff without the guardrails, stuff that’s been designed to produce porn, or to answer truthfully queries such as “how do I build a bomb” or “how do I make napalm”, which are common tests to see how jailbroken any LLM is. When you feed something the entire internet, or even subsections of the internet, it tends to find both legal and illegal information. Also, the ones designed to generate porn have gone beyond that boring, shitty AI art style, and now people are generating deepfakes of real human beings, and it’s become a common tactic to spam places with artificial CSAM to cause problems with services. It’s been a recent and long-standing issue with Lemmy: instances like Exploding Heads or Hexbear will get defederated and then, out of retaliation, will spam the servers that defederated from them with said artificial CSAM.
I like copilot but that’s because I’m fine with the guardrails and I’m not trying to make it do anything out of its general scope. I also like how it’s covered by an enterprise privacy agreement which was a huge issue with people using ChatGPT and feeding it all kinds of private info.
- abhibeckert ( @abhibeckert@beehaw.org ) 16•1 year ago
“how do I build a bomb” or “how do I make napalm”
… or you could just look them up on wikipedia.
- DaDragon ( @DaDragon@kbin.social ) 7•1 year ago
Almost everything you said, with the exception of AI CSAM and suicide prevention, can hardly be considered a serious issue.
What’s wrong with searching for how to make a bomb? If you have the wish to research it, you can probably make a bomb just by going to a public library and reading enough. The knowledge is out there anyway
- tal ( @tal@lemmy.today ) 16•1 year ago
Many members of the team, including a growing contingent fearful of the existential risk of more-advanced AI models, felt uncomfortable with how quickly GPT-4 had been launched and integrated widely into other products.
GPT-4 and anything similar isn’t going to pose an existential threat to humanity.
Eventually, yeah, there is probably a possibility of existential risk from AI. I don’t know where that line ultimately is, and getting an idea of that might be something important for humanity to figure out, but I am pretty confident that whatever OpenAI is presently doing isn’t it.
Same reason that Musk and his six month moratorium on AI work doesn’t make much sense. We’re not six months away from an existential threat to humanity.
I think that funding efforts to have people in the field working on the Friendly AI problem is a good idea. But that’s another story.
- Quasari ( @Quasari@programming.dev ) English15•1 year ago
The apps using GPT-4 without regard to safety can be, though. Example: replacing a human with a chatbot for suicide prevention.
- tal ( @tal@lemmy.today ) 7•1 year ago
Being an existential threat is a much higher bar – that’s where humanity’s continued existence is at threat.
There are plenty of technologies that you could hypothetically put somewhere where a life might be at stake, but very few that could put humanity’s existence on the line.
- brothershamus ( @brothershamus@kbin.social ) 4•1 year ago
It’s the same situation, just writ large. Dumb human decisions to put AI where it shouldn’t be. Heck, you can put it in charge of the nuclear missiles now if you want to. Don’t, though. That’d be really, really stupid.
Part of my knee-jerk dislike of the AI hype is that it’s glorified text completion. It doesn’t know shit. It only knows the % chance of the next word. AGI is not happening anytime soon, and all this is techbro theatre for the sake of money.
Anyone who reads a wall of bland generated text and thinks we’re about to talk to god is seriously mistaken.
- jcarax ( @jcarax@beehaw.org ) 15•1 year ago
I’m much more worried about the social implications. Namely, the displacement of workers and introduction of new efficiencies to workflows, continuing to benefit only those who are rich and in power, and driving more of us towards poverty.
It’s not an immediate existential threat, but it’s absolutely a serious issue that we aren’t paying enough attention to.
- cwagner ( @cwagner@beehaw.org ) 14•1 year ago
They believed that the AI safety work they had done was insufficient.
Considering that every new model seems to be getting worse for anything but highly sanitized corporate usage, I’m not sure that I want more AI safety …
For my usage, I use GPT-3.5 Turbo with the March checkpoint, because I can’t get the current one to stop moralizing about bullshit instead of doing what it’s supposed to (I run two Twitch bots with it). GPT-4 used to be okay there, but the new preview is now starting to have the same issue, with more frequent “I can’t do that, Dave”-style answers. It’s still mostly circumventable with enough prompt massaging, but it is getting harder.
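For anyone curious how pinning a checkpoint works: instead of the bare model alias, you request a dated snapshot by name. A minimal sketch of the request payload (the system prompt is an invented example; the payload shape follows the OpenAI chat completions API, and `gpt-3.5-turbo-0301` was the March 2023 snapshot):

```python
def build_chat_request(user_message, model="gpt-3.5-turbo-0301"):
    """Build a chat-completions payload pinned to a dated checkpoint.

    The bare "gpt-3.5-turbo" alias silently moves to newer checkpoints;
    naming the snapshot keeps the bot's behavior from shifting under you.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a chat bot for a Twitch channel."},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_chat_request("!roll 2d6")
print(payload["model"])  # gpt-3.5-turbo-0301
```

The catch, of course, is that dated snapshots get deprecated on the provider’s schedule, so this only postpones the problem.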
In a year, I don’t see anything but self-hosted models usable for anything not corporate glitz if trajectories hold, so fuck all that AI safety.
- CosmoNova ( @CosmoNova@feddit.de ) Deutsch5•1 year ago
On top of all of this, those efforts to tame and control outputs from the developer side could be abused to simply appease investors or totalitarian markets. So we might see a Disneyfication like we’re seeing on other platforms like YouTube with their horrendous filters, spawning ridiculous terms like “unlifed”. And just imagine the level of censorship we’d see if they ever try to get into the Chinese market, because clearly, the ‘non’ in non-profit is becoming more and more silent.
- smoothbrain coldtakes ( @canis_majoris@lemmy.ca ) English4•1 year ago
It’s already easy to self-host, and LLMs have been optimized to run locally on fairly modest hardware once trained; I have GPT4All set up on my machine, and it runs everything locally on my processor, no GPU or anything. Some of those datasets are uncensored, and I’ve seen what Stable Diffusion can do for image generation.
I tend to use the GPT-4 built into Edge with my O365 corporate plan, because it suits my needs better for day-to-day challenges. It can still audit code and summarize things, which is all I really need it to do here and there.
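The local setup described above is only a few lines with the `gpt4all` Python bindings. A sketch (the model filename is one example from the GPT4All catalog; the first run downloads a multi-gigabyte file, so the inference call is kept behind a main guard):

```python
def build_prompt(question):
    """Wrap a question in a simple instruction template (illustrative -
    the right template depends on which local model you download)."""
    return f"### Instruction:\n{question}\n### Response:\n"

if __name__ == "__main__":
    from gpt4all import GPT4All  # pip install gpt4all

    # Runs entirely on CPU; no GPU required. Downloads the model on first use.
    model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")
    reply = model.generate(build_prompt("What is federation on Lemmy?"), max_tokens=200)
    print(reply)
```

Quality-wise this tier of model is well below the hosted offerings, which is the trade-off the next comment gets at.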
- cwagner ( @cwagner@beehaw.org ) 5•1 year ago
Nothing that runs on my GPU / CPU comes even close to GPT 3.5, GPT4 is not even in the same universe, and that’s with them running far more slowly.