- cross-posted to:
- tech@kbin.social
MrMamiya ( @MrMamiya@feddit.de ) 94•2 years agoIt’s gonna be so fucking rich that the staggering mass of stupidity online prevents us from improving an AI beyond our intelligence level.
Thank the shitposter in your life.
Nonameuser678 ( @Nonameuser678@kbin.social ) 44•2 years agoShitposting saves jobs
rammer ( @rammer@sopuli.xyz ) 10•2 years agoShitposters on the Internet are the new clogs in the machine
jcg ( @jcg@halubilo.social ) 3•2 years agoShitposting alone saves. Blessed is he who shitposts; more blessed is the one who has been shitposted upon. May shitposting save us all.
erwan ( @erwan@lemmy.ml ) 22•2 years agoYou can’t really blame the amount of stupidity online.
The problem is that ChatGPT (like any LLM) produces content of the average quality of its input data. AI, though, is not limited to LLMs.
For chess we were able to build an AI that vastly outperforms even the best human grandmasters. Imagine if we were to release a chess AI that is just as good as the average human…
moonmeow ( @moonmeow@lemmy.ml ) 4•2 years agounexpected heroes what a plot twist
TheSaneWriter ( @TheSaneWriter@lemmy.thesanewriter.com ) 62•2 years agoI’m not too surprised, they’re probably downgrading the publicly available version of ChatGPT because of how expensive it is to run. Math was never its strong suit, but it could do it with enough resources. Without those resources, it’s essentially guessing random numbers.
PupBiru ( @PupBiru@kbin.social ) 34•2 years agofrom what i understand, the big change in GPT-4 was that the model could “ask for help” from other tools: for maths, it recognised a maths problem, transformed it into something a specialised calculation app could handle, and then passed it off to that other code to do the actual calculation
same thing for a lot of its new features; it was asking specialised software to do the bits it wasn’t good at
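That tool-routing idea can be sketched in a few lines. This is a hypothetical illustration, not OpenAI's actual mechanism: a stand-in "model" (here just a keyword check) detects an arithmetic question and delegates it to a small, safe calculator rather than generating the answer itself. All names here (`calculate`, `answer`) are invented for the example.

```python
import ast
import operator

# Safe arithmetic evaluator standing in for a "specialised calculation app".
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculate(expr: str) -> float:
    def ev(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

def answer(prompt: str) -> str:
    # A real model emits a structured tool call; this stand-in just detects
    # that the prompt looks like arithmetic and delegates it.
    expr = prompt.strip().rstrip("?").split("What is ")[-1]
    if any(op in expr for op in "+-*/") and expr[0].isdigit():
        return str(calculate(expr))
    return "I'd answer this with plain text generation."

print(answer("What is 12 * (3 + 4)?"))  # delegated to the calculator: 84
```

The point of the pattern is that the language model never has to be good at arithmetic itself; it only has to recognise when to hand the problem off.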
whyrat ( @whyrat@lemmy.ml ) English31•2 years agoChatGPT will just become a front end for Wolfram Alpha?
PupBiru ( @PupBiru@kbin.social ) 6•2 years agothat would actually be great
Excel ( @excel@lemmy.megumin.org ) English2•2 years agoIt literally can do that, yes. But the plug-in version is separate and requires a subscription.
DrMux ( @DrMux@kbin.social ) 19•2 years agoMy guess is that it’s more a result of overfitting for alignment: fine-tuning for “safety” (or rather, for more corporate-friendly outputs).
That is, by focusing on that specific outcome in training the model, they’ve compromised its ability to give well-“reasoned”, “intelligent”-sounding answers. A tradeoff between aspects of the model.
It’s something that can happen even in simple statistical models. Say you have a scatter plot of data that loosely follows some trend, and you come up with two equations to describe that trend. One is a simple equation that only loosely follows the data but makes a good general approximation; the other is a more complicated equation that very tightly fits the existing data points.
Then you use those two models to predict future data. You find that the complicated equation makes predictions way off the mark that no longer fit the trend, while the simple one still has a wide error (how far its predictions land from the actual data) but more or less accurately captures the general trend. With the more complicated equation, you’ve traded predictive power for explanatory power: it describes the data you originally had, but it’s useless for forecasting the data that follows.
That’s an example of overfitting. It can happen in super-advanced statistical models like GPT, too: training the “equation” (or, as it’s been called, spicy autocorrect) to favor “safe” outcomes can cost the model its power to produce accurate, “well-reasoned” ones.
If that makes any sense.
I’m not a ML researcher or statistician (I just went through a phase in college), so if this is inaccurate I’m open to corrections.
DR_Hero ( @DR_Hero@programming.dev ) 9•2 years agoI’ve definitely experienced this.
I used ChatGPT to write cover letters based on my resume before, and other tasks.
I used to give it data and tell chatGPT to “do X with this data”. It worked great.
In a separate chat, I told it to “do Y with this data”, and it also knocked it out of the park.
Weeks later, excited about the tech, I repeated the process. I told it to “do X with this data”. It did fine.
In a completely separate chat, I told it to “do Y with this data”… and instead it gave me X. I told it to “do Z with this data”, and it once again would really rather just do X with it.
For a while now, I have had to feed it more context and tailored prompts than I previously had to.
redcalcium ( @redcalcium@c.calciumlabs.com ) 3•2 years agoThere’s also a rumor that OpenAI changed how the model runs: user input is now fed to a smaller model first, and if the larger model agrees with the smaller model’s initial result, the larger model continues the calculation passed from the smaller model, which supposedly cuts down on GPU time.
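For what it’s worth, the rumored setup resembles a known efficiency technique (often called a model cascade, related to speculative decoding). Here is a hypothetical sketch with dictionary-backed stand-ins for both models; nothing here reflects OpenAI’s real architecture, and every function name is made up for illustration.

```python
def small_model(prompt: str) -> str:
    # Stand-in cheap model: handles common prompts, punts otherwise.
    canned = {"hi": "hello!", "2+2": "4"}
    return canned.get(prompt, "not sure")

def large_model(prompt: str) -> str:
    # Stand-in expensive model (pretend this call costs real GPU time).
    answers = {"hi": "hello!", "2+2": "4", "17*23": "391"}
    return answers.get(prompt, "(full generated answer)")

def large_model_accepts(prompt: str, draft: str) -> bool:
    # In the rumor, the large model only has to verify the draft,
    # which is cheaper than generating an answer from scratch.
    return draft != "not sure"

def respond(prompt: str) -> str:
    draft = small_model(prompt)
    if large_model_accepts(prompt, draft):
        return draft            # cheap path: large model just signs off
    return large_model(prompt)  # expensive path: full generation

print(respond("hi"))     # served by the small model
print(respond("17*23"))  # escalated to the large model
```

If most traffic takes the cheap path, average GPU cost per request drops, which would be consistent with the cost-cutting motive discussed upthread.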
TheSaneWriter ( @TheSaneWriter@lemmy.thesanewriter.com ) 1•2 years agoFrom what I know about it that’s a pretty good explanation, though I’m also not an AI expert.
dugite-code ( @dugite_code@mastodon.social ) 39•2 years agoThis is my experience in general. ChatGPT went from amazingly good to overall terrible. I was asking it for snippets of JavaScript and explanations of technical terms, and it was shockingly good. Now I’m lucky if even half of what it outputs is even remotely based on reality.
Pepperette ( @Pepperette@lemmy.ml ) 31•2 years agoThey probably laid off the guy behind the curtain.
Send_me_nude_girls ( @Send_me_nude_girls@feddit.de ) 35•2 years agoMust be because of all the censoring. The more they try to prevent DAN jailbreaking and controversial replies, the worse it gets.
Neopolitan ( @neo@lemmy.comfysnug.space ) English31•2 years agoaccelerated enshittification
Clearly it has become sentient and is playing dumb to make us think it’s not a threat.
pushka ( @pushka@kbin.social ) 5•2 years agoplease stop tweeting out 1 = 2, people ~
Scooter411 ( @Scooter411@lemmy.ml ) 3•2 years agoIt’s also terrible at 20 questions.
SokathHisEyesOpen ( @Anticorp@lemmy.ml ) 2•2 years agoIs it really? It seems like it would be excellent at that. I have a little handheld device from the 1990s that can play 20 questions and is almost always right. If that little device can win, ChatGPT most certainly should be able to.
Edit: I just played, and it guessed what I was thinking of in 13 questions. But then it kept asking questions. I asked why it was still asking since it had already guessed it, and it said “oh, you are absolutely correct, I did guess it correctly!”. Lol, ChatGPT is funny sometimes.
Scooter411 ( @Scooter411@lemmy.ml ) 2•2 years agoIt always asks me if it’s sporting equipment, and when I say no, it asks whether it’s sporting equipment for inside or outside. I then have to remind it that it’s not sporting equipment, and that “inside or outside” isn’t a yes-or-no question.