- cross-posted to:
- elp@lemmy.intai.tech
Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves
- corytheboyd ( @corytheboyd@kbin.social ) 28•1 year ago
In terms of hype it’s the crypto gold rush all over again, with all the same bullshit.
At least the tech is objectively useful this time around, whereas crypto adds nothing of value to the world. When the dust settles we will have spicier autocomplete, which is useful (and hundreds of useless chatbots in places they don’t belong…)
- SSUPII ( @SSUPII@sopuli.xyz ) 17•1 year ago
For something that is proving to be useful, there is no way it will simply fizzle out. The exact same thing was said about the whole internet, and look where we are now.
The difference between crypto and AI is that, as you said, crypto didn’t show anything tangible to the average person. AI, instead, is spreading like wildfire in software and research, and is being used worldwide by people who often don’t even know it.
I’ve seen my immediate friends use chatbots to help them get through boring yearly trainings at work, write speeches for weddings, and make rough draft lesson plans.
- BobKerman3999 ( @BobKerman3999@feddit.it ) 4•1 year ago
Eh, it’s useful for doing stuff like “hello world”; anything more complex and it falls apart.
- ThunderingJerboa ( @ThunderingJerboa@kbin.social ) 10•1 year ago
Why do we fall into the fallacy of assuming this tech is going to stay stagnant? At the current moment it does very low-tier coding, but the idea that we’d even be having a conversation about a computer having the possibility of writing code for itself (not in a machine learning way, at least) was mere science fiction just a year ago.
- FaceDeer ( @FaceDeer@kbin.social ) 9•1 year ago
And even in its current state it is far more useful than just generating “hello world.” I’m a professional programmer, and although my workplace is currently frantically forbidding ChatGPT usage until the lawyers figure out what this all means, I’m finding it invaluable for whatever projects I’m doing at home.
Not because it’s a great programmer, but because it’ll quickly hammer out a script to do whatever menial task I happen to need done at any given moment. I could do that myself but I’d have to go look up new APIs, type it out, such a chore. Instead I just tell ChatGPT “please write me a python script to go through every .xml file in a directory tree and do <whatever>” and boom, there it is. It may have a bug or two but fixing those is way faster than writing it all myself.
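For example, the skeleton of that kind of throwaway script looks something like this (the per-file action is a placeholder, since “<whatever>” varies by task):

```python
# Walk a directory tree and touch every .xml file.
# The print is a stand-in for "<whatever>" the task actually is.
import os
import xml.etree.ElementTree as ET

for dirpath, _dirnames, filenames in os.walk("path/to/tree"):
    for name in filenames:
        if name.endswith(".xml"):
            path = os.path.join(dirpath, name)
            root = ET.parse(path).getroot()
            print(path, root.tag)  # replace with the real work
```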
I have the same job and my company opened the floodgates on AI recently. So far it’s been assistive tools, but I can see the writing on the wall. These tools will be able to do much more given enough context.
- FaceDeer ( @FaceDeer@kbin.social ) 1•1 year ago
The writing is definitely on the wall for the entry-level intern programmer type, certainly. I think the next couple of levels of programmer will hang on for a while longer, though. At that level it’s less about being able to program stuff than it is about knowing what needs to be programmed. AI will get there too eventually but I’m not updating my resume just yet.
- BobKerman3999 ( @BobKerman3999@feddit.it ) 2•1 year ago
Because I think we are already over the top of the “S” curve for this kind of technology.
- ABoxOfNeurons ( @ABoxOfNeurons@lemmy.one ) 1•1 year ago
Genuine question: Based on what? GPT4 was a huge improvement on GPT3, and came out like three months ago.
- CoWizard ( @CoWizard@kbin.social ) 6•1 year ago
I’ve gotten it to give boilerplate for converting from one library to another, for certain embedded protocols on different platforms. It creates entry-level code, but nothing that’s too hard to clean up or to use to get the gist of how a library works.
- aksdb ( @aksdb@feddit.de ) 3•1 year ago
Exactly my experience as well. Seeing Copilot suggestions often feels like magic. Far from perfect, sure, but it’s basically a very context-“aware” snippet generator: code completion++.
I have the feeling that people who laugh about this and downplay it either haven’t worked with it and/or are simply stubborn and don’t want to deal with new technology. Basically the same kind of people who, when IDEs with code completion first appeared, laughed at them and proclaimed only vim and emacs users to be true programmers.
- TWeaK ( @TWeaK@lemm.ee ) 22•1 year ago
- 🐝bownage [they/he] ( @bownage@beehaw.org ) 20•1 year ago
By now, most of us have heard about the survey that asked AI researchers and developers to estimate the probability that advanced AI systems will cause “human extinction or similarly permanent and severe disempowerment of the human species”. Chillingly, the median response was that there was a 10% chance.
How does one rationalize going to work and pushing out tools that carry such existential risks? Often, the reason given is that these systems also carry huge potential upsides – except that these upsides are, for the most part, hallucinatory.
Ummm, how about the obvious answer: most AI researchers don’t think they’re the ones working on tools that carry existential risks? Good luck overthrowing human governance using ChatGPT.
- alexdoom ( @alexdoom@beehaw.org ) 20•1 year ago
Fossil fuels carry a much higher chance of causing human extinction, yet the news cycle is saturated with fears that a predictive language model is going to make calculators crave human flesh. Wtf is happening?
- LoamImprovement ( @LoamImprovement@beehaw.org ) 11•1 year ago
Capitalism. Be afraid of this thing, not of that thing. That thing makes people lots of money.
I agree that climate change should be our main concern. The real existential risk of AI is that it will cause millions of people to be out of work or underemployed, greatly multiplying the already huge lower class. With that many people unable to take care of themselves and their families, it will make conditions ripe for all of the bad parts of humanity to take over unless we have a major shift away from the current model of capitalism. AI would be the initial spark that starts this, but it will be human behavior that dooms (or elevates) humans as a result.
The AI apocalypse won’t look like Terminator, it will look like the collapse of an empire and it will happen everywhere that there isn’t sufficient social and political change all at once.
- alexdoom ( @alexdoom@beehaw.org ) 7•1 year ago
I don’t disagree with you, but this is a big issue with technological advancements in general. Whether AI replaces workers or automated factories do, the effects are the same. We don’t need to make a boogeyman of AI to drive policy changes that protect the majority of the population. I’m just frustrated with AI scares dominating the news cycle while completely missing the bigger picture.
- cnnrduncan ( @cnnrduncan@beehaw.org ) 3•1 year ago
Yeah - green energy puts coal miners and oil drillers out of work (as the right likes to constantly remind us) but that doesn’t make green energy evil or not worth pursuing, it just means that we need stronger social programs. Same with AI in my opinion - the potential benefits far outweigh the harm if we actually adequately support those whose jobs are replaced by new tech.
- Phantom_Engineer ( @Phantom_Engineer@lemmy.ml ) 1•1 year ago
That’s only a problem because of our current economic system. The AI isn’t the problem, the society that fails to adapt is.
- fsniper ( @fsniper@kbin.social ) 4•1 year ago
I think that the results are as high as 10 percent because the researchers do not want to downplay how “intelligent” their new technology is. But it’s not that intelligent, as we and they all know. There is currently zero chance any “AI” can cause this kind of event.
- Spzi ( @Spzi@lemm.ee ) 2•1 year ago
the results are as high as 10 percent because the researchers do not want to downplay how “intelligent” their new technology is. But it’s not that intelligent, as we and they all know. There is currently zero chance any “AI” can cause this kind of event.
Yes, the current state is not that intelligent. But that’s also not what the experts’ estimate is about.
The estimates and worries concern a potential future, if we keep improving AI, which we do.
This is similar to being in the 1990s and saying climate change is of no concern, because the current CO2 levels are no big deal. Yeah right, but they won’t stay at that level, and then they can very well become a threat.
- aksdb ( @aksdb@feddit.de ) 2•1 year ago
Not directly, no. But the tools we already have that allow imitating voices and faces in video streams in real time can certainly be used by bad actors to manipulate elections, or worse. Things like that - especially if further refined - could be used to figuratively pour oil onto already burning political fires.
- Niello ( @Niello@kbin.social ) 2•1 year ago
Because both you and the article are taking it out of context. The 10% chance refers to general AI (referred to as advanced AI in the article), not ChatGPT. Also, the actual statistic is that 50% of AI safety researchers believe there is a 10% or greater chance humans go extinct from being unable to control AI. It’s about the future, not current development.
I recommend The AI Dilemma episode from the podcast Your Undivided Attention for anyone who wants to learn more.
- pixelpop3 ( @pixelpop3@programming.dev ) 1•1 year ago
The less obvious answer is Roko’s Basilisk.
- SSUPII ( @SSUPII@sopuli.xyz ) 13•1 year ago
It will, and is helping humanity in different fields already.
We need to separate PR speech from reality. AI is already being used in pharmaceutical fields, aviation, tracking (of the air, of the ground, of the rains…), production… And there is no way you can say these are not helping humanity in their own way.
AI will not solve the listed issues on its own. AI as a concept is a tool that will help, but it will always come down to how well it is used and with what other tools.
Also, saying AI will ruin humanity’s existence or bring “disempowerment” of the species is a completely awful view that has no way of happening, simply because doing so is not profitable.
- Spzi ( @Spzi@lemm.ee ) 7•1 year ago
saying AI will ruin humanity’s existence or bring “disempowerment” of the species is a completely awful view that has no way of happening, simply because doing so is not profitable
The economic incentives to churn out the next powerful beast as quickly as possible are obvious.
Making it safe costs extra, so that’s gonna be a neglected concern for the same reason.
We also notice the resulting AIs are being studied after they are released, with sometimes surprising emergent capabilities.
So you would be right if we approached the topic with a rational overhead view, but we don’t.
- tinselpar ( @tinselpar@feddit.nl ) 13•1 year ago
AI bots don’t ‘hallucinate’, they just make shit up as they go along, mixed with some stuff that they found on Google, and tell it in a confident manner so that it looks like they know what they are talking about.
Techbro CEOs are just creeps. They don’t believe their own bullshit, and know full well that their crap is not for the benefit of humanity, because otherwise they wouldn’t all be doomsday preppers. It’s all a perverse result of the American worship of self-made billionaires.
See also The super-rich ‘preppers’ planning to save themselves from the apocalypse
- soiling ( @soiling@beehaw.org ) 13•1 year ago
“hallucination” works because everything an LLM outputs is equally true from its perspective. trying to change the word “hallucination” seems to usually lead to the implication that LLMs are lying which is not possible. they don’t currently have the capacity to lie because they don’t have intent and they don’t have a theory of mind.
- variaatio ( @variaatio@sopuli.xyz ) 6•1 year ago
Well, neither can it hallucinate, by the “not being able to lie” standard. To hallucinate would mean there was some other correct baseline behavior from which hallucinating is a deviation.
An LLM is not a mind; one shouldn’t use words like “lie” or “hallucinate” about it. That anthropomorphizes a mechanistic algorithm.
This is simply an algorithm producing arbitrary answers with no reality checks on the results. The times it happens to produce a correct answer aren’t “not hallucinating” either. It is hallucinating, or not, exactly as much regardless of the correctness of the answer, since it’s just doing its algorithmic thing.
The evolution is fast. We have AI with a theory of mind:
https://www.discovermagazine.com/mind/ai-chatbot-spontaneously-develops-a-theory-of-mind
- Mirodir ( @Mirodir@lemmy.fmhy.ml ) English3•1 year ago
Do we have an AI with a theory of mind, or just an AI that answers the questions in the test correctly?
Now, whether or not there is a difference between those two things is more of a philosophical debate. But assuming there is a difference, I would argue it’s the latter. It has likely seen many similar examples during training (the prompts are in the article you linked; it’s not unlikely for similar texts to be in a web-scraped training set), and even if not, it’s not that difficult to extrapolate those answers from the many texts it must’ve read where a character is surprised to find an item missing that they didn’t see being stolen.
Good point. How will we be able to tell the difference?
- newde ( @newde@beehaw.org ) 3•1 year ago
You can make an educated guess if you understand the intricacies of the programming. In this case, it’s most likely blurting out words and phrases that statistically most adequately fit the (perhaps somewhat leading) questions.
- hglman ( @hglman@lemmy.ml ) English1•1 year ago
The issue is you have nothing to differentiate your two possibilities other than that it doesn’t seem like you. Which will, of course, always fail for a machine.
- tinselpar ( @tinselpar@feddit.nl ) 1•1 year ago
Misinformation is misinformation, whether it is intentional or not. And it’s not farfetched that soon someone will launch a propaganda bot with biased training data that intentionally spreads fake news.
- mrnotoriousman ( @mrnotoriousman@kbin.social ) 1•1 year ago
I’m not sure you get just how much money and resources go into making a good LLM. Some random dude isn’t gonna whip up an AI out of nowhere in their basement. If someone tells you they can, they’re lying.
- tinselpar ( @tinselpar@feddit.nl ) 1•1 year ago
I was not talking about some random guy in his basement.
- exscape ( @exscape@kbin.social ) 10•1 year ago
AI bots don’t ‘hallucinate’, they just make shit up as they go along, mixed with some stuff that they found on Google, and tell it in a confident manner so that it looks like they know what they are talking about.
The technical term for that is “hallucinate” though, like it or not.
https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)
- DEADBEEF ( @DEADBEEF@beehaw.org ) 1•1 year ago
See also The super-rich ‘preppers’ planning to save themselves from the apocalypse
Cory Doctorow wrote a pretty good short story about this.
- fiasco ( @fiasco@possumpat.io ) 12•1 year ago
I guess the important thing to understand about spurious output (what gets called “hallucinations”) is that it’s neither a bug nor a feature, it’s just the nature of the program. Deep learning language models are just probabilities of co-occurrence of words; there’s no meaning in that. Deep learning can’t be said to generate “true” or “false” information, or rather, it can’t be meaningfully said to generate information at all.
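To make “probabilities of co-occurrence” concrete, here’s a toy bigram sketch (nothing remotely at LLM scale, but the same basic flavor of mechanism):

```python
# Toy bigram "language model": generation is just sampling the next
# word from co-occurrence counts. There is no meaning anywhere in it.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

word, out = "the", ["the"]
for _ in range(6):
    followers = counts[word]
    if not followers:  # dead end: word only appeared at the end
        break
    word = random.choices(list(followers), weights=followers.values())[0]
    out.append(word)
print(" ".join(out))  # fluent-looking output, zero understanding
```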
So then people say that deep learning is helping out in this or that industry. I can tell you that it’s pretty useless in my industry, though people are trying. Knowing a lot about the algorithms behind deep learning, and also knowing how fucking gullible people are, I assume that if someone tells me deep learning has ended up being useful in some field, they’re either buying the hype or witnessing an odd series of coincidences.
- flatbield ( @furrowsofar@beehaw.org ) 7•1 year ago
The thing is, this is not “intelligence”, so “AI” and “hallucinations” are humanizing something that is not human. These are really just huge table lookups with some sort of fancy interpolation/extrapolation logic. So a lot of the copyright people are correct: you should not be able to take their works and then just regurgitate them out. I have problems with copyright and patents myself too, because frankly a lot of that is not very creative either, so one can look at it from both ends. If “AI” can get close to what we do without really being intelligent at all, what does that say about us? We may learn a lot about ourselves in the process.
- fiasco ( @fiasco@possumpat.io ) 3•1 year ago
I guess there’s a sense in which all computer science is table lookups, but if you want a nauseatingly technical summary of deep learning—it’s high-dimensional nonlinear regression with all the methodological seatbelts left unfastened.
The only thing this says about us is that philosophical illiteracy is a big problem in the sciences, and that computer science is the most embarrassing field in all STEM. Otherwise, you know, people find beauty in randomness (or in stochasticity, if you prefer) all the time. This is no different.
- flatbield ( @furrowsofar@beehaw.org ) 1•1 year ago
The other thing about humans: we are imitators. This means at some level we are also table lookups. One thing that has always seemed kind of strange to me is that we are all imitators, yet IP laws are such that you’re not supposed to imitate on one hand, and on the other hand we have legal definitions of what is “original” to show we are not imitating. This just seems wrong, and only a contrivance to gain fame, power, and money rather than anything sane.
- hglman ( @hglman@lemmy.ml ) 1•1 year ago
I would agree that either you have to start saying the AI is smart, or that we are not.
- the_wise_man ( @the_wise_man@kbin.social ) 4•1 year ago
Deep learning can be and is useful today, it’s just that the useful applications are things like classifiers and computer vision models. Lots of commercial products are already using those kinds of models to great effect, some for years already.
What do you think of the AI firms who are saying it could help with making policy decisions, climate change, and lead people to easier lives?
- GizmoLion ( @GizmoLion@kbin.social ) 4•1 year ago
Absolutely. Computers are great at picking out patterns across enormous troves of data. Those trends and patterns can absolutely help guide policymaking decisions the same way it can help guide medical diagnostic decisions.
The article was skeptical about this. It said that the problem with expecting it to revolutionize policy decisions isn’t that we don’t know what to do, it’s that we don’t want to do it. For example, we already know how to solve climate change, and the smartest people on the planet in those fields have already told us what needs to be done. We just don’t want to make the changes necessary.
- manitcor ( @manitcor@lemmy.intai.tech ) English1•1 year ago
That’s been the case time and again. How many disruptions from the tech bros came to industries that had been stagnant or moving at a snail’s pace when it came to adopting new technology (especially when it locked customers into more expensive legacy systems)?
Most of the industries disrupted could have been secured by the players in those markets; instead they allowed a disruptor to appear unchallenged.
Remember, the market is not as rational as some might think. You start filling gaps, and people often won’t ask about the fallout, even though many of these services did have people warning against these things.
We are, for the most part, in a nation that lets you do whatever you want until the effects have hit people, and this is even more the case if you are a business. I don’t know an easy answer; in some of these cases the old guard needed a smack, in others a more controlled entry may have been better. As of now, “controlled” is just about the size of one’s cash pile.
Cue the ethical corporations discussion…
- GizmoLion ( @GizmoLion@kbin.social ) 1•1 year ago
I mean… no argument there. Politicians are famous for needing to be dragged, kicking and screaming, to do the right thing.
Just in case one decides to, however, I’m all for having the most powerful tools and complete information possible.
- Arnerob ( @Arnerob@kbin.social ) 2•1 year ago
I think it can be useful. I have used it myself, even before ChatGPT existed and it was just GPT-3. For example, I take a picture, OCR it, and then look for mistakes with GPT, because it’s better than a spell check. I’ve used it to write code in a language I wasn’t familiar with, and having seen the names of the commands needed, I could fix it to do what I wanted. I’ve also used it for some inspiration, which I could also have done with an online search. The concept just blew up and people were overstating what it can do, but I think now a lot of people know the limitations.
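For anyone curious, that pipeline is only a few lines (a rough sketch assuming the pytesseract and openai packages; the model name and prompt are just examples):

```python
# OCR a photo, then ask a GPT model to flag likely OCR errors and typos.
from PIL import Image
import pytesseract
from openai import OpenAI

text = pytesseract.image_to_string(Image.open("photo.jpg"))

client = OpenAI()  # reads OPENAI_API_KEY from the environment
reply = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model works; pick whatever you have
    messages=[
        {"role": "system", "content": "You are a careful proofreader."},
        {"role": "user", "content": f"List likely OCR errors and typos in:\n\n{text}"},
    ],
)
print(reply.choices[0].message.content)
```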
- Turkey_Titty_city ( @Turkey_Titty_city@kbin.social ) 2•1 year ago
I mean, AI is already generating lots of bullshit ‘reports’. Like, you know, stuff that reports ‘news’ with zero skill. It’s glorified copy-pasting, really.
If you think about how much language is rote, in law and the like, it makes a lot of sense to use AI to auto-generate it. But it’s not intelligence. It’s just creating a linguistic assembly line. And just like in a factory, it will require human review for quality control.
- 🐝bownage [they/he] ( @bownage@beehaw.org ) 12•1 year ago
The thing is - and what’s also annoying me about the article - AI experts and computational linguists know this. It’s the laypeople now using (or promoting) these tools, now that they’re public, who don’t know what they’re talking about and project an intelligence onto AI that isn’t there. The real hallucination problem isn’t with deep learning, it’s with the users.
- mrnotoriousman ( @mrnotoriousman@kbin.social ) 1•1 year ago
Spot on. I work on AI and just tell people “Don’t worry, we’re not anywhere close to terminator or skynet or anything remotely close to that yet” I don’t know anyone that I work with that wouldn’t roll their eyes at most of these “articles” you’re talking about. It’s frustrating reading some of that crap lol.
The article really isn’t about the hallucinations, though. It’s about the impact of AI. It’s in the second half of the article.
- 🐝bownage [they/he] ( @bownage@beehaw.org ) 1•1 year ago
I read the article, yes.
- shoelace ( @shoelace@kbin.social ) 1•1 year ago
It drives me nuts how often I see the comments section of an article have one smartass pasting the GPT summary of that article. The quality of that content is comparable to the “reply girl” shit from 10 years ago.
- fiasco ( @fiasco@possumpat.io ) 1•1 year ago
This is the curation effect: generate lots of chaff, and have humans search for the wheat. Thing is, someone’s already gotten in deep shit for trying to use deep learning for legal filings.
- MagicShel ( @MagicShel@programming.dev ) 1•1 year ago
In a way, NLP is just sort of an exercise in mental muscle-memory. The AI can’t do the math that 1+1=2, but if you ask it what 1+1 equals, it will give you a two. Pretty much like any human would do - we don’t hold up one finger and another finger and count them.
So in a way, AI embodies a sort of “fuzzy common sense” knowledge. You can ask it questions it hasn’t seen before and it can give answers that haven’t been given before, but conceptually it will spit out “basically the answer” to “basically that question”. For a lot of things that don’t require truly novel thinking, it does sort of know things.
Of course, just like we can misunderstand a question or phrase an answer badly or even just misremember an answer, the AI can be wrong. I’d say it can help out quite a bit, but I think it works best as a sort of brainstorming partner to bounce ideas off of. As a software developer, I find it a useful coding partner. It definitely doesn’t have all the answers, but you can ask it something like, “why the hell doesn’t this code work?” and it might give you a useful answer. It might not, of course, but nothing ventured, nothing gained.
It’s best to not think of it or use it like a database, but more like a conversational partner who is fallible like any other, but can respond at your level on just about any subject. Any job that cannot benefit from discussing ideas and issues is probably not a good fit for AI assistants.
- Niello ( @Niello@kbin.social ) 1•1 year ago
What I hate most about it is that people are already doing very poorly at checking their own information intake for accuracy and misinformation. This comes at one of the worst times to make things go south. It’s going to challenge the stability of society in a lot of ways, and with how crypto went, I have 0% trust that techbros and corporates will not sabotage efforts to get things right for the sake of their own profit.
- ABoxOfNeurons ( @ABoxOfNeurons@lemmy.one ) 10•1 year ago
I don’t know exactly where to start here, because anyone who claims to know the shape of the next decade is kidding themselves.
Broadly:
AI will democratize creation. If the technology continues at the same pace it has for the last few years, we will soon start to see movies and TV with Hollywood-style production values being made by individual people and small teams. The same will go for video games. It’s certainly disruptive, but I seriously doubt we will want to go back once it happens. To use the article’s examples, most people prefer a world with Street View and Uber to one without them.
The same goes for engineering.
- HeartyBeast ( @HeartyBeast@kbin.social ) 6•1 year ago
The same goes for engineering.
I can’t wait to drive over a bridge where the construction parameters and load limits were creatively autocompleted by a generative AI.
- rustyspoon ( @rustyspoon@beehaw.org ) 5•1 year ago
There’s a guy at this maker-space I work out of who’s been using ChatGPT to do engineering work for him. There was some issue with residue being left on the pavement in the parking lot, and he came forward saying it had to do with “ChatGPT giving him a bad math number,” whatever the hell that means. This is also not the first time he’s said something like this, and it’s always hilarious.
- ABoxOfNeurons ( @ABoxOfNeurons@lemmy.one ) 3•1 year ago
Generative design is already a mature technology. NASA already uses it for spaceship parts. It’ll probably be used for bridges once large-format 3D printers can manage the complexity it introduces.
- rustyspoon ( @rustyspoon@beehaw.org ) 2•1 year ago
It’s still just a tool for engineers though. Half of the job is determining what the design requirements are; another quarter is figuring out what general scheme (e.g. water vs. air cooling) works best to meet those requirements. Things like this are great, but all they really do is effectively connect point A to point B in order to free up some man-hours for more high-level work.
That’s putting millions of people out of a job with no real replacement. The ones that aren’t unemployed will be commanding significantly smaller salaries.
- PenguinTD ( @PenguinTD@lemmy.ca ) English6•1 year ago
It’s actually not as easy as you think; it “looks” easy because all you’ve seen is the result of survivorship bias, like Instagram people not posting their failed shots. Seriously, go download a Stable Diffusion model and try inputting your prompts, and see how well you can direct that AI to get what you want. It’s real work, and I bet a good photographer with a good model and a director can do whatever is needed, and quicker (even with green screen, etc.).
I dabbled in Stable Diffusion a bit to see what it’s like. On my machine (16GB VRAM), a 30-image batch generation only yields maybe 2~3 results that are considered “okay”, and those still need further photoshopping. And we are talking about resolution so low most games can’t even use it as a texture (slightly bigger than 512x512, so usually mip 3 for a modern game engine). And I was already using the most popular photoreal model people had mixed together. (Now consider how much time people spent training that model to that point.)
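For reference, the generate-a-batch-and-cherry-pick workflow I’m describing is roughly this (a minimal sketch using Hugging Face’s diffusers library; the model name and prompt are just examples):

```python
# Generate a batch of 512x512 images and save them all;
# in practice only a few out of a batch are keepers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "photoreal portrait, studio lighting"  # placeholder prompt
images = pipe(prompt, num_images_per_prompt=10, height=512, width=512).images
for i, img in enumerate(images):
    img.save(f"candidate_{i:02d}.png")  # then cherry-pick and photoshop
```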
Just for graphic art/photo generative AI: it looks dangerous, but it’s NOT there yet, very far from it. Okay, so how about the auto-coding stuff from LLMs? Welp, it’s similar: the AI doesn’t know about the mistakes it makes, especially with specific domain knowledge. If we had an AI trained on specific domain journals and papers that actually understood how math operates, then it would be a nice tool, because like all generative AI stuff, you have to check the results and fix them.
The transition won’t be as drastic as you think; it’s more or less like other manufacturing: when the industry chases lower labour costs, local people will find alternatives. And look at how the creative/tech industry tried outsourcing to lower-cost countries; it’s really inefficient and sometimes costs more, with slower turnaround times. Now, if you post a job asking an artist to “photoshop AI results to production quality,” let’s see how that goes. I can bet 5 bucks the company is gonna get blacklisted by artists, and you’ll get only the really desperate or low-skilled, who give you subpar results.
- ABoxOfNeurons ( @ABoxOfNeurons@lemmy.one ) English5•1 year ago
Somehow the same artist:
- PenguinTD ( @PenguinTD@lemmy.ca ) English1•1 year ago
It’s like the Google DeepDream dogs, for the hands one. Lol, I’ve seen my fair share of anatomy “inspirations” when I experimented with posing prompts (then later learned there are 3D posing extensions). If it’s an uphill battle for more technical people like me, it would be really hard for artists. The ones I know that use Midjourney just think it’s fun, and not something really worth worrying about. A good hybrid of tools for fast prototyping/iteration with specific guidance rules would be neat in the future.
I.e., 3D DCC for base model posing and material selection/lighting -> AI generates stuff -> photogrammetry (pretty hard, because the AI doesn’t know how to generate the same thing from different angles, lol) to convert the generated images back to 3D models and textures -> iterate.
There are people working on other parts, like building replacement or actor replacement; I bet there are people working on the above as well.
- ABoxOfNeurons ( @ABoxOfNeurons@lemmy.one ) 3•1 year ago
I seriously doubt this technology will pass by without a complete collapse of the labor market. What happens after is pretty much a complete unknown.
- hglman ( @hglman@lemmy.ml ) 3•1 year ago
I think it’s fair to assert that society will shift dramatically. Though the climate will have as much to do with that as AI.
- FaceDeer ( @FaceDeer@kbin.social ) 1•1 year ago
Yup. We should start preparing ideas for how we’re going to deal with that.
One thing we can’t do is stop it, though. Legislation prohibiting AI is only going to slow the transition down a bit while companies move themselves to other jurisdictions that aren’t so restrictive.
- hglman ( @hglman@lemmy.ml ) 2•1 year ago
It will shift a lot of human effort from generation to review. For example, the core role of an engineer in many ways already is validation of a plan. Well, that will become nearly the only role.
- rustyspoon ( @rustyspoon@beehaw.org ) 3•1 year ago
the core role of an engineer in many ways already is validation of a plan.
I disagree; this implies that AIs are doing a lot more than they actually are. Before you design the physical layout of something, you have to identify a problem, and identify guidelines and empirical metrics against which you can compare your design to determine efficacy. This is half the job for engineers.
There’s one step of the design process that I see current AI completing autonomously (implementation), and I view it as nontrivial to get the technology working higher up on the “V”.
- hglman ( @hglman@lemmy.ml ) 1•1 year ago
Agreed. It’s more impactful on software than physical engineering (until robots can build more arbitrary objects), but that is my point: implementation is only a small part of the job.
- ABoxOfNeurons ( @ABoxOfNeurons@lemmy.one ) 1•1 year ago
That assumes that the classes of problems that AI’s can solve remains stagnant. I don’t think that’s a good assumption, especially given that GPT4 can already self-review and refine its output.
- hglman ( @hglman@lemmy.ml ) 1•1 year ago
It will take a very long time for people to believe and trust AI. That’s just the nature of trust. It may well surpass humans in all ways soon, but trust will take much more time. What would be required for an AI-designed bridge to be accepted without review by a human engineer?
- ABoxOfNeurons ( @ABoxOfNeurons@lemmy.one ) 1•1 year ago
We’ll probably see sooner or later.
This is my favorite perspective on AI and its impact. I am curious as to what your thoughts are.
- Lells ( @Lells@kbin.social ) 6•1 year ago
Comments are heavily focused on the title of the article and the opening paragraphs. I’m more interested in peoples’ takes on the second half of the article, that highlights how the goals companies are touting are at odds with the most likely consequences of this trend.
- ABoxOfNeurons ( @ABoxOfNeurons@lemmy.one ) 4•1 year ago
I see both sides.
They’re probably going to completely (and intentionally) collapse the labor market. This has never happened before, so there is no historical precedent to look at. The closest thing we have was the industrial revolution, but even that was less disruptive because it also created a lot of new factory jobs. This doesn’t.
The public hope is that this catastrophic widening of the gap between rich and poor will force labor to organize and take some of the gains through legislation, as an alternative to starving in the streets. Given that the technology will also make coercing people to work mostly pointless, there may not be as much pressure against it as there historically has been. Altman seems to be publicly thinking in this direction, given the early basic income research and the profit cap for OpenAI. I can’t pretend to know his private thoughts, but most people with any shred of empathy would be pushing for that in his shoes.
Of course, if this fails, we could also be headed for a permanent, robotically-enforced nightmare dystopia, which is a genuine concern. There doesn’t seem to be much middle-ground, and the train has no brakes.
The IP theft angle from the end of the article seems like a pointless distraction though. All human knowledge and innovation is based on what came before, whether AI is involved or not. By all accounts, the remixing process it applies is both mechanically and functionally similar to the remixing process that a new generation of artists applies to its forebears, and I’ve not seen any evidence that they are fundamentally different enough to qualify as theft, except in the normal Picasso sense.
Interesting times.
- Lells ( @Lells@kbin.social ) 2•1 year ago
…but most people with any shred of empathy would be pushing for that in his shoes.
Empathy? In late-stage capitalism? 😏
I mean, so… I’m a software engineer who used to specialize in automation. I ended up having a crisis of conscience decades back, realizing that I was putting people out of work. “Hey, good job on that project, our client can afford to let 30 people go now!” never really felt like great praise to me. It actually felt really really shitty knowing the work I was doing was making it possible for the “nobility” to further gain back control of the “serfs”.
I figured that the only way this could ever benefit society as a whole instead of shareholders and owners would be if we moved more to a society with things like UBI, with perhaps the people who end up getting something extra being the ones who actually DO the dirty jobs and provide actual worth to society, instead of becoming obscenely wealthy at the expense of empathy and good human spirit. Unfortunately, at least here in the states, anything that smacks of “socialism” automatically equals dictatorship (glossing over that capitalism offers just as many examples of being abused by the “ruling” class). So there’s the whole zeitgeist to battle against before the comfortable and less-informed majority will even listen to anything that’s in their best interest.
As you say, interesting times indeed. I’m not hopeful that we’ll see that sort of shift in my lifetime however, sigh…
Yes, the second half is where the conversation gets interesting, by far.
- Spzi ( @Spzi@lemm.ee ) English5•1 year ago
The article complains that the use of the word “hallucinations” is …
feeding the sector’s most cherished mythology: that by building these large language models, and training them on everything that we humans have written, said and represented visually, they are in the process of birthing an animate intelligence on the cusp of sparking an evolutionary leap for our species.
Whether that is true or not depends on whether we eventually create human-level (or beyond) machine intelligences. No one can read the future. Personally I think it’s just a matter of time, but there are good arguments for both sides.
I find the term “hallucinations” fitting, because it conveys to uneducated people that a claim by ChatGPT should not be trusted, even if it sounds compelling. The article suggests “algorithmic junk” or “glitches” instead. I believe naive users would refuse to accept an output as junk or a glitch; those terms suggest something is broken, although the output still seems sound. “Hallucinations” is a pretty good term for that job, and it’s also already established.
The article instead suggests the creators are hallucinating in their predictions of how useful the tools will be. Again no one can read the future, but maybe. But mostly: It could be both.
Reading the rest of the article required a considerable amount of goodwill on my part. It’s a bit too polemical for my liking, but I can mostly agree with the challenges and injustices it sees forthcoming.
I mostly agree with #1, #2 and #3. #4 is particularly interesting and funny, as I think it describes Embrace, Extend, Extinguish.
I believe AI could help us create a better world (in the large scopes of the article), but I’m afraid it won’t. The tech is so expensive to develop, the most advanced models will come from people who already sit on top of the pyramid, and foremost multiply their power, which they can use to deepen the moat.
On the other hand, we haven’t found a solution to the alignment and control problem, and aren’t certain we will. It seems very likely we will continue to empower these tools without a plan for what to do when a model actually shows near-human or even super-human capabilities, but can already copy, back up, debug and enhance itself.
The challenges to economy and society along the way are profound, but I’m afraid that pales in comparison to the end game.
- esc27 ( @esc27@kbin.social ) 4•1 year ago
Merits of the tech aside, it is amazing to see how many people are becoming Luddites in response to this technology, especially those in industries who thought they were safe from automation. I feel like there has always been a sense of hubris separating the creative industries from general labor, and AI is now forcing us to look in a computer-generated mirror and reassess how special we really are.
- NetHandle ( @NetHandle@kbin.social ) 4•1 year ago
I think there’s a problem with people wanting a fully developed, brand new technology right out of the gate. The cell phones of today didn’t happen overnight; it started with a technology that had limitations, and people innovated.
AI is a technology that has limitations, people will innovate it. Hopefully.
I think my favorite potential use case for AI is academics. Countless journal articles get published by students, grad students and professors, and the vast majority of those articles don’t make an impact. Very few people read them, and they get forgotten. Vast amounts of data, hypotheses and results that might be relevant to someone trying to do something good, important or novel, but that will never be discovered by that person. AI can help with this.
Of course there are going to be problems that come up. Change isn’t good for everyone involved, but we have to hope that there is a net good at the end. I’m sure whoever was invested in the telegraph was pretty choked when the phone showed up, and whoever was invested in the carrier pigeon was upset when the telegraph showed up. People will adapt, and society will benefit. To think otherwise is the cynical take on the same subject. The glass is both half full and half empty. You get to choose your perspective on it.
- brasilikum ( @brasilikum@slrpnk.net ) English4•1 year ago
In my opinion, both can be true and it’s not either one or the other:
ML has surprised even many experts, insofar as a very simple mechanism at huge scale is able to reproduce some aspects of human abilities. It does not seem strange to me that it also reproduces other human traits, like hallucinations. Maybe they are more closely related than we think.
Company leaders and owners are doing what the capitalist system incentivizes them to do: raise their company’s value by any means possible; call that hallucinating or just marketing.
IMO it’s the responsibility of government to make sure AI does not become another capital concentration scheme like many other technologies have, widening the gap between rich and poor.
- DudePluto ( @DudePluto@lemmy.ml ) English1•1 year ago
Agreed. Private-owned AI competing against humans for limited jobs in a capital based market is a nightmare.
Public-owned AI producing and providing for all is not.
AI was trained on the work of millions and is inhuman in its productive capabilities. It has no business being privately owned.
- Evoke3626 ( @Evoke3626@lemmy.fmhy.ml ) English1•1 year ago
What an awesome article, couldn’t agree more.
- kiku123 ( @kiku123@feddit.de ) 1•1 year ago
Thanks for sharing this article. I agree that those points mentioned are not possible for GenAI. It is a pipe dream that GenAI is capable of global governance, because it can’t really understand the implications of what it means. It’s a Clever Hans that just outputs what it thinks you want to see.
I think that with GenAI there are some job classes that are in danger (tech support continues to shrink for common cases, etc.), but mostly the entry-level positions. Ultimately, someone who actually knows what’s going on would need to intervene.
Similarly, for things like writing or programming, GenAI can produce okay work, but it needs to be prompted by someone who understands the bigger picture and can check its work. Writing becomes more like editing in this case, and programming becomes more like code review.
- Dadifer ( @Dadifer@kbin.social ) 2•1 year ago
I truly believe that multiple medical specialties will be taken over by AI.
- goldenbug ( @goldenbug@kbin.social ) 1•1 year ago
Assisted diagnosis? Yes… The rest? Not for many years.
- FaceDeer ( @FaceDeer@kbin.social ) 1•1 year ago
There have been studies that show patients already prefer the bedside manner of ChatGPT over human physicians, so that’s another thing we’ll likely see soon.
- ABoxOfNeurons ( @ABoxOfNeurons@lemmy.one ) 1•1 year ago