- IsoSpandy ( @IsoSpandy@lemm.ee ) English14•8 hours ago
I don’t get the AI hate sentiment. In fact I want AI to be so good that it steals all our jobs. Every single “worker” on the planet. The only job I don’t think it can steal is that of middle management, because I don’t think we have digitized data on how to suck your own dick. After everybody is jobless, then we would be free. We won’t need the rich. They can be made into a fine broth.
Sarcasm aside, I really believe we should automate all menial jobs, crunch more data, and make this world a better place, not steal creative content made by humans to make second-rate copies.
- Nightwatch Admin ( @nightwatch_admin@feddit.nl ) English1•2 hours ago
Had me in the first half, not gonna lie
- Sas [she/her] ( @Sas@beehaw.org ) English7•4 hours ago
The problem is that it will be the rich that are the owners of the AI that stole your job so suddenly we peasants are no longer needed. We won’t be free, we will be broth.
- IsoSpandy ( @IsoSpandy@lemm.ee ) English2•3 hours ago
Well you see, 100 people won’t be able to make soup out of trillions. But you know what a trillion people can do? Run the guillotine a hundred times.
- daniskarma ( @daniskarma@lemmy.dbzer0.com ) English6•4 hours ago
Then you have a choice.
Option 1. Halt scientific and technological progress and get robbed anyway, because if capitalists can’t get more money out of tech, they’ll get it out of making you work more hours for less money.
Option 2. End capitalism.
- daniskarma ( @daniskarma@lemmy.dbzer0.com ) English55•20 hours ago
The whole “all AI bad” attitude is disconnected from reality, and it’s primitivism.
John J. Hopfield’s work is SCIENCE in all caps. A decade of research during the 80s, when computational power couldn’t really do much with his models. And now those models have been shown to work really well given proper computational power.
Also, not all AI is generative AI that takes money out of fanfic artists’ pockets, or a useless hallucinating chatbot. Neural networks are commonly used in science as a very useful tool for many tasks. Image recognition is nowadays practically a solved problem thanks to this research. Protein folding. Dataset reduction. Fluent text-to-speech. Speech recognition… AI may be getting more traction nowadays because of generative AI (which also has its own merit, like it or not), but there is much more to it.
As with any technological advance, there are shitty use cases and good use cases. You cannot condemn a whole technology just for the shitty uses some greedy capitalists put it to. Well… you can condemn it. But then I will classify you as a primitivist.
Scientific theory that resulted in practical applications useful to people is why the Nobel Prize was created to begin with. So it is a well-given prize. More so than many others.
- Tja ( @Tja@programming.dev ) English6•7 hours ago
You sir or madam give me hope that there are still reasonable people on the internet. Well written.
Wait, are you an AI bot defending itself…?
- msage ( @msage@programming.dev ) English13•19 hours ago
‘AI hate’ is usually connected with insane claims like ‘we have a “reasoning” model’.
That shit needs to die in fire.
I’m still waiting for the full-planet weather model. That will be something.
- Godort ( @Godort@lemm.ee ) English64•22 hours ago
To be fair, the protein folding thing is legitimately impressive and an actual good use for the technology that isn’t just harvesting people’s creativity for profit.
- GreatAlbatross ( @GreatAlbatross@feddit.uk ) English42•21 hours ago
The way to tell so often seems to be whether someone calls it AI or Machine Learning.
AI? “I put this through chatgpt” (or “The media department has us by the balls”)
ML? “I crunched a huge amount of data in a huge number of ways, and found something interesting”
- keepthepace ( @keepthepace@slrpnk.net ) English4•19 hours ago
Actually, I endorse the fact that we are less shy about calling “AI” algorithms that do exhibit emergent intelligence and broad knowledge. AI used to be a legitimate name for the field that encompasses ML, and nowadays we have come to understand a lot of interesting things about intelligence thanks to LLMs, like the fact that training on next-word prediction is enough to create pretty complex world models, that transformer architectures are capable of abstraction, or that morality arises naturally when you try to acquire all the prerequisites for having a normal discussion with a human.