A little experiment to demonstrate how a large language model like ChatGPT can not only write, but also read and judge. That, in turn, could lead to an enormous scaling up of the number of communications that are meaningfully monitored, warns the ACLU, a civil liberties group.
The issue is that technology is advancing faster than wisdom.
I think it’s quite a bit more complicated than that. The wisdom is there: I’ve been to a large number of AI/ML ethics talks in the last several years, including entire conferences, but the people putting on these conferences and the people actually creating and pushing these models don’t always overlap. Even when they do, people disagree on how these principles should be implemented and on how much ethics really matters.
It’s usually more complicated than a catchphrase can convey, but I think this one comes pretty close.
Anyone with a credit card can get access to pretty powerful ML. But it’s harder to get a handle on the ethical implications, the privacy implications, and the ways a model can be inaccurate or biased. That requires caution and wisdom, which too few people have.
I know the basics of the area, probably more than the average person, but not enough to use ML safely and ethically in practical applications. So it’s probably too early to make powerful ML accessible to the general public, at least not without better safeguards built in.
This is not at all unique to AI. It reminds me of some of the samples from this track: https://www.youtube.com/watch?v=4Uu6mW3y5Iw (which is a sick beat), which I looked up and pasted here:
https://vocal.media/futurism/the-greada-treaty
Well stated, completely agreed.
@Gaywallet @Hirom Ethics be damned, because money talks. As usual, the problem is not the technology or understanding the potential issues per se, but how it all gets blatantly ignored in a get-there-first gold rush, real or imagined. We have to remember that the training of most LLMs is already very questionable from a copyright/authorship point of view, and companies try really hard to get everyone to ignore that. Because the winner takes it all.