A little experiment to demonstrate how a large language model like ChatGPT can not only write, but also read and judge. That, in turn, could lead to an enormous scaling up of the number of communications that are meaningfully monitored, warns the ACLU, a civil liberties group.
It’s usually more complicated than a catchphrase can convey, but I think this one is pretty close.
Anyone can get access to pretty powerful ML with just a credit card. But it’s harder to get a handle on the ethical implications, the privacy implications, and the ways the model can be inaccurate or biased. This requires caution and wisdom, which too few people have.
I know the basics in this area, probably more than the average person, but not enough to use ML safely and ethically in practical applications. So it’s probably too early to make powerful ML accessible to the general public, at least not without better safeguards built in.
This is not at all unique to AI. It reminds me of some of the samples from this track: https://www.youtube.com/watch?v=4Uu6mW3y5Iw (which is a sick beat), which I looked up and pasted here:
https://vocal.media/futurism/the-greada-treaty
Well stated, completely agreed.