A little experiment demonstrating how a large language model like ChatGPT can not only write, but also read and judge. That, in turn, could lead to an enormous scaling up of the number of communications that are meaningfully monitored, warns the ACLU, a civil liberties group.

  • The biggest ethical issues in AI/ML right now are primarily ones in which judgement is passed or facilitated via AI/ML. Judgement via surveillance is only one of many issues to be concerned about: judgement in healthcare, judgement about who gets access to resources, judgement in the legal system, and other forms of judgement are also extremely high-value and concerning targets of AI/ML. Using these models to identify, categorize, or otherwise quantify anything is generally a bad idea, because they are trained on fundamentally racist, sexist, homophobic, transphobic, ableist, ageist, and otherwise bigoted text, which is a direct representation of our existing society on the internet.

      • I think it’s quite a bit more complicated than that. The wisdom is there: I’ve been to a large number of AI/ML ethics talks in the last several years, including entire conferences, but the people putting on these conferences and the people actually creating and pushing these models don’t always overlap. Even when they do, people disagree on how these models should be implemented and how much ethics really matters.

        •  Hirom   ( @Hirom@beehaw.org ) 

          It’s usually more complicated than what a catchphrase could convey, but I think it’s pretty close.

          Anyone can get access to pretty powerful ML, just with a credit card. But it’s harder to get a handle on the ethical implications, the privacy implications, and the ways the model is inaccurate or biased. This requires caution and wisdom, which too few people have.

          I know the basics in this area, probably more than the average person, but not enough to use ML safely and ethically in practical applications. So it’s probably too early to make powerful ML accessible to the general public, not without better safeguards built in.

          • This is not at all unique to AI. It reminds me of some of the samples from this track: https://www.youtube.com/watch?v=4Uu6mW3y5Iw (which is a sick beat), which I looked up and pasted here:

            “In the meantime, a race of human-looking aliens contacted the U.S. Government. This alien group warned us against the aliens that were orbiting the Equator and offered to help us with our spiritual development. They demanded that we dismantle and destroy our nuclear weapons as the major condition. They refused to exchange technology citing that we were spiritually unable to handle the technology which we then possessed. They believed that we would use any new technology to destroy each other. This race stated that we were on a path of self-destruction and we must stop killing each other, stop polluting the Earth, stop raping the Earth’s natural resources, and learn to live in harmony.”

            https://vocal.media/futurism/the-greada-treaty

        • @Gaywallet @Hirom ethics are damned because money talks. As usual, the problem is not the technology or understanding the potential issues per se, but how it all gets blatantly ignored in a get-there-first gold rush, real or imagined. We have to remember that the training of most LLMs is already very questionable from a copyright/authorship POV, and companies try really hard to make everyone ignore it. Because the winner takes it all.