Networks in China and Iran also used AI models to create and post disinformation, but the campaigns did not reach large audiences

In Russia, two operations created and spread content criticizing the US, Ukraine and several Baltic nations. One of the operations used an OpenAI model to debug code and create a bot that posted on Telegram. China’s influence operation generated text in English, Chinese, Japanese and Korean, which operatives then posted on Twitter and Medium.

Iranian actors generated full articles that attacked the US and Israel, which they translated into English and French. An Israeli political firm called Stoic ran a network of fake social media accounts which created a range of content, including posts accusing US student protests against Israel’s war in Gaza of being antisemitic.

  • I already wrote one reply stating my main point. But whatever argument you come up with, I don’t think it will match the reality as viewed by AI researchers. If you give me specific, short questions, I’d be happy to engage in a discussion, with conditions on time.

    In any case, I won’t listen to metaphoric arguments like yours about guns, because metaphoric arguments are very difficult to handle scientifically. Every situation is different. I mean that anybody can always end the discussion by saying “that’s apples vs oranges,” and every time this happens you’d have no objective way to counter it.

    •  frog 🐸   ( @frog@beehaw.org ) 
      225 days ago

      The metaphoric argument is exactly on point, though: the answer to “bad actors will use it for evil” is not “so everybody should have unrestricted access to this really dangerous thing.” Sorry, but in no situation you can possibly devise is giving everyone access to a dangerous tool the correct answer to bad people having access to it.

      • I’d say it’s both on point and not. For the “not” part: you can ban guns in the UK, and it will be very difficult to smuggle one in from the continent. Problem solved. But the same is not true for AI. If the UK government bans AI, Russia can still deliver it over the internet.

        And then I could counter-argue that one, and then counter-argue this one as well. See what a mess metaphoric arguments bring.

        •  frog 🐸   ( @frog@beehaw.org ) 
          124 days ago

          Had OpenAI not released ChatGPT, making it available to everyone (including Russia), there are no indications that Russia would have developed their own ChatGPT. Literally nobody has made any suggestion that Russia was within a hair’s breadth of inventing AI and so OpenAI had better do it first. But there have been plenty of people making the entirely valid point that OpenAI rushed to release this thing before it was ready and before the consequences had been considered.

          So effectively, what OpenAI have done is start handing out guns to everyone, and is now saying “look, all these bad people have guns! The only solution is everyone who doesn’t already have a gun should get one right now, preferably from us!”