Apparently, stealing other people’s work to create a product for money is now “fair use,” according to OpenAI, because they are “innovating” (stealing). Yeah. Move fast and break things, huh?

“Because copyright today covers virtually every sort of human expression—including blogposts, photographs, forum posts, scraps of software code, and government documents—it would be impossible to train today’s leading AI models without using copyrighted materials,” wrote OpenAI in the House of Lords submission.

OpenAI claimed that the authors in that lawsuit “misconceive[d] the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence.”

  • That sounds like a great idea for making an intelligent agent inside a video game, where you control all aspects of its environment. But what about an AI that you want to be able to interact with our current shared reality? If I want to know something that involves synthesizing multiple modalities of knowledge, how should that information be conveyed? Do humans grow up inside test tubes, consuming only content that they themselves have created? Can you imagine the strange society we would have if people were unleashed upon the world without any shared experiences until they were fully adults?

    I think the OpenAI people have a point here, but where they go off the rails is in expecting all of this copyrighted material to be granted to them at zero cost and with zero responsibility to the creators of said content.