Greg Rutkowski, a digital artist known for his epic fantasy style, opposes AI art, but his name and style have been frequently used by AI art generators without his consent. In response, Stability AI removed his work from the training dataset for Stable Diffusion 2.0. However, the community has now created a tool that emulates Rutkowski's style against his wishes, using a LoRA model. While some argue this is unethical, others justify it on the grounds that Rutkowski's art was already widely used in Stable Diffusion 1.5. The debate highlights the blurry line between innovation and infringement in the emerging field of AI art.

    • That’s true, but only in the sense that theft and copyright infringement are fundamentally different things.

      Generating stuff from ML training datasets that included works without permissive licenses is copyright infringement, though, just as much as simply copying and pasting parts of those works would be. The legal definition of a derivative work doesn’t care about the technological details.

      (For me, the most important consequence of this sort of argument is that everything produced by Github Copilot must be GPL.)

        • By the same token, a human can easily be deemed to have infringed copyright even without cutting and pasting, if the result is excessively inspired by some other existing work.

        • AI doesn’t “learn” anything; it’s not even intelligent. If you show a human an artwork of a person, they’ll be able to recognize that they’re looking at a human: how their limbs and expressions work, what they’re wearing, the materials, how gravity should affect it all, etc. AI doesn’t and can’t know any of that, it just predicts how things should look based on images that have been put in its database. It’s a fancy Xerox.

          • Why do people who have no idea how something works feel the urge to comment on how it works? It’s not just AI, it’s pretty much everything.

            AI does learn, that’s the whole shtick, and that’s why it’s so good at stuff computers used to suck at. “AI” is pretty much just a buzzword; the correct abbreviation is ML, which stands for Machine Learning. The learning is right there in the name.

            AI also recognizes that it’s looking at a human! It can also recognize what they’re wearing, and the material. AI is also better at many, many things than humans are. It also sucks compared to humans at many other things.

            No images are in its database, you fancy Xerox.
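
            One rough way to check the “no images in its database” claim is to compare the size of the model’s weights with the size of its training set. The sketch below uses approximate, publicly reported figures for Stable Diffusion 1.x and the LAION data it was trained on; the exact numbers are not from this thread and are only order-of-magnitude:

            ```python
            # Back-of-the-envelope capacity argument: if the weights are
            # orders of magnitude smaller than the training set, the model
            # cannot be storing the images verbatim. Figures are approximate.
            model_params = 860_000_000       # ~860M U-Net parameters (SD 1.x)
            bytes_per_param = 2              # fp16 weights
            training_images = 2_000_000_000  # order of the LAION subset used

            model_bytes = model_params * bytes_per_param
            capacity_per_image = model_bytes / training_images
            print(f"~{capacity_per_image:.2f} bytes of weight capacity per training image")
            ```

            Under a byte of capacity per training image is why “copying from a database” is the wrong mental model, whatever one thinks of the ethics.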

            • And I wish that people who didn’t understand the need for the human element in creative endeavours would focus their energy on automating things that should be automated, like busywork, and dangerous jobs.

              If the prediction model actually “learned” anything, they wouldn’t have needed to add the artist’s work back after removing it. They had to, because it doesn’t learn anything, it copies the data it’s been fed.

              • Just because you repeat the same thing over and over it doesn’t become truth. You should be the one to learn, before you talk. This conversation is over for me, I’m not paid to convince people who behave like children of how things they’re scared of work.

    • Aside from all the artists whose work was fed into the models’ training sets without their permission. That art has been stolen, and is still being stolen. In this case very explicitly, because they outright removed his work, and then put it back when nobody was looking.

      • Let me give you a hypothetical that’s close to reality. Say an artist gets very popular, but doesn’t want their art used to teach AI. Let’s even say there’s legislation that prevents all of this artist’s work from being used in AI.

        Now what if someone else hires a bunch of cheap human artists to produce works in a style similar to the original artist, and then uses those works to feed the AI model? Would that still be stolen art? And if so, why? And if not, what is this extra degree of separation changing? The original artist is still not getting paid and the AI is still producing works based on their style.

        • Strictly speaking it wouldn’t exactly be stealing, but I would still consider it about equal to it, especially with regard to economic benefits. It may not be producing exact copies (which, strictly speaking, isn’t stealing either, but is violating copyright), but it’s exploiting a style that most people would assume means that specific artist made it, and thus depriving that artist of the benefit of people wanting art from that artist or in that style.

          Now, I’m not conflicted about people who have made millions off their art having others make imitations or copies; those people live more than comfortably enough. But in your example there are still other human artists benefiting, which is not the case for computationally generated works.

          It’s great for me to be able to have computers create art for a DnD campaign or something, but I still recognize that it’s making it harder for artists to earn a living from their skills. To a certain degree it also means that people who never would have had any such art now can. It’s in many ways like piracy, with the same ethical framing. And as with piracy, it may be that people who use AI to make them art become greater “consumers” of art made by humans as well, paying it forward. But it may also not work exactly that way.

            • Now you’re making a strawman. Other humans who are actually making art generally don’t fully copy a specific style; they draw inspiration from different sources, and that amalgamation is their style.

              Your comment reads as bad-faith to me. If it wasn’t meant as such you’re free to explain your stance properly instead of making strawman arguments.