I’ve generally been against giving AI works copyright, but this article presented what I felt were compelling arguments for why I might be wrong. What do you think?

  • One question I have is that if two people use the same prompt, do they get the same result?

    If they do, how could that result be copyrighted? I could just as well reproduce the prompt, producing an original “copy”.

    If they don’t produce the same result, well, it’s not the human who’s really doing the “original” part there, which is what copyright aims to protect, right?

    On the other hand, if I write an original comic book story and use AI as a tool to create the pictures, that, in my opinion, could be worth copyright protection. But then it’s the same as any original story: it’s not really the pictures that are protected.

    (And let’s not forget that AIs are mostly just fed stolen works; that needs to be solved first and foremost.)

    • I think your understanding of AI art tools is a bit limited. It’s not all based solely on prompts. Prompts are part of most AI art, but they’re not the only part of it. There are things like inpainting, outpainting, img-to-img, outside guidance (ControlNet for SD), LoRAs, etc.

      Hell, that doesn’t even get into doing touch-ups in other photo-manipulation software, where you can maybe get a general gist from the generator and then draw over the output to get it closer to your real vision. Right now most people are only talking about the most bottom-of-the-barrel stuff.

      Even though the user above hates that people are comparing photography to AI art, the amount of “effort” required for the most bottom-tier stuff (posting a selfie of you doing duck lips or some other stupid trend) is at a similar level. No one is arguing photographers don’t put a ton of skill and knowledge into their work, but it seems unfair that we only compare the most shit-tier AI art to the truly artistic end of photography instead of equating it to taking a picture of your food. Yes, there is some level of effort to it, but people act like AI art requires 0 effort. You put as much effort into it as you would anything else.

      A use case I would love to see as the tech advances: we are seeing a ton of CGI in traditionally animated shows. Wouldn’t it be better to train a model on that specific character, create the original scene in CGI, run an AI art pass frame by frame so it looks far closer to the traditional style, and then have normal artists touch up the scene, which they already do with CGI?

      I will also note that since SD 2.0 they have respected robots.txt (ignoring it prior to that wasn’t great).

      • You put as much effort into it as you would anything else.

        Copyright is not meant to reward effort. This is a common misconception. Thirty years ago there was a landmark SCOTUS case, Feist v. Rural (1991), about copyrighting a phone book. Back then, collecting and verifying phone numbers and addresses took a tremendous amount of effort. Somebody copied the phone book wholesale, and its creators argued that their effort should be rewarded with copyright protection.

        The courts shot that down. Copyright is not about effort, it’s about creative expression. Creative expression can require major effort (Sistine Chapel) or take very little effort (duck lips photo). Either way, it’s rewarded with a copyright.

        Assembling a database is not creative expression. Neither is judging whether an AI-generated work is suitable, nor pointing out what you’d like to see in a new AI-generated work. So no matter how much effort one puts into those activities, they are not eligible for copyright.

        To the extent that an artist takes an AI generated work and adds their own creative expression to it, they can claim copyright over the final result. But in all the cases in which AI generated works were ruled ineligible, the operator was simply providing prompts and/or approving the final result.

          • You make a fair point, but as I said, people keep focusing only on prompts. So when people see the takeaway “Oh, AI art isn’t copyrightable,” it means very little, since no one in an actual industry is going to ship a raw render (well, they shouldn’t). That is why I pointed out the other tools in AI art: inpainting and img-to-img can easily make or break how much we influence the AI. You are still using the AI for everything besides scribbling in Paint on top of your raw output and then re-rendering it.

            Most of these court cases are primarily about prompt-only images. So yes, it’s great we have the line of “well, duh, if an artist does sufficient touch-ups it’s copyrightable again,” but the question becomes what counts as sufficient. Would it be sufficient if you still used mostly AI tools, especially ones that give the user far more control over what they are rendering? (I’m going to focus mostly on SD, since it gives far more versatility than most of its competitors.) Say you only use image-to-image: you feed in your own art, it does its thing, and bing bang boom, is that copyrightable? Img-to-img alone probably is a low bar and may not get over the hump, but inpainting probably has a far better chance, since it can factor in many of the things the “curator” wants.

            We are at a time when people are very hot on this topic. I just feel some artists are going a bit too insane with this. I understand their anxieties, but it’s quite easy to lose sight of things and make draconian demands about this future, especially for the ones suggesting that you can copyright a style. Such suggestions are asinine and will hurt everyone, including themselves.

          • The question “what is sufficient” basically amounts to convincing an official that the final work reflects some form of your creative expression.

            So for instance, if you are hired to take AI-generated output and crop it to a 29:10 image, that probably won’t be eligible for copyright. You aren’t expressing your creativity, you are doing something anyone else could do.

            On the other hand, if you take AI-generated output and edit it in photoshop to the point that everyone says “Hey, that looks like a ThunderingJerboa image”, then you would almost certainly be eligible for copyright.

            Everyone else falls in between, trying to convince someone that they are more like the latter case. Which is good, because it means actual artists will be rewarded.

          • These are my thoughts as well. It seems obvious that putting in ‘cat with a silly hat’ as a prompt is basically the creative equivalent of googling for a picture.

            But, as you say, that sort of AI usage is just dumb, bottom-tier usage. Someday there’s going to be a major, critical piece of art that heavily uses AI assistance in its creation, and people are going to be surprised that it’s somehow not copyrightable under the laws and rulings they’re working on now.

            I remember in the LOTR behind-the-scenes features they talked about how WETA built game-like software to simulate the massive battle scenes, giving each soldier a series of attacks and HP, etc. They then used this to build out the final CGI.

            Stuff like that has already been going on for ages, and it’s only going to get murkier as to what ‘AI art’ even means and how much human creativity and editing added to the process makes a work human-created rather than AI-created.

            • A hand-coded, in-house simulation is the definition of creative expression. Using a product to essentially Google an image is not. I don’t see how this is a hard distinction.

              A perfect example is Corridor Crew’s second anime battle video, or Joel’s animations on YouTube. There is more to those than just using an AI to get an image or short video.

              • The distinction is that you need to actually use the product in a realistic use case before you can pass judgement on its use. You’re judging cameras on the basis of selfies. This thread is full of explicit examples of how actual artists use this to assist in their work. Please read that.


    • One question I have is that if two people use the same prompt, do they get the same result?

      The process is deterministic, but it does not rely only on the prompt (or other forms of conditioning): the model itself has to be the same, then the PRNG seed and, to varying degrees, sampler parameters such as the step count, the image resolution, and the guidance scale (meaning, roughly, “how much the process should weight your prompt over the model”).

      The more detailed the prompt is, the less creativity the process will show and, at least in my book, the more can be attributed to the human. What can also be attributed to the human: sifting through multiple seeds with a single prompt until you find one that’s just right, coming up with various process pipelines, and creating input for the model, such as depth maps.

      Generally speaking, you shouldn’t liken the human input to the process to painting, but to art direction. If I tell a painter “draw me a tiger in a forest”, no court will ever give me copyright to the image; if I write a whole novella describing the image and influence the artist’s output sufficiently, I’d get acknowledged as a co-creator. AIs not being able to hold copyright themselves, once you hit that co-creator threshold you should have sole copyright over the work.

      …and that’s nothing new, that’s just how copyright works. In the Anglosphere you have the sweat-of-the-brow doctrine, and coming up with the sentence “yo draw me a hot chick with big tiddies” does not produce sufficient amounts of sweat; in the continental tradition it’s the threshold of originality, and no, that instruction was not sufficiently original.
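
      A minimal sketch of the determinism point above (this is a toy, not a real diffusion model; the function name, shapes, and the CRC-based key derivation are made up for illustration): the output is a pure function of prompt, seed, and sampler settings, so two users reproduce the same image only when every one of those inputs matches.

```python
import zlib

import numpy as np

def toy_generate(prompt, seed, steps=20, guidance=7.5):
    # Derive one PRNG state from everything a real sampler would
    # consume: prompt conditioning, seed, and sampler settings.
    key = zlib.crc32(f"{prompt}|{seed}|{steps}|{guidance}".encode())
    rng = np.random.default_rng(key)
    return rng.standard_normal((8, 8))  # stand-in for the output image

a = toy_generate("tiger in a forest", seed=1234)
b = toy_generate("tiger in a forest", seed=1234)
c = toy_generate("tiger in a forest", seed=9999)
print(np.array_equal(a, b))  # True: identical inputs reproduce exactly
print(np.array_equal(a, c))  # False: changing only the seed diverges
```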

        • That wouldn’t make sense as the output is derivative of the prompt.

          It would also mean that if you take a picture with your phone you could only claim copyright to the RAW image, not the final output, as an AI has done its colour grading and denoising stuff all over it.

          And as far as I understand it that’s also the position of (US) courts: Sufficient creative input before and/or after the mechanical part.

    • Never used image generators, but usually these generative AIs don’t give the same results due to the random number generator used in the implementation.

      If they do, in theory, you might be able to copyright the prompt if it’s fairly complex.

    • if two people use the same prompt, do they get the same result?

      Usually the input includes a very large random number (the seed) that is impossible to guess, so two people effectively never run the same full input, and will get different outputs.

      There are some tools that let you specify a fixed number instead of a random one, and in that case yes the output would be the same. But that’s not the norm.

      Also, the text prompt is only a small part of a much larger input. For example, if you were to use Stable Diffusion to generate comic book images, your prompt might say “cat sits on a hill looking over the sunset”, but the style of the drawing comes from gigabytes of model weights, plus any fine-tuned checkpoints or embeddings you’ve chosen. You might also be happy with the cat sitting on the hill but not the sunset, and can select the sunset in the image and have the model redraw just that region with a different prompt, leaving the cat and hill untouched.

      I’ve been working on and off for the last month on a single image. AI doesn’t mean the human does no work at all - especially if you want a specific result.
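
      The “redraw only the sunset” step can be sketched in miniature too (again a toy, with no real model behind it; the array shapes are arbitrary): inpainting splices newly generated pixels into the masked region only, so everything outside the mask survives from the previous result untouched.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

image = np.zeros((4, 4))             # stand-in for the previous render
mask = np.zeros((4, 4), dtype=bool)
mask[0:2, 0:2] = True                # region the user selected to redraw

regenerated = rng.standard_normal(image.shape)  # stand-in for new model output
result = np.where(mask, regenerated, image)     # splice only the masked region

print(np.array_equal(result[~mask], image[~mask]))      # True: untouched outside the mask
print(np.array_equal(result[mask], regenerated[mask]))  # True: replaced inside the mask
```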

        • There absolutely is. It’s their process of developing a prompt.

          Compare it to painting a picture by hand versus paint-by-number. Okay, sure, technically you can go out and get a paint-by-number Starry Night for $20 and paint something approximating it yourself. That doesn’t mean you can paint it by hand, or even that you can now create your own paint-by-number canvas; it just means someone gave you instructions on where to put paint with a brush to get something similar. Obviously you’re not copying the brush strokes and the exact amount and type of paint used in each one, so it’s probably not like an excellent forgery, but if we applied the same idea to ‘traditional’ digital art, it would be.

          Record my keystrokes and mouse movements while I make something and repeat them and you’ll get the same thing.

          There is a world of difference between being able to take someone else’s progress in prompt development and throw it into a generator and being able to develop that prompt in the first place.

          It’s the process of selection, iteration, and gradual prompt adjustment that actually constitutes the creative process of using AI art, as well as the actual traditional art techniques that go into modifying the input or creating a base to alter your images from.

    • One question I have is that if two people use the same prompt, do they get the same result?

      No. The entire process is explorative and iterative, and prompts are not at all straightforward. One of my favorite prompts for framing an object that I want to use as an asset is ‘surrounded by mushrooms’, which has nothing at all to do with mushrooms.

      AI art is largely about thinking about the context of what you’re looking for when it comes to digital media. It’s not at all like a star trek replicator that gives you what you ask for, it’s more like digging through a confused alien robot’s fever dream.

      Personally, I absolutely find my experience of using it comparable in terms of effort and creativity to my use of photography and other mediums. It also nearly always involves actually manually drawing and editing things in my use case.

      I think people who are totally sure they know what using AI art is like should go try it to make something actually usable. It’s one thing to use AI prompts to make silly pictures, it’s another to try to use them to generate specific usable assets that you can adjust and reiterate to get what you want out of them.

      • The more specific the prompt, the more uniform the output; the less specific, the less uniform. The models are large enough that it’s little different from offering the same prompt to human artists. The idea was to mimic the input–output flow of human interpretation using massive back catalogs of interpretations made by humans.

        Humans are, essentially, stable diffusion engines who take everything they’ve read, seen, and learned and apply it to the task at hand. Some are better than others. Some, rather few, have the ability to create exceptionally refined and nuanced versions, and we get the classics. Even fewer can extrapolate from the existing human data set, or set it aside to produce results in unexpected ways, and we get the avant-garde… which then gets folded into the “data set” for future humans.

        To quote Mel Brooks (or his writing team):

        Dole Office Clerk : Occupation?

        Comicus : Stand-up philosopher.

        Dole Office Clerk : What?

        Comicus : Stand-up philosopher. I coalesce the vapors of human experience into a viable and meaningful comprehension.

        Dole Office Clerk : Oh, a bullshit artist!