Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves

    • It’s actually not as easy as you think; it “looks” easy because all you’ve seen is the result of survivorship bias. Like Instagram people, they don’t post their failed shots. Seriously, go download a Stable Diffusion model, try inputting your prompts, and see how well you can actually direct the AI to produce what you want — it’s fucking work, and I bet a good photographer with a good model and a director can do whatever you want, and faster (even with green screen, etc.).

      I dabbled in Stable Diffusion a bit to see what it’s like. On my machine (16 GB VRAM), a 30-image batch only yields maybe 2–3 results that are considered “okay”, and those still need further Photoshopping. And we’re talking about a resolution so low most games can’t even use it as a texture (slightly bigger than 512x512, so usually mip 3 for a modern game engine). And that was already with the most popular photoreal model people have mixed together (now consider how much time people spent training that model to that point).
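The “mip 3” remark above can be sanity-checked with a quick calculation. Assuming a 4096x4096 base texture (a common target, though engines and assets vary), each mip level halves the resolution, so a 512x512 image only fills level 3 of the chain:

```python
import math

def mip_level(base_size: int, image_size: int) -> int:
    """Which mip level of a base_size texture an image of image_size fills.

    Each mip level halves the resolution, so the level is
    log2(base_size / image_size). Assumes power-of-two sizes.
    """
    return int(math.log2(base_size // image_size))

# A 512x512 generation measured against a 4096x4096 base texture:
print(mip_level(4096, 512))  # 3 — only the fourth-from-top mip is covered
```

In other words, a full-resolution asset would need an image 8x larger in each dimension than what the batch produced.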

      Just for graphic art/photo generative AI: it looks dangerous, but it’s NOT there yet, very far from it. Okay, so how about the auto-coding stuff from LLMs? Welp, it’s similar: the AI doesn’t know about the mistakes it makes, especially with specific domain knowledge. If we had an AI trained on domain-specific journals and papers, and it actually understood how math operates, it would be a nice tool, because like all generative AI output, you have to check the results and fix them.

      The transition won’t be as drastic as you think. It’s more or less like other manufacturing: when the industry chases lower labour costs, local people find alternatives. And look at how the creative/tech industries tried outsourcing to lower-cost countries — it’s really inefficient and sometimes costs more, with slower turnaround times. Now, post a job asking an artist to “photoshop AI results to production quality” and let’s see how that goes. I bet 5 bucks the company gets blacklisted by artists, and you’re left with the really desperate or low-skilled, who give you subpar results.

        • It’s like the Google DeepDream dogs, but for hands. lol. I’ve seen my fair share of anatomy “inspirations” while experimenting with posing prompts (then I later learned there are 3D posing extensions). If it’s an uphill battle for more technical people like me, it’ll be really hard for artists. The ones I know who use Midjourney just think it’s fun, not something really worth worrying about. A good hybrid of tools for fast prototyping/iteration with specific guidance rules would be neat in the future.

          i.e. a 3D DCC for base-model posing, material selection, and lighting -> AI generates images -> photogrammetry (pretty hard, because the AI doesn’t know how to generate the same thing from different angles, lol) to convert the generated images back into 3D models and textures -> iterate.

          There are people working on other parts, like building replacement or actor replacement; I bet there are people working on the above as well.
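The loop described a couple of paragraphs up could be sketched as below. Every function here is a hypothetical stand-in — for a DCC export (e.g. from Blender), a diffusion pass, and a photogrammetry solve — not a real API; the point is only the shape of the iteration:

```python
# Hypothetical pipeline sketch: each stage is a placeholder for a real tool.

def pose_and_light(scene: str) -> dict:
    # Stand-in for exporting posed geometry, materials, and lighting
    # from a 3D DCC as guidance renders.
    return {"scene": scene, "guidance_renders": []}

def ai_generate(setup: dict, n_views: int = 8) -> list:
    # Stand-in for a diffusion pass guided by the DCC renders,
    # producing one image per camera angle.
    return [f"{setup['scene']}_view_{i}.png" for i in range(n_views)]

def photogrammetry(images: list) -> dict:
    # Stand-in for solving the generated views back into a mesh
    # and textures. (As noted above, cross-view consistency is the
    # hard part — diffusion output rarely matches between angles.)
    return {"mesh": "mesh.obj", "textures": images}

def iterate(scene: str, rounds: int = 3) -> dict:
    # DCC -> AI -> photogrammetry, repeated until the asset converges.
    asset = None
    for _ in range(rounds):
        setup = pose_and_light(scene)
        views = ai_generate(setup)
        asset = photogrammetry(views)
    return asset

print(iterate("character_scene"))
```

The interesting engineering problem is the photogrammetry stage: it only works if the generated views agree with each other, which is exactly what current models are worst at.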

    • Yup. We should start preparing ideas for how we’re going to deal with that.

      One thing we can’t do is stop it, though. Legislation prohibiting AI will only slow the transition down a bit while companies move to other jurisdictions that aren’t so restrictive.