cross-posted from: https://lemmy.ml/post/2811405

"We view this moment of hype around generative AI as dangerous. There is a pack mentality in rushing to invest in these tools, while overlooking the fact that they threaten workers and impact consumers by creating lesser quality products and allowing more erroneous outputs. For example, earlier this year America’s National Eating Disorders Association fired helpline workers and attempted to replace them with a chatbot. The bot was then shut down after its responses actively encouraged disordered eating behaviors. "

  • The academic name for the field is quite literally “machine learning”.

    You are incorrect that these systems are unable to create or be creative; you are correct that creativity != consciousness (which is an extremely poorly defined concept to begin with…), and you are partially correct about how the underlying statistical models work. What you’re missing is that by defining a probabilistic model over concepts you get something that can “think”/“be creative”: these models don’t need to have seen a “blue hexagonal strawberry” in order to work out what that phrase might mean and imagine what it looks like (see the sketch at the end of this comment).

    I would recommend this paper for further reading on the topic, and would point out that you are again correct that existing AI systems are far from human level on the proposed challenges, but they are inarguably able to “think”, “learn”, and “creatively” solve those proposed problems.

    The person you’re responding to isn’t trying to pick a fight; they’re trying to show you that you have bought, whole cloth, into a logical fallacy and are being extremely defensive about it to your own detriment.

    That’s nothing to be embarrassed about; the “LLMs can’t be creative because nothing is original, so everything is a derivative work” line is part of a dedicated propaganda effort to further expand copyright and consolidate capital.
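
    To make the “blue hexagonal strawberry” point concrete, here is a minimal toy sketch (mine, not taken from any particular system; the attribute names and data are made up): a factorized probabilistic model over object attributes can assign meaning and probability to a combination that never appears in its training data, because it composes learned pieces instead of recalling memorized wholes. Real generative models are vastly more complex, but the compositional principle is the same.

    ```python
    # Toy sketch only: NOT how an LLM or diffusion model is implemented.
    # A factorized categorical model over (color, shape, object) attributes.
    from collections import Counter

    # Hypothetical "training data": combinations the model has actually seen.
    observations = [
        ("red", "round", "strawberry"),
        ("blue", "round", "blueberry"),
        ("yellow", "hexagonal", "tile"),
        ("red", "hexagonal", "tile"),
        ("blue", "square", "box"),
    ]

    colors  = ["red", "blue", "yellow"]
    shapes  = ["round", "hexagonal", "square"]
    objects = ["strawberry", "blueberry", "tile", "box"]

    def fit(values, vocab):
        """Categorical distribution with add-one smoothing over a vocabulary."""
        counts = Counter(values)
        total = len(values) + len(vocab)
        return {v: (counts[v] + 1) / total for v in vocab}

    p_color  = fit([o[0] for o in observations], colors)
    p_shape  = fit([o[1] for o in observations], shapes)
    p_object = fit([o[2] for o in observations], objects)

    def score(color, shape, obj):
        """Probability the factorized model assigns to an attribute combination."""
        return p_color[color] * p_shape[shape] * p_object[obj]

    # Never observed together, yet the model still "imagines" it with nonzero probability:
    print("blue hexagonal strawberry (unseen):", score("blue", "hexagonal", "strawberry"))
    print("red round strawberry (seen):       ", score("red", "round", "strawberry"))
    ```

    The numbers themselves are meaningless; the point is only that nothing forces the model’s probability mass to zero on combinations it has never observed, which is what lets it “imagine” them at all.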