• It’s my dream that AI takes over middle management and bureaucracy as a whole, and we get rid of all the societal evils that come from corrupt or incompetent management in both governments and companies. Imagine if every single working person had zero ambiguity in their job and complete clarity on when they have to work, and on what. The world would be so much happier!

    • Ideally it would help put real, complicated but achievable solutions forward to some of the world’s toughest issues, like poverty, hunger, war and disaster. AI is but a tool, and on its current trajectory, much of its use advances the interests of capitalist moguls. For improved AI models to yield answers that achieve the ideal of a harmonious world, we need to start by changing our society to work towards it, and accept a shift away from purely monetary ends.

      • Any problem that can be expressed mathematically, that has a huge search space, and where human intuition doesn’t necessarily help.

        For example, if a computer can solve chess, then that same line of programming should be able to solve quantum physics and gravity.
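        The kind of problem described above can be illustrated with exhaustive game-tree search. Below is a toy minimax solver for tic-tac-toe, a game small enough to enumerate completely; chess is the same idea with an astronomically larger state space, which is exactly why it demands far smarter search than this sketch:

```python
# Toy illustration of exhaustive search over a game tree: a minimax
# solver for tic-tac-toe. The whole state space is enumerated, so the
# result is provably optimal for both players.
from functools import lru_cache

# All eight winning lines on a 3x3 board stored as a 9-character string.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X', 'O', or None for a board given as a 9-character string."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Game value with perfect play: 1 = X wins, -1 = O wins, 0 = draw."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    if ' ' not in board:
        return 0
    values = [
        minimax(board[:i] + player + board[i+1:], 'O' if player == 'X' else 'X')
        for i, cell in enumerate(board) if cell == ' '
    ]
    return max(values) if player == 'X' else min(values)

print(minimax(' ' * 9, 'X'))  # 0: perfect play from the empty board is a draw
```

        The memoization (`lru_cache`) is what keeps this tractable; without it the same positions are re-searched over and over, and for chess even memoized enumeration is hopeless.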

      • I just read the first two chapters. Yes, it doesn’t paint a pretty picture but the dystopia portrayed in that story started with Manna being an unregulated monopoly that was given power over everything.

        In real life it perhaps won’t go that far. All decisions would still be made and signed off by humans; AI would just be the planner/scheduler. And no tech services firm would want to get into employability tracking; they’d quickly get chewed out by regulators if their AI product started discriminating against candidates in hiring.

  • Labour-less advancement. Humans pass down centuries of advancement through language.

    • Version 1: Rough automation of individual labour steps, with the input specified in restrictive detail.
    • Version 2: Multiple labour steps compiled together, with input restricted to a set of choices. (Less supervisory command needed.)
    • Version 3: Task automation. (Communicate a wish and the whole compilation of labour is taken care of; no supervision of production, only demands on the output.)

    Automation has gone from “multiple coffee gadgets” to “one standard coffee button” to “a request for a warm coffee with less sugar”. (Now, where is humanity’s place in this picture? A balloon-shaped human like in the Wall-E movie?)
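    The three versions can be sketched as code; every function name and parameter below is invented purely for illustration:

```python
# Hypothetical sketch of the three automation "versions" from the coffee
# analogy. All names and parameters are invented for illustration.

def brew_v1(grind_secs, water_ml, temp_c, sugar_g):
    """Version 1: the human specifies every labour step in restrictive detail."""
    return f"coffee({water_ml}ml @ {temp_c}C, {sugar_g}g sugar, ground {grind_secs}s)"

def brew_v2(preset):
    """Version 2: labour steps pre-compiled; the human picks from restricted choices."""
    presets = {
        "standard": dict(grind_secs=20, water_ml=250, temp_c=92, sugar_g=10),
        "strong":   dict(grind_secs=30, water_ml=200, temp_c=94, sugar_g=5),
    }
    return brew_v1(**presets[preset])

def brew_v3(wish):
    """Version 3: the human states a wish; the machine compiles the labour itself."""
    params = dict(grind_secs=20, water_ml=250, temp_c=92, sugar_g=10)
    if "warm" in wish:
        params["temp_c"] = 60      # "warm" rather than piping hot
    if "less sugar" in wish:
        params["sugar_g"] //= 2
    return brew_v1(**params)

print(brew_v3("a warm coffee with less sugar"))
# coffee(250ml @ 60C, 5g sugar, ground 20s)
```

    Each version exposes a smaller, more human-shaped interface while hiding more of the labour underneath, which is the whole point of the analogy.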

    • Aight, so I’ve been holding off on joining the conversation since I generally disagree with most of the negative sentiment towards them. But for real, you think they’re worthless? Legit, at this present moment they’ve got so much immediate value; how much have you used them?

      I’ve pulled tremendous value from them. In my personal life, GPT-4 walked me through developing a Kotlin android app for my smart watch so that I could have access to it more easily and conveniently. It’s provided me guidance and knowledge, even teaching me German and Spanish and holding practice conversations with me. At work, it’s helped me write programs to improve my productivity, taught me how to use software like Excel, and just overall helps me be more capable.

      And all that is just one person’s value from it. Just imagine what value it’s creating right now for the millions who use it. Just imagine what it could do in the hands of innumerable virtuous and malicious individuals. It is so far from worthless.

          • FYI: It’s the same with German. I think you’re quite alright with the ‘big’ languages. I didn’t spend much time with ChatGPT, but even some smaller language models speak multiple languages well enough. I tend to use English; I think the sentences are a bit more expressive and nuanced. But with ChatGPT that’s probably barely noticeable.

      • I will admit I’ve never used them. I’m not keen on providing my email address to hucksters for purposes of signing up, and they won’t accept a disposable email address. At least not one I’ve been able to find.

        I’ll be honest, though. Running into someone extolling the benefits of LLM’s, I wonder if they have ulterior motives. A lot of the cryptobros are now jumping ship from the blockchain bandwagon to the AI bandwagon. (Because the blockchain bubble has partially burst now and the AI bubble is still going strong.)

        With cryptocurrencies or NFT’s, anyone telling you it was the best thing ever was always misrepresenting their own gains and telling lies about the capabilities of blockchain. Maybe they were themselves deluded, but the ultimate motivation to extol the benefits of blockchain was not actual benefits, but rather that the extoller was invested. If they could be convincing enough and their audience believed them and invested, the value of the extoller’s investment would go up.

        Now, LLM’s are known to hallucinate. And very confidently and convincingly. None of the content of what LLM’s produce can be trusted for factual accuracy. LLM’s as a technology are just not suitable for producing factual output and will always be inferior to platforms like StackOverflow or… what Reddit used to be.

        So, what you’ve claimed ChatGPT has helped you with: software development, language acquisition, and learning how to use software (Excel specifically). I really hope you’re not just copying programs out of ChatGPT and using those programs at work without auditing them first. If you have the skills to vet code, then what do you need ChatGPT for? And would plain-old Google not do a better job? And for learning Excel as well?

        And as others have said, I wouldn’t trust any language learning I got from ChatGPT.

        > Just imagine what it could do in the hands of innumerable virtuous and malicious individuals.

        So, when Beanie Babies were at the height of their economic bubble, people were robbing stores and engaging in fist fights to get them. I very much believe that the hype around AI lately is causing a lot of terrible things. Big companies are publicly announcing they’re “replacing jobs” with AI. I think some of those cases are just big corporations finding dumb ways to put positive PR spins on “we’re laying off a lot of people” without actually intending to replace them with AI. I think some big businesses are actually swept up in the hype and think “replacing people with AI” is actually going to work out for them. Maybe some companies are somewhere in the middle: laying people off with the intention of getting them back on a part-time contracting basis for lower pay as “editors” of content output by ChatGPT. But really they’ll be doing the same job, just less efficiently and for lower pay.

        Again, look at the effect Beanie Babies had on the world. And that proved to have been a worthless nothing burger all along. The effects the AI hype is having on the world are no proof that it’s anything other than a worthless lie-generating machine.

        • My ulterior motives are the same as yours: convey a strong opinion. It’s not like making others as optimistic about this as I am will change anything. Even if we both agreed to forget the concept, the cat is out of the bag: open-sourced LLMs are getting better and access is getting cheaper. Everyone is impotent to stop what’s about to happen; it’s as futile as trying to stop the torrenting of copyrighted media. And the more advanced they become, the fewer people need to be involved to make a large impact with them.

          Also to clarify, ChatGPT and GPT-4 are two different AIs with different capabilities. I used GPT-4, the better AI. There are many different LLM AIs out there now with varied strengths, weaknesses, and attitudes. ChatGPT is old news, so please don’t use it as your sole resource to judge LLMs (especially as you haven’t used it yet).

          The language it’s taught me is valid; the programs are successful. I have programming knowledge, but its expertise often surpasses my own, making it an invaluable resource. There is remarkably limited risk in using it, as the tools are limited in scope and I am not a programmer by profession (it just helps); in any case, it mostly writes secure code and is only getting better over time. I imagine its kind will soon take over this domain entirely as their context limits and capabilities continue to grow.

          You’re probably seeing so many crypto bros liking this AI because they’re much more risk tolerant than the average person. These AIs are as much a risk as they are an opportunity. While I am optimistic, I fully recognize that things could go horribly wrong.

          With an opinion as strong as yours, I only ask that you look into it more before being so confident in your dismissal. At least try it out first before you denounce it as worthless and disregard the experiences of others.

  • Currently the obvious use is to help people express their thoughts in words. It’s helped me a lot writing out resumes and cover letters. This can be extended to languages other than our main/first one.

    It’s also great to narrow down research on a personal scale in areas where if you have no expertise it would be very hard for you to figure out what you are looking for. I’ve used it to ID plants, insects and diseases successfully. I didn’t get a precise result from ChatGPT, but that’s not what I asked. I just requested pointers in the right direction. It delivered.

    The next obvious implementation is with software interfaces. I’ve already used it (unsuccessfully) to work with Unreal Engine and other 3D software. I got half-baked results because the models were not trained specifically for the software in question. But if they were, it would be very easy to just ask the software how to do something instead of searching everywhere for potential answers. That doesn’t sound too far-fetched, and I heard it’s a feature that will become standard.
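    One plausible shape for such a feature is retrieval: score the software’s documentation snippets against the user’s question, then hand the best match (plus the question) to a model. Here is a minimal, hypothetical sketch of just the retrieval half; the snippets and the word-overlap scoring are invented for illustration, and the model call itself is left out.

```python
# Minimal sketch of the retrieval half of an "ask the software" feature:
# score documentation snippets by word overlap with the user's question,
# then hand the best match (plus the question) to a language model.
# These snippets are invented; a real integration would index the
# software's own documentation.

DOC_SNIPPETS = {
    "materials": "Open the Material Editor to connect texture nodes to shader inputs.",
    "sequencer": "The Sequencer arranges cameras and animation tracks on a timeline.",
    "viewport":  "Use the viewport gizmos to translate, rotate, and scale actors.",
}

def best_snippet(question):
    """Return the (topic, text) snippet sharing the most words with the question."""
    q_words = set(question.lower().split())
    def overlap(item):
        _, text = item
        return len(q_words & set(text.lower().split()))
    return max(DOC_SNIPPETS.items(), key=overlap)

topic, text = best_snippet("How do I connect texture nodes in the material editor?")
print(topic)  # materials
```

    Real systems replace the word-overlap score with embedding similarity, but the pipeline (retrieve relevant docs, then ask the model with them as context) is the same.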

    • I’ve never had much success having Copilot write actual code. Where it’s been very helpful is in writing documentation, boilerplate, and just being a very smart autocomplete. That alone has saved me so much time and energy already.

    • I’m curious about this. What model were you using? A few people at my game dev company have said similar things about it not producing good code for unity and unreal. I haven’t seen that at all. I typically use GPT4 and Copilot. Sometimes the code has a logic flaw or something, but most of the time it works on the first try. I do at this point have a ton of experience working with LLMs so maybe it’s just a matter of prompting? When Copilot doesn’t read my mind (which tends to happen quite a bit), I just write a comment with what I want it to do and sometimes I have to start writing the first line of code but it usually catches on and does what I ask. I rarely run into a problem that is too hairy for GPT4, but it does happen.

      • I am not sure if my answer is correct: I tried ChatGPT to help me with Unreal in February/March this year. I can’t recall which model.

        As for my query: I’m an artist, not a coder. I found ChatGPT would usually point me in the right direction if I had a simple interface question, but not when dealing with materials… Or the sequencer. I haven’t used Copilot though.

        • Ahh ok that makes sense. I think even with GPT4, it’s still going to be difficult for a non-programmer to use for anything that isn’t fairly trivial. I still have to use my knowledge of stuff to know the right things to ask. In Feb or Mar, you were using GPT3 (4 requires you to pay monthly). 3 is much worse at everything than 4.

  • I see endless possibilities, but it’s questionable if any of them are realistic before we overcome capitalism.

    But one idea I really like is AI helping with the implementation of sortition for democratic decision-making in government.

    Recently, the concept got some attention due to climate protesters demanding it, which I think is nice. So while I don’t want to discuss the concept and where it should be applied, here’s what (future) AIs could do:

    1. Enhanced Random Selection Process: AI can ensure a representative selection from the population for sortition by analyzing demographic data and employing stratified sampling algorithms.

    2. Personalized Education and Communication: Once participants are selected, AI could offer personalized learning paths to prepare them for their role, and adapt communication to suit each participant’s unique circumstances.

    3. Facilitating Communication and Mediation: AI can manage communication among the selected group by setting up secure environments for discussion, and serving as an impartial mediator to promote fairness and respectfulness during deliberations.

    4. Information Provision, Fact-Checking, and Bias Detection: AI can provide relevant, unbiased information on complex topics, perform real-time fact-checking, and monitor discussions for potential biases.

    5. Emotion and Sentiment Analysis: As discussions take place, AI could detect the emotional states and sentiments of participants, ensuring decisions are not overly influenced by emotional reactions.

    6. Advanced Simulation and Scenario Exploration: AI could create sophisticated simulations to help participants understand potential outcomes of the policies they are considering.

    7. Public Accountability and Feedback Collection: After decisions are made, AI can ensure transparency in decision-making by tracking and reporting the progress of the deliberations, and collecting public feedback on the decisions made.

    I should probably add that this list was made with the help of GPT 😅 so a more direct answer to your question might be: AI can help humans lay out their ideas and foster discussions.
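    Item 1 in the list above can be made concrete: stratified sampling means allocating panel seats to each demographic stratum in proportion to its population share, then drawing at random within each stratum. A minimal sketch, with the strata and population sizes invented for illustration:

```python
import random

# Minimal sketch of stratified random selection for a sortition panel:
# allocate seats to each demographic stratum proportionally to its size,
# then draw members uniformly at random within the stratum. The strata
# and population sizes here are invented for illustration.

def stratified_panel(strata, seats, seed=0):
    """strata: {name: list of people}. Returns {name: selected people}."""
    rng = random.Random(seed)
    total = sum(len(people) for people in strata.values())
    panel = {}
    for name, people in strata.items():
        quota = round(seats * len(people) / total)
        panel[name] = rng.sample(people, quota)
    return panel

population = {
    "urban": [f"u{i}" for i in range(600)],
    "rural": [f"r{i}" for i in range(400)],
}
panel = stratified_panel(population, seats=10)
print({name: len(members) for name, members in panel.items()})
# {'urban': 6, 'rural': 4}
```

    Note that plain `round()` can make the quotas over- or under-shoot the seat total when shares don’t divide evenly; a real system would use largest-remainder rounding, and would stratify on several attributes at once.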

  • We could outsource all the bureaucracy to machines. We could have entire data centres applying for things, sending the applications to another data centre, getting them denied, redoing them, and so on. Doing contracts, billing people, paying bills by billing yet other people.

    Humankind would just need to supply power, and meanwhile I could go hiking in the mountains and have every Thursday and Friday off, because there would be no paperwork around anymore.