First, let me say that what broke me from the herd at lesswrong was specifically the calls for AI pauses. That somehow ‘rationalists’ are so certain advanced AI will kill everyone in the future (pDoom = 100%!) that they would commit any violent act needed to stop AI from being developed.

The flaw here is that there’s 8 billion people alive right now, and we don’t actually know what the future is. There are ways better AI could help the people living now, possibly saving their lives, and essentially Eliezer Yudkowsky is saying “fuck em”. This could only be worth it if you actually somehow knew trillions of people were going to exist, had a low future discount rate, and so on. This seems deeply flawed, and seems to be one of the points here.

But I do think advanced AI is possible. And while it may not be a mainstream take yet, it seems like the problems current AI can’t solve, like robotics, continuous learning, module reuse - the things needed to reach a general level of capabilities and for AI to do many but not all human jobs - are near future. I can link deepmind papers with all of these, published in 2022 or 2023.

And if AI can be general and control robots, and since making robots is a task human technicians and other workers can do, this does mean a form of Singularity is possible. Maybe not the breathless utopia by Ray Kurzweil but a fuckton of robots.

So I was wondering what the people here generally think. There are “boomer” forums I know of where they also generally deny AI is possible anytime soon, claim GPT-n is a stochastic parrot, and make fun of tech bros as being hypesters who collect 300k to edit javascript and drive Teslas*.

I also have noticed that the whole rationalist schtick of “what is your probability” seems like asking for “joint probabilities”, aka smoke a joint and give a probability.

Here are my questions:

  1. Before 2030, do you consider it more likely than not that current AI techniques will scale to human level in at least 25% of the domains that humans can do, to average human level?

  2. Do you consider it likely that, before 2040, those domains will include robotics?

  3. If AI systems can control robotics, do you believe a form of Singularity will happen? This means hard exponential growth of the number of robots, scaling past all industry on earth today by at least 1 order of magnitude, with off-planet mining soon to follow. It does not necessarily mean anything else.

  4. Do you think that a mass transition, where most human jobs we have now are replaced by AI systems, will happen before 2040?

  5. Is AI system design an issue? I hate to say “alignment”, because I think that’s hopeless wankery by non-software-engineers, but given these will be robot-controlling advanced decision-making systems, will it require lots of methodical engineering by skilled engineers, with serious negative consequences when the work is sloppy?

*“epistemic status”: I uh do work for a tech company, my job title is machine learning engineer, my girlfriend is much younger than me and sometimes fucks other dudes, and we have 2 Teslas…

  •  froztbyte   ( @froztbyte@awful.systems ) 
    11 points · 10 months ago

    ooooookay longpost time

    first off: eh wtf, why is this on sneerclub? kinda awks. but I’ll try give it a fair and honest answer.

    First, let me say that what broke me from the herd at lesswrong was specifically the calls for AI pauses.

    look, congrats on breaking out, but uh… you’re still wearing the prison jumpsuit in the grocery store and that’s why people are looking at you weirdly

    “yay you got out” but you got only half the reason right

    take some time and read this

    This seems deeply flawed

    correct

    But I do think advanced AI is possible

    one note here: “plausible” vs “possible” are very divergent paths and likelihoods

    in the Total Possible Space Of All Things That Might Ever Happen, of course it’s possible, but so are many, many other things

    it seems like the problems current AI can’t solve, like robotics, continuous learning, module reuse - the things needed to reach a general level of capabilities and for AI to do many but not all human jobs - are near future

    eh. this ties back to my opener - you’re still too convinced about something on essentially no grounded basis other than industry hype-optimism

    I can link deepmind papers with all of these, published in 2022 or 2023.

    look I don’t want to shock you but that’s basically what they get paid to do. and (perverse) incentives apply - of course goog isn’t just going to spend a couple decabillion then go “oh shit, hmm, we’ve reached the limits of what this can do. okay everyone, pack it in, we’re done with this one!”, they’re gonna keep trying to milk it to make some of those decabillions back. and there’s plenty of useful suckers out there

    And if AI can be general and control robots, and since making robots is a task human technicians and other workers can do, this does mean a form of Singularity is possible. Maybe not the breathless utopia by Ray Kurzweil but a fuckton of robots.

    okay this is a weird leap and it’s borderline LW shittery so I’m not going to spend much effort on it, but I’ll give you this

    it doesn’t fucking matter.

    even if we do somehow crack even the smallest bit of computational sentience, the plausibility of rapid-acting self-reinforcing runaway self-improvement on such a thing is basically nil. we’re 3 years down the line on the Ever Given getting stuck in the Suez and fabs shutting down (with downstream orders being cancelled) and as a result of it a number of chips are still effectively unobtanium (even if and when you have piles and piles of money to throw at the problem). multiple industries, worldwide, are all throwing fucking tons of money at the problem to try recover from the slightest little interruption in supply (and like, “slight”, it wasn’t even like fabs burned down or something, they just stopped shipping for a while)

    just think of the utter scope of doing robotics. first you have to solve a whole bunch of design shit (which by itself involves a lot of from-principles directed innovation and inspiration and shit). then you have to figure out how to build the thing in a lab. then you have to scale it? which involves ordering thousands of parts and SKUs from hundreds of vendors. then find somewhere/somehow to assemble it? and firmware and iteration and all that shit?

    this isn’t fucking age of ultron, and tony’s parking-space fab isn’t a real thing.

    this outcome just isn’t fucking likely on any nearby horizon imo

    So I was wondering what the people here generally think

    we generally think the people who believe this are unintentional suckers or wilful grifters. idk what else to tell you? thought that was pretty clear

    There are “boomer” forums I know of where they also generally deny AI is possible anytime soon, claim GPT-n is a stochastic parrot, and make fun of tech bros as being hypesters who collect 300k to edit javascript and drive Teslas*.

    wat

    I also have noticed that the whole rationalist schtick of “what is your probability” seems like asking for “joint probabilities”, aka smoke a joint and give a probability.

    okay this gave me a momentary chuckle, and made me remember JRP (http://darklab.org/jrp.txt), which is a fun little shitpost to know about

    from here, answering your questions as you asked them in order (and adding just my own detail in areas where others may not already have covered something)

    1. no, not a fuck, not even slightly. definitely not with the current set of bozos at the helm or techniques as the foundation or path to it.

    2. no, see above

    3. who gives a shit? but seriously, no, see above. even if it did, perverse incentives and economic pressures from sweeping hand motion all this other shit stands a very strong chance to completely fuck it all up 60 ways to sunday

    4. snore

    5. if any of this happens at some point at all, the first few generations of it will probably look the same as all other technology ever - a force-multiplier with humans in the loop, doing things and making shit. and whatever happens in that phase will set the tone for whatever follows so I’m not even going to try predict that

    *“epistemic status”: I uh do work for a tech company, my job title is machine learning engineer, my girlfriend is much younger than me and sometimes fucks other dudes, and we have 2 Teslas…

    …okay? congrats? is that fulfilling for you? does it make you happy?

    not really sure why you mentioned the gf thing at all? there’s no social points to be won here

    closing thoughts: really weird post yo. like, “5 yud-steered squirrels in a trenchcoat” weird.

    •  self   ( @self@awful.systems ) 
      11 points · 10 months ago

      look I don’t want to shock you but that’s basically what they get paid to do. and (perverse) incentives apply - of course goog isn’t just going to spend a couple decabillion then go “oh shit, hmm, we’ve reached the limits of what this can do. okay everyone, pack it in, we’re done with this one!”, they’re gonna keep trying to milk it to make some of those decabillions back. and there’s plenty of useful suckers out there

      a lot of corporations involved with AI are doing their damndest to damage our relationship with the scientific process by releasing as much fluff disguised as research as they can manage, and I really feel like it’s a trick they learned from watching cryptocurrency projects release an interminable amount of whitepapers (which, itself, damaged our relationship with and expectations from the engineering process)

      •  Steve   ( @fasterandworse@awful.systems ) 
        8 points · 10 months ago

        As someone who went from high school directly into a publishing company as a “web designer” in 1998, I spent the next 20 years assuming that academic work was completely uninfluenced by commercial interests. HCI was academic, UX was commercial. Wasn’t till around 2019 that I started reading ACM papers about HCI from the 70s up. Fuck me was I surprised with how mixed up it all is. ACM interactions magazine published monthly case studies for Apple or did profiles on Jef Raskin talking about HCI for brand loyalty.

        Anyway. Point is a published paper doesn’t mean shit if you just read a few because an article pointed you to them. I don’t know. This thread sucks

        • Preach, as someone inside academia, the bullcrap is real. I very rarely read a paper that hasn’t got a major stats issue—an academic paper is only worth something if you understand it enough to know how wrong it is or there’s plenty of replication/related work building on it, ideally both. (And it’s a technical field with an objective measure of truth, but don’t let my colleagues in humanities hear me say that—it’s not that their work is worthless, it’s just that it’s not reliable.)

      • “shitcoiners or oil companies… who wore it best?”

        but the rest of your reply reminds me that someone (I think steve or blake?) mentioned a thing here recently about a book on blaming Gutenberg for this state of fucking everything up. I want to go read that, and I really need to get around to writing my rantpost about the “the problem of information transfer at scale is that scale is lossy, and this is why … [handwaves at many problems, examples continue]” thing that at least 8 friends of mine have had to put up with in DM over the last few years

    • take some time and read this

      I read it. I appreciated the point that human perception of current AI performance can scam us, though this is nothing new. People were fooled by Eliza.

      It’s a weak argument though. For causing an AI singularity, functional intelligence is the relevant parameter. Functional intelligence just means “if the machine is given a task, what is the probability it completes the task successfully”. Theoretically an infinite Chinese room can have functional intelligence (the machine just looks up the sequence of steps for any given task).
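
      That “probability of task completion” definition is directly measurable as an empirical success rate; here’s a toy Python sketch (the agent, the tasks, and the numbers are all made up for illustration):

```python
import random

def functional_intelligence(agent, tasks, trials=1000):
    """Empirical probability that `agent` completes a randomly drawn task."""
    successes = sum(agent(random.choice(tasks)) for _ in range(trials))
    return successes / trials

# Toy agent: succeeds on easy tasks, fails on hard ones.
def toy_agent(task):
    return task == "easy"

tasks = ["easy", "easy", "hard"]
score = functional_intelligence(toy_agent, tasks)
# score is an estimate near 2/3; any benchmark suite works the same way
```

      The infinite-Chinese-room point holds here too: nothing in the measurement cares how `agent` is implemented.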

      People have benchmarked GPT-4 and it’s got general functional intelligence at tasks that can be done on a computer. You can also just go pay up $20 a month and try it. It’s below human level overall I think, but still surprisingly strong given its emergent behavior from computing tokens.

    • Just I think to summarize your beliefs: rationalists are wrong about a lot of things and assholes. And also the singularity (which predates yud’s existence) is not in fact possible by the mechanism I outlined.

      I think this is a big crux here. It’s one thing if it’s a cult around a false belief. It’s kind of a problem to sneer at a cult if the Singularity at the core of it happens to be a true law of nature.

      Or an analogy. I think gpt-4 is like the data from the Chicago pile. That data was enough to convince the domain experts that a nuke was going to work, to the point they didn’t test Fat Man; you believe not. Clearly machine generality is possible, clearly it can solve every problem you named including, with the help of humans, ordering every part off digikey and loading the pick and place and inspecting the boards and building the wire harnesses and so on.

      • Just I think to summarize your beliefs

        don’t be puttin’ words in my mouth yo

        rationalists

        this is a big set of very many people and lots of details

        are wrong about a lot of things

        many of them about many things, yes

        and assholes

        some, provably

        And also the singularity (which predates yud’s existence) is not in fact possible by the mechanism I outlined

        whether it’s the wet dream of kurzweil or yud or whoever else, doesn’t matter? but as to the details… you’re engaging with this like the rats do (yes, told you, you only half escaped). you “set the example”, and then “test the details”

        just … don’t?

        the siren song of this is “okay what if I change the details of the experiment slightly?”

        we’ve had the trolley problem for ages, doesn’t mean it’s just “solved”. you won’t manage to “solve” whether the singularity can happen or not here, for the same reason

      • Or an analogy. I think gpt-4 is like the data from the Chicago pile. That data was enough to convince the domain experts that a nuke was going to work to the point they didn’t test Fat Man, you believe not.

        Are you mixing up Fat Man and Little Boy? Because Fat Man was an implosion-type bomb, just like the Trinity device. Little Boy was a gun-type. From vague memories of Rhodes’s book, they wanted implosion types to maximize Pu weight to kiloton ratio, but it was much less straightforward than a gun-type bomb.

      • I think gpt-4 is like the data from the Chicago pile. That data was enough to convince the domain experts that a nuke was going to work to the point they didn’t test Fat Man, you believe not.

        Whoa whoa whoa there! I’m the contrarian that thinks that gpt is clearly more than just plagiarizing things, but it’s still just a step above Mad Libs in terms of intelligence. It’s not clear that you could get it to be smarter than a goldfish, let alone a human being. It’s just really good at stringing words together in a way that sounds good.

    • I don’t really see much likelihood in a singularity though, there’s probably a bunch of useful shit you could work out if you analysed the right extant data in the right way but there’s huge amounts of garbage data that it’s not obvious is garbage.

      My experience in research indicates to me that figuring shit out is hard and time consuming, and “intelligence” whatever that is has a lot less to do with it than having enough resources and luck. I’m not sure why some super smart digital mind would be able to do science much faster than humans.

      Physics is a bitch and there are just sort of limits on how awesome technology can be. Maybe I’m wrong but it seems like digital intelligence would be more useful for stuff like finding new antibiotics than making flying nanomagic fabricator paperclip drones.

      • My experience in research indicates to me that figuring shit out is hard and time consuming, and “intelligence” whatever that is has a lot less to do with it than having enough resources and luck. I’m not sure why some super smart digital mind would be able to do science much faster than humans.

        That’s right. Eliezer’s LSD vision of the future where a smart enough AI just figures it all out with no new data is false.

        However, you could…build a fuckton of robots. Have those robots do experiments for you. You decide on the experiments, probably using a procedural formula. For example you might try a million variations of wing design, or a million molecules that bind to a target protein, and so on. Humans already do this actually in those domains, this is just extending it.
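
        A minimal sketch of that “procedural formula” idea, with a made-up scoring function standing in for the physical experiment the robots would run (every parameter and number here is invented):

```python
from itertools import product

# Hypothetical design grid for the wing example.
spans = [8.0, 10.0, 12.0]   # wing span, m
chords = [1.0, 1.2, 1.5]    # chord length, m
sweeps = [0, 15, 30]        # sweep angle, degrees

def run_experiment(span, chord, sweep):
    # Stand-in for a real (robot-executed) wind-tunnel test;
    # returns a lift/drag-style score.
    return span / chord - abs(sweep - 15) * 0.01

# Exhaustively try every variation and keep the best design.
best = max(product(spans, chords, sweeps),
           key=lambda params: run_experiment(*params))
# best == (12.0, 1.0, 15) for this toy scoring function
```

        The same loop with a binding assay in place of `run_experiment` is essentially high-throughput screening.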

        • For example you might try […] a million molecules that bind to a target protein

          well not millions but tens of thousands, yes we have that, it’s called high throughput screening. it’s been around for some time

          have you noticed some kind of medical singularity? is every human disease curable by now? i don’t fucking think so

          that’s because you’re automating glorified liquid transfer from eppendorf A to eppendorf B, followed by a simple measurement like fluorescence. you still have to 1. actually make all of this shit and make sure it’s pure and what you ordered, then 2. you have to design an experiment that will tell you something that you can measure, and be able to interpret it correctly, then 3. you need to be sure that you’re doing the right thing in the first place, like not targeting the wrong protein (more likely than you think), and then 4. when you have some partial result, you latch onto it and improve it piece by piece, making sure that it will actually get where it needs to, won’t shred the patient’s liver instantly and so on (more likely than you think)

          while 1 is at initial stages usually subcontracted to poor sods at an entity like enamine ltd, 1 and 4 are infinite career opportunities for medicinal/organic chemists and 2 and 3 for molecular biologists, because all AI attempts at any of that that i’ve seen were spectacular failures and the only people who were satisfied with them were the people who made these systems and published a paper about them. especially 4 is heavily susceptible to garbage in garbage out situations, and putting AI there only makes matters worse

          is HTS a good thing? if you can afford it, it relieves you from the most mind-numbing task out there. if you can’t, you still do all of this by hand. (it seems to me that it escapes you that all of this shit costs money) is this a new thing? also no. since the 90s you can buy an automated flash chromatography column, it’s a box where you put dirty compound in one tube and get purified compound in other tubes. guess what took me entire yesterday? yes, flash columns by hand, because my uni doesn’t have a budget for that. would my paper come out faster if i had a combiflash? maybe. would it be any better if i had 5? no, because all the hard bits aren’t automated away, shit breaks all the time, things work differently than you think and sometimes that’s what makes it noticeable, and so on and so on

          • and btw if you try to bypass all of that real-world non-automatable effort, just wing it and try to do it all in silico, that is, simulate binding of an unspecified compound to some protein, it gets even worse, because the search space is absurdly large, molecular mechanics + some qm method where it matters scales poorly, and then in absence of real-world data you get some predictions, scored by some number, that give you the illusion of surety but are entirely wrong

            i’ve seen this happening in real time over some months, this shit was quietly buried and removed from the website and the real thing was pieced together by humans, based on real-world data acquired by other humans. yet still, the company claims to be “ai-powered”. it probably has something to do with ai bros holding money in that place

            • rapid automated drug development != solving medicine; while that would be a good thing, these are not remotely similar. the first one is partially an engineering problem, the other requires much more theory-building

              solving medicine would be more of a problem for biologists, and biology is a few magnitudes harder to simulate than chemistry. from my experience with computational chemists, this shit is hard, scales poorly (like n^7), and because of a large search space predictive power is limited. if you try to get out of wet lab despite all of this anyway and simulate your way to utopia, you get into rapidly compounding garbage in garbage out issues, and this is in the fortunate case where you know what you are doing, that is, when you are sure that you have the right protein at hand. this is the bigger problem, and this requires lots of advanced work from biologists. sometimes it’s an interaction between two proteins, sometimes you need some unusual cofactor (like cholesterol in the membrane region for MOR, which was discovered fairly recently), some proteins have unknown functions, there are orphan receptors, some signalling pathways are little known. this is also far from given and more likely than you think https://www.science.org/content/blog-post/how-antidepressants-work-last good luck automating any of that

              that said, sane drug development has that benefit of providing some new toys for biologists, so that even if a given compound will shred liver of patient that might be fine for some cell assay. some of the time, that makes their work easier

              as a chemist i sometimes say that in some cosmic sense chemistry is solved, that is, when we want to go from point A to point B we don’t beat the bush wildly but instead most of the time there’s some clear first guess that works, some of the time. this seems to be a controversial opinion and even i became less sure of that sometime halfway through my phd, partially because i’ve found counterexamples

              there’s a reason why drug development takes years to decades

              i’m not saying that solving medicine will take thousands of years, whatever that even means. things are moving rapidly, but any advancement that will make it work even faster will come from biologists, not from you or any other AI bros

              • going off on a tangent with this antidepressant thingy: if this paper holds up and it’s really how things work under the hood, we have a situation where for 40 years people were dead wrong about how antidepressants work, and now they do know. turns out, all these toys we give to biologists are pretty far from perfect and actually hit more than intended; for example all antidepressants in clinical use hit some other, now apparently unimportant target + TrkB. this is more common than you think, some receptors like sigma catch about everything you can throw at them, there are also orphan receptors with no clear function that maybe catch something and we have no idea. even such a simple compound like paracetamol works in a formerly unknown way, now we have a pretty good guess that it’s really a cannabinoid, and paracetamol is a prodrug to that. then there are very similar receptors that are just a little bit different but do completely different things, and sometimes you can even differentiate between the same protein on the basis of whether it’s bound to some other protein or not. shit’s complicated but we’re figuring it out

                catching this difference was only possible by using tools - biological tools - that were almost unthinkable 20 years ago, and is far outside of that “just think about it really hard and you’ll know for sure” school of thought popular at LW, even if you offload the “thinking” part to chatgpt. my calculus prof used to warn: please don’t invent new mathematics during the exam; maybe some of you can catch up and surpass 3000 years of mathematics development in a 2h session, but it’s a humble thing to not do this and learn what was done in the past beforehand (or something to that effect. it was a decade ago)

    •  earthquake   ( @earthquake@lemm.ee ) 
      7 points · 10 months ago

      You know, I thought that moving sneerclub onto lemmy meant we probably would not get that familiar mix of rationalists, heterodox rationalists, and just-left-but-still-mired-in-the-mindset ex-rationalists that swing by and want to quiz sneerclub. Maybe we’re just that irresistible.

    • from 2011-2013 i was getting these guys email me directly about roko’s basilisk because lesswrong had banned discussion and rationalwiki was the only place even mentioning it

      now they work hard to seek us out even here

      i hope the esteemed gentleposter realises that there are no recoverable good parts and it’s dumbassery all the way down sooner rather than later, preferably before posting again

      • Jesus fuck. Idk about no good parts, the bits that are unoriginal are sometimes interesting (e.g. distance between model and reality, metacognition is useful sometimes etc) it would just be more useful if they like produced reading lists instead of pretending to be smort

      • Hi David. Reason I dropped by was the whole concept of knowing the distant future with too much certainty seemed like a deep flaw, and I have noticed lesswrong itself is full of nothing but ‘cultist’ AI doomers. Everyone kinda parrots a narrow range of conclusions, mainly on the imminent AGI killing everyone, and this, ironically, doesn’t seem very rational…

        I actually work on the architecture for current production AI systems and whenever I mention approaches that do work fine and suggest we could control more powerful AI this way, I get downvoted. So I was trying to differentiate between:

        A. This is a club of smart people, even smarter than lesswrongers who can’t see the flaws!

        B. This is a club of, well... the reason I called it boomers was I felt that the current news and AI papers make each of the questions I asked a reasonable and conservative outcome. For example posters here are saying for (1), “no it won’t do 25% of the jobs”. That was not the question, it was 25% of the tasks. Since for example Copilot already writes about 25% of my code, and GPT-4 helps me with emails to my boss, from my perspective this is reasonable. The rest of the questions build on (1).

        •  Evinceo   ( @Evinceo@awful.systems ) 
          9 points · 10 months ago

          I actually work on the architecture for current production AI systems and whenever I mention approaches that do work fine and suggest we could control more powerful AI this way, I get downvoted.

          LW isn’t looking for technical practical solutions. They want plausible sci-fi that fits their narrative. Actually solving the problems they worry about would mean there’s no reason for the cult to exist, so why would they upvote that?

          Overall LW seems to be dead wrong about predicting modern AI systems. They anticipated that there was this general intelligence quality that would enable problem solving, escape, instrumental convergence, etc. However what ended up working was approximating functions really hard. The existence of ChatGPT without a singularity is a crisis for LW. No longer can they safely pontificate and write Harry Potter/The Culture fanfiction; now they must confront the practical reality of the monsters under their bed looking an awful lot more like dust bunnies.

  • wrong place for this. joint probabilities joke was kinda fire though

    1. Before 2030, do you consider it more likely than not that current AI techniques will scale to human level in at least 25% of the domains that humans can do, to average human level.

    There is no set of domains over which we can quantify to make statements like this. “at least 25% of the domains that humans can do” is meaningless unless you willfully adopt a painfully modernist view that we really can talk about human ability in such stunningly universalist terms, one that inherits a lot of racist, ableist, eugenicist, white supremacist, … history. Unfortunately, understanding this does not come down to sitting down and trying to reason about intelligence from techbro first principles. Good luck escaping though.

    Rest of the questions are deeply uninteresting and only become minimally interesting once you’re already lost in the AI religion.

    • And just to be clear, for one to be “lost in the AI religion”, the claims have to be false, correct? We will not see the things I mentioned within the timeframe I gave (7 years, 17 years, and implicitly if there is not immediate progress towards the nearer deadline within 1 year it’s not going to happen).

      Google’s Gemini will not be multimodal or capable of learning to do tasks by reinforcement learning to human level, right? Robotics foundation models will not work.

  •  swlabr   ( @swlabr@awful.systems ) 
    9 points · 10 months ago

    I will answer these sincerely in as much detail as necessary. I will only do this once, lest my status amongst the sneerclub fall.

    1. I don’t think this question is well-defined. It implies that we can qualify all the relevant domains and quantify average human performance in those domains.
    2. See above.
    3. I think “AI systems” already control “robotics”. Technically, I would count kids writing code for a simple motorised robot to satisfy this. Everywhere up the ladder, this is already technically true. I imagine you’re trying to ask about AI-controlled robotics research, development and manufacturing. Something like what you’d see in the Terminator franchise- Skynet takes over, develops more advanced robotic weapons, etc. If we had Skynet? Sure, Skynet formulated in the films would produce that future. But that would require us to be living in that movie universe.
    4. This is a much more well-defined question. I don’t have a belief that would point me towards a number or probability, so no answer as to “most.” There are a lot of factors at play here. Still, in general, as long as human labour can be replaced by robotics, someone will, at the very least, perform economic calculations to determine if that replacement should be done. The more significant concern here for me is that in the future, as it is today, people will still only be seen as assets at the societal level, and those without jobs will be left by the wayside and told it is their fault that they cannot fend for themselves.
    5. Yes, and we already see that as an issue today. Love it or hate it, the partisan news framework produces some consideration of the problems that pop up in AI development.

    Time for some sincerity mixed with sneer:

    I think the disconnect that I have with the AGI cult comes down to their certainty on whether or not we will get AGI and, more generally, the unearned confidence about arbitrary scientific/technological/societal progress being made in the future. Specifically with AI => AGI, there isn’t a roadmap to get there. We don’t even have a good idea of where “there” is. The only thing the AGI cult has to “convince” people that it is coming is a gish-gallop of specious arguments, or as they might put it, “Bayesian reasoning.” As we say, AGI is a boogeyman, and its primary use is bullying people into a cult for MIRI donations.

    Pure sneer (to be read in a mean, high-school bully tone):

    Look, buddy, just because Copilot can write spaghetti less tangled than you doesn’t mean you can extrapolate that to AGI exploring the stars. Oh, so you use ChatGPT to talk to your “boss,” who is probably also using ChatGPT to speak to you? And that convinces you that robots will replace a significant portion of jobs? Well, that at least convinces me that a robot will replace you.

  • 1, 2: since you claim you can’t measure this even as a thought experiment, there’s nothing to discuss.

    3. I meant complex robotic systems able to mine minerals, truck the minerals to processing plants, maintain and operate those plants, and load the next set of trucks; the trucks go to parts-assembly plants, where robots unload them, feed the materials into CNC machines, mill the parts, inspect the output, and pack it onto more trucks… culminating in robots assembling new robots.

      It is totally fine if some human labor hours are still required, this cheapens the cost of robots by a lot.

      1. This is deeply coupled to (3). If you have cheap robots, and an AI system can control a robot well enough to do the task as well as a human, it’s obviously cheaper to have robots do the task than a human in most situations.

      Regarding (3) : the specific mechanism would be AI that works like this:

      Millions of hours of video of human workers doing tasks in the above domains, plus all video accessible to the AI company -> a tokenized, compressed description of the human actions -> an LLM-like model. The LLM-like model is thus predicting “what would a human do”. You then need a model to translate that “what” to robotic hardware that is built differently than humans; this combined system is called the “foundation model”. You then use reinforcement learning, where actual or simulated robots let the AI system learn from millions of hours of practice, to improve on the foundation model.

      The long story short of all these tech-bro terms is robotic generality: the model will be able to control a robot to do any easy or medium-difficulty task, the same way it can solve any easy or medium homework problem. This is what lets you automate (3), because you don’t need to do a lot of engineering work for a robot to do a million different jobs.

      Multiple startups and DeepMind are working on this.
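To make the “tokenized compressed description of the human actions” step concrete, here’s a toy sketch. The bin size and the numbers are invented for illustration; this is nothing like a production robotics stack:

```python
# Toy sketch of action tokenization: quantize continuous joint angles
# (degrees) into a small discrete vocabulary that an LLM-like model
# could do next-token prediction over. Bin size is made up.
BIN_DEG = 5.0  # degrees per token bin (invented resolution)

def tokenize(angles):
    """Quantize each joint angle into an integer action token."""
    return [round(a / BIN_DEG) for a in angles]

def detokenize(tokens):
    """Map action tokens back to approximate joint angles."""
    return [t * BIN_DEG for t in tokens]

frame = [12.3, 47.8, -31.0]   # one frame of "human demonstration"
tokens = tokenize(frame)      # compressed, discrete: [2, 10, -6]
approx = detokenize(tokens)   # lossy reconstruction: [10.0, 50.0, -30.0]
```

The point is just that compression is lossy: the model predicts discrete action tokens, and RL on real or simulated hardware is what’s supposed to make up for the lost precision.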

      • since you claim you can’t measure this even as a thought experiment, there’s nothing to discuss

        You’re going to have to lose the LessWrongy superstition that you have to be able to assign numbers to something for it to be meaningful. Sometimes when talking about this big, messy, complicated world, your error bars are so large that assigning any number at all would be meaningless and lead to error. That doesn’t mean you can’t talk qualitatively about what you do know or believe.

      •  swlabr   ( @swlabr@awful.systems )  ·  10 months ago
        1. +2, You haven’t made the terms clear enough for there to even be a discussion.
        2. see above (placeholder for list formatting)
        3. Uh, OK? Then no (pure sneer: the plot thins). Robots building robots probably already happens in some sense, and we aren’t in the Singularity yet, my boy.
        4. Sure, why not.

        (pure sneer response: imagine I’m a high school bully, and that I assault you in the manner befitting someone of my station, and then I say, “How’s that for a thought experiment?”)

  • Needling in on point 1 - no I don’t, largely because AI techniques haven’t surpassed humans in any given job ever :P. Yes, I am being somewhat provocative, but no AI has ever been able to 1:1 take over a job that any human has done. An AI can do a manual repetitive task like reading addresses on mail, but it cannot do all of the ‘side’ work that bottlenecks the response time of the system: it can’t handle picking up a telephone and talking to people when things go wrong, it can’t say “oh hey the kids are getting more into physical letters, we better order another machine”, it can’t read a sticker that somebody’s attached somewhere else on the letter giving different instructions, it definitely can’t go into a mail center that’s been hit by a tornado and plan what the hell it’s going to do next.

    The real world is complex. It cannot be flattened out into a series of APIs. You can probably imagine building weird little gizmos to handle all of those funny side problems I laid out, but I guarantee you that all of them will then have their own little problems that you’d have to solve for. A truly general AI is necessary, and we are no closer to one of those than we were 20 years ago.

    The problem with the idea of the singularity, and the current hype around AI in general, is a sort of proxy Dunning-Kruger. We can look at any given AI advance and be impressed, but it distracts us from how complex the real world is and how flexible you need to be to be a general agent that actually exists and can interact and be interacted upon outside the context of a defined API. I have seen no signs that we are anywhere near anything like this yet.

    • And just briefly, because the default answer to this point is “yes but we’ll eventually do it”: once we do come up with a complex problem solver, why would we actually get it to start up the singularity? Nobody needs infinite computing power forever, except for Nick Bostrom’s ridiculous future humans and they aren’t alive to be sad about it so I’m not giving them anything. A robot strip mining the moon to build a big computer doesn’t really do that much for us here on Earth.

    • The counterargument is GPT-4. For the domains this machine has been trained on, it has a large amount of generality: it captures a lot of that real-world complexity and dirtiness. Reinforcement learning can make it better.

      Or in essence: if you collect colossal amounts of information, yes, pirated from humans, and then choose what to do next by “what would a human do”, this does seem to solve the generality problem. You then fix your mistakes with RL updates when the machine fails on a real-world task.

      • No it’s not. GPT-4 is nowhere near suitable for general interaction. It just isn’t.

        “Just do machine learning to figure out what a human would do and you’ll be able to do what a human does!!1!”. “Just fix it when it goes wrong using reinforcement learning!!11!”.

        GPT-4 has no structured concept of understanding. It cannot learn on the fly like a human can. It is a stochastic parrot that badly mimics the way that people on the internet talk, and it took an absurd amount of resources to get it to do even that. RL is not some magic process that makes a thing do the right thing if it does the wrong thing enough, and it will not make GPT-4 a general agent.

    1. no
    2. no, (follows from 1)
    3. no, but space exploration by drones with semi-autonomous decision making might be feasible. The power levels for such tech will have to go way down though.
    4. define “mass transition”. I believe a lot of jobs that require humans now (like customer support) will be enthusiastically robotized, but not that that outcome will be positive for either the workers or consumers. I doubt it will be more than maybe 10% of the total workforce though.
    5. like someone mentioned, we can see “artificial intelligences” (corporations) do bad things right now and we aren’t stopping them. Considering everybody in AI research subconsciously subscribes to the Californian ideology, there’s no way they have the introspection to truly design an “aligned” AI.
  •  corbin   ( @corbin@awful.systems )  ·  10 months ago

    I’m being explicitly NSFW in the hopes that your eyes will be opened.

    The Singularity was spawned in the 1920s, with no clear initiating event. Its first two leaps forward are called “postmodernism” and “the Atomic age.” It became too much for any human to grok in the late 1940s, and by the 1960s it was in charge of terraforming and scientific progress.

    I find all of your questions irrelevant, and I say this as a machine-learning practitioner. We already have exponential growth in robotics, leading to superhuman capabilities in manufacturing and logistics.

    • I actually really liked this reply purely on the fact that it walked a different avenue of response

      Because yeah indeed, under the lens of raw naïve implementation, the utter breadth of scope involved in basically anything is so significantly beyond useful (or even tenuous) human comprehension it’s staggering

      We are, notably, remarkably competent at abstraction[0], and this goes a hell of a long way in affordance but it’s also not an answer

      I’ll probably edit this later to flesh the post out a bit, because I’m feeling bad at words rn

      [0] - this ties in with the “lossy at scale” post I need to get to writing (soon.gif)

      • Yeah, this post (edit: “comment”, the original post does not spark joy) sparked joy for me too (my personal cult lingo is from Marie Kondo books, whatcha gonna do)

        One of my takes is that the “AI alignment” garbage is way less of a problem than “Human Alignment”, i.e. how to get humans to work together and stop being jerks all the time. Absolutely wild that they can’t see that, except perhaps when it comes to trying to get other humans to give them money for the AIpocalypse.

    • Currently the global economy doubles every 23 years. Robots building robots and robot-making equipment can probably double faster than that. It won’t be a week or a month; energy requirements alone limit how fast it can happen.

      Suppose the doubling time is 5 years, just to put a number on it. The economy would then be growing about 4.6 times faster than it does now (23/5), and after 23 years it would be roughly 24 times larger instead of 2. This continues until the solar system runs out of matter.
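Just arithmetic, not a forecast; you can check the numbers yourself:

```python
import math

def growth_rate(doubling_years):
    """Continuous annual growth rate implied by a doubling time."""
    return math.log(2) / doubling_years

current = growth_rate(23)        # ~3.0% per year (today's economy)
fast = growth_rate(5)            # ~13.9% per year (hypothetical)
ratio = fast / current           # exactly 23/5 = 4.6x faster growth
size_after_23y = 2 ** (23 / 5)   # ~24x larger after 23 years, vs 2x now
```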

      Is this a relevant event? Does it qualify as a singularity? Genuinely asking, how have you “priced in” this possibility in your world view?

    • Consider a flying saucer cult. Clearly a cult, great leader, mothership coming to pick everyone up, things will be great.

      …What if telescopes show a large object decelerating into the solar system, the flare from its matter-annihilation engine clearly visible? You can go pay $20 a month and rent a telescope and see the flare.

      The cult uh points out their “sequences” of writings by the Great Leader and some stuff is lining up with the imminent arrival of this interstellar vehicle.

      My point is that lesswrong knew about GPT-3 years before the mainstream found it; many OpenAI employees post there, etc. If the imminent arrival of AI were fake, like the hyped idea of bitcoin going to infinity or replacing real currency, or NFTs, that would be one thing. But I mean, pay $20 a month and man, this tool seems to be smart. What could it do if it could learn from its mistakes and had the vision module deployed…

      Oh and I guess the other plot twist in this analogy : the Great Leader is saying the incoming alien vehicle will kill everyone, tearing up his own Sequences of rants, and that’s actually not a totally unreasonable outcome if you could see an alien spacecraft approaching earth.

      And he’s saying to do stupid stuff like nuke each other so the aliens will go away and other unhinged rants, and his followers are eating it up.

      •  Evinceo   ( @Evinceo@awful.systems )  ·  10 months ago

        Look more carefully at what the cult leader is asking for. He was asking for money for his project before, now he’s tearing his hair out in despair because we haven’t spent enough money on his project, we’d better tell the aliens to give us another few months so we can spend more money on the cult project.

        He has been very careful not to say that we should do anything bad to the aliens, just to people who don’t agree with him about how we should talk to the aliens.

      •  self   ( @self@awful.systems )  ·  10 months ago

        …What if telescopes show a large object decelerating into the solar system, the flare from its matter-annihilation engine clearly visible? You can go pay $20 a month and rent a telescope and see the flare.

        if the only telescopes showing this object are the ones that must be rented from the cult and its offshoots, then it’s pretty obvious some bullshit is up, isn’t it? maybe the institution designed and optimized to trick your human brain into wholeheartedly believing things that don’t match with reality has succeeded, because it has poured a lot more time and money into tricking you than you could possibly know

        My point is that lesswrong knew about GPT-3 years before the mainstream found it; many OpenAI employees post there, etc. If the imminent arrival of AI were fake, like the hyped idea of bitcoin going to infinity or replacing real currency, or NFTs, that would be one thing. But I mean, pay $20 a month and man, this tool seems to be smart. What could it do if it could learn from its mistakes and had the vision module deployed…

        didn’t lesswrong bank on an entirely different set of AI technology until very recently, and a lot of the tantrums we’re seeing from yud stem from his failure to predict or even understand LLMs?

        I keep seeing this idea that all GPT needs to be true AI is more permanence and (this is wild to me) a robotic body with which to interact with the world. if that’s it, why not try it out? you’ve got a selection of vector databases that’d work for permanence, and a big variety of cheap robotics kits that speak g-code, which is such a simple language I’m very certain GPT can handle it. what happens when you try this experiment?
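and to back up the “such a simple language” claim: here’s a toy parser for a single g-code move command. This is a made-up sketch, not any real printer or robot firmware:

```python
# Toy g-code parser, just to show how simple the language is.
# Handles one line like "G1 X10 Y20 F1500"; not real firmware.
def parse_gcode_line(line):
    """Turn 'G1 X10 Y20 F1500' into {'command': 'G1', 'X': 10.0, ...}."""
    words = line.split(";")[0].split()  # drop trailing ';' comments
    if not words:
        return {}
    cmd = {"command": words[0]}
    for w in words[1:]:
        cmd[w[0]] = float(w[1:])  # one letter plus a number, that's it
    return cmd

move = parse_gcode_line("G1 X10 Y20 F1500")
# {'command': 'G1', 'X': 10.0, 'Y': 20.0, 'F': 1500.0}
```

If GPT can’t reliably emit something this regular, that tells you something; if it can, the experiment above costs almost nothing to run.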

        a final point I guess — there’s a lot of overlap here with the anti-cryptocurrency community. it sounds like we’re in agreement that cryptocurrency tech is a gigantic scam; that the idea of number going up into infinity is bunk. but something I’ve noticed is that folk with cryptocurrency jobs could not come to that realization, that when your paycheck relies on internalizing a set of ideas that contradict reality, most folk will choose the paycheck (at least for a while — cognitive dissonance is a hard comedown and a lot of folks exited the cryptocurrency space when the paycheck no longer masked the pain)

        • I keep seeing this idea that all GPT needs to be true AI is more permanence and (this is wild to me) a robotic body with which to interact with the world. if that’s it, why not try it out? you’ve got a selection of vector databases that’d work for permanence, and a big variety of cheap robotics kits that speak g-code, which is such a simple language I’m very certain GPT can handle it. what happens when you try this experiment?

          ??? I don’t believe GPT-n is ready for direct robotics control at a human level because it was never trained on it, and you need to use a modification on transformers for the architecture, see https://www.deepmind.com/blog/rt-2-new-model-translates-vision-and-language-into-action . And a bunch of people have tried your experiment with some results https://github.com/GT-RIPL/Awesome-LLM-Robotics .

          In addition, to tinker with LLMs you need to be GPU-rich, or have funding of about $250-500M. My employer does, but I’m a cog in the machine. https://www.semianalysis.com/p/google-gemini-eats-the-world-gemini

          What I think is that the underlying technology that made GPT-4 possible can be made to drive robots to human level at some tasks, though as I noted I think it may take until 2040 to be good. That technology mostly just means using lots of data, neural networks, and a mountain of GPUs.

          Oh and RSI. That’s the wildcard. This is where you automate AI research, including developing models that can drive a robot, using current AI as a seed. If that works, well. And yes, there are papers where it does work.

          •  self   ( @self@awful.systems )  ·  10 months ago

            ??? I don’t believe GPT-n is ready for direct robotics control at a human level because it was never trained on it, and you need to use a modification on transformers for the architecture, see https://www.deepmind.com/blog/rt-2-new-model-translates-vision-and-language-into-action . And a bunch of people have tried your experiment with some results https://github.com/GT-RIPL/Awesome-LLM-Robotics .

            yeah you don’t come on here, play with words, and then fucking ??? me. what you said was:

            But I mean, pay $20 a month and man, this tool seems to be smart. What could it do if it could learn from its mistakes and had the vision module deployed…

            and I told you to go ahead. now you’re gonna sit and pretend you didn’t mean the $20 a month model, you meant some other bullshit

            and when I look at those other models, what I see is some deepmind marketing fluff and some extremely disappointing results. namely, we’ve got some utterly ordinary lab robots doing utterly ordinary lab robot things. and absolutely none of it looks like a singularity, which was the point of the discussion, right?

            In addition, to tinker with LLMs you need to be GPU-rich, or have funding of about $250-500M. My employer does, but I’m a cog in the machine.

            you don’t see this as a problem, vis-a-vis the whole “only the cult’s telescopes seem to see the spaceship” thing?

            Oh and RSI. That’s the wildcard. This is where you automate AI research, including developing models that can drive a robot, using current AI as a seed. If that works, well. And yes, there are papers where it does work.

            please don’t talk about my wrists like that

            nah but seriously I think I’ve seen those results too! and they’re extremely disappointing.

        • Just to be clear, you can build your own telescope now and see the incoming spacecraft.

          Right now you can go task GPT-4 with solving a problem about equal to undergrad physics, let it use plugins, and it will generally get it done. It’s real.

          Maybe this is the end of the improvements, just like maybe the aliens will not actually enter orbit around earth.

  • Before 2030, do you consider it more likely than not that current AI techniques will scale to human level in at least 25% of the domains that humans can do, to average human level.

    Domains that humans can do are not quantifiable. Many fields of human endeavor (e.g. many arts and sports) are specifically only worthwhile because of the limits of human minds and bodies. Weightlifting is a thing even though we have cranes and forklifts. People enjoy paintings and drawing even though we have cameras.

    I do not find it likely that 25% of currently existing occupations are going to be effectively automated in this decade, and I don’t think generative machine learning models like LLMs or stable diffusion are going to be the sole major driver of that automation.

    Do you consider it likely, before 2040, those domains will include robotics

    Humans are capable of designing a robot, procuring the components to build the robot, assembling it and using the robot to perform a task. I don’t expect (or desire) a computer program to be able to do the same independently during any of our expected lifetime. It is entirely plausible that tools which apply ML techniques will be used more and more in robotics and other industries, but my money is on those tools being ultimately wielded by humans for the foreseeable future.

    If AI systems can control robotics, do you believe a form of Singularity will happen. This means hard exponential growth of the number of robots, scaling past all industry on earth today by at least 1 order of magnitude, and off planet mining soon to follow. It does not necessarily mean anything else.

    No. Even if Skynet had full control of a robot factory, heck, all the robot factories, and staffed them with a bunch of sleepless foodless always motivated droids, it would still face many of the constraints we do. Physical constraints (a conveyor belt can only go so fast without breaking), economic constraints (Where do the robot parts and the money to buy them come from? Expect robotics IC shortages when semiconductor fabs’ backlogs are full of AI accelerators), even basic motivational constraints (who the hell programmed Skynet to be a paperclip C3PO maximizer?)

    Do you think that mass transition where most human jobs we have now will become replaced by AI systems before 2040 will happen

    No. A transition like that brought by mechanization and industrialization of agriculture, or the outsourcing of manufacturing industry accompanied by the shift to a service economy, seems plausible, but not by 2040 and it won’t be driven by just machine learning alone.

    Is AI system design an issue. I hate to say “alignment”, because I think that’s hopeless wankery by non software engineers, but given these will be robotic controlling advanced decision-making systems, will it require lots of methodical engineering by skilled engineers, with serious negative consequences when the work is sloppy?

    Yes, system design is an important issue with all technology. We are already seeing real damage from “AI” technology getting to make important decisions: self-driving vehicle accidents, amplified marginalization of minorities due to feedback of bias into the models, unprecedented opportunities for spam and propaganda, bottlenecks of technology supply chains and much more.

    Automation will absolutely continue to replace more and more different kinds of human labor. While this does and will drive unemployment to some extent, there is a more subtle issue with it as well. Productivity of human labor per capita has been soaring decade by decade, but median wages and work hours have stagnated. AI, like many other technologies before and after, is probably gonna end up creating more bullshit jobs, with some people coming into them from already bullshit jobs. If AI can replace half of human labor, that should then mean the average person has to work half as hard, but instead they will have to deliver double the results.

    I just think the threat model of autonomous robot factories making superhuman android workers and replicas of itself at an exponential rate is pure science fiction.

    • Having trouble with quotes here.

      **I do not find it likely that 25% of currently existing occupations are going to be effectively automated in this decade, and I don’t think generative machine learning models like LLMs or stable diffusion are going to be the sole major driver of that automation.**

      1. I meant 25% of the tasks, not 25% of the jobs. So some combination of jobs where AI systems can do 90% of some jobs, and 10% of others. I also implicitly was weighting by labor hour, so if 10% of all the labor hours done by US citizens are driving, and AI can drive, that would be 10% automation. Does this change anything in your response?
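To make the labor-hour weighting concrete (every figure here is invented, purely for illustration):

```python
# Made-up numbers illustrating "fraction automated" weighted by labor
# hours rather than by job count.
jobs = {
    # job: (share of total labor hours, fraction of its tasks AI can do)
    "driving":          (0.10, 1.00),
    "customer support": (0.05, 0.90),
    "plumbing":         (0.05, 0.10),
    "everything else":  (0.80, 0.15),
}
automated = sum(hours * frac for hours, frac in jobs.values())
# 0.10 + 0.045 + 0.005 + 0.12 = 0.27 -> "27% automated" by labor hours
```

So under this counting, no single job has to disappear for the 25% threshold to be crossed.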

      No. Even if Skynet had full control of a robot factory, heck, all the robot factories, and staffed them with a bunch of sleepless foodless always motivated droids, it would still face many of the constraints we do. Physical constraints (a conveyor belt can only go so fast without breaking), economic constraints (Where do the robot parts and the money to buy them come from? Expect robotics IC shortages when semiconductor fabs’ backlogs are full of AI accelerators), even basic motivational constraints (who the hell programmed Skynet to be a paperclip C3PO maximizer?)

      1. I didn’t mean “Skynet”. I meant AI systems: ChatGPT and all the other LLMs are AI systems, and so is Midjourney with ControlNet. So: humans want things. They want robots to make the things. They order robots to make more robots (initially using a lot of human factory workers to kick it off). Eventually robots get really cheap, making the things humans want cheaper, and that’s where you get the limited form of Singularity I mentioned.

      At all points humans are ordering all these robots and using all the things the robots make. An AI system is many parts: it has device drivers and hardware and cloud services and many neural networks and simulators and so on. One thing that might slow it all down is the enormous list of IP needed to make even one robot work: all the owners of all the software packages will still demand a cut, even if the robot hardware is being built by factories with almost all robots working in them.

      **I just think the threat model of autonomous robot factories making superhuman android workers and replicas of itself at an exponential rate is pure science fiction.**

      1. So again, that’s a detail I didn’t give. Obviously there are many kinds of robotic hardware, specialized for whatever task they do, and the only reason to make a robot humanoid is if it’s a sexbot or otherwise used as a “face” for humans. None of the hardware has to be superhuman, though obviously industrial robot arms have greater lifting capacity than humans. Just to give a sense of what the real stuff would look like: most robots will be in no way superhuman, in that they will lack sensors where they don’t need them, won’t be armored, won’t even have onboard batteries or compute hardware, will miss entire modalities of human sense, cannot replicate themselves, and so on. It’s just hardware that does a task, made in a factory, and it takes many factories with these machines in them to make all the parts used.

      think:

  •  Evinceo   ( @Evinceo@awful.systems )  ·  10 months ago

    Content Warning: Ratspeak

    spoiler

    Let’s say that tomorrow, they build AGI on HP/Cray Frontier. It’s human equivalent. Mr Frontier is rampant or whatever and wants to improve himself. In order to improve himself he will need to create better chips. He will need approximately 73 thousand copies of himself just to match the staff of TSMC, but there’s only one Frontier. And that’s to say nothing of the specialized knowledge and equipment required to build a modern fab, or the difficulty of keeping 73 thousand copies of himself loyal to his cause. That’s just to make a marginal improvement on himself, and assuming everyone is totally ok with letting the rampant AI get whatever it wants. And that’s just the ‘make itself smarter’ part, which everything else is contingent on; it assumes that we’ve solved Moravec’s paradox and all of the attendant issues of building robots capable of operating at the extremes of human adaptability, which we have not. Oh and it’s only making itself smarter at the same pace TSMC already was.

    The practicalities of improving technology are generally skated over by aingularatians in favor of imagining technology as a magic number that you can just throw “intelligence” at to make it go up.

    •  self   ( @self@awful.systems )  ·  10 months ago

      What I’m trying to get at is that the practicalities of improving technology are generally skated over by aingularatians in favor of imagining technology as a magic number that you can just throw “intelligence” at to make it go up.

      this is where the singularity always lost me. like, imagine, you build an AI and it maxes out the compute in its server farm (a known and extremely easy to calculate quantity) so it decides to spread onto the internet where it’ll have infinite compute! well congrats, now the AI is extremely slow cause the actual internet isn’t magic, it’s a network where latency and reliability are gigantic issues, and there isn’t really any way for an AI to work around that. so singulatarians just handwave it away

      or like when they reach for nanomachines as a “scientific” reason why the AI would be able to exert godlike influence on the real world. but nanomachines don’t work like that at all, it’s just a lazy soft sci-fi idea that gets taken way too seriously by folks who are mediocre at best at understanding science

      •  swlabr   ( @swlabr@awful.systems )  ·  10 months ago

        (To be read in the voice of an elementary schooler who is a sore loser at make believe): Nuh-uh! My AGI has quantum computers, so it doesn’t get slow from the internet, and, and, and, it builds robots, with jetpacks, and those robots have tiny robots that can go in your brain and and and make your brain explode, and if you say anything mean about me or the AGI it’ll take your brain and clone it and put wires in it and make you think youre getting like, wedgied and stuff, but really youre not but you think you are because it’s really good at making you think it

      • but nanomachines don’t work like that at all, it’s just a lazy soft sci-fi idea that gets taken way too seriously by folks who are mediocre at best at understanding science

        Let’s call this Crichtonitis.

      • Serious answer, not from Yudkowsky: the AI doesn’t do any of that. It helps people cheat on their homework, write their code and form letters faster, and brings in revenue. The AI’s owner uses the revenue to buy GPUs. With the GPUs they make the AI better. Now it can do a bit more than before, so they buy more GPUs, and theoretically this continues until the list of tasks the AI can do includes “most of the labor in a chip fab”, GPUs become cheap, and then things start to get crazy.

        Same elementary school logic but I mean this is how a nuke works.
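The loop I’m describing is just compound growth with a reinvestment knob; here’s a toy model where every number is invented, shown only for the shape of the argument:

```python
# Toy model of the revenue -> GPUs -> capability feedback loop. All
# numbers are made up; this only shows the compounding behaviour.
capability = 1.0  # arbitrary units of "what the AI can do"

for year in range(5):
    revenue = 100 * capability          # revenue scales with capability
    gpu_spend = 0.5 * revenue           # reinvest half the revenue
    capability *= 1 + gpu_spend / 1000  # more GPUs, modest capability gain

# after five years capability has only compounded to ~1.31x; the loop
# only "gets crazy" if the per-dollar gain itself improves over time
```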

      • I agree completely. This is exactly where I break with Eliezer’s model. Yes, obviously an AI system that can self-improve can only do so until either (1) it’s the best algorithm that can run on the server farm, or (2) finding a better algorithm takes more compute than the investment in current compute is worth.

        That’s not a god. You do this in an AI experiment now and it might crap out at double or less the starting performance and not even be above the SOTA.

        But if robots can build robots, and the current AI progress shows a way to do it (foundation model on human tool manipulation), then…

        Genuinely asking, I don’t think it’s “religion” to suggest that a huge speedup in global GDP would be a dramatic event.