These experts on AI are here to help us understand important things about AI.

Who are these generous, helpful experts that the CBC found, you ask?

“Dr. Muhammad Mamdani, vice-president of data science and advanced analytics at Unity Health Toronto”, per LinkedIn a PharmD, who also serves in various AI-associated centres and institutes.

“(Jeff) Macpherson is a director and co-founder at Xagency.AI”, a tech startup which does, uh, lots of stuff with AI (see their wild services page) that appears to have been announced on LinkedIn two months ago. The founders section lists other details apart from J.M.'s “over 7 years in the tech sector” which are interesting to read in light of J.M.'s own LinkedIn page.

Other people making points in this article:

C. L. Polk, award-winning author (of Witchmark).

“Illustrator Martin Deschatelets” whose employment prospects are dimming this year (and who knows a bunch of people in this situation), who per LinkedIn has worked on some nifty things.

“Ottawa economist Armine Yalnizyan”, per LinkedIn a fellow at the Atkinson Foundation who used to work at the Canadian Centre for Policy Alternatives.

Could the CBC actually seriously not find anybody willing to discuss the actual technology and how it gets its results? This is archetypal hood-welded-shut sort of stuff.

Things I picked out, from article and round table (before the video stopped playing):

Does that Unity Health doctor go back later and check these emergency room intake predictions against actual cases appearing there?

Who is the “we” who have to adapt here?

AI is apparently “something that can tell you how many cows are in the world” (J.M.). Detecting a lack of results validation here again.

“At the end of the day that’s what it’s all for. The efficiency, the productivity, to put profit in all of our pockets”, from J.M.

“You now have the opportunity to become a Prompt Engineer”, from J.M. to the author and illustrator. (It’s worth watching the video to listen to this person.)

Me about the article:

I’m feeling that same underwhelming “is this it” bewilderment again.

Me about the video:

Critical thinking and ethics and “how software products work in practice” classes for everybody in this industry please.

  • Well, you know, you don’t want to miss out! You don’t want to miss out, do you? Trust me, everyone else is doing this hot new thing, we promise. So you’d better start using it too, or else you might get left behind. What is it useful for? Well… it could make you more productive. So you better get on board now and, uh, figure out how it’s useful. I won’t tell you how, but trust me, it’s really good. You really should be afraid that you might miss out! Quick, don’t think about it so much! This is too urgent!

    • Pretty much this. I work in support services in an industry that can’t really use AI to resolve issues due to the myriad of different deployment types and end user configurations.

      No way in hell will I be out of a job due to AI replacing me.

      •  self ( @self@awful.systems ) · 9 months ago

        your industry isn’t alone in that — just like blockchains, LLMs and generative AI are a solution in search of a problem. and like with cryptocurrencies, there’s a ton of grifters with a lot of money riding on you not noticing that the tech isn’t actually good for anything

        •  TehPers ( @TehPers@beehaw.org ) · 9 months ago

          Unlike blockchains, LLMs have practical uses (GH copilot, for example, and some RAG usecases like summarizing aggregated search results). Unfortunately, everyone and their mother seems to think it can solve every problem they have, and it doesn’t help when suits in companies want to use LLMs just to market that they use them.

          Generally speaking, they are a solution in search of a problem though.
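The "retrieve, then summarize" shape mentioned above can be sketched in a few lines. This is a toy illustration, not anyone's actual pipeline: a naive word-overlap scorer stands in for the retriever, and the "summary" is just the top snippet, which is where an LLM call would normally go.

```python
# Toy sketch of the RAG shape: retrieve the snippets most relevant to a
# query, then summarize them. The scorer below is deliberately trivial
# (word overlap); in a real pipeline the retriever would use embeddings
# and the final step would be an LLM call.

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank docs by naive word overlap with the query, return the top k."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

docs = [
    "LLMs can summarize aggregated search results.",
    "Blockchains are append-only ledgers.",
    "Copilot suggests code completions inline.",
]

hits = retrieve("summarize search results", docs)
print(hits[0])  # the best-matching snippet comes back first
```

The point of the sketch is that retrieval is ordinary, verifiable code; only the generation step is the part whose accuracy is in dispute.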

          •  self ( @self@awful.systems ) · 9 months ago

            GH copilot, for example, and some RAG usecases like summarizing aggregated search results

            you have no idea how many engineering meetings I’ve had go off the rails entirely because my coworkers couldn’t stop pasting obviously wrong shit from copilot, ChatGPT, or Bing straight into prod (including a bunch of rounds of re-prompting once someone realized the bullshit the model suggested didn’t work)

            I also have no idea how many, thanks to alcohol

            • Haha, they are, in fact, solutions to potential problems. They aren’t searching for problems so much as searching for people to convince that those problems will happen to them if they don’t use AI.

            •  TehPers ( @TehPers@beehaw.org ) · 9 months ago

              That sounds miserable tbh. I use copilot for repetitive tasks, since it’s good at continuing patterns (5 lines slightly different each time but otherwise the same). If your engineers are just pasting whatever BS comes out of the LLM into their code, maybe they need a serious talking to about replacing them with the LLM if they can’t contribute anything meaningful beyond that.

              •  self ( @self@awful.systems ) · 9 months ago

                as much as I’d like to have a serious talk with about 95% of my industry right now, I usually prefer to rant about fascist billionaire assholes like altman, thiel, and musk who’ve poured a shit ton of money and resources into the marketing and falsified research that made my coworkers think pasting LLM output into prod was a good idea

                I use copilot for repetitive tasks, since it’s good at continuing patterns (5 lines slightly different each time but otherwise the same).

                it’s time to learn emacs, vim, or (best of all) an emacs distro that emulates vim

                •  TehPers ( @TehPers@beehaw.org ) · 9 months ago

                  Yes, the marketing of LLMs is problematic, but it doesn’t help that they’re extremely demoable to audiences who don’t know enough about data science to realize how unfeasible it is to ship a service that’s inaccurate as often as LLMs are. Show a cool LLM demo to a C-suite and chances are they’ll want to make a product out of it, regardless of the fact you’re only getting acceptable results 50% of the time.

                  it’s time to learn emacs, vim, or (best of all) an emacs distro that emulates vim

                  I’m perfectly fine with vscode, and I know enough vim to make quick changes, save, and quit when git opens it from time to time. It also has multi-cursor support which helps when editing multiple lines in the same way, but not when there are significant differences between those lines but they follow a similar pattern. Copilot can usually predict what the line should be given enough surrounding context.

                •  TehPers ( @TehPers@beehaw.org ) · 9 months ago

                  It’s not that uncommon when filling an array with data or populating a YAML/JSON by hand. It can even be helpful when populating something like a Docker Compose config, which I use occasionally to spin up local services while debugging like DBs and such.
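As a sketch of the Compose use case described above, a throwaway file for spinning up a local debugging database might look like the following. The image tag, port, and password are illustrative placeholders, not anything from the source:

```yaml
# Minimal throwaway Compose file for a local DB while debugging.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: dev-only    # placeholder; never use in production
    ports:
      - "5432:5432"                  # expose Postgres to the host
```

Run it with `docker compose up -d` and tear it down with `docker compose down` when the debugging session is over.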

  •  Steve ( @fasterandworse@awful.systems ) · 9 months ago

    “learn AI now” is interesting in how much it is like the crypto “build it on chain” and how they are both different from something like “learn how to make a website”.

    Learning AI and Building on chain start with deciding which product you’re going to base your learning/building on and which products you’re going to learn to achieve that. Something that has no stability and never will. It’s like saying “learn how to paint” because in the future everyone will be painting. It doesn’t matter if you choose painting pictures on a canvas or painting walls in houses or painting cars, that’s a choice left up to you.

    “Learn how to make a website” can only be done on the web and, in the olden days, only with HTML.

    “Learn AI now”, just like “build it on chain” is nothing but PR to make products seem like legitimised technologies.

    Fuckaduck, ai is the ultimate repulseware

    •  deur ( @deur@feddit.nl ) · 9 months ago

      What’s worse is these people who shill AI and genuinely are convinced ChatGPT and the like are going to take over the world will not feel an ounce of shame once AI dies just like the last fad.

      If I was wrong about AI being completely useless and about how it’s not going to take over the world, I’d feel ashamed at my own ignorance.

      Good thing I’m right.

    • I wanna expand on this a bit because it was a rush job.

      This part…

      Learning AI and Building on chain start with deciding which product you’re going to base your learning/building on and which products you’re going to learn to achieve that. Something that has no stability and never will.

      …is a bit wrong. The AI environment has no stability now because it’s a mess of products fighting for sensationalist attention. But if it ever gains stability, as in there being a single starting point for learning AI, it will be because a product, or a brand, won. You’ll be learning a product just like people learned Flash.

      Seeing people in here talk about CoPilot or ChatGPT and examples of how they have found it useful is exactly why we’re going to find ourselves in a situation where software products discourage any kind of unconventional or experimental ways of doing things. Coding isn’t a clean separation between mundane, repetitive, pattern-based, automatable tasks and R&D style, hacking, or inventiveness. It’s a recipe for applying the “wordpress theme” problem to everything where the stuff you like to do, where your creativity drives you, becomes a living hell. Like trying to customise a wordpress theme to do something it wasn’t designed to do.

      The stories of chatgpt helping you out of a bind are the exact stories that companies like openAI will let you tell to advertise for them, but they’ll never go all in on making their product really good at those things because then you’ll be able to point at them and say “ahah! it can’t do this stuff!”

        • It’s my own term, coined from a period in the late 2000s and early 2010s when I’d have a lot of freelance clients ask me to build their site “but it’s easy because I have already purchased an awesome theme, I just need you to customise it a bit”

          It’s the same as our current world of design systems and component libraries. They get you 95% of the way and assume that you just fill in the 5% with your own variations and customisations. But what really happens is you have 95% worth of obstruction from making what would normally be the most basic CSS adjustment.

          It’s really hard to explain to someone that it’d be cheaper and faster if they gave me designs and I built a theme from scratch than it would be to panel-beat their pre-built theme into the site they want.

          “customise” is the biggest lie in dev ever told

          •  self ( @self@awful.systems ) · 9 months ago

            I’d have a lot of freelance clients ask me to build their site “but it’s easy because I have already purchased an awesome theme, I just need you to customise it a bit”

            oh my god, this was all of my clients when I was in college

          •  froztbyte ( @froztbyte@awful.systems ) · 9 months ago

            ah. yeah. I know what you mean.

            I have a set of thoughts on a related problem in this (which I believe I’ve mentioned here before (and, yes, still need to get to writing)).

            the dynamics of precision and loss, in communication, over time, socially, end up resulting in some really funky setups that are, well, mutually surprising to most/all parties involved pretty much all of the time

            and the further down the chain of loss of precision you go, well, godspeed soldier

    • I haven’t paid that much attention to the software and platforms behind all this. Now that you mention it, yes, they are all products not underlying technologies. A bit like if somebody was a Zeus web server admin versus AOL web server admin without anybody being just a web server admin. Or like if somebody had to choose between Windows or Solaris without just considering operating systems.

      Then again, what with all the compute and storage and ongoing development needed, I’m not convinced that AI currently can be a gratis (free as in beer) thing in the same way that they just hand out web servers.

    •  maol ( @maol@awful.systems ) · 9 months ago

      Bingo. “Learn AI” is an even more patronizing and repellent version of “learn to code”, which was already not much of a solution to changes in the jobs market.

      • good point. “learn to code” is such an optimistically presented message of pessimism. It’s like those youtube remixes people would do of comedy movie trailers as horror movies. “learn to code”, like “software is eating the world”, works so much better as a claustrophobic, oppressive assertion.

        •  maol ( @maol@awful.systems ) · 9 months ago

          The blasé spite with which some people would say “just learn to code” was a precursor to the glee with which these arrogant bozos are predicting that commercial AI generators will ruin the careers of artists, journalists, filmmakers, authors, who they seem to hate.

          •  self ( @self@awful.systems ) · 9 months ago

            and as we’ve seen in this thread, they don’t mind if it ruins the career of every junior dev who’s not onboard either. these bloodthirsty assholes want everyone they consider beneath them to not have gainful employment

            •  maol ( @maol@awful.systems ) · 9 months ago

              their apparently sincere belief that not being in poverty is a privilege people should have to earn (by doing the right kind of job, working the right kind of way, and having the right kind of politics) is genuinely very strange and dark. The worst of vicious “stay poor” culture.

              •  self ( @self@awful.systems ) · 9 months ago

                in spite of what they claim, most tech folk are extremely conservative. that’s why it’s so easy for some of them to drop the pretense of being an ally when it becomes inconvenient, or when there’s profit in adopting monstrous beliefs (and there often is)

  • You now have the opportunity to become a Prompt Engineer

    No way man I heard the AIs were coming for those jobs. Instead I’m gonna become a prompt writing prompt writer who writes prompts to gently encourage AIs to themselves write prompts to put J.M. out of a job. Checkmate.

  •  swlabr ( @swlabr@awful.systems ) · 9 months ago

    Ugh, fuck this punditry. Luckily, many of the views in this article are quickly dispatched through media literacy. I hate that, for the foreseeable future, AI will be the boogeyman whispered about in all media circles. But knowing that it is a boogeyman makes it very easy to tell when it’s general sensationalist hype/drivel for selling papers vs. legitimate concerns about threats to human livelihoods. In this case, it’s more the former.

    •  swlabr ( @swlabr@awful.systems ) · 9 months ago

      Isn’t it great how they aren’t saying how to “learn” or “accept” AI? They aren’t saying: “learn what a neural network is” or anything close to that. It’s not even: “Understand what AI does and its output and what that could be good or bad for”. They’re just saying, “Learn how to write AI prompts. No, I don’t care if it’s not relevant or useful, and it’s your fault if you can’t leverage that into job security.” They’re also saying: “be prepared to uproot your entire career in case your CEO tries to replace you, and be prepared to change careers completely. When the AI companies we run replace you, it’s not our fault because we warned you.” It’s so fucking sad that these people are allowed to have opinions. Also this:

      For people like Deschatelets, it doesn’t feel that straightforward.

      “There’s nothing to adapt to. To me, writing in three to four prompts to make an image is nothing. There’s nothing to learn. It’s too easy,” he said.

      His argument is the current technology can’t help him — he only sees it being used to replace him. He finds AI programs that can prompt engineered images, for example, useful when looking for inspiration, but aside from that, it’s not much use.

      “It’s almost treating art as if it’s a problem. The only problem that we’re having is because of greedy CEOs [of Hollywood studios or publishing houses] who make millions and millions of dollars, but they want to make more money, so they’ll cut the artists completely. That’s the problem,” he said.

      A king. This should be the whole article.

  •  Steve ( @fasterandworse@awful.systems ) · 9 months ago

    The great* Jakob Nielsen is all in on AI too btw. https://www.uxtigers.com/post/ux-angst

    I expect the AI-driven UX boom to happen in 2025, but I could be wrong on the specific year, as per Saffo’s Law. If AI-UX does happen in 2025, we’ll suffer a stifling lack of UX professionals with two years of experience designing and researching AI-driven user interfaces. (The only way to have two years of experience in 2025 is to start in 2023, but there is almost no professional user research and UX design done with current AI systems.) Two years is the bare minimum to develop an understanding of the new design patterns and user behaviors that we see in the few publicly available usability studies that have been done. (A few more have probably been done at places like Microsoft and Google, but they aren’t talking, preferring to keep the competitive edge to themselves.)

    *sarcasm