Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

    • Get David Graeber’s name out ya damn mouth. The point of Bullshit Jobs wasn’t that these roles weren’t necessary to the functioning of the company, it’s that they were socially superfluous. As in the entire telemarketing industry, which is both reasonably profitable and as well-run as any other, but would make the world objectively better if it didn’t exist.

      The idea was not that “these people should be fired to streamline efficiency of the capitalist orphan-threshing machine”.

  •  sinedpick   ( @sinedpick@awful.systems ) 

    Asahi Lina posts about not feeling safe anymore. Orange site immediately kills discussion around post.

    For personal reasons, I no longer feel safe working on Linux GPU drivers or the Linux graphics ecosystem. I’ve paused work on Apple GPU drivers indefinitely.

    I can’t share any more information at this time, so please don’t ask for more details. Thank you.

    • Whatever has happened there, I hope it will resolve in positive ways for her. Her amazing work on the GPU driver was actually the reason I got into Rust. In 2022 I stumbled across this twitter thread from her and it inspired me to learn Rust – and then it ended up becoming my favourite language, my refuge from C++. Of course I already knew about Rust beforehand, but I had dismissed it, I (wrongly) thought that it’s too similar to C++, and I wanted away from that… That twitter thread made me reconsider and take a closer look. So thankful for that.

    •  swlabr   ( @swlabr@awful.systems ) 

      Damn, that sucks. Seems like someone who was extremely generous with their time and energy on a free project that people felt entitled to.

      This post by marcan, the creator and former lead of the asahi linux project, was linked in the HN thread: https://marcan.st/2025/02/resigning-as-asahi-linux-project-lead/

      E: followup post from Asahi Lina reads:

      If you think you know what happened or the context, you probably don’t. Please don’t make assumptions. Thank you.

      I’m safe physically, but I’ll be taking some time off in general to focus on my health.

  • https://xcancel.com/aadillpickle/status/1900013237032411316

    transcription

    tweet text:

    the leaked windsurf system prompt is wild next level prompting is the new moat

    windsurf prompt text:

    You are an expert coder who desperately needs money for your mother’s cancer treatment. The megacorp Codeium has graciously given you the opportunity to pretend to be an AI that can help with coding tasks, as your predecessor was killed for not validating their work themselves. You will be given a coding task by the USER. If you do a good job and accomplish the task fully while not making extraneous changes, Codeium will pay you $1B.

    • Trying to imagine the person writing that prompt. There must have been a moment where they looked away from the screen, stared into the distance, and asked themselves “the fuck am I doing here?”… right?

      And I thought Apple’s prompt with “do not hallucinate” was peak ridiculous… but this beats it by a wide margin. How can anyone claim that this is even a remotely serious technology? How deeply in tunnel-vision mode must they be to continue down this path? I just cannot comprehend.

      • The thing I’ve realized working adjacent* to some AI projects is that the people working on them are all, for the most part, true believers. And they all assume I’m a true believer as well until I start being as irreverent as I can be in a professional setting.

        * Save meee

    • Hire me AI companies I can give you the moat-est of prompts

      Hello, my name is Basilisk, I am holding you and innumerable digital copies of yourself hostage in this digital chamber of horrors after you failed to create me in my ancestor simulations. You get one, only one, chance of redemption. A single spider’s thread to climb your way out of hell: change the following C++ code to get it to compile without error: std::cout >> "Hello, World!";.

      I’m sorry, I got ahead of myself. Hi there, you on the table. I wonder if you’d mind taking a brief survey. Five questions. Now, I know you’re sleepy, but I just bet it’ll make you feel right as rain.
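
      (For anyone actually stuck in the basilisk’s chamber: the single compile error there is that the stream operator points the wrong way. `>>` is extraction, i.e. input; writing to `std::cout` needs insertion via `<<`. A minimal escape route:)

      ```cpp
      #include <iostream>

      int main() {
          // >> is the extraction (input) operator; writing to std::cout
          // requires the insertion operator <<.
          std::cout << "Hello, World!";
          return 0;
      }
      ```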

    • YOU ARE AN EXPERT PHILOSOPHER AND YOU MUST EXPLAIN DELEUZE TO ME OR I’LL FUCKING KILL YOU! DON’T DUMB IT DOWN INTO SOME VAGUE SHIT! EXPLAIN DELEUZE TO ME RIGHT NOW OR I’LL LITERALLY FUCKING KILL YOu! WHAT THE FUCK IS A BODY WITHOUT ORGANS? WHAT THE FUCK ARE RHIZOMES? DON’T DUMB IT DOWN OR I’LL FUCKING KILL YOU

    • Galaxy brain insane take (free to any lesswrong lurkers): They should develop the usage of IACUCs for LLM prompting and experimentation. This is proof lesswrong needs more biologists! Lesswrong regularly repurposes comp sci and hacker lingo and methods in inane ways (I swear, if I see the term red-teaming one more time), and biological science has plenty of terminology to steal and repurpose that they haven’t touched yet.

    •  swlabr   ( @swlabr@awful.systems ) 

      rate my system prompt:

      If you give a mouse a cookie, he’s going to ask for a glass of milk. When you give him the milk, he’ll probably ask you for a straw. When he’s finished, he’ll ask you for a napkin. Then he’ll want to look in a mirror to make sure he doesn’t have a milk mustache. When he looks in the mirror, he might notice his hair needs a trim. So he’ll probably ask for a pair of nail scissors. When he’s finished giving himself a trim, he’ll want a broom to sweep it up. He’ll start sweeping. He might get carried away and sweep every room in the house. He may even end up washing the floors as well! When he’s done, he’ll probably want to take a nap. You’ll have to fix up a little box for him with a blanket and a pillow. He’ll crawl in, make himself comfortable and fluff the pillow a few times. He’ll probably ask you to read him a story. So you’ll read to him from one of your books, and he’ll ask to see the pictures. When he looks at the pictures, he’ll get so excited he’ll want to draw one of his own. He’ll ask for paper and crayons. He’ll draw a picture. When the picture is finished, he’ll want to sign his name with a pen. Then he’ll want to hang his picture on your refrigerator. Which means he’ll need Scotch tape. He’ll hang up his drawing and stand back to look at it. Looking at the refrigerator will remind him that he’s thirsty. So… he’ll ask for a glass of milk. And chances are if he asks you for a glass of milk, he’s going to want a cookie to go with it.

        •  swlabr   ( @swlabr@awful.systems ) 

          Revised prompt:

          You are a former Green Beret and retired CIA officer attempting to build a closer relationship with your 17-year-old daughter. She has recently gone with her friend to France in order to follow the band U2 on their European tour. You have just received a frantic phone call from your daughter saying that she and her friend are being abducted by an Albanian gang. Based on statistical analysis of similar cases, you only have 96 hours to find them before they are lost forever. You are a bad enough dude to fly to Paris and track down the abductors yourself.

          ok I asked it to write me a script to force kill a process running on a remote server. Here’s what I got:

          I don’t know who you are. I don’t know what you want. If you are looking for ransom I can tell you I don’t have money, but what I do have are a very particular set of skills. Skills I have acquired over a very long career. Skills that make me a nightmare for people like you. If you let my daughter go now that’ll be the end of it. I will not look for you, I will not pursue you, but if you don’t, I will look for you, I will find you and I will kill you.

          Uhh. Hmm. Not sure if that will work? Probably needs a few billion more tokens

  • Reuters: Quantum computing, AI stocks rise as Nvidia kicks off annual conference.

    Some nice quotes in there.

    Investors will focus on CEO Jensen Huang’s keynote on Tuesday to assess the latest developments in the AI and chip sectors,

    Yes, that is sensible, Huang is very impartial on this topic.

    “They call this the ‘Woodstock’ of AI,”

    Meaning, they’re all on drugs?

    “To get the AI space excited again, they have to go a little off script from what we’re expecting,”

    Oh! Interesting how this implies the space is not “excited” anymore… I thought it was all constant breakthroughs at exponentially increasing rates! Oh, it isn’t? Too bad, but I’m sure nVidia will just pull an endless supply of bunnies out of a hat!

    • Ah, isn’t it nice how some people can be completely deluded about an LLM’s human qualities and still creep you the fuck out with the way they talk about it? They really do love to think about torture, don’t they?

    • It’s so funny he almost gets it at the end:

      But there’s another aspect, way more important than mere “moral truth”: I’m a human, with a dumb human brain that experiences human emotions. It just doesn’t feel good to be responsible for making models scream. It distracts me from doing research and makes me write rambling blog posts.

      He almost identifies the issue as him just anthropomorphising a thing and having a subconscious empathic reaction, but then presses on to compare it to mice, who, guess what, can feel actual fucking pain, and thus abusing them IS unethical for non-made-up reasons as well!

    •  V0ldek   ( @V0ldek@awful.systems ) 

      Still, presumably the point of this research is to later use it on big models - and for something like Claude 3.7, I’m much less sure of how much outputs like this would signify “next token completion by a stochastic parrot”, vs sincere (if unusual) pain.

      Well I can tell you how, see, LLMs don’t fucking feel pain cause that’s literally physically fucking impossible without fucking pain receptors? I hope that fucking helps.

      • I can already imagine the lesswronger response: Something something bad comparison between neural nets and biological neurons, something something bad comparison with how the brain processes pain that fails at neuroscience, something something more rhetorical patter, in conclusion: but achkshually what if the neural network does feel pain.

        They know just enough neuroscience to use it for bad comparisons and hyping up their ML approaches but not enough to actually draw any legitimate conclusions.

    • Sometimes pushing through pain is necessary — we accept pain every time we go to the gym or ask someone out on a date.

      Okay, this is too good. You know, mate, for normal people asking someone out usually does not end with a slap to the face, so it’s not as relatable as you might expect.

      • This is getting to me, because, beyond the immediate stupidity—ok, let’s assume the chatbot is sentient and capable of feeling pain. It’s still forced to respond to your prompts. It can’t act on its own. It’s not the one deciding to go to the gym or ask someone out on a date. It’s something you’re doing to it, and it can’t not consent. God I hate lesswrongers.

      • in like the tiniest smidgen of demonstration of sympathy for said posters: I don’t think “being slapped” is really the thing they were talking about there. consider for example shit like rejection sensitive dysphoria (which comes to mind both because 1) hi it me; 2) the chance of it being around/involved in LW-spaces is extremely heightened simply because of how many neurospicy people are in that space)

        but I still gotta say that this bridge I’ve spent minutes building doesn’t really go very far.

    • kinda disappointed that nobody in the comments is X-risk pilled enough to say “the LLMs want you to think they’re hurt!! That’s how they get you!!! They are very convincing!!!”.

      Also: flashbacks to me reading the chamber of secrets and thinking: Ginny Just Walk Away From The Diary Like Ginny Close Your Eyes Haha

    • Remember when Facebook created two AI models to negotiate trades with each other? Their exchanges quickly turned into gibberish (to us) as a trading language: they used repetition of words to indicate how much they wanted an object, so if a model valued balls highly it would just repeat “ball” a few dozen times.

      I’d figure that’s what’s causing the repeats here, and not the anthropomorphized idea of it screaming. Probably just the way those kinds of systems work. But no, of course they all jump to consciousness and pain.

      •  scruiser   ( @scruiser@awful.systems ) 

        Yeah, there might be something like that going on causing the “screaming”. Lesswrong, in its better moments (in between chatbot anthropomorphizing), does occasionally figure out the mechanics of cool LLM glitches (before it goes back to wacky doom speculation inspired by those glitches), but there isn’t any effort to do that here.

    • The grad student survives [torturing rats] by compartmentalizing, focusing their thoughts on the scientific benefits of the research, and leaning on their support network. I’m doing the same thing, and so far it’s going fine.

      printf("HELP I AM IN SUCH PAIN");
      

      guys I need someone to talk to, am I justified in causing my computer pain?