I’ve seen this term thrown around a lot lately and I just wanted to read your opinion on the matter. I feel like I’m going insane.

Vibe coding is essentially asking an AI to do the whole coding process, and then (optionally) checking the code for errors and bugs.

  • Somewhat impressive, but still not quite a threat to my professional career, as it cannot produce reliable software for business use.

    It does seem to open the door for novices to create ‘bespoke software’ where they previously would not have been able to, or could not justify the time commitment, which is fun. This means more software gets created which otherwise would not have existed, and I like that.

  •  A1kmm   ( @A1kmm@lemmy.amxl.com ) 

    As an experiment / as a bit of a gag, I tried using Claude 3.7 Sonnet with Cline to write some simple cryptography code in Rust - use ECDHE to establish an ephemeral symmetric key, and then use AES256-GCM (with a counter in the nonce) to encrypt packets from client->server and server->client, using off-the-shelf RustCrypto libraries.
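
    For context, the target handshake looks roughly like this - a minimal sketch assuming the RustCrypto p256 crate for the ECDHE half, since the comment doesn’t show which curve or crate the generated code actually used:

    ```rust
    // Minimal sketch of the ECDHE step: each side generates an ephemeral
    // secret, exchanges public keys, and derives the same shared secret.
    // The p256 crate is an assumption; the comment doesn't name the curve.
    use p256::{ecdh::EphemeralSecret, PublicKey};
    use rand_core::OsRng;

    fn main() {
        // Each side generates an ephemeral keypair.
        let client_secret = EphemeralSecret::random(&mut OsRng);
        let client_public = PublicKey::from(&client_secret);
        let server_secret = EphemeralSecret::random(&mut OsRng);
        let server_public = PublicKey::from(&server_secret);

        // After swapping public keys, both sides compute the same secret.
        let client_shared = client_secret.diffie_hellman(&server_public);
        let server_shared = server_secret.diffie_hellman(&client_public);
        assert_eq!(
            client_shared.raw_secret_bytes(),
            server_shared.raw_secret_bytes()
        );
        // The raw secret should then go through a KDF (HKDF - see below)
        // before being used as the AES-256-GCM key.
    }
    ```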

    It got the interface right, but it got some details really wrong:

    • It stored way more information than it needed in the structure tracking state, some of it very sensitive.
    • It repeatedly converted back and forth between byte arrays and the proper types unnecessarily - reducing type safety and making things slower.
    • Instead of using type-safe enums, it defined integer constants for no good reason.
    • It logged information about failures as variable-length strings, creating a possible timing side channel.
    • Despite having a 96-bit nonce to work with (minus 1 bit to distinguish client->server from server->client), it used a 32-bit integer to represent the sequence number.
    • And it “helpfully” used wrapping_add to increment the 32-bit sequence number! For those who don’t know much Rust and/or much cryptography: the golden rule of using ciphers like GCM is that you must never, ever re-use the same nonce for the same key (otherwise you leak the XOR of the two messages). wrapping_add explicitly means that when you reach the maximum number (and remember, it’s only 32 bits, so there are only about 4.3 billion values) it silently wraps back to 0. The secure implementation would be to explicitly fail if you go past the maximum size of the integer before attempting to encrypt / decrypt - and the smart choice would be to use at least 64 bits (see the sketch after this list).
    • It also rolled its own bespoke hash-based key-extension function instead of using HKDF (which was available right there in the library, and callable with far less code than it generated).
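
    A minimal sketch of the safer pattern - HKDF for key derivation plus a 96-bit nonce built from a direction bit and a failing (never wrapping) 64-bit counter. The struct, field, and error names are illustrative, not from the generated code; the crates are RustCrypto’s aes-gcm, hkdf, and sha2:

    ```rust
    use aes_gcm::aead::{Aead, KeyInit};
    use aes_gcm::{Aes256Gcm, Nonce};
    use hkdf::Hkdf;
    use sha2::Sha256;

    struct SendState {
        cipher: Aes256Gcm,
        client_to_server: bool,
        seq: u64, // 64 bits, not 32
    }

    #[derive(Debug)]
    enum SendError {
        NonceExhausted, // refuse to encrypt rather than silently wrap
        Crypto,
    }

    impl SendState {
        /// Derive the AES-256 key from the ECDHE shared secret with HKDF
        /// instead of a bespoke hash-based construction.
        fn new(shared_secret: &[u8], client_to_server: bool) -> Self {
            let hk = Hkdf::<Sha256>::new(None, shared_secret);
            let mut key = [0u8; 32];
            hk.expand(b"example session key", &mut key)
                .expect("32 bytes is a valid HKDF-SHA256 output length");
            let cipher = Aes256Gcm::new_from_slice(&key)
                .expect("key is exactly 32 bytes");
            SendState { cipher, client_to_server, seq: 0 }
        }

        fn encrypt(&mut self, plaintext: &[u8]) -> Result<Vec<u8>, SendError> {
            // Fail explicitly before the counter can wrap; a wrapped
            // counter would reuse a nonce and leak the XOR of two messages.
            let seq = self.seq;
            self.seq = seq.checked_add(1).ok_or(SendError::NonceExhausted)?;

            // 96-bit nonce: top bit of byte 0 is the direction flag,
            // the last 8 bytes carry the big-endian sequence number.
            let mut nonce = [0u8; 12];
            nonce[0] = if self.client_to_server { 0x80 } else { 0x00 };
            nonce[4..].copy_from_slice(&seq.to_be_bytes());

            self.cipher
                .encrypt(Nonce::from_slice(&nonce), plaintext)
                .map_err(|_| SendError::Crypto)
        }
    }
    ```

    checked_add turns counter exhaustion into an explicit error instead of a silent wrap, and HKDF replaces the hand-rolled key-extension function with the one already in the library.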

    To be fair, I didn’t really expect it to work well. Some kind of security-auditor agent that does a pass over all the output might be able to find some of the issues and pass them back to another agent to correct - which could make vibe coding more secure (still to be proven).

    But right now, I’d not put “vibe coded” output into production without someone going over it manually with a fine-toothed comb looking for security and stability issues.

  •  Kache   ( @Kache@lemm.ee ) 

    IMO it will “succeed” in the early phase. Pre-seed startups will be able to demo and get investors more easily, which I hear is already happening.

    However, it’s not sustainable, and either somebody figures out a practical transition/rewrite strategy as they try to go to market, or the startup dies while trying to scale up.

    We’ll see a lower success rate from these companies, in a bit of an I-told-you-so moment, which reduces over-investment in the practice. Under a new equilibrium, vibe coding remains useful for super-early demos, hackathons, and throwaway explorations, and people learn to do the transition/rewrite either earlier or not at all for core systems, depending on the resources founders have available at such an early stage.

  • Seems like a recipe for subtle bugs and unmaintainable systems. Also reminds me of the Eloi from The Time Machine, who no longer know how anything works.

    Management is probably salivating at the idea of firing all those expensive engineers who tell them stuff like “you can’t draw three red lines all perpendicular in yellow ink”.

    I’m also reminded of that AI-for-music guy who said “No one likes making art!”. Soulless husk.

    • ^ this

      Using AI leads to code churn, and code churn is bad for the health of the project.

      If you can’t keep the code comprehensible and maintainable, then you end up with a worse product where either everything breaks all the time, or the time it takes to release each new feature becomes exponentially longer, or all of your programmers burn out and no one wants to touch the thing.

      You just get to the point where you have to stop and start the project all over again, while the whole time people are screaming for the thing that was promised to them back at the start.

      It’s exactly the same thing that happens when Western managers try to outsource to “cheap” programming labor overseas: it always ends up costing more, taking longer, and ending in disaster.

    • I agree with you.

      The reason I wrote this post in the first place was because I heard people I respect a lot at work talk about this as being the future of programming. Also the CEO has acknowledged this and is actively riding the “vibe-coding” train.

      I’m tired of these “get rich quick the easy way” buzz-words and ideas, and the hustle culture that perpetuates them.

  • For personal projects, I don’t really care what you do. If someone who doesn’t know how to write a line of code asks an LLM to generate a simple program for them to use on their own, that doesn’t really bother me. Just don’t ask me to look at the code, and definitely don’t ask me to use the tool.

  • If it weren’t for the fact that even an AI trained on only factually correct data can conflate those data points into entirely novel data that may no longer be factually accurate, I wouldn’t mind the use of AI tools for this or much of anything.

    But they can literally just combine everything they know to create something that appears normal and correct, while being absolutely fucked. I feel like using AI to generate code would just give you more work and waste time, because you’ll still need to fucking verify that it didn’t just output a bunch of unusable bullshit.

    Relying on these things is absolutely stupid.

    • Completely agree. My coworkers spend more time prompting ChatGPT and fixing its output than it would take them to just write the thing themselves in the first place. It’s nonsense.

  • I probably wouldn’t do it. I do have AI help at times, but it is more for bouncing ideas off of, and occasionally it’ll mention a library or tech stack I haven’t heard of that allegedly accomplishes what I’m looking to do. Then I go research the library or tech stack and determine if there is value.