  • It’s the last one; the “wait a day” and “pay $20” options aren’t equivalent. If it’s still a day away from viability, it isn’t viable yet, but if it’s $20 away, it is. You may be of the opinion that waiting a day isn’t a big deal, or is only $20 worth of hardship, but that’s not your choice to make for others.

    You’d think ending a doomed pregnancy would be a simple matter even for pro-lifers, yes. They often don’t consider the issue, or assume that it’ll always be clear-cut and obvious in every circumstance, or worry that any exception will be used as a loophole.


  • I can’t believe this word doesn’t seem to have made it into any part of this thread, but I think you’re looking for viability: the point where a fetus can live outside of the womb. This isn’t a hard line, of course, and technology can shift where it’s drawn, and already has. Before that point, the fetus is entirely dependent on one specific person’s body; after that point, there are other options for caring for it. That is typically where pro-choice folks will draw the line for abortion as well: before viability, an abortion ban is forced pregnancy and unacceptable; after it, there can be some negotiation and debate (though that late into a pregnancy, if an abortion is being discussed it’s almost certainly a health crisis, not a change of heart, so imposing restrictions just means more complications for an already difficult and dangerous situation).


  • I empathize with that frustration. The process of thinking you’re right, learning you’re wrong, and figuring out why is very fundamentally what coding is. You’re taking an idea in one form (the thing you want to happen, in your mind) and encoding it into another, very different form: a series of instructions to be executed by a computer, and your first try is almost always slightly wrong. Humans aren’t naturally well-adapted to this task because we’re optimized for instructing other humans, who will usually do what they think you mean and not always what you actually said, can gloss over or correct small mistakes or inconsistencies, and will act in their own self-interest when it makes sense. A computer won’t behave that way; it requires you to bend completely to how it works. It probably makes me a weirdo, but I actually like that process; it’s a puzzle-solving game for me, even when it’s frustrating.

    I do think asking an AI for help with something is a useful way to use it, and that really isn’t all that different from checking a forum (in fact, those forums are probably what it’s drawing from in the first place). Hallucinations aren’t too damaging there, because you’ll be checking the AI’s answer when you try what it says and see if it works. It’s blindly accepting the code it produces that I think is harmful (and it sounds like you aren’t doing that). In an IDE it’s really easy to quickly make pages of code without engaging the brain, and it works well enough to be very tempting, but not, as I’m sure you know, well enough to do the whole thing.


  • Yeah, totally fair. I’ll note that you’re kind of describing the typical software development process: a customer talking to the developer and developing requirements collaboratively with them, then the developer coming back with a demo, the customer refining by going “oh, that won’t work, it needs to do it this way” or “that reminds me, it also needs to do this”, and so on. But you’re closer to playing the role of the customer in this scenario, and acting more like an editor or manager on the development side. The organizers of a game jam could make a reasonable argument that doing it this way is akin to signing up for the game jam, coming up with an idea, then having a friend who isn’t signed up implement it for you, when the point is to do it all in person, quickly, in a fun and energetic environment. The people doing a game jam like coding; that’s the fun part for them, so someone signing up and skipping all that stuff does have a little bit of a “why are you even here then” aspect to it. Of course it depends on the degree to which the AI is being used and how much editorial control or tweaking you’re doing; it’s a legitimate debate, and I don’t think you’re wrong to want to participate.


  • I’ll acknowledge that there’s definitely an element of “well I had to do it the hard way, you should too” at work with some people, and I don’t want to make that argument. Code is also not nearly as bad as something like image generation, where it’s literally just typing a thing and getting a not-very-good image back that’s ready to go; I’m sure if you’re making playable games, you’re putting in more work than that because it’s just not possible to type some words and get a game out of it. You’ll have to use your brain to get it right. And if you’re happy with the results you get and the work you’re doing, I’m definitely not going to tell you you’re doing it wrong.

    (If you’re trying to make a career of software engineering or have a desire to understand it at a deeper level, I’d argue that relying heavily on AI might be more of a hindrance to those goals than you know, but if those aren’t your goals, who cares? Have fun with it.)

    What I’m talking about is a bigger picture thing than you and your games; it’s the industry as a whole. Much like algorithmic timelines have had the effect of turning the internet from something you actively explored into something you passively let wash over you, I’m worried that AI is creating a “do the thinking for me” button that’s going to be too tempting for people to use responsibly, and will result in too much code becoming a bunch of half-baked AI slop cobbled together by people who don’t understand what they’re really doing. There’s already enough cargo culting around software, and AI will just make it more opaque and mysterious if overused and over-relied on. But that’s a bigger picture thing; just like I’m not above laying back and letting TikTok wash over me sometimes, I’m glad you’re doing things you like with the assistance you get. I just don’t want that to become the only way things happen either.


  • The irony is that most programmers were just googling and getting answers from Stack Overflow; now they don’t even need to Google.

    That’s the thing, though, doing that still requires you to read the answer, understand it, and apply it to the thing you’re doing, because the answer probably isn’t tailored to your exact task. Doing this work is how you develop an understanding of what’s going on in your language, your libraries, and your own code. An experienced developer has built up those mental muscles, and can probably get away with letting an AI do the tedious stuff, but more novice developers will be depriving themselves of learning what they’re actually doing if they let the AI handle the easy things, and they’ll be helpless to figure out the things that the AI can’t do.

    Going from assembly to C does put the programmer at some distance from the reality of the computer, and I’d argue that if you haven’t at least dipped into some assembly and understood the basics of what’s actually going on down there, your computer science education is incomplete. But once you have that understanding, it’s okay to let the computer handle the tedium for you and only dip down to that level if necessary. The same goes for learning sorting algorithms versus just using your standard library’s sort() function. AI falls into that category too, I’d argue, but it’s so attractive that I worry it’s treating important learning as tedium and helping people skip it.
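    As a toy illustration (in Python, with made-up data): writing the algorithm yourself teaches you what the library call is doing on your behalf, and once you’ve done it, reaching for the built-in is a choice rather than a crutch.

    ```python
    # Hand-rolled insertion sort: you see every comparison and shift.
    def insertion_sort(items):
        result = list(items)
        for i in range(1, len(result)):
            current = result[i]
            j = i - 1
            while j >= 0 and result[j] > current:
                result[j + 1] = result[j]  # shift larger elements right
                j -= 1
            result[j + 1] = current
        return result

    data = [5, 2, 9, 1]
    print(insertion_sort(data))  # [1, 2, 5, 9] -- you understand every step
    print(sorted(data))          # [1, 2, 5, 9] -- the library handles the tedium
    ```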

    I’m all for making programming simpler, for lowering barriers and increasing accessibility, but there’s a risk there too. Obviously wheelchairs are good things, but using one simply “because it’s easier” and not because you need to will cause your legs to atrophy, or never develop strength in the first place, and I’m worried there’s a similar thing going on with AI in programming. “I don’t want to have to think about this” isn’t a healthy attitude to have; a program is basically a collection of crystallized thoughts and ideas, and thinking them through is a critical part of the process.




  • I see this as an accessibility problem: computers have incredible power, but taking advantage of it requires a very specific way of thinking and the drive to push through adversity (the computer constantly and correctly telling you “you’re doing it wrong”), which a lot of people can’t or don’t want to do. I don’t think they’re wrong or lazy to feel that way, and it’s a barrier to entry just like a set of stairs is to a wheelchair user.

    The question is what to do about it, and there’s so much we as an industry should be doing before we even start to think about getting “normies” writing code or automating their phones. Using a computer sucks ass in so many ways for regular people: you buy something cheap and it’s slow as hell, it’s crapped up with adware and spyware out of the box, and scammers are everywhere ready to cheat you out of your money. Anyone here is likely immune to all that, or knows how to navigate it, but most people are just muddling by.

    If we got past all that, I think it’d be a question of meeting users where they are. I have a car, but I couldn’t replace the brakes, nor do I want to learn, and that’s okay. My car is as accessible as I want it to be, and for the parts that aren’t accessible, I go another route (bring it to a mechanic who can do the things I can’t). We can do this with computers too: make things easy for regular people, but don’t try to make them all master programmers or tell them they aren’t “really” using it unless they’re coding. Bring the barrier down as low as it can go, but don’t expect everyone to be trying to jump over it all the time, because they likely care about other things more.




  • Back in the olden days, if you wrote a program, you were punching machine code into punch cards that were fed into the computer and sent directly to the CPU. The machine was effectively yours while your program ran; then you (or more likely, someone who worked for your company or university) noted your final results, things would be reset, and the next stack of cards would go in.

    Once computers got fast enough, though, it was possible to have a program replace the computer operator, an “operating system”, and it could even interleave the execution of programs to basically run more than one at the same time. However, now the programs had to share resources; they couldn’t just have the whole computer to themselves. The OS helped manage that: a program now had to ask for memory, and the OS would track what was free and what was in use, as well as interleaving programs so they took turns running on the CPU. But if a program messed up and wrote to memory that didn’t belong to it, it could screw up someone else’s execution and bring the whole thing crashing down. And in some systems, programs were given a turn to run and were supposed to return control to the OS after a bit, but it was basically an honor system, and the problem with that is likely clear.
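    To make the honor system concrete, here’s a toy sketch in Python (generators standing in for programs; all names invented): each task is trusted to hand control back, and nothing can stop one that doesn’t.

    ```python
    # Cooperative ("honor system") multitasking in miniature.
    def polite_task(name):
        for step in range(3):
            print(f"{name}: step {step}")
            yield  # voluntarily hand control back to the "OS"

    def greedy_task():
        while True:
            pass   # never yields; scheduling this would hang the whole machine
        yield      # unreachable, but makes this a generator

    ready = [polite_task("A"), polite_task("B")]
    while ready:
        task = ready.pop(0)
        try:
            next(task)          # let the task run until it yields...
            ready.append(task)
        except StopIteration:
            pass                # ...or finishes for good
    ```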

    Hardware and OS software added features to enforce more order. OSes got more power, and help from the hardware to wield it. Now, instead of the OS politely asking programs to give back control, the hardware would enforce limits, forcing control back to the OS periodically. And when it came to memory, the OS no longer handed out addresses matching the RAM for the program to use directly; instead it could hand out virtual addresses, with the OS tracking every relationship between a virtual address and the real location of the data, and the hardware providing a Memory Management Unit (MMU) that can store those tables and do the translation from virtual to physical on its own, returning control to the OS when it doesn’t know the answer.
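    The translation itself is just arithmetic once the table exists. A minimal sketch in Python, with an invented page size and table:

    ```python
    PAGE_SIZE = 4096  # the hardware dictates the real value(s)

    # Toy page table: virtual page number -> physical frame number.
    page_table = {0: 7, 1: 3, 2: 12}

    def translate(virtual_addr):
        vpn = virtual_addr // PAGE_SIZE    # which virtual page?
        offset = virtual_addr % PAGE_SIZE  # where within that page?
        if vpn not in page_table:
            raise KeyError("no mapping: hardware hands control to the OS")
        return page_table[vpn] * PAGE_SIZE + offset

    print(hex(translate(4100)))  # page 1, offset 4 -> frame 3 -> 0x3004
    ```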

    This allows things like swapping, where a part of memory that isn’t being used can be taken out of RAM and written to disk instead. If the program tries to read an address that was swapped out, the hardware catches that it’s a virtual address that it doesn’t have a mapping for, wrenches control from the program, and instead runs the code that the OS registered for handling memory. The OS can see that this address has been swapped out, swap it back in to real RAM, tell the hardware where it now is, and then control returns to the program. The program’s none the wiser that its data wasn’t there a moment ago, and it all works. If a program messes up and tries to write to an address it doesn’t have, it doesn’t go through because there’s no mapping to a physical address, and the OS can instead tell the program “you have done very bad and unless you were prepared for this, you should probably end yourself” without any harm to others.
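    Continuing that toy model, the swap dance looks roughly like this (everything here is invented for illustration):

    ```python
    # Toy fault handler: the OS pulls swapped-out pages back into RAM.
    ram = {3: b"in ram"}          # physical frame -> contents
    swap_file = {8: b"swapped"}   # virtual page number -> contents on disk
    page_table = {1: 3}           # mappings only for pages currently in RAM

    def read(vpn):
        if vpn not in page_table:       # no mapping: the hardware faults to the OS
            if vpn in swap_file:        # OS: "ah, I swapped that out"
                frame = max(ram) + 1    # pretend a free frame is handy
                ram[frame] = swap_file.pop(vpn)  # copy it back into RAM
                page_table[vpn] = frame # tell the "MMU" about the new mapping
            else:                       # no mapping, nothing on disk: a real bug
                raise MemoryError("segmentation fault")
        return ram[page_table[vpn]]

    print(read(1))  # hits RAM directly
    print(read(8))  # faults, swaps in, succeeds -- the program never notices
    print(read(5))  # MemoryError: the "you have done very bad" case
    ```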

    Memory is handed out to programs in chunks called “pages”, and the hardware has support for certain page size(s). How big they should be is a matter of tradeoffs; since pages are indivisible, pages that are too big will result in a lot of wasted space (if a program needs 1025 bytes on a 1024-byte page size system, it’ll need 2 pages even though that second page is going to be almost entirely empty), but lots of small pages mean the translation tables have to be bigger to track where everything is, resulting in more overhead.
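    The arithmetic behind that tradeoff, sketched out with the numbers from above plus a smaller invented page size:

    ```python
    import math

    def pages_needed(request, page_size):
        return math.ceil(request / page_size)

    def wasted_bytes(request, page_size):
        return pages_needed(request, page_size) * page_size - request

    print(pages_needed(1025, 1024), wasted_bytes(1025, 1024))  # 2 pages, 1023 wasted
    print(pages_needed(1025, 64), wasted_bytes(1025, 64))      # 17 pages, 63 wasted
    # Smaller pages waste less space, but need 17 table entries instead of 2.
    ```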

    This is starting to reach the edges of my knowledge, but I believe what this is describing is that RISC-V and ARM chips have the ability for the OS to say to the hardware “let’s use bigger pages than normal, up to 64K”, and the Linux kernel is getting enhancements to actually use this functionality, which can come with performance improvements. For example, the MMU can store fewer entries and rely on the OS less, doing more of the work directly.
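    If you’re curious what your own system hands out, Python’s standard library will tell you (on Linux or macOS):

    ```python
    import mmap, resource

    print(mmap.PAGESIZE)           # commonly 4096 on x86-64 Linux
    print(resource.getpagesize())  # the same value via another stdlib module
    ```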




  • A VPN is just a way to say “wrap up my normal internet packets and ship them somewhere specific before they continue the normal way.” The normal way is: you want to get a message to some other server, and as part of setting up the network you’re on, your machine should already have a list of other devices it’s physically connected to (“physically” could be “via radio waves”, so not just wired), and they should have already advertised “hey, I’ve got access to these places too” for your information. Your router is likely the only one in your home network advertising anything that is on the larger internet, so all your outgoing messages will have to go that way to get to their destination. For example, I’ve got a phone, a wifi access point, a router, and my ISP’s box; my phone knows the wifi access point is two hops away from the internet because the access point said so, and that’s the best route it can see, so it sends the message that way and hopes it makes it. Each machine in between does the same thing until, hopefully, it gets where it’s supposed to go.

    With a VPN, the same messages are wrapped in a second message that is addressed to the other end of the VPN. When it gets to the VPN provider, it’s unwrapped, then the inside message is sent off to wherever it’s supposed to go. If a message comes back to the VPN provider addressed to you (ish, this is simplifying a bit), it’s wrapped up the same way and sent back to you.
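    A toy model of that wrapping, in Python (not a real protocol, and the “encryption” here is a stand-in, not actual crypto):

    ```python
    from dataclasses import dataclass

    @dataclass
    class Packet:
        src: str
        dst: str
        payload: bytes

    def encrypt(data: bytes) -> bytes:
        return bytes(b ^ 0x42 for b in data)  # placeholder cipher, NOT real crypto

    # Your machine wraps the real packet inside one addressed to the VPN provider.
    inner = Packet(src="you", dst="some-server.example", payload=b"hello")
    outer = Packet(src="you", dst="vpn-provider",
                   payload=encrypt(repr(inner).encode()))

    # Anyone watching your link sees only: you -> vpn-provider, plus gibberish.
    print(outer.dst, outer.payload[:10])
    # The provider decrypts, reads inner.dst, and forwards it the normal way.
    ```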

    Big companies often put resources “behind” the VPN, so you can’t send messages from the outside addresses to the office printer, they’ll get blocked, but you can request a connection to the VPN, and messages that come in through that path do get allowed. The VPN can be one central place where you make sure everything coming in is allowed, then on the other side the security can be a little less tight.

    VPNs also encrypt the internal message as a part of wrapping them up, which means that if you’re torrenting via a VPN, all anyone else can see is a message addressed to your VPN provider and then an encrypted message inside. And anyone you were exchanging messages with only ever saw traffic to and from the VPN provider, they never saw where it was going after your VPN provider got it. Only you and the VPN provider know what was happening on both ends, and hopefully they don’t look too closely or keep records.

    Hopefully now it’s clear that Mullvad and similar won’t help you access your own things from outside; they’re only good for routing your stuff through them and then out into the rest of the internet. However, this isn’t secret magic tech: you can run your own VPN that goes in the other direction, allowing you into your own home network and then able to connect to things as if you were physically there. Tailscale is probably the easiest option for that nowadays; it’ll set up a whole system where your devices can find each other and establish a mesh of secure, direct connections no matter where they are physically located. By default, only the device-to-device traffic is routed through those tunnels, but you can also make a device an “exit node” that routes all your traffic like a traditional VPN.

    Of course, that will be the exact opposite of what you want for privacy while torrenting, as it’s all devices that you clearly own and not hiding their identities whatsoever. But it’s very cool for home networking and self-hosting stuff.


  • Bluesky’s more like an aspirationally decentralized platform: you can keep your own data on your own server and use your own domain name as a user name, but most of the rest of it is “centralized, but we’re designing it in such a way that we can open it up later.” Even then, it’s heavily influenced by the original idea of “let’s make something decentralized that Twitter can switch to once it’s worked out”, which means that even when they do open things up, it’s likely that a lot of Bluesky will only be practical to run yourself at “big tech company scale”, whereas Mastodon or Lemmy you can just spin up on a server and it’ll be fine until you get a lot of users.


  • I as a human being have grown up and learned from experience and the experiences of previous humans that were documented or directly communicated to me. I can see no inherent difference with an artificial intelligence learning on the same data.

    It’s a massive difference in scale. For one, before you even leave the womb you have millions of years of evolution shaping the initial structure of your brain. Then your “training” begins, but it’s infinitely richer than anything we’re giving to these LLMs. Sights, sounds, smells, feelings, so many that part of what your brain is learning is what it must ignore. You’re also benefitting from the interactivity of your environment, you can experiment with things and get feedback for what happens. As you get older and develop more skills, you can start integrating them together to do even more complex things, and the people around you will use their own incredible intelligence to specifically tailor your training to what you need as you learn and grow.

    Meanwhile, an LLM is getting fed words and learning how to predict the next word. It’s a pale shadow of the complex lives humans live. Words are one of the more powerful tools we have for thinking and reasoning, so if you’re going to go all in on one skill, it’s a rich environment for learning, and in theory the contents of all of humanity’s writing probably contain all the information necessary to recreate human intelligence. But our current technology doesn’t even come close to wringing every ounce of knowledge from those training sets.
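    For a sense of how bare-bones that objective is, here’s the word-prediction idea in miniature (a bigram counter on a made-up corpus; real LLMs are vastly more sophisticated, but the task is the same shape):

    ```python
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ran".split()

    # "Training": tally which word follows which.
    counts = defaultdict(Counter)
    for word, next_word in zip(corpus, corpus[1:]):
        counts[word][next_word] += 1

    def predict(word):
        return counts[word].most_common(1)[0][0]

    print(predict("the"))  # "cat" -- it followed "the" twice, "mat" only once
    ```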




  • “The leverage” to do what exactly? Put in someone who will be way worse? How does that help the left accrue power or accomplish our goals? If you think the Democratic Party’s takeaway from the left tanking a major election will be “we need to move left more” I have a bridge to sell you. We are not a majority, which means we need to form coalitions. We can’t do that with a reputation of blowing up everyone’s shit when we don’t get our way. We do it by showing how successful the party is when they listen to us and include us. No, this time we don’t have a particularly left candidate to vote for. Yes, it all sucks. But I have yet to see a concrete explanation of how picking or allowing “far right fascist” over “moderate” has any benefit in the short or long term. To my eyes, it just causes vulnerable people here and around the world to suffer.