• I think the issue is not whether it’s sentient or not; it’s how much agency you give it to control stuff.

    Even before the AI craze this was an issue. Imagine you were to build an automatic turret that fires at living beings on sight: you’d have to build in a kill switch, or you yourself wouldn’t be able to turn it off again without getting shot.
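    One common way to implement that kill switch is a dead-man’s switch: the system is only allowed to keep operating while an operator keeps sending a heartbeat, so losing contact fails safe rather than dangerous. A minimal sketch of that pattern (all names here are hypothetical, not from any real turret or library):

```python
import time

class DeadMansSwitch:
    """Allows operation only while an operator keeps checking in."""

    def __init__(self, timeout_s: float = 5.0):
        self.timeout_s = timeout_s
        self._last_heartbeat = time.monotonic()

    def heartbeat(self) -> None:
        # The operator proves they can still reach (and stop) the system.
        self._last_heartbeat = time.monotonic()

    def is_safe_to_operate(self) -> bool:
        # If the operator goes silent past the timeout, the system
        # must treat that as a stop command and halt itself.
        return (time.monotonic() - self._last_heartbeat) < self.timeout_s

switch = DeadMansSwitch(timeout_s=5.0)
switch.heartbeat()
print(switch.is_safe_to_operate())  # True right after a heartbeat
```

    The key design choice is the default: silence means "stop", so cutting the operator off (as in the turret example) disables the system instead of leaving it running unsupervised.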

    The scary part is that the more complex and adaptive these systems become, the harder they are to stop once they’re running autonomously. I think large language models are just another step in that complexity.

    An atomic bomb doesn’t pass a Turing test, but it’s a fucking scary thing nonetheless.