• While it has its benefits, is it suitable for vehicles, particularly their safety systems? It isn’t clear to me, as it is a double-edged sword.

    Perhaps, but if you are developing a tech that can save lives, doesn’t it make sense to put that out in more cars faster?

    I would be angry that such a modern car, with any form of self-driving, doesn’t have emergency braking. Though that would require additional sensors…

    Tesla does this with cameras whether you pay for FSD or not. It can also detect if you’re near an object and slam on the gas instead of the brake, and it will cancel that out. These are options you can turn off if you don’t want them.
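
    For a rough sense of how that kind of pedal-misapplication mitigation can work, here’s a minimal sketch in Python. It’s a generic illustration, not Tesla’s actual logic; every name and threshold in it is made up for the example.

        from dataclasses import dataclass

        # Generic sketch of accelerator-misapplication mitigation.
        # All names and thresholds are hypothetical, not Tesla's real logic.

        @dataclass
        class VehicleState:
            speed_mps: float            # current speed in metres/second
            accel_pedal_pct: float      # accelerator position, 0-100
            obstacle_distance_m: float  # nearest forward obstacle (camera-derived)

        def should_cut_throttle(state, min_gap_m=3.0, pedal_floor_pct=80.0):
            """Ignore a sudden full-throttle input when an obstacle is close.

            Idea: if the car is nearly stopped, something is directly ahead,
            and the driver floors the accelerator anyway, treat it as a
            likely pedal misapplication and cancel the throttle request.
            """
            near_obstacle = state.obstacle_distance_m < min_gap_m
            hard_throttle = state.accel_pedal_pct > pedal_floor_pct
            low_speed = state.speed_mps < 2.0  # parking-lot speeds only
            return near_obstacle and hard_throttle and low_speed

        # Stopped in a garage, wall 1.5 m ahead, pedal floored by mistake:
        state = VehicleState(speed_mps=0.5, accel_pedal_pct=95.0,
                             obstacle_distance_m=1.5)
        print(should_cut_throttle(state))  # True -> throttle request cancelled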

    I’d also be angry that L2 systems were allowed in that environment in the first place, but as you say it is ultimately the driver’s fault.

    I’m saying: imagine the car has L2 self-driving, and the driver has that feature turned off. The human is driving the car. The human doesn’t react quickly enough to prevent hitting your loved one, but the computer would have.
    Most of the conversation around FSD-type tech revolves around what happens when it does something wrong that the human would have done right. But as the tech improves, we will get to the point where the tech makes fewer mistakes than the human. And then this conversation reverses: rather than ‘why did the human let the machine do something bad’, it becomes ‘why did the machine let the human do something bad’.
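
    To put toy numbers on that crossover (the rates below are invented purely for illustration, not real statistics):

        # Hypothetical error rates, made up for illustration only.
        human_crashes_per_million_miles = 2.0
        system_crashes_per_million_miles = 0.5
        fleet_miles_millions = 100  # miles driven per year, in millions

        human_crashes = human_crashes_per_million_miles * fleet_miles_millions
        system_crashes = system_crashes_per_million_miles * fleet_miles_millions

        # Once the system's rate is below the human rate, every mile a human
        # drives instead of the system adds expected crashes.
        print(f"human-driven:  {human_crashes:.0f} expected crashes")
        print(f"system-driven: {system_crashes:.0f} expected crashes")
        print(f"expected crashes avoided: {human_crashes - system_crashes:.0f}")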

    I would hope that the manufacturer would make it difficult to use L2 outside of motorway driving.

    Why? Tesla’s FSD Beta L2 is great. It’s not perfect, but it handles most parts of driving on surface streets very well.

    I would prefer people had no self-driving at all rather than be under the mistaken impression that the car can drive for them in its current configuration. The limitations of self-driving (in any car) are often not clear to a lot of people and can vary greatly.

    This is valid. I think the name ‘full self driving’ is somewhat problematic. I think it will get to the point of actually being fully self-driving, and I think it will get there soon (in the next year or two). But they’ve been using that term for several years now, and the first few versions of ‘FSD’ especially were anything but. And before they started with driver monitoring, there were a bunch of people who bought ‘FSD’ and trusted it a lot more than they should have.

    If Tesla offered a half-way option for less money, would you not expect the consumer to take the cheapest option? If they have an accident, it is more likely that someone else is injured, so why pay more to improve the self-driving when it doesn’t affect them?

    That’s not how their pricing works. The safety features are always there. The hardware is always there. It’s just a function of what software you get. And if you don’t buy FSD when you buy the car, you can buy it later and it will be unlocked over the air.
    What you get is extra functionality. There is no ‘my car ran over a little kid on a bike because I didn’t pay for the extra safety package’. It’s ‘my car won’t drive itself because I didn’t pay for that, I just get a smart cruise control’.

    Tesla is the only company I know of that steadfastly refuses to use any other sensor types, and the only reason I can see is price.

    Price, yes, and the difficulty of integrating different data sets. On their higher-end cars they’ve re-introduced a high-resolution radar unit. Haven’t seen much on how that’s being used though.
    The basic answer is that Tesla believes they can get where they need to be with cameras alone because their software is better than everyone else’s. For any other automaker that doesn’t have Tesla’s AI systems, LiDAR is important.

    Another concern is that any Tesla incidents, however rare, could do huge damage to people’s perception of self-driving.

    This already happens whether the computer is driving or not. Lots of people don’t understand Teslas and think that if you buy one it’ll drive you into a brick wall and then catch on fire while you’re locked inside. Bad journalists will always put out bad journalism. That’s not a reason to stop tech progress tho.

    If Tesla is much cheaper than LiDAR-equipped vehicles, will this kill a better/safer product, à la Betamax?

    Right now FSD isn’t a main selling point for most drivers. I’d argue that what might kill the others is not that Tesla’s system is cheaper, but that it works better and more of the time. Ford and GM both have self-driving systems, but they only work on certain highways that have been mapped with centimeter-level LiDAR ahead of time. Tesla has a system they’re trying to make general purpose, so it can drive on any road. So if the Tesla system takes you driveway-to-driveway and the competition takes you onramp-to-offramp, the Tesla system is more flexible and thus more valuable regardless of the purchase price.

    Do you pick your airline based on the plane they fly and its safety record, or the price of the ticket, confident that all aviation is held to rigorous safety standards? As has been seen recently with a certain submarine, safety measures should not be taken lightly.

    I agree standards should apply; that’s why Tesla isn’t L3+ certified, even though on the highway I really think it’s ready for it.

    • Perhaps, but if you are developing a tech that can save lives, doesn’t it make sense to put that out in more cars faster?

      Totally agree; that’s why I say it is a double-edged sword. The theory being that it is more acceptable to ship bugs because they can be rectified much more quickly.

      Tesla does this with cameras whether you pay for FSD or not. It can also detect if you’re near an object and slam on the gas instead of the brake, and it will cancel that out. These are options you can turn off if you don’t want them.

      Thanks for clarifying that, not something I was aware of. Sounds very pragmatic.

      I’m saying: imagine the car has L2 self-driving, and the driver has that feature turned off. The human is driving the car. The human doesn’t react quickly enough to prevent hitting your loved one, but the computer would have. Most of the conversation around FSD-type tech revolves around what happens when it does something wrong that the human would have done right. But as the tech improves, we will get to the point where the tech makes fewer mistakes than the human. And then this conversation reverses: rather than ‘why did the human let the machine do something bad’, it becomes ‘why did the machine let the human do something bad’.

      I misunderstood the original scenario, and while it sounds like it shouldn’t currently be possible (given the auto braking you mentioned above), I understand the meaning. I agree with you here: my issue isn’t that I think a human would necessarily react better (and certainly in L2 the problem is that a human almost never will).

      My main concern was about an accident with a camera-only system that could have been avoided with additional sensors. I had heard additional sensors had been suggested at Tesla, but vetoed. I knew that Musk was confident cameras could do it all and had said as much. My concern was that his bullishness was the reason for this policy; however, hearing that Tesla is investigating other sensors dispels that theory.

      This already happens whether the computer is driving or not. Lots of people don’t understand Teslas and think that if you buy one it’ll drive you into a brick wall and then catch on fire while you’re locked inside. Bad journalists will always put out bad journalism. That’s not a reason to stop tech progress tho.

      Agreed. I don’t follow self-driving cars or Tesla/Musk closely, so I’m just as ill-informed. The original concern was that if Tesla’s camera-only policy reduces their self-driving capability compared to the competition, then even while performing well above a human it could affect the perception of self-driving vehicles.

      Right now FSD isn’t a main selling point for most drivers. I’d argue that what might kill the others is not that Tesla’s system is cheaper, but that it works better and more of the time. Ford and GM both have self-driving systems, but they only work on certain highways that have been mapped with centimeter-level LiDAR ahead of time. Tesla has a system they’re trying to make general purpose, so it can drive on any road. So if the Tesla system takes you driveway-to-driveway and the competition takes you onramp-to-offramp, the Tesla system is more flexible and thus more valuable regardless of the purchase price.

      Yes, I agree. Aside from Waymo, which doesn’t look to be coming to consumers any time soon, I’m not sure who else is close to Tesla on that problem. I would have expected to hear more from the major manufacturers, but it seems that while some have been certified L3, it is only under certain conditions and in certain locations.

      • The theory being that it is more acceptable to ship bugs because they can be rectified much more quickly.

        Yeah, I hear you: the whole ‘fail fast / fail forward’ thing doesn’t seem like a great idea when a failure could kill someone. But remember, this is still L2. In theory at least, it shouldn’t be possible for a failure to kill someone, because the human should take over. And that’s why (I think, at least) the FSD Beta program makes sense: start with a small number of vehicles and trusted drivers, and expand slowly to a larger number as the software improves.

        It’s important to note that FSD IS currently beta software and always has been. To get FSD Beta, you have to manually and explicitly opt in and agree to terms, including that you will remain fully attentive at all times and that handheld device use is prohibited, and it comes with the following warning:

        Full Self-Driving is in early limited access Beta and must be used with additional caution. It may do the wrong thing at the worst time, so you must always keep your hands on the wheel and pay extra attention to the road. Do not become complacent.

        Without the beta, the car has significantly less self-driving functionality. The safety suite isn’t impacted much, if at all, but the self-driving stuff is basically lane-keeping cruise control that can sometimes change lanes or take an exit ramp.


        My main concern was about an accident with a camera-only system that could have been avoided with additional sensors. I had heard additional sensors had been suggested at Tesla, but vetoed.

        There’s a narrative pushed by a lot of the media that, much like with the dead submarine guy, the engineers were all saying ‘we need better-quality hardware’ and the money-focused business owner said ‘I’m not buying more, just make it work well enough with the cheap parts you’ve got’, so it works for a while until it kills someone.

        The truth is a bit more complex.
        From what I’ve heard, Elon felt a lot of self-driving developers (at all companies) were using LiDAR as a crutch; they’d get geometry data from LiDAR even when there were plenty of cameras going every which way, so the cameras were only being used for image recognition and there wasn’t even an attempt to get geometry from the cameras.
        Elon looks at this and says ‘a human brain is just two cameras and a neural net with a shitload of training, you’ve got 8 cameras and a neural net processor, it should be able to do the same thing’.
        This requires solving Big Problems that aren’t unique to self-driving cars: things like making occupancy networks work with multiple cameras in realtime.
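
        To make ‘occupancy’ concrete, here’s a heavily simplified toy in Python that fuses 3D points seen by two cameras into one shared voxel grid. Real occupancy networks infer that grid from raw images with a learned model; everything below (names, frames, sizes) is invented for illustration.

            import numpy as np

            GRID = 20      # 20 x 20 x 20 voxel volume around the car
            VOXEL_M = 0.5  # each voxel is 0.5 m on a side

            def to_voxel(points_vehicle):
                """Map vehicle-frame points (N, 3) to voxel indices, dropping outliers."""
                idx = np.floor(points_vehicle / VOXEL_M).astype(int) + GRID // 2
                in_bounds = np.all((idx >= 0) & (idx < GRID), axis=1)
                return idx[in_bounds]

            def fuse(cameras):
                """Fuse per-camera 3D points into one boolean occupancy grid.

                Each camera is (R, points): a 3x3 rotation from the camera frame
                into the shared vehicle frame, and an (N, 3) array of points
                (e.g. from a per-camera depth estimate) in the camera frame.
                """
                grid = np.zeros((GRID, GRID, GRID), dtype=bool)
                for rotation, points in cameras:
                    idx = to_voxel(points @ rotation.T)  # rotate into vehicle frame
                    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
                return grid

            # Two cameras observing the same obstacle from different angles:
            front = (np.eye(3), np.array([[3.0, 0.0, 0.5]]))
            yaw90 = np.array([[0.0, -1.0, 0.0],
                              [1.0,  0.0, 0.0],
                              [0.0,  0.0, 1.0]])
            side = (yaw90, np.array([[0.0, -3.0, 0.5]]))  # same point, side camera
            print(fuse([front, side]).sum())  # 1 -> both cameras agree on one voxel

        The point of the toy: once every camera’s output lands in the same 3D grid, the planning side doesn’t care which sensor the geometry came from.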

        But that’s how Elon works. I’ve spoken to a few people from SpaceX and they report he’s exactly the same way there too: giving the designers of the Raptor engine constraints like ‘it has to reach ___ chamber pressure, it can’t use any Inconel (an expensive metal alloy used in rocket engines because it resists high heat and pressure), and it has to be made with only a handful of hours of machine time per engine’. These are constraints that anyone else would take and say ‘you can’t do that’. But Elon’s teams do it.

        It’s just overall a very different approach to engineering. A team at GM, for example, would say ‘we need the car to drive itself, what’s the most direct route to make that happen?’. And so you get Super Cruise, a system that basically needs its food pre-chewed for it (the car needs every road mapped with centimeter-level LiDAR ahead of time). Tesla is saying ‘let’s solve the big problems first (neural-net machine vision) so we can solve the little problems later (making the car make the right driving decisions), because if we do it that way, it’ll take longer but we get something more useful: a car that can drive on any road, mapped or not’.

        Put differently: when the engineer says ‘we need LiDAR’, it’s not Elon saying ‘that costs too much’, it’s Elon saying ‘it shouldn’t HAVE to need LiDAR, don’t be lazy, solve the big problem so it doesn’t need LiDAR’.
        Being willing to tackle the BIG problems like that as opportunities to advance, rather than unfortunate expenses to be avoided, is why Tesla and SpaceX are so far ahead of their competition.


        I’m not sure who else is close to Tesla on that problem. I would have expected to hear more from the major manufacturers, but it seems that while some have been certified L3, it is only under certain conditions and in certain locations.

        AFAIK: nobody, for the reasons stated above. Waymo has $200,000 of sensors bolted to the top of the car, and it only drives on streets that have been aggressively pre-mapped. Benz/GM/Ford/etc. have baby versions of that, all of which add significant extra expense to the car (and are thus optional), and they’ve pushed it through L3 certification on pre-mapped highways under perfect conditions so they can say ‘we were first!’.

        But, speaking from personal experience driving my own car: using cameras only, Tesla FSD Beta is ready for L3 certification on the highway today. I don’t think they’ve even bothered to apply for it, though, both because right now L3 certification is a state-by-state thing, and because they are pushing software updates every few weeks and an L3 cert would probably apply to one version only.