Did Tesla Autopilot Just Kill Someone?

A recent fatal accident could have major consequences for self-driving cars.

Tesla has confirmed that a recent fatal crash occurred in a Model S while Autopilot was activated, calling into question whether the technology is truly safe for use on today’s roads.

The National Highway Traffic Safety Administration (NHTSA) has opened a preliminary evaluation into the accident to determine whether Tesla Autopilot worked according to expectations. This is the first known fatality in just over 130 million miles of driving with Autopilot activated, and unfortunately, it was likely only a matter of time before it happened.

According to the American electric automaker, the vehicle was traveling on a divided highway with Autopilot engaged when a tractor trailer drove across the highway perpendicular to the Model S. Neither Autopilot nor the driver noticed the white side of the tractor trailer against what Tesla calls a “brightly lit sky,” so the brakes were not applied. Because of the trailer’s high ride height and its positioning across the road, the Model S passed under the trailer, with the bottom of the trailer striking the windshield.

SEE ALSO: Volvo Engineer Calls Tesla’s Autopilot an ‘Unsupervised Wannabe’

We will likely never know whether the accident would have occurred had Autopilot not been activated, but the unfortunate incident will shine a bright light on autonomous and semi-autonomous driving technologies. Swedish automaker Volvo has criticized Tesla’s Autopilot, calling it an “unsupervised wannabe,” while others have questioned whether it’s really safe to be “beta testing” a feature like Autopilot on today’s roads.

Tesla, of course, defended Autopilot, reiterating that the feature is disabled by default and that drivers must acknowledge that they “need to maintain control and responsibility” for their vehicle at all times.

Read Tesla’s full statement below.

“We learned yesterday evening that NHTSA is opening a preliminary evaluation into the performance of Autopilot during a recent fatal crash that occurred in a Model S. This is the first known fatality in just over 130 million miles where Autopilot was activated. Among all vehicles in the US, there is a fatality every 94 million miles. Worldwide, there is a fatality approximately every 60 million miles. It is important to emphasize that the NHTSA action is simply a preliminary evaluation to determine whether the system worked according to expectations.

Following our standard practice, Tesla informed NHTSA about the incident immediately after it occurred. What we know is that the vehicle was on a divided highway with Autopilot engaged when a tractor trailer drove across the highway perpendicular to the Model S. Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied. The high ride height of the trailer combined with its positioning across the road and the extremely rare circumstances of the impact caused the Model S to pass under the trailer, with the bottom of the trailer impacting the windshield of the Model S. Had the Model S impacted the front or rear of the trailer, even at high speed, its advanced crash safety system would likely have prevented serious injury as it has in numerous other similar incidents.

It is important to note that Tesla disables Autopilot by default and requires explicit acknowledgement that the system is new technology and still in a public beta phase before it can be enabled. When drivers activate Autopilot, the acknowledgment box explains, among other things, that Autopilot ‘is an assist feature that requires you to keep your hands on the steering wheel at all times,’ and that ‘you need to maintain control and responsibility for your vehicle’ while using it. Additionally, every time that Autopilot is engaged, the car reminds the driver to ‘Always keep your hands on the wheel. Be prepared to take over at any time.’ The system also makes frequent checks to ensure that the driver’s hands remain on the wheel and provides visual and audible alerts if hands-on is not detected. It then gradually slows down the car until hands-on is detected again.

We do this to ensure that every time the feature is used, it is used as safely as possible. As more real-world miles accumulate and the software logic accounts for increasingly rare events, the probability of injury will keep decreasing. Autopilot is getting better all the time, but it is not perfect and still requires the driver to remain alert. Nonetheless, when used in conjunction with driver oversight, the data is unequivocal that Autopilot reduces driver workload and results in a statistically significant improvement in safety when compared to purely manual driving.

The customer who died in this crash had a loving family and we are beyond saddened by their loss. He was a friend to Tesla and the broader EV community, a person who spent his life focused on innovation and the promise of technology and who believed strongly in Tesla’s mission. We would like to extend our deepest sympathies to his family and friends.”

Discuss this story on our Tesla Forum

  • Mark Tucker

    No, it did not, but they now have more parameters to go into the system.

  • Felix James

    Kinda sounds like it could have. They don’t seem to know yet.

  • Jamal

    Makes you wonder if Tesla rolled out this new technology too early.

  • Rickers

    Seems like every Tesla story on this site is negative.

  • smartacus

    VW lied but no one died

  • DClark

    Actually, scientists have estimated that the VW emissions scandal has caused, or will cause, around 60 people to die 10 to 20 years prematurely.

  • gravitylover

    maybe the driver shouldn’t have been watching a movie…

  • smartacus

    scientists claim the earth is a giant egg for a space bird.

  • smartacus

    if he’s not watching a movie, how is he supposed to brag that his car allows him to watch a movie while driving? 🙂

  • I thought that was the moon.

  • The driver was watching an audiobook?? 😉

  • flashpoint

    1 death and everyone is going crazy… Drunk drivers kill thousands of people a day and no one bats an eye.

  • smartacus

    i see what you did there

  • James Klapper

    Unless they have a lot of training in a failure-recovery simulator, the average driver will not be able to save themselves from an autopilot fault. They will almost never be able to respond in time: analyze the problem, decide what to do, and then act on it.

    Because of this, anything that promotes driver inattention is going to be very dangerous, and that includes all the autopilot/self-drive concepts unless they can be made failure-proof.

  • JohnMiller2013

    Failure-proof is impossible. The truly dangerous system on the road today is human drivers. Autonomous driving systems are already proving to be safer than human drivers, and as long as they continue that track record, they should be regarded as the safer option.

  • JohnMiller2013

    Yeah, and I wonder about the proximity sensors. Are they positioned to “see” high enough to ensure the car will clear any overhead obstacles, especially at freeway speeds?

  • RussellL

    People are expecting too much from it.

    Level 4 automation is where the vehicle is designed to perform all safety-critical driving functions and monitor roadway conditions for an entire trip. Such a design anticipates that the driver will provide destination or navigation input, but is not expected to be available for control at any time during the trip. This includes both occupied and unoccupied vehicles.

    Teslas are Level 2, where at least two primary control functions are designed to work in unison to relieve the driver of control of those functions. An example of combined functions enabling a Level 2 system is adaptive cruise control in combination with lane centering.

    nhtsa.gov/About+NHTSA/Press+Releases/U.S.+Department+of+Transportation+Releases+Policy+on+Automated+Vehicle+Development

  • James Klapper

    The whole idea would be a lot safer if it were a warning system plus emergency avoidance/brake application to help prevent human mistakes, instead of taking over so much that it causes the inattention that led to this accident.

    Those autonomous systems are only safer than humans in controlled and simple situations. But given the near infinity of possible problems on the road, it’s going to be impossible for any automated system to anticipate and act properly unless it begins to approach the complexity of a trained human brain. We are a long, long way from that today.

  • Mark Tucker

    No, it did not kill, but it failed to save. The truck was higher than where the Tesla was looking, and the lighting played a part too. I remember seeing a truck out of gas on a long bridge heading west around 5 o’clock. The three-mile bridge was concrete, the truck was dirty white, and it was stopped right where the bridge starts going up for marine traffic, barges and such. The two people on the motorcycle never saw what they hit until… splat. It was a moving truck, and the tail lights were bent down on the battered bumper frame like so many are.
    Shit happens; that’s what influences innovation and better safety equipment, just like the SAFER barriers NASCAR now has. Sadly, you just don’t know everything till shit hits the fan. Then you address that issue, wait for the next one, and figure it out.

  • JohnMiller2013

    The proof is in the autonomous systems that have already logged millions of miles on real roads (not “controlled and simple situations”), causing fewer accidents and fatalities than human drivers.

    These systems don’t need to be as complex as human brains because they don’t need to do most of what humans do. Human brains evolved as many neural connections as they did to survive life on earth, which involves far more complex processing than just driving. The systems just need to drive safely and use their sensors to prevent accidents. Luckily, they pay attention 100% of the time and don’t send text messages on freeways.

  • James Klapper

    Besides the ability to interpret the sensor inputs correctly – a tough job in itself, and apparently what caused the Tesla crash – the higher-order decision making is the problem. We’re a long way from being able to do that with an autonomous system, as it requires a true artificial intelligence that seems to recede into the future as fast as time passes.

    You are being deluded by the simplistic, controlled conditions under which the developers are testing their systems. In the real world they will cause accidents, and the only reason they haven’t happened yet is that the only commercially available systems are like Tesla’s, supposedly just a driver warning/aid. But they have the potential to promote driver inattention no matter how many warnings Tesla puts in its manual.

  • JohnMiller2013

    One thing I learned while studying cognitive neuroscience in college is the near impossibility of artificial human intelligence. This we probably agree on. Human intelligence requires a human body – in particular, a nervous system (which relies on so many nuanced and interrelated genetic expressions that we are still a very long way off from identifying most of them) and a body to interact with the world (without which we could not develop intelligence). Developing a genuine artificial human intelligence requires first understanding how our bodies develop intelligence, which itself “seems to recede into the future as fast as time passes”. New neural subsystems are being discovered all the time, and Eric Kandel et al.’s highly reductionist and concise encyclopedia on the subject, Principles of Neural Science, gets thicker and more complex with each edition.

    Tesla’s system is not fully autonomous, and therefore irrelevant in discussing the safety of autonomous driving systems.

    I definitely am deluded in countless ways, and for either of us to pretend to have a knowledgeable position on this subject is probably a delusion of grandeur. But my feeling is that an autonomous driving system that’s safer than human drivers does not require a system nearly as complex as a human being, and that humans are notoriously inattentive drivers already, making the task of developing a safer-than-human system more feasible.

  • James Klapper

    Many years ago I read an article on automobile accident reduction that stuck in my memory. I do not remember the exact number, but the author had an estimate of the average number of interactions a driver had that required a judgement, and it was substantial – six figures in the time span under consideration. He was pleasantly surprised that the accident rate was so low considering that, and then went on to address making cars and roads more accident-survivable instead of concentrating on the driver problem.

    Bad driving does cause most accidents, but given the scope of the problem, drivers are a lot better than we give them credit for.

  • Dennis Lui

    That guy brought it on himself.

  • Tim

    If the Tesla driver had been paying attention to the road and his surroundings, like every responsible driver should, he would have seen the truck pulling out in front of him and braked and/or steered to avoid a collision.

    At the same time, if the truck driver had waited for a safer window to cross into highway traffic, this story might never have become a story in the first place.

    Finger pointing aside, as drivers, it is crucial to watch the road and pay attention to what is going on around you and your vehicle. Driver aids, no matter how seemingly autonomous, are no excuse to ignore the road and all the other people you share it with.

    Regardless, it is unfortunate that someone died because of such a simple mistake.