A Tesla Cybertruck crashed into a concrete overpass barrier on Houston’s Eastex Freeway (US-69) in August 2025. The driver, Justine Saint Amour, is suing Tesla for $1 million, claiming the vehicle’s Full Self-Driving system malfunctioned and caused the collision. Elon Musk says the logs tell a different story. The dashcam footage has gone viral. And the debate over what actually happened in those final seconds before impact has become one of the most contentious flashpoints in the ongoing conversation about autonomous driving safety.
This is not a simple case. The details matter, and both sides have evidence that supports their position. Understanding what happened requires looking at the dashcam video, Tesla’s internal logs, the legal claims, and the broader context of how FSD operates in edge cases.
The dashcam footage, released as part of the lawsuit and widely shared across social media and news outlets, shows the Cybertruck approaching a Y-split where the freeway divides at an overpass. The road curves to the right. The vehicle initially appears to follow the curve but then continues straight, heading directly into a concrete barrier at highway speed. The impact is severe.
The video quickly went viral with headlines suggesting that Tesla’s autopilot system had attempted to drive a mother and her baby off an overpass. That framing is sensationalistic, but the footage is genuinely concerning. The vehicle clearly fails to navigate a highway split that millions of drivers handle routinely.
For anyone who has used FSD or Autopilot, the driving behavior in the video does look unusual. The system typically handles highway curves with reasonable competence, and a straight-line departure from a clearly marked curve is the kind of failure mode that raises immediate questions about what was controlling the vehicle.
Elon Musk responded directly on X (formerly Twitter), stating that Tesla’s internal logs show the driver disengaged the self-driving system 4 seconds before the crash. His exact words were, “Logs show driver disengaged Autopilot four seconds before crashing.” He added, “As anyone knows who uses it, that video is not how Autopilot drives.”
Musk’s claim, if accurate, would mean the vehicle was under full manual control during the critical final moments before impact. At typical highway speeds of 45 to 60 mph, four seconds covers roughly 250 to 350 feet. That is a significant distance and timeframe in which a human driver should theoretically have been able to correct course.
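As a rough check on that math, a few lines of Python reproduce the range. The speeds here are illustrative highway values, not figures from the crash logs:

```python
# Back-of-the-envelope check on the distance covered in a 4-second window.
# The speeds are illustrative; the actual speed at impact is not public.

MPH_TO_FPS = 5280 / 3600  # 1 mph = 1.4667 ft/s
WINDOW_S = 4.0            # the disengagement-to-impact window Musk cited

for mph in (45, 55, 60, 70):
    feet = mph * MPH_TO_FPS * WINDOW_S
    print(f"{mph} mph -> {feet:.0f} ft in {WINDOW_S:.0f} s")

# Output:
# 45 mph -> 264 ft
# 55 mph -> 323 ft
# 60 mph -> 352 ft
# 70 mph -> 411 ft
```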
Tesla has not formally responded to the lawsuit through legal channels as of this writing, and the case remains in its early stages. But Musk’s public statement on X establishes the company’s likely defense: the system was not in control when the crash occurred.
The plaintiff’s legal team and several automotive safety analysts have pushed back on the 4-second disengagement narrative. Their argument centers on what happened before the disengagement. If FSD was engaged and already failing to track the curve correctly, the driver may have disengaged precisely because the system was driving the vehicle toward the barrier. In other words, the disengagement was not the driver choosing to take manual control for routine reasons. It was an emergency reaction to a system that was already on the wrong path.
This distinction matters enormously. If the system placed the vehicle in a dangerous trajectory and the driver’s attempt to intervene came too late, the root cause is still the autonomous system’s failure, even if the logs technically show the system was off at the moment of impact.
This is a known challenge in autonomous driving accountability. Engineers and ethicists call it the “handoff problem.” When an automated system fails and hands control back to a human in a split second, the human is often in the worst possible position to respond. They may be disoriented, caught off guard, or simply unable to correct the vehicle’s trajectory in the available time.
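A quick sketch shows why a 4-second window is tighter than it sounds. The perception-reaction times below are generic values from traffic-engineering literature (AASHTO’s highway design standard assumes 2.5 seconds); the 60 mph speed is an assumption for illustration, not a figure from this case:

```python
# A rough model of the handoff problem: how much of a 4-second window
# survives after the driver merely perceives and reacts to the failure.
# Reaction times are generic traffic-engineering values, not measurements
# from this incident.

MPH_TO_FPS = 5280 / 3600
WINDOW_S = 4.0
SPEED_MPH = 60  # assumed highway speed, for illustration only

for reaction_s in (1.0, 1.5, 2.5):
    remaining_s = WINDOW_S - reaction_s
    remaining_ft = SPEED_MPH * MPH_TO_FPS * remaining_s
    print(f"reaction {reaction_s:.1f} s -> {remaining_s:.1f} s "
          f"({remaining_ft:.0f} ft) left to actually steer clear")
```

At the conservative end, a startled driver may have under two seconds of usable correction time, which is why a log entry showing "driver in control" at the moment of impact does not by itself settle the causation question.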
This crash is not happening in a vacuum. The National Highway Traffic Safety Administration (NHTSA) has been investigating Tesla’s driver-assistance systems for years, and the company has issued multiple recalls related to FSD and Autopilot behavior. In late 2023, Tesla recalled over 2 million vehicles to add additional safeguards to its Autopilot system after a two-year investigation found the system allowed drivers to misuse it.
More recently, concerns have emerged about FSD’s handling of specific edge cases, including highway splits, construction zones, and scenarios where lane markings are ambiguous or absent. Each incident adds to a growing body of data that regulators, insurers, and the legal system are using to evaluate whether these systems are safe enough for widespread use.
None of this means FSD is fundamentally broken. The system has logged billions of miles and handles the vast majority of driving scenarios competently. But the edge cases, the rare situations where the system fails, are exactly where safety-critical scrutiny belongs.
The Saint Amour lawsuit could set important precedents depending on how it unfolds. If Tesla’s logs are admitted as evidence and the court accepts the 4-second disengagement timeline, it becomes much harder for the plaintiff to prove the system caused the crash. But if the plaintiff’s team can demonstrate that the system’s behavior prior to disengagement created the dangerous situation, the legal landscape shifts significantly.
Discovery in this case will likely produce detailed telemetry data, including steering inputs, speed, acceleration, lane-keeping data, and the exact state of every FSD parameter in the seconds and minutes before the crash. That data will either vindicate Tesla’s position or expose a system failure that the company will need to address.
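To make the dispute concrete, here is a sketch of the ordering analysis the plaintiff’s experts would likely run on that telemetry. Everything about the record format, field names, thresholds, and values is hypothetical; Tesla’s actual log schema is not public:

```python
from dataclasses import dataclass

# Hypothetical telemetry frames; field names and values are invented
# for illustration. Tesla's real log schema is not public.
@dataclass
class Frame:
    t: float               # seconds relative to impact (negative = earlier)
    fsd_engaged: bool      # was the automated system in control?
    lane_offset_ft: float  # lateral deviation from the intended lane

frames = [
    Frame(-8.0, True,  0.3),
    Frame(-6.0, True,  0.9),
    Frame(-5.0, True,  2.1),   # deviation begins while FSD is engaged
    Frame(-4.0, False, 3.5),   # driver disengages (the logged event)
    Frame(-2.0, False, 6.0),
    Frame( 0.0, False, 9.0),   # impact
]

DEVIATION_THRESHOLD_FT = 1.5  # arbitrary cutoff for "off trajectory"

disengage_t = next(f.t for f in frames if not f.fsd_engaged)
first_bad_t = next(f.t for f in frames
                   if f.lane_offset_ft > DEVIATION_THRESHOLD_FT)

if first_bad_t < disengage_t:
    print(f"Deviation began at t={first_bad_t}s, before disengagement at "
          f"t={disengage_t}s: consistent with an emergency takeover.")
else:
    print(f"Deviation began at t={first_bad_t}s, after disengagement at "
          f"t={disengage_t}s: consistent with manual-control error.")
```

The point is the ordering question, not the invented numbers: if the trajectory deviation predates the disengagement, the log entry Musk cited is consistent with the plaintiff’s account rather than contradicting it.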
Cases like this also influence how future FSD and Autopilot litigation is handled. Every settlement, verdict, or ruling creates a reference point for attorneys, judges, and regulators evaluating similar claims. The Houston Cybertruck crash may not be the biggest FSD incident in terms of injuries, but it has generated enough public attention to become a landmark case in autonomous driving law.
If you are a Tesla owner using FSD, this case is a reminder that the system requires your attention at all times. Tesla’s own terms of service make this clear, and the user interface displays regular prompts to keep hands on the wheel and eyes on the road. But the gap between what the marketing suggests (the name “Full Self-Driving” implies complete autonomy) and what the system actually delivers (a supervised driver-assistance system) continues to cause confusion.
FSD has improved dramatically over the past year, particularly with recent software updates that have enhanced the system’s ability to handle complex intersections, highway merges, and urban driving. But it is not a fully autonomous system, and treating it as one creates real risk.
The smartest approach for any FSD user is to think of the system as a highly capable co-pilot that occasionally makes mistakes. You need to be ready to take over at any moment, especially in edge cases like highway splits, construction zones, and unfamiliar road geometry.
The outcome of this case will not single-handedly determine the future of autonomous driving, but it will add an important data point to the ongoing conversation about accountability, transparency, and the responsibilities of companies deploying these systems. Whether the logs or the dashcam footage tell the more complete story is a question for the courts. But the broader question of how we assign responsibility when humans and machines share control of a vehicle is one that the entire industry needs to answer.
Taha Abbasi is a technology analyst and automotive content creator who tests autonomous driving systems in real-world conditions. Follow his work on YouTube for hands-on FSD testing and analysis.
Related reading: Tesla Cybercab Production Ramps at Giga Texas