

Taha Abbasi tests Tesla FSD through every update, and Elon Musk just spotlighted a feature many owners may not realize exists: FSD now recognizes hand signals from cyclists, pedestrians, and traffic officers. On X, Musk wrote that “Tesla self-driving now recognizes hand signals,” calling it one of the system’s most underrated features. For anyone who drives in urban environments, this capability addresses one of the most challenging aspects of autonomous driving — understanding unstructured human communication.
Cyclists use hand signals for turns. Construction workers wave traffic through. Police officers direct intersections. School crossing guards stop traffic with gestures. These are situations where a system that only reads lane markings and traffic lights will fail. Tesla’s vision-only neural network now interprets specific arm and hand positions as directional or control signals, responding by decelerating, changing lanes, or proceeding. Taha Abbasi notes this is technically challenging — hand signals vary by region, individual style, and context. A cyclist’s left-turn signal looks different from a police officer’s stop gesture.
Waymo also recognizes hand signals, but it relies on cameras, lidar, and radar, a more expensive sensor suite. If Tesla achieves comparable recognition with cameras alone, that strengthens the case for the vision-only approach. For the broader industry, gesture recognition is becoming table stakes for any system operating in urban environments. Taha Abbasi sees it as a bellwether: if a system handles the ambiguity of human gestures, it is making real progress toward full autonomy.
Each capability like this feeds into FSD's steadily improving safety data. For the Cybercab robotaxi, which must handle every situation without a human driver, hand signal recognition is essential infrastructure.
Tesla FSD’s ability to recognize and respond to hand signals from traffic officers, construction workers, and other road users represents a critical milestone in autonomous driving development. Hand signals are one of the most challenging perception tasks for self-driving systems because they require understanding human gestures in real-time, interpreting context (is the person directing traffic, waving hello, or signaling a turn on a bicycle?), and responding appropriately. The fact that Tesla’s neural network can now handle this demonstrates significant advancement in the system’s visual intelligence and decision-making capability.
Elon Musk highlighting this as an “underrated” feature is significant because it speaks to the gap between FSD’s actual capabilities and public perception. While headlines focus on edge cases and crashes, features like hand signal recognition represent the steady, incremental improvements that bring FSD closer to human-level driving competence. Each new capability that FSD masters is one fewer scenario where a human driver needs to intervene — the fundamental metric that determines when truly autonomous driving becomes possible.
Early autonomous vehicle systems relied primarily on detecting other vehicles, lane markings, traffic signals, and static obstacles. Recognizing human gestures was considered one of the “hard problems” of autonomous driving because it requires not just detection but interpretation. A raised hand could mean “stop,” “thank you,” “go ahead,” or simply be someone scratching their head. The context — who is gesturing, where they’re standing, what’s happening around them — determines the meaning.
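The interpretation problem described above can be sketched as a toy decision rule: the same raised hand maps to different meanings depending on who makes it, where they stand, and whether the gesture is directed at the vehicle. Everything here (the class names, gesture labels, and rules) is an illustrative assumption, not Tesla's actual implementation, which is learned end-to-end rather than hand-coded.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Meaning(Enum):
    STOP = auto()
    PROCEED = auto()
    TURN_SIGNAL = auto()
    IRRELEVANT = auto()

@dataclass
class GestureObservation:
    gesture: str          # e.g. "raised_hand", "beckoning" (hypothetical labels)
    actor_role: str       # e.g. "traffic_officer", "cyclist", "pedestrian"
    in_roadway: bool      # standing in the travel lane vs. on the sidewalk
    facing_vehicle: bool  # gesture directed at our vehicle

def interpret(obs: GestureObservation) -> Meaning:
    """Same gesture, different meaning depending on context (toy rules)."""
    if not obs.facing_vehicle:
        return Meaning.IRRELEVANT
    if obs.gesture == "raised_hand":
        if obs.actor_role == "traffic_officer" and obs.in_roadway:
            return Meaning.STOP
        if obs.actor_role == "cyclist":
            return Meaning.TURN_SIGNAL  # assumed: raised arm read as a turn signal
        return Meaning.IRRELEVANT       # likely a wave, or scratching their head
    if obs.gesture == "beckoning" and obs.in_roadway:
        return Meaning.PROCEED
    return Meaning.IRRELEVANT
```

The point of the sketch is that context, not the gesture alone, carries the meaning: the identical `"raised_hand"` input yields three different outputs depending on the actor.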
Tesla’s approach to this problem leverages its massive real-world driving dataset. With millions of vehicles collecting camera data from diverse driving scenarios worldwide, Tesla has been able to train its neural networks on an unprecedented volume of hand signal encounters. This data advantage is something that competitors operating smaller fleets — like Waymo with its thousands of dedicated robotaxis — cannot easily replicate, though Waymo compensates with lidar sensing and geofenced operational domains.
For the hundreds of thousands of Tesla owners using FSD (Supervised), hand signal recognition directly improves the day-to-day experience. Construction zones, school zones with crossing guards, and parking lots with attendants are common scenarios where hand signals direct traffic flow. Previously, these situations often required driver intervention — the car would slow or stop when encountering a person in the road but might not correctly interpret their gestured instructions to proceed. Now, FSD can understand “come forward,” “stop,” and “go around” gestures, making these interactions smoother and more natural.
This capability is especially valuable in urban driving environments where hand signals are frequent. City centers, event venues, airport pickup zones, and emergency scenes often involve human traffic direction. FSD’s ability to handle these scenarios without driver intervention reduces the cognitive load on the human supervisor and moves the system closer to the point where supervision becomes unnecessary.
From a technical perspective, hand signal recognition requires Tesla’s vision system to perform several complex tasks simultaneously: detecting human figures, identifying their body pose and hand positions, classifying the gesture, determining whether it’s directed at the Tesla vehicle, and generating an appropriate driving response — all in real-time at highway and urban speeds. This is accomplished using Tesla’s custom-designed HW4 computer and neural network architecture running on eight cameras without any lidar or radar assistance on newer vehicles.
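As a rough illustration of two of those stages, here is a minimal Python sketch: a toy pose-based gesture classifier feeding a toy response policy. The keypoint convention, thresholds, gesture labels, and action names are all assumptions for illustration; Tesla's production system learns these mappings in neural networks rather than through rules like these.

```python
from dataclasses import dataclass

@dataclass
class PersonTrack:
    keypoints: dict      # joint name -> (x, y) in normalized image coords
    distance_m: float    # estimated range to the person
    facing_ego: bool     # whether the gesture appears directed at our vehicle

def classify_gesture(kp: dict) -> str:
    """Toy classifier: compare wrist position to shoulder position.

    Image y grows downward, so a wrist *above* the shoulder has a smaller y.
    """
    if kp["r_wrist"][1] < kp["r_shoulder"][1]:
        return "raised_hand"
    if abs(kp["l_wrist"][0] - kp["l_shoulder"][0]) > 0.5:
        return "arm_extended"    # e.g. a cyclist signaling a turn
    return "neutral"

def plan_response(gesture: str, track: PersonTrack) -> str:
    """Map gesture + relevance to a driving action (hypothetical policy)."""
    if not track.facing_ego:
        return "maintain"        # gesture not aimed at us; keep driving
    if gesture == "raised_hand":
        return "stop" if track.distance_m < 30 else "slow"
    if gesture == "arm_extended":
        return "yield"
    return "maintain"
```

Even in this toy form, the pipeline shape is visible: perception produces a structured observation, classification turns pose geometry into a discrete gesture, and planning converts the gesture plus context into a control decision, with every stage on a tight real-time budget.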
Waymo’s robotaxis also recognize hand signals, but they benefit from lidar’s precise 3D spatial data to detect human figures and their poses. Cruise, the GM-backed robotaxi unit that has since been wound down, likewise used a multi-sensor approach. Tesla’s camera-only achievement is more technically challenging and arguably more impressive, though the debate between camera-only and multi-sensor approaches for autonomous driving remains unsettled in the industry.
Hand signal recognition is part of a broader trend of FSD becoming more capable in unstructured environments — situations without clear lane markings, traffic signals, or predictable traffic patterns. Future FSD updates are expected to improve recognition of cyclist hand signals (turn indicators), pedestrian intent prediction (will they cross or wait?), and interaction with emergency vehicle personnel. As these capabilities mature, the case for transitioning FSD from supervised to unsupervised operation grows stronger, bringing Tesla’s vision of a full robotaxi network closer to reality.
About the Author: Taha Abbasi is a technology executive, CTO, and applied frontier tech builder. Read more on Grokpedia | YouTube: The Brown Cowboy | tahaabbasi.com