Building the driver of the future: autonomy at GM

2026-03-19

By Rashed Haq, Vice President, Vehicle Autonomy


While robotaxis have captured much of the public attention around autonomy, our focus is broader: bringing advanced autonomous capabilities to personal vehicles at massive scale across both gas and electric platforms with a common architecture. Doing that requires not only a strong autonomy stack, but one that can improve rapidly and deploy across millions of vehicles, just as Super Cruise today spans vehicles from the Chevrolet Bolt to the Cadillac Escalade IQ.

As we advance from hands-off to eyes-off technology, our autonomy data-collection vehicles have been running routes in a variety of conditions across select states. As of this month, we’ve entered the next phase of supervised testing of our automated technology on public roads. 

Real-world supervised testing helps close the gap between simulation and reality while continuously improving both the simulation engine and the autonomous capability. It also allows engineers to evaluate behaviors that are difficult to reproduce in simulation or on closed courses; even the most advanced simulation environments cannot perfectly replicate the complexity of real driving.

Unusual construction layouts, degraded lane markings, sudden weather changes, and unpredictable human behavior create edge cases that stress the system in ways controlled environments cannot. Each of these events becomes a data point that feeds back into the autonomy development loop, improving the AI driving model and overall system robustness. 

A long history of autonomy at GM

Autonomous driving isn’t new to General Motors. Long before “self-driving” entered the vocabulary, GM engineers were exploring how vehicles could reduce the cognitive load of driving. 

In the 1950s, GM’s Firebird concept cars imagined vehicles that could sense and respond to their surroundings. By the 1990s, GM research vehicles were driving autonomously on highways in Southern California using magnetic markers embedded in the road. In 2007, a heavily modified Chevrolet Tahoe developed by Carnegie Mellon won the DARPA Urban Challenge without human intervention.

The Chevrolet Tahoe “Boss,” a self-driving SUV, made history by driving the course safely and swiftly.

Those early efforts explored what autonomy might look like. Today, advances in artificial intelligence, sensors, and compute are making it possible to deploy these systems safely in the real world. 

At GM, our focus is on accelerating the learning rate of autonomous driving systems — how quickly the technology improves with each development cycle.

In 2016, GM invested in Cruise and later integrated the engineering organization in-house, combining world-class autonomy expertise with GM’s strengths in Super Cruise, vehicle engineering, and manufacturing.

This work now forms the foundation for hands-off, eyes-off driving, which will debut on the 2028 Escalade IQ. Over time, our goal is to expand autonomy across increasing levels of capability — from hands-off assistance to eyes-off operation, and eventually vehicles that can operate on their owner’s behalf in certain environments. This will give our customers the option to select the level of autonomy most relevant to them across our vehicle portfolio.

Accelerating the learning rate of autonomy

Autonomous driving is one of the most demanding applications of physical AI. Unlike purely digital AI systems, an autonomous vehicle must perceive a dynamic environment, predict how it will evolve, plan safe trajectories, and control the vehicle in real time.

The central engineering challenge in autonomous driving is how quickly the system improves. We refer to this as the learning rate of the autonomy stack. A higher learning rate means the system becomes safer and more capable with each development cycle. 

At GM, three factors accelerate that learning:

1. Data: The feedback loop of 800 million miles

Our foundation models are trained on millions of miles of real-world driving.

Autonomous driving improves through experience, and the diversity and volume of driving data are critical.

A small autonomous fleet might accumulate a few million miles annually. A large automaker deploying driver-assistance systems across its lineup can collect those miles far more quickly.

Nearly 700,000 Super Cruise-enabled vehicles have driven more than 800 million hands-free miles across 23 models in North America. We also draw on learnings from millions of hands-free, eyes-off miles driven by Cruise. In addition, hundreds of data-collection vehicles with production-intent sensors and compute gather high-precision data.
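The scale gap is easy to quantify. Here is a back-of-the-envelope sketch using the article's fleet figures; the robotaxi comparison (fleet size, annual mileage per car) is an illustrative assumption, not a GM figure.

```python
# Fleet-scale arithmetic using the article's stated figures. The robotaxi
# comparison (fleet size, annual mileage) is an illustrative assumption.

SUPER_CRUISE_VEHICLES = 700_000        # "nearly 700,000" enabled vehicles
HANDS_FREE_MILES = 800_000_000         # "more than 800 million" miles

# Cumulative hands-free miles per enabled vehicle, on average
miles_per_vehicle = HANDS_FREE_MILES / SUPER_CRUISE_VEHICLES   # ~1,143

# Hypothetical small robotaxi fleet, for contrast
ROBOTAXI_CARS = 500                    # assumed fleet size
MILES_PER_CAR_PER_YEAR = 50_000        # assumed annual mileage per car
robotaxi_miles_per_year = ROBOTAXI_CARS * MILES_PER_CAR_PER_YEAR  # 25,000,000

print(f"~{miles_per_vehicle:,.0f} hands-free miles per Super Cruise vehicle")
print(f"{robotaxi_miles_per_year:,} robotaxi miles per year, under these assumptions")
```

Under these assumptions, a single year of consumer-fleet data collection dwarfs what a dedicated test fleet can accumulate, which is the point of the comparison above.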

Each mile captures real-world driving conditions across geographies, weather environments, traffic patterns, road types and variations of road actor behaviors.

The result is a powerful feedback loop: data at this scale accelerates system learning with every model update.
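That feedback loop can be sketched in miniature. Everything below (the data structures, the surprise-score threshold, the stage names) is a hypothetical illustration of the cycle, not GM's actual pipeline.

```python
# Toy sketch of a fleet-data feedback loop. All names, fields, and
# thresholds here are hypothetical placeholders, not GM's pipeline.

def mine_edge_cases(fleet_logs):
    """Keep only the unusual events worth folding into training."""
    return [log for log in fleet_logs if log["surprise_score"] > 0.8]

def retrain(model, new_examples):
    """Stand-in for training: grow the dataset, bump the model version."""
    model["training_set"].extend(new_examples)
    model["version"] += 1
    return model

def development_cycle(model, fleet_logs):
    """One cycle: mine fleet data -> retrain -> redeploy -> collect again."""
    edge_cases = mine_edge_cases(fleet_logs)
    return retrain(model, edge_cases)

model = {"version": 1, "training_set": []}
logs = [
    {"event": "routine lane keep",   "surprise_score": 0.10},
    {"event": "construction detour", "surprise_score": 0.90},
    {"event": "sudden hail",         "surprise_score": 0.95},
]
model = development_cycle(model, logs)
print(model["version"], len(model["training_set"]))  # 2 2
```

The design point the sketch captures: routine miles are cheap to collect but contribute little, so the loop's value comes from mining the rare, surprising events out of a very large stream.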

To complement real-world data, simulation allows engineers to test the system at even larger scale.

GM’s simulation environment enables engineers to simulate roughly 100 years of human driving every day. Engineers can replay real events, modify them to create new scenarios, and generate entirely new driving situations.

This allows the autonomy stack to encounter rare or hazardous scenarios far more frequently than through road testing alone, dramatically accelerating development.
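The arithmetic behind the "100 years per day" claim gives a feel for the infrastructure involved. In this sketch, the faster-than-real-time factor and the average simulated speed are assumptions, not GM figures.

```python
# Back-of-the-envelope sketch of the stated simulation scale:
# "roughly 100 years of human driving every day". The speed-up factor
# and average speed below are illustrative assumptions, not GM figures.

SIM_YEARS_PER_DAY = 100                  # claim from the article
DAYS_PER_YEAR = 365.25

# Driving-days of experience generated per wall-clock day
driving_days_per_day = SIM_YEARS_PER_DAY * DAYS_PER_YEAR       # 36,525

# If each simulator instance runs at SPEEDUP x real time, the number of
# parallel instances needed is:
SPEEDUP = 10.0                           # assumed faster-than-real-time factor
instances_needed = driving_days_per_day / SPEEDUP              # 3,652.5

# Assumed average simulated speed, to translate time into miles
AVG_MPH = 30.0
sim_miles_per_day = driving_days_per_day * 24 * AVG_MPH        # 26,298,000

print(f"{driving_days_per_day:,.0f} driving-days of experience per day")
print(f"~{instances_needed:,.0f} parallel instances at {SPEEDUP:.0f}x real time")
print(f"~{sim_miles_per_day:,.0f} simulated miles per day")
```

Even under these generous assumptions, sustaining that throughput requires thousands of concurrently running simulator instances, which is why simulation at this scale is an infrastructure problem as much as a modeling one.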

2. Physical AI: scalable driving intelligence 

To meet the demands of physical AI in autonomous driving, modern AI architectures are increasingly able to generalize across different vehicles, sensors, geographies, and driving conditions.

That means less retraining is required as autonomy expands to new platforms or regions. This flexibility allows us to scale autonomy across electric and gas vehicles, and across brands, in a way few other automakers can.

3. Vertical Integration: the systems advantage

At GM, we are capable of building a vehicle every 60 seconds. We design the hardware, develop the software stack, validate the system, and manufacture vehicles at global scale. This vertical integration allows us to move quickly from development to real vehicles, accelerating testing, data collection, and iteration.

Deploying autonomy across millions of vehicles fundamentally changes the cost curve for both the company and ultimately customers. A subset of sensors, compute platforms, and software developed for hands-free driving can be shared with higher levels of automation to achieve better economies of scale.

The growing adoption of Super Cruise demonstrates this potential. Hundreds of thousands of customers now use the system regularly — often twice a day during their daily commute — showing strong demand for advanced driver assistance and autonomy.

2028 Cadillac Escalade IQ. Simulated images. Production vehicle may vary. The Escalade IQ will feature next-generation electrical architecture in 2028.

Evolving the driver’s relationship with vehicles  

Autonomous driving represents a fundamental shift in how people interact with vehicles—reducing stress, improving safety, and returning time to drivers.

As our systems learn from real-world driving, software updates will continuously refine how vehicles perceive their environment, make decisions, and respond on the road.

Our goal is to build an autonomy platform with the robustness, safety, and scalability required to deploy across millions of vehicles worldwide.

Achieving that goal depends on two things: accelerating how quickly the system learns and deploying that technology at the scale only a global automaker can deliver.

That combination is what will ultimately bring autonomous driving to everyday vehicles.