CES 2026: The ChatGPT Moment for Autonomous Driving

By Henrik Bork | Translated by AI

Geely presents its "World Action Model" at CES, trained with millions of videos. Xpeng and Nvidia are also relying on end-to-end AI. 2026 could be the year of the breakthrough—at least in China.

A look at Geely's booth at CES 2026 on January 6, 2026, at the Las Vegas Convention Center.
(Image: picture alliance / Anadolu | Tayfun Coskun)

At CES in Las Vegas, Geely introduced new technology for its driver assistance systems. The vehicles of China's second-largest car manufacturer will now be equipped with a new "World Action Model" (WAM). According to a Geely press release issued at CES, it is a kind of "super AI brain" that "significantly improves perception and reasoning."

WAM is essentially a form of artificial intelligence that works more extensively with visual data and simulations than before. By using the word "Action" in the product name World Action Model, Geely aims to emphasize that conclusions drawn from video-based training can now be directly translated into actions, such as steering or braking when driving assistance is activated.

"After Li Auto and Xpeng took the lead, Geely has now unveiled its own World Model," reported the Chinese automotive portal 36Kr from the expo. "The ultra-large model route for intelligent, assisted driving has become an industry consensus."

Architecture of AI Agents

Geely bombarded the trade press with a whole range of new product names, likely to emphasize the sophistication of its new system. In addition to WAM, "G-ASD" and "Full Domain AI 2.0" were touted as innovations. G-ASD, which stands for "Geely Afari Smart Driving," refers to the new version of the automaker's ADAS system. Geely recently consolidated several of its artificial intelligence subsidiaries into a new company in Chongqing, in southwest China. G-ASD is the first product from this new "Chongqing Afari Technology Co., Ltd."

G-ASD is the system that implements the World Model for autonomous driving. "G-ASD utilizes an industry-leading Smart-AI agent architecture and consistently applies the WAM World Action Model, making it one of the most model-rich and advanced driver assistance systems in the world," writes Geely.

With "Full Domain AI 2.0," also presented by Geely in Las Vegas as a significant innovation, the application of WAM across multiple domains is meant: driving assistance, intelligent cockpit, chassis, and safety systems. According to Geely, previous fragmented applications have been replaced by a "central vehicle-wide AI architecture." The result is a kind of "super AI brain" that "coordinates all functions across domains" and allows them to "work together in real time," the automaker explained.

Taken together, WAM, G-ASD, and Full Domain AI 2.0, along with a sales target of 3.45 million vehicles, demonstrate that Geely aims to "reshape the global automotive industry through intelligence" in 2026, according to the press release, written in typically self-assured Geely style.

Action, Not Recommendation

Objectively, Geely is now following automakers such as Xpeng and Li Auto, as well as other developers of driver assistance systems such as Huawei, in focusing more than ever on World Models and "end-to-end" approaches. Explicit instructions in the form of programmed rules are losing significance, while training models on videos is becoming more central.
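
To make the shift concrete, here is a minimal, purely illustrative Python sketch. It is not based on any vendor's actual software; every name and threshold in it is invented. It contrasts a hand-programmed rule, in which engineers encode the braking behavior explicitly, with an end-to-end policy that stands in for a model trained on video and maps camera frames directly to steering and braking commands.

from dataclasses import dataclass

@dataclass
class Control:
    steer: float  # -1.0 (full left) to 1.0 (full right)
    brake: float  # 0.0 (no braking) to 1.0 (full braking)

# Paradigm 1: explicit, hand-programmed rules.
def rule_based_controller(obstacle_distance_m: float) -> Control:
    # Engineers choose the threshold and the response, case by case.
    if obstacle_distance_m < 10.0:
        return Control(steer=0.0, brake=1.0)
    return Control(steer=0.0, brake=0.0)

# Paradigm 2: an end-to-end policy learned from video.
class EndToEndPolicy:
    """Stand-in for a neural network trained on millions of driving videos."""
    def __call__(self, camera_frames: list) -> Control:
        # A real model would map raw frames directly to commands;
        # this dummy value only keeps the sketch runnable.
        return Control(steer=0.0, brake=0.0)

print(rule_based_controller(obstacle_distance_m=7.5))
print(EndToEndPolicy()(camera_frames=[]))

The point of the contrast is the data flow: in the second paradigm, pixels go in and control values come out, with no human-written rule sitting in between.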

Competitor Xpeng likewise used the prominent CES stage in Las Vegas to present its new AI model "Xpeng VLA 2.0." This, too, is a World Model that, according to the automaker, has been trained on millions of videos and translates this information directly into action instructions, without the earlier translation layer of language.

AI with Video Models Instead of Language Models

While Xpeng still integrates language in order to take driver instructions into account, the driving assistance system's decisions in specific traffic situations are based on behavior patterns derived from videos.

Instructions are no longer given through language; instead, actions "learned" from simulations in the physical world form the core of the new AI brains. Nvidia, which announced its entry into the robotaxi business at CES and presented its product "Alpamayo," emphasized this paradigm shift from LLMs and language-based AI to physical AI.

"The Chat-GPT moment for physical AI is here—the moment when machines begin to understand, reason, and act in the real world," said Nvidia CEO Jensen Huang, according to media reports in Las Vegas.

New ADAS Systems with AI Technology

All new driver assistance systems presented at CES and in China at the beginning of the year have one thing in common: they rely on "end-to-end" architectures. The AI models accessed during driving are designed to immediately translate visual data from the vehicle's cameras into concrete actions without unnecessary detours.
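
On the training side, "end-to-end" often amounts to imitation learning on pairs of recorded video and driver actions. The following short PyTorch sketch is an assumption for illustration only; none of the vendors mentioned here have published their training code, and the network, shapes, and data below are placeholders. It fits a tiny model that maps a clip of camera frames directly to a steering and braking value, with random tensors standing in for real footage and recorded maneuvers.

import torch
import torch.nn as nn

# Toy setup: a short clip of camera frames in, a [steer, brake] command out.
FRAMES, HEIGHT, WIDTH = 4, 32, 32

class ClipToControl(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                                 # flatten the whole clip
            nn.Linear(FRAMES * 3 * HEIGHT * WIDTH, 128),
            nn.ReLU(),
            nn.Linear(128, 2),                            # [steer, brake]
        )

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        return self.net(clip)

model = ClipToControl()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Random stand-ins for (video clip, recorded driver action) training pairs.
clips = torch.randn(8, FRAMES, 3, HEIGHT, WIDTH)
actions = torch.randn(8, 2)

for _ in range(10):                          # a few imitation-learning steps
    optimizer.zero_grad()
    loss = loss_fn(model(clips), actions)    # match the recorded human action
    loss.backward()
    optimizer.step()

In this imitation-learning setting, real systems replace the toy network with large video backbones and train on fleet-scale data, but the supervision signal, the recorded human action, plays the same role.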

Chinese automakers and Nvidia have each crafted impressive PR texts. Who is ahead of the competition may matter less than the fact that there is genuinely new momentum in the development of autonomous driving. "2026 could be the first year with truly autonomous driving" in China and the United States, said He Xiaopeng, the founder of Xpeng, in Las Vegas.

From now on, the focus will be on who can bring the most cars with these new "World Models" to the road and collect the most data for their iterative improvement.