Our CES AV News Roundup - from Nvidia to Uber
Plus, we chat with the man who set the cross-country Tesla FSD record to find out how he did it.
Happy new year! And welcome to the Ride AI newsletter: your weekly digest of news and intelligence at the intersection of technology and transportation.
Ride AI will be at CES for the rest of the week! If you’d like to meet and chat, please email me directly at sophia@rideai.org. I will respond as promptly as possible.
Now, Here’s What You Need To Know Today.
A man has completed a cross-country road trip… driven entirely by Tesla’s FSD software with no disengagements.
The trip was a 2,732.4-mile coast-to-coast drive, starting at the Tesla Diner in Los Angeles and ending in Myrtle Beach, South Carolina. David Moss began his attempt on December 28th and completed it successfully on December 30th.
I caught up with David via email to confirm the facts after his record-setting attempt went viral. According to David, before the cross-country attempt he had already driven 6,550 miles with no disengagements on FSD version 14.2. He had set a personal goal of reaching 10,000 miles driven on FSD with no disengagements, and achieved it with the cross-country attempt.
According to David, he was “100% confident in the attempt.” His disengagement-free streak currently sits at 11,600 miles.
Minutes after David posted about his successful attempt, the official Tesla North America X account congratulated him, followed by Ashok Elluswamy, head of Tesla’s AI efforts; Elon Musk; and Andrej Karpathy, former director of AI at Tesla.
Nvidia has launched Alpamayo: open-source AI models aimed at helping autonomous vehicles reason like humans.
At CES 2026, Nvidia unveiled Alpamayo, a new family of open-source AI models, simulation tools, and datasets designed to help autonomous vehicles reason through complex real-world driving scenarios. The company framed the release as a major step toward what it calls “physical AI,” with CEO Jensen Huang saying the technology allows machines to understand, reason, and act in the physical world, including navigating rare or unfamiliar situations such as traffic light outages at busy intersections.
At the center of the release is Alpamayo 1, a 10-billion-parameter vision-language-action (VLA) model built around chain-of-thought reasoning. Nvidia said the model breaks down driving problems into steps, evaluates multiple possible actions, and selects the safest option, while also being able to explain why it made a particular decision. The company has made the core model available on Hugging Face and said developers could fine-tune it into smaller versions, use it to train simpler driving systems, or build tools such as automatic video labeling and decision evaluators on top of it.
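If you want to poke at the model yourself, Hugging Face is the place to start. Here’s a minimal sketch of loading an open checkpoint with the standard transformers library; note the repo ID "nvidia/Alpamayo-1" is my guess for illustration, so check Nvidia’s Hugging Face page for the actual name, and the model card for the recommended model class and input format.

```python
# Minimal sketch: loading an open checkpoint from Hugging Face with the
# transformers library. The repo ID below is hypothetical; check
# Nvidia's Hugging Face page for the real one, and the model card for
# the recommended model class, preprocessing, and prompt format.
from transformers import AutoProcessor, AutoModelForCausalLM

REPO_ID = "nvidia/Alpamayo-1"  # hypothetical repo ID

# trust_remote_code lets the repo ship its own model/processor classes,
# which is common for vision-language-action models.
processor = AutoProcessor.from_pretrained(REPO_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(REPO_ID, trust_remote_code=True)

# A VLA model typically consumes camera frames plus a text prompt and
# returns chain-of-thought reasoning alongside a proposed driving action.
```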
Alongside the model, Nvidia released an open dataset containing more than 1,700 hours of driving data collected across diverse regions and conditions, with a focus on rare and complex scenarios. It also introduced AlpaSim, an open-source simulation framework available on GitHub, designed to recreate real-world driving environments for large-scale testing. Nvidia said developers could combine real-world data with synthetic data generated using its Cosmos world models to train and validate Alpamayo-based autonomous driving systems before deploying them on public roads.
Uber, Lucid, and Nuro have revealed more details about their Gravity-based robotaxi.
At CES 2026, Uber, working with American luxury electric vehicle manufacturer Lucid and autonomous driving startup Nuro, unveiled the production-intent design of its next-generation global robotaxi, alongside a first look at an in-cabin rider experience designed by Uber. The reveal marked a key milestone for the partnership announced in July 2025, as the companies moved from prototypes toward a vehicle built specifically for large-scale robotaxi service on the Uber platform.
Autonomous on-road testing of the Lucid Gravity-based robotaxi began last month in the San Francisco Bay Area, with Nuro leading testing operations using engineering prototypes supervised by autonomous vehicle operators. The partners say they expect to operate more than 100 vehicles in the engineering test fleet. The production vehicle will feature a new sensor array of high-resolution cameras, solid-state lidar, and radar providing 360-degree perception. The array is embedded in a roof-mounted halo design with integrated LEDs that help riders identify their vehicle, display rider initials, and communicate trip status from pickup through dropoff. Pending final validation, production is expected to begin later this year at Lucid’s Arizona factory, ahead of an initial commercial launch in San Francisco.
Inside the vehicle, Uber took direct responsibility for the rider experience for the first time in an autonomous vehicle partnership, adding interactive displays that allow passengers to control climate settings, heated seats, and music; view real-time visualizations of what the robotaxi sees and plans to do; and contact support or request a pull-over. The Gravity robotaxi seats up to six passengers and offers generous luggage space, which could make it the first robotaxi to seat more than four people (Waymo’s Zeekr-based vehicle still seats only four).
Uber says it aims to deploy 20,000 or more Lucid vehicles equipped with the Nuro Driver over six years across multiple markets, with the fleet owned and operated by Uber or its partners and offered exclusively through the Uber app.
In Other CES Press Releases & News…
Mobileye ADAS solution chosen by a “Top 10 US Automaker.” Who could it be?
China’s Hesai to double lidar sensor production, targeting <$200 per sensor
Alright, that’s it from me… until next week. If you enjoy this newsletter, share it with a friend, colleague, or boss. Thank you for reading; Sophia out!