Nvidia announces new open AI models and tools for autonomous driving research


Nvidia announced new AI infrastructure and models on Monday as it works to build the underlying technology for physical AI, including robots and autonomous vehicles that can perceive and interact with the real world.

The semiconductor giant announced Alpamayo-R1, an open reasoning vision language model for autonomous driving research, at the NeurIPS AI conference in San Diego, California. The company claims it is the first reasoning vision language model focused on autonomous driving. Vision language models can process text and images together, allowing vehicles to “see” their surroundings and make decisions based on what they perceive.

The new model is built on Nvidia's Cosmos Reason model, a reasoning model that thinks through decisions before responding. Nvidia initially released the Cosmos model family in January 2025 and released additional models in August.

Nvidia said in a blog post that technology like Alpamayo-R1 is critical for companies looking to reach Level 4 autonomous driving, which means full autonomy within a specific area and under specific conditions.

Nvidia hopes this type of reasoning model will give self-driving vehicles the “common sense” to handle nuanced driving decisions more like humans do.

This new model is available on GitHub and Hugging Face.

Along with the new vision model, Nvidia has also uploaded new step-by-step guides, inference resources, and post-training workflows to GitHub — collectively called the Cosmos Cookbook — to help developers better use and train Cosmos models for their specific use cases. The cookbook covers data processing, synthetic data generation, and model evaluation.


These announcements come as the company pushes full speed toward physical AI as a new driver of demand for its advanced GPUs.

Nvidia co-founder and CEO Jensen Huang has repeatedly said that the next wave of AI is physical AI. Bill Dally, chief scientist at Nvidia, echoed that sentiment in a conversation with TechCrunch over the summer, focusing on physical AI in robotics.

“I think robots will eventually be a big player in the world, and we want to make the brains of basically all robots,” Dally said at the time. “To achieve this, we need to start developing the underlying technologies.”
