
Nvidia has released a new set of robotics foundation models, simulation tools, and advanced hardware at CES 2026, moves that signal the company’s ambition to become the default platform for robotics, much as Android became the operating system for smartphones.
Nvidia’s move into robotics reflects a broader shift in the industry as AI moves from the cloud to machines that can learn to reason about the physical world, enabled by cheaper sensors, advanced simulation, and AI models that can increasingly generalize across tasks.
Nvidia on Monday revealed details about its entire physical AI ecosystem, including new open foundation models, all available on Hugging Face, that allow robots to think, plan, and adapt across diverse tasks and environments, moving beyond narrow, task-specific bots.
These models include: Cosmos Transfer 2.5 and Cosmos Predict 2.5, world models for generating synthetic data and evaluating robotics policies in simulation; Cosmos Reason 2, a vision language model (VLM) that allows AI systems to see, understand, and act in the physical world; and Isaac GR00T N1.6, its next-generation vision language action (VLA) model designed specifically for humanoid robots. GR00T uses Cosmos Reason as its brain, unlocking whole-body control so humanoids can move and manipulate objects simultaneously.
Nvidia also introduced Isaac Lab-Arena at CES, an open source simulation framework hosted on GitHub that serves as another component of the company’s physical AI platform, enabling safe virtual testing of robotic capabilities.
The platform promises to address a critical industry challenge: As robots learn increasingly complex tasks, from precise object manipulation to cable installation, validating these capabilities in physical environments can be expensive, slow, and risky. Isaac Lab-Arena tackles this problem by integrating assets, task scenarios, training tools, and established benchmarks such as LIBERO, RoboCasa, and RoboTwin, creating a unified standard where the industry previously lacked one.
Powering the ecosystem is Nvidia OSMO, an open source orchestration layer that acts as connective infrastructure, tying together entire workflows from data generation to training across desktop and cloud environments.
And to help keep it all running, there’s the new Blackwell-powered Jetson T4000 module, the newest member of the Thor family. Nvidia is promoting it as a cost-effective on-device computing upgrade that delivers 1,200 teraflops of AI compute and 64GB of memory while operating at 40 to 70 watts.
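Taken at face value, those figures work out to roughly 17 to 30 teraflops per watt, depending on where in the power envelope the module is running. A back-of-envelope check, using only the numbers quoted above:

```python
# Rough efficiency estimate for the Jetson T4000, based on Nvidia's
# stated figures: 1,200 TFLOPS of AI compute at a 40-70 W power envelope.
tflops = 1200

for watts in (40, 70):
    efficiency = tflops / watts  # TFLOPS per watt at this power draw
    print(f"{watts} W -> {efficiency:.1f} TFLOPS per watt")
# prints:
# 40 W -> 30.0 TFLOPS per watt
# 70 W -> 17.1 TFLOPS per watt
```

As with all vendor teraflops figures, the number depends on precision and sparsity assumptions, so treat this as a ceiling rather than a sustained real-world rate.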
Nvidia is also deepening its partnership with Hugging Face to let more people experiment with robotics training without expensive hardware or specialized knowledge. The collaboration integrates Nvidia’s Isaac and GR00T technologies into Hugging Face’s LeRobot framework, connecting Nvidia’s 2 million robotics developers with Hugging Face’s 13 million AI builders. The platform’s open source Reachy 2 humanoid now works directly with Nvidia’s Jetson Thor chip, allowing developers to experiment with different AI models without being tied to proprietary systems.
The bigger picture here is that Nvidia is trying to make robot development easier, and wants to be the primary supplier of the hardware and software that runs robots, much as Android is the default platform for smartphone makers.
There are early signs that Nvidia’s strategy is working. Robotics is Hugging Face’s fastest-growing category, with Nvidia models leading downloads. Meanwhile, robotics companies, from Boston Dynamics and Caterpillar to Franka Robotics and NEURA Robotics, are already using Nvidia’s technology.
Follow along with all of TechCrunch’s coverage of the annual CES conference here.