What happens when Waymo encounters a tornado? Or an elephant?


A self-driving vehicle is cruising along a lonely stretch of highway when, suddenly, a huge tornado appears in the distance. What does the driverless car do next?

That’s just one of the scenarios Waymo can simulate in the “hyper-realistic” virtual world it has built with help from Google DeepMind. Waymo’s world model is built on Genie 3, Google’s new AI world model that can generate interactive virtual environments from text or image prompts. But Genie 3 isn’t just for creating rough knockoffs of Nintendo games; it can also produce realistic, interactive 3D environments suited to the rigors of driving, Waymo says.

Simulation is a critical element of autonomous vehicle development, allowing companies to test their vehicles in a wide variety of settings and scenarios, including ones that may arise only on the rarest of occasions, without any physical risk to passengers or pedestrians. Autonomous vehicle companies use these virtual environments to run batteries of tests, logging millions or even billions of simulated miles in the process, in the hope of better preparing their vehicles for any edge case they might encounter in the real world.

What kind of edge cases is Waymo testing? In addition to the tornado mentioned above, the company can simulate a snow-covered Golden Gate Bridge, a waterlogged suburban cul-de-sac with floating furniture, a neighborhood engulfed in flames, or even an encounter with a rogue elephant. In each scenario, the Waymo robotaxi’s lidar sensors build a 3D view of the surrounding environment, including obstacles in the road.

“The Waymo World Model can create almost any scene — from normal everyday driving to rare long-tail scenarios — via multiple sensing methods,” the company says in a blog post.

Waymo says Genie 3 is well suited to building virtual worlds for its robotaxis, citing three control mechanisms: driving action control, scene layout control, and language control. Driving action control lets developers simulate “what-if” counterfactuals, while scene layout control allows customization of scene elements such as road layout, traffic lights, and the behavior of other road users. Waymo describes language control as “the most flexible tool,” allowing adjustments to the time of day and weather conditions. That is especially useful when developers want to simulate low-light or high-glare conditions, where the car’s various sensors may struggle to see the road ahead.
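Waymo hasn’t published an interface for these controls, but as a rough illustration of how the three levers described above might combine into a single scenario, here is a minimal sketch. Every class and field name below is hypothetical, not Waymo’s actual API.

```python
# Hypothetical illustration only: Waymo has not published an API for its World Model.
# This sketch shows how the three control types described above could be expressed
# as one scene specification in a simulator of your own design.
from dataclasses import dataclass, field


@dataclass
class DrivingActionControl:
    """Counterfactual maneuver for the ego vehicle ("what if it braked later?")."""
    maneuver: str = "hard_brake"
    trigger_time_s: float = 3.5  # seconds into the scene


@dataclass
class SceneLayoutControl:
    """Placement and behavior of other elements in the scene."""
    traffic_light_state: str = "red"
    other_agents: list = field(default_factory=lambda: ["jaywalking_pedestrian"])


@dataclass
class SceneSpec:
    """One simulated scenario combining all three control types."""
    driving_action: DrivingActionControl
    scene_layout: SceneLayoutControl
    language_prompt: str  # free-text control, e.g. time of day and weather


# Example: a low-light, high-glare scenario of the kind mentioned above.
scene = SceneSpec(
    driving_action=DrivingActionControl(maneuver="swerve_left", trigger_time_s=2.0),
    scene_layout=SceneLayoutControl(traffic_light_state="flashing_yellow"),
    language_prompt="dusk, heavy fog, low sun glare on a wet two-lane highway",
)
print(scene)
```

The point of the sketch is simply that the first two controls are structured (a maneuver, a layout), while the language control is free text, which is why Waymo calls it the most flexible of the three.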

The Waymo World Model can also ingest real camera footage from its cars and turn it into a simulated environment, which the company says delivers the highest degree of realism in virtual testing. It can also generate longer simulated scenes, including ones played back at 4x speed, without sacrificing image quality or compute efficiency.

“By simulating the impossible, we proactively prepare the Waymo Driver for some of the rarest and most complex scenarios,” the company says in its blog post.
