Ford’s AI voice assistant is coming later this year, with Level 3 driving in 2028


Ford’s new AI-powered voice assistant will roll out to customers later this year, the company’s chief software executive said at CES today. In 2028, the automaker will introduce a Level 3 hands-free autonomous driving feature as part of its more affordable (and hopefully more profitable) Universal Electric Vehicle (UEV) platform, which is scheduled to launch in 2027.

Just as importantly, Ford said it will develop much of the core technology behind these products in-house in order to reduce costs and retain greater control over them. That said, the company will not create its own large-scale models or design its own silicon, as Tesla and Rivian do. Instead, it will build its own electronic and computing units that are smaller and more efficient than existing systems.

“By designing our own software and hardware in-house, we found a way to make this technology accessible to everyone,” Doug Field, Ford’s chief electric vehicle and software officer, wrote in a blog post. “This means we can put advanced hands-free driving in vehicles that people are already buying, not just vehicles with inaccessible prices.”


The news comes as Ford faces increasing pressure to launch affordable electric vehicles, after its big bet on electric versions of the Mustang and the F-150 pickup truck failed to interest customers or turn a profit. The company recently canceled the F-150 Lightning amid declining EV sales, and said it will build more hybrid vehicles as well as battery storage systems to meet growing demand from AI data centers. Ford also reset its AI strategy after shutting down its Argo AI self-driving program in 2022, pivoting from fully autonomous Level 4 vehicles to Level 2 and Level 3 conditional driver-assistance features.

Amid all this, the company is trying to strike a balance on AI: not going all in on robots like Tesla and Hyundai, while still committing to some AI-powered products, like voice assistants and automated driving features.

Ford said its AI assistant will launch on the Ford and Lincoln mobile apps in 2026, before expanding to the in-car experience in 2027. One example: a Ford owner at the store, unsure how many bags of mulch will fit in the bed of their truck. The owner can take a photo of the mulch and ask the assistant, which can respond more accurately than, say, ChatGPT or Google’s Gemini, because it has access to all the information about the owner’s vehicle, including truck bed size and trim level.
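As a rough illustration of the arithmetic behind that mulch question, here is a minimal Python sketch. The bed and bag dimensions are assumptions for the example, not Ford specifications.

```python
import math

def bags_that_fit(bed_length_ft: float, bed_width_ft: float,
                  bed_depth_ft: float, bag_volume_cuft: float) -> int:
    """Estimate how many bags of mulch fit in a truck bed,
    filling level with the top of the bed walls."""
    bed_volume = bed_length_ft * bed_width_ft * bed_depth_ft
    return math.floor(bed_volume / bag_volume_cuft)

# Hypothetical numbers: a 6 ft x 5 ft x 1.5 ft bed and standard 2 cu ft bags.
print(bags_that_fit(6.0, 5.0, 1.5, 2.0))  # -> 22
```

The point of the assistant, per Ford, is that it already knows those bed dimensions for your specific truck and trim, so the owner never has to look them up.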

Sherry House, Ford’s chief financial officer, said at a recent technology conference that Ford will integrate Google’s Gemini into its vehicles. However, the automaker is designing its assistant to be chatbot-agnostic, meaning it will work with a variety of different LLMs.


“The key part is that we take that LLM, and then we give it access to all the relevant Ford systems so that the LLM then knows about the specific vehicle you’re using,” Sammy Omari, Ford’s head of advanced driver assistance and infotainment systems, told me.
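One model-agnostic way to do what Omari describes — inject vehicle-specific facts before handing a question to whichever LLM is plugged in — is to build the prompt from the vehicle record and accept the model as a swappable callable. This is an illustrative sketch, not Ford’s implementation; the field names and the `ask` helper are assumptions.

```python
from typing import Callable, Dict

def ask(vehicle: Dict[str, str], question: str,
        llm: Callable[[str], str]) -> str:
    """Prepend vehicle-specific facts so any LLM backend can answer
    questions about this particular vehicle."""
    context = "\n".join(f"{k}: {v}" for k, v in vehicle.items())
    prompt = (
        "You are an in-vehicle assistant. Vehicle details:\n"
        f"{context}\n\nOwner's question: {question}"
    )
    return llm(prompt)  # swap in Gemini, ChatGPT, or any other model

# Usage with a stub model that simply echoes the prompt it receives:
vehicle = {"model": "F-150", "trim": "XLT", "bed_length": "6.5 ft"}
reply = ask(vehicle, "How many bags of mulch fit in my bed?", lambda p: p)
```

Because the model arrives as a plain callable, the assistant layer stays independent of any one chatbot vendor, which is the design goal Ford describes.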

Self-driving features will come later with the launch of Ford’s Universal EV platform. Ford’s flagship product today is BlueCruise, a hands-free Level 2 driver-assistance feature that works only on certain highways. Ford plans to roll out a hands-free point-to-point system that can recognize traffic signs and navigate intersections. A Level 3 system will follow, in which the driver still needs to be ready to take control on demand but can take their eyes off the road in certain situations. (Some experts have argued that L3 systems can be dangerous because drivers must remain attentive even though the car is performing most of the driving tasks.)

Omari explained that by carefully examining every sensor, software component, and computing unit, the team achieved a system that is approximately 30 percent less expensive than today’s hands-free system while providing much greater capability.

All of this will depend on a “radical rethink” of Ford’s computing architecture, Field said in his blog post. That means a more unified “brain” that can handle infotainment, driver-assistance systems, voice commands, and more.

For nearly a decade, Ford has been building a team with the experience needed to lead these projects. The former Argo AI team, originally focused on developing Level 4 robotaxis, was brought on board for its expertise in machine learning, robotics, and software. And a team of BlackBerry engineers, initially hired in 2017, is now building the next-generation electronics modules that will enable some of these innovations, said Paul Costa, Ford’s executive director of electronics platforms.

Costa added that Ford doesn’t want to get into a “TOPS arms race,” referring to the metric for measuring AI processor speed in trillions of operations per second. Other companies, such as Tesla and Rivian, have touted the processing speed of their AI chips as proof of how powerful their automated driving systems are. Ford is not interested in playing that game.

Instead of optimizing for performance alone, the team sought to balance performance, cost, and size. The result is a compute unit that is much more powerful, less expensive, and 44 percent smaller than the system it replaces.

“We’re not just picking one area here to improve at the expense of everything else,” Costa said. “We’ve really been able to improve performance across the board, which is why we’re very excited about it.”


