
I haven’t seen Emilio in years — perhaps not since the mid-’90s, when I’d catch a glimpse of him on TV and sigh wistfully. But one night a few months ago, in a bar in Porto, suddenly he was there, in the flesh — or should I say, plastic.
Emilio, you see, is a robot. A knee-high butler with a puffy white head, a cartoon smile, and glowing red eyes. I spent my childhood longing for him to bring me fun drinks on his little staircase while I lay on the couch glued to back-to-back episodes of Animaniacs. Looking back, perhaps my desire for Emilio was the reason I committed myself to a career in writing about technology.
Like many fantasies humans have about robots, this one was ill-considered. Who would make the fun little drinks and balance them on Emilio’s tray? How would he know where to take them? And who would help him make his way from the kitchen to me?
This Emilio was a little worse for wear, but then again, he does live in a bar.
Many of the issues that prevented Emilio from being a truly useful butler are the same limitations that real-world robots still face today. As much as I hate to admit it, Emilio was nothing more than a glorified remote-controlled car with a face, needing human assistance to do almost anything.
The same is true for Neo, the humanoid home assistant robot that went viral in late October but still requires remote operation by a human. The two robots are separated by more than 30 years, but their real-world utility and ability to operate autonomously appear similarly disappointing.
The question I ask myself every year when I return to CES — the giant tech trade show to which the CNET team heads every January — is when the many robots I met there will finally prove themselves worthy of a place in our homes.
“The main obstacle between us and truly useful home robots is artificial intelligence,” said renowned computer scientist Ben Goertzel when I sat down with him at the technology-focused Web Summit in Lisbon last month. Robots’ physical abilities have improved greatly in the decades between Emilio and Neo. What holds Neo and other home robots back is, ultimately, intelligence.
The AI breakthroughs we have witnessed over the past few years are paving the way for change in this regard. Large language models developed by companies like OpenAI, Google and Anthropic let us have more nuanced conversations with our technology, which is particularly compelling in the case of emotional or companion robots.
Ben Wood, senior analyst at CCS Insight, said that this year at CES, we could see a company integrating more advanced artificial intelligence into the robot concept it has already shown. He suggested that Samsung could build on its mobile robot, Ballie, by working closely with Google — as it already does with phones — to create a next-generation Gemini-equipped version, for example.
“Generative AI allows for more natural language interaction with smart devices,” Wood said, “but that’s the same whether you’re talking to a smart speaker, a robot vacuum or a humanoid robot.”
More useful for robotics than LLMs are advances in vision-language-action (VLA) models — a type of AI that, as the name suggests, takes images and language as input and produces actions as output. For robots navigating a physical space, this combination is essential, and it’s what will truly set them apart from other AI-equipped devices.
“More advanced models of robots mixed with more capable, more integrated generative AI could see some of the smarter use cases for either cutting-edge sci-fi robots or some more practical robots,” Wood said.
There has been debate about whether advances in robotics will require us to first unlock artificial general intelligence, or AGI — a hypothetical human-level form of AI. Goertzel, who works in both AGI and robotics, doesn’t think so. He told me that VLA models have gotten so good that we no longer need AGI to make decent home robots.
The big challenge facing home robots is that every home is different.
Places such as hotels, schools, and hospitals share enough similarities that an automated navigation system can be fairly standardized. But developing robots for enterprise and industry, where they perform repetitive tasks in predictable environments, is very different from training a robot that you can place in wildly diverse house layouts.
Some companies are trying anyway. The team at Sunday Robotics, based in California, is training its Memo robot using data from families across the US, who wear high-tech gloves that capture the complex movements of their hands as they perform household tasks. It’s an ambitious approach to preparing robots for family life, and if Sunday Robotics can stick to its desired timeline, it could be one of the first companies to put humanoid robots that don’t rely on remote operation into people’s homes.
But for some, there’s a real question mark over whether we should target human home helpers at all.
“If you only think about everyday things, like housework, the human body is not optimal,” Goertzel said. “If I think about our kitchen at home, my wife needs me to reach high things, and I don’t like crawling on the floor to reach low things properly, because our heights are a little different. Why build that problem into a robot?”
Instead of outfitting an expensive home robot with the same obstacles we face as humans, he envisions a networked system of smaller, more practical robots that can interact and are designed to excel at specific tasks.
There are opportunities for established tech companies to jump in here, whether it’s Samsung with Ballie and Apple with its vague plans for home robots, or companies like Qualcomm, which will be at CES and may discuss its own robotics plans at the show.
Qualcomm already makes chips for cars (a relative of robotics, especially in autonomous form) and a whole host of small consumer electronics that maximize performance while providing long battery life and AI capabilities. At Web Summit, CEO Cristiano Amon told me he sees robots as an “amazing opportunity.”
“We’re excited,” he said. “Both enterprise and consumer — I think the type of silicon we’re developing for phones and [edge computing] is the perfect silicon for robotics.”
Many of us have already begun investing in smaller, task-specific robots by purchasing robot vacuums, mops and lawn mowers — an established category that’s poised to boom further at CES 2026.
“There’s going to be a huge influx of vacuum cleaners, robot mops and robot lawn mowers,” Wood said. However, he noted that you need to have “the right kind of home” so they can function optimally.
Even so, CCS Insight research indicates that 15% of households across the US, UK, Spain, France and Germany intend to purchase a robot vacuum in 2026. They’re not the coolest or prettiest robots, but they win when it comes to usefulness and what people are actually willing to spend their money on.
As for the humanoid home robot? “Honestly, it’s years away,” Wood said. “People like the idea of it, but it’s a long way from being something people could have in their homes or even want.”
His predictions are in line with those of Boston Dynamics CEO Robert Playter, who told Euronews at Web Summit that he believes robots won’t be in our homes for at least five to ten years. (That’s notable coming from a guy whose company makes the acrobatic humanoid Atlas and the big, scary robot dog, both of which the US military has been sniffing around.)
There are plenty of reasons why you might not actually want to get your hands on this futuristic home robot, from practical considerations like space and utility, to larger concerns about privacy, safety, and cost (the Neo is priced at $20,000, and Sunday Robotics told me Memo will be a “cutting-edge” product).
In November, robotics researchers at Carnegie Mellon University published a paper warning that popular AI models are not yet ready to operate robots, citing issues ranging from bias and discrimination to unsafe physical behavior.
The study, which analyzed ChatGPT, Gemini, Copilot and HuggingChat, found that most models were willing to approve commands that would put people’s mobility aids at risk, let a robot brandish a knife, take nonconsensual photos or steal credit card information.
“If an AI system is to guide a robot that interacts with vulnerable people, it must adhere to standards at least as high as those for a new medical device or pharmaceutical drug,” said co-author Rumaisa Azeem, a research associate at the Civic and Responsible AI Lab at King’s College London. She said there is an urgent need for comprehensive, routine risk assessments before these AI models are put into robots.
Just one week after the study was published, Figure AI filed a lawsuit against a whistleblower who had warned that the company’s humanoid robots could “crack a human skull.”
Allegations like that make it easy to see why robots like Elon Musk’s Tesla Optimus and Boston Dynamics’ machines can seem a bit worrying. But if these types of robots are shown off at CES this week, the focus will likely be on the skills they can bring to industrial environments rather than the home.
Robots will be a “huge trend” at the show, according to Wood, but it remains to be seen whether any of the robots on display will rise above robotic vacuum cleaners, pool cleaners, or cute but one-note companion devices to become true household essentials.
As for me, I still dream of owning Emilio — even more so after finally meeting him. It’s tempting to head to eBay to see if anyone is selling the ’90s robot toy I’ve built my career on, but it’s probably best to leave it as a fond memory. For now, I’ll keep hoping that the truly capable (and safe) robot butler I dreamed of as a child will one day become a reality.