What I want to see from AI in 2026: Better AI labels, smarter phone features and a plan for the environment


In 2025, AI gave us new, more capable tools for research, coding, video and image generation and more. AI models can now use large amounts of computing power to "think," which has helped them provide more complex answers with greater accuracy. AI has also grown some agentic legs, meaning it can go out onto the internet and do tasks for you, such as planning a vacation or ordering a pizza.

Despite these developments, we may still be a long way from achieving artificial general intelligence, or AGI. That's the theoretical future when AI becomes so good that it is indistinguishable from (or better than) human intelligence. Right now, AI systems operate in a vacuum and don't really understand the world around them. They can imitate wit and string words together to make it seem like they understand, but they don't. Using AI on a daily basis has shown me that we still have a way to go before we reach AGI.

Read more: CNET's picks for the best of CES 2026

As the artificial intelligence industry faces intense scrutiny, companies are moving quickly to meet Wall Street's demands. Google, OpenAI, Anthropic and others are pouring trillions of dollars into training and infrastructure costs to usher in the next technological revolution. While the spending may seem absurd, if AI succeeds in upending the way humanity works, the rewards could be enormous. At the same time, as revolutionary as AI is, it constantly gets things wrong. It also floods the internet with sloppy content, such as short, entertaining videos that may be profitable but are rarely valuable.

Humanity, which stands to benefit or suffer from artificial intelligence, deserves better. If our survival is really at stake, then at the very least AI could be more substantively useful, rather than merely a rote writer of college essays and a generator of nude images. Here are the things I, as an AI reporter, would like to see from the industry in 2026.

It’s the environment

My biggest and most pressing concern about AI is the massive environmental impact data centers will have. Before the AI revolution, the planet already faced an existential threat because of our dependence on fossil fuels. Big tech companies stepped up with initiatives that they said aimed to reach net-zero emissions by a certain date. Then ChatGPT arrived on the scene.


With AI's massive demand for power, plus Wall Street's desperate need for profitability, data centers are turning back to fossil fuels like methane gas to keep graphics processing units running, the chips that perform the complex calculations that string words and pixels together.

There's something incredibly dystopian about the end of the planet coming at the hands of cynical AI-generated videos of kittens pumping iron at the gym.

Whenever I have the opportunity, I ask companies like Google, OpenAI, and Nvidia what they are doing to ensure that AI data centers do not pollute the water or the air. They say they remain committed to reaching emissions targets but rarely provide specific details. I guess they’re not quite sure what the plan is yet. Maybe artificial intelligence will give them the answer?

At the very least, I'm glad the US is rethinking nuclear energy. It's an efficient and largely pollution-free source of power. It's a bit sad that it's market demand bringing back nuclear power rather than politicians fighting to protect the planet. At least the United States can take inspiration from Europe, where nuclear energy is more common. It's just frustrating that it takes five years or more to build a new plant.

I want my phone to be smarter

Over the past three years, smartphone makers like Apple, Samsung and Google have been touting new AI features in their phones. Often, these presentations demonstrate how AI can help with image editing or text cleanup. Even so, consumers have been unenthusiastic about artificial intelligence in smartphones. I don't blame them. People turn to smartphones for high-quality photos, communication and social media. These AI features feel more like add-ons than must-haves.

Here's the thing: AI has the power to fix many of the pain points of smartphone use. The technology is much better at things like transcription, translation and answering questions than previous "smart" features. The problem is that for AI to do these things well, it requires a lot of computing. When someone uses speech-to-text, they don't have time to wait for the audio to be uploaded to Google's cloud, transcribed and sent back to their phone. Even if the process takes 10 seconds, that's still too long in the middle of a back-and-forth text thread.


Local AI models that run on the device can handle this kind of quick task. The problem is that these models still can't get it right all the time. As a result, things can feel hit-or-miss: high-quality transcriptions only work some of the time. I hope that in 2026, on-device AI on phones can get to a point where it just works.

I also want to see local AI models on phones become more proactive. Google has a feature on Pixel phones called Magic Cue. It can automatically pull from your email and text data to intuitively add map directions to your coffee date. Or if you're texting about a flight, it can automatically pull up the flight information. This kind of seamless integration is what I want from AI on mobile, not a reimagining of my photos in an animated style.

Magic Cue is still in its early stages, and it doesn’t work all the time or as you’d expect. If Google, OpenAI, or other companies can figure this out, then I feel like consumers will really start to appreciate AI on phones.

Is this artificial intelligence?

While browsing Instagram Reels or TikTok, when I see something charming, funny, or out of the ordinary, I immediately jump to the comments to see if it’s AI.

AI video models are becoming increasingly convincing. Gone are the wonky movements, the 12 fingers and the perfectly staged shots with an uncanny sheen. AI videos on social media now mimic security camera footage and phone-shot clips, and added filters can further mask that a video is AI-generated.

I'm tired of the guessing game. I want both Meta and TikTok to directly declare whether an uploaded video was created using AI. Meta actually has systems in place that try to determine whether something uploaded was made with generative AI, but they're inconsistent. TikTok is also working on AI detection. I'm not entirely sure how exactly the platforms would do this, but it would certainly make life on social media a lot less mysterious.

Sora and Google watermark videos generated by their AI. But evading these marks is becoming easier, and many people are using Chinese AI models, like Wan, to create videos. Although Wan adds a watermark, people can find ways to download the videos without it. Someone in the comments section shouldn't have to be the one to decide whether a video is AI-generated or not. (There are even subreddits that poll users to figure out whether a video is AI.)

We need clarity.

I'm tired of the constant guessing. Come on, Meta and TikTok: What's the point of all the billions invested in AI? Just tell me whether the video on your platform is AI-generated.


