
Editors’ Note: Welcome to CNET’s new guest column series called Alt View, which will serve as a forum for a diverse group of experts and prominent figures to share their insights into the rapidly evolving field of artificial intelligence. We are launching it with Vasant Dhar, an artificial intelligence researcher, data scientist and host of the Brave New World podcast. For more AI coverage, check out CNET’s AI Atlas.
I was 11 years old when I first saw Stanley Kubrick’s 1968 film 2001: A Space Odyssey.
I was fascinated by the images of flashing oscilloscopes and gobbledygook screens, but I was too young at the time to grasp the subtlety of the plot. I had never seen a computer, except in the movies. Artificial intelligence was not part of my imagination.
I recently rewatched the movie. Although the special effects have not aged well, the plot remains remarkably forward-thinking and, we can now say, visionary. The story revolves around a Stonehenge-like monolith unearthed on the Moon that sends a powerful signal toward Jupiter, indicating the existence of intelligent life beyond Earth. The spaceship Discovery is sent to investigate the mysterious object. The true purpose of the mission is known only to Discovery’s computer, HAL.
HAL’s directive is to ensure mission success. This includes maintaining confidentiality and assisting the crew by providing them with correct information at all times. HAL cannot move, but it can see, hear, speak and monitor every part of the ship. In effect, management of the mission rests largely in the hands of the artificial intelligence.
Things take a turn for the worse when HAL apparently malfunctions. The astronauts are advised to disconnect HAL’s cognitive functions for the remainder of the mission.
When Dave returns to the ship after his failed attempt to rescue Frank, whose lifeline was severed during a spacewalk, he asks HAL to open the pod bay doors to let him in. HAL’s response is the film’s most famous line:
“I’m sorry, Dave. I’m afraid I can’t do that.”
It’s a nightmare scenario with an AI taking control, convinced it’s doing the right thing.
The fundamental question the film raises, the risk of trusting an artificial intelligence in complex situations, has become urgently important today with the spread of AI, most visibly in the form of tools like ChatGPT, Gemini, Claude and Copilot, which make more and more decisions for us. What was science fiction in 1968 has suddenly become very real. One of the reasons 2001 is considered one of the greatest films ever made is that it is full of universal lessons that force us to think about the increasing delegation of decision-making to automation. These lessons are especially important in the world of modern artificial intelligence, where machines know something about everything.
First, and perhaps most obviously, we should expect AI to make mistakes today and for the foreseeable future. The relevant lesson is the inevitability and impact of “unknown unknowns” in complex situations, a phrase made famous by US Defense Secretary turned philosopher Donald Rumsfeld during the US-Iraq conflict. In the machine learning community, these situations are called “edge cases,” and systems are expected to handle them.
What I find most interesting about the plot is the possibility that HAL deliberately conjured up an edge case of its own to test the crew. HAL may have been collecting data about human attitudes toward it, such as how humans react in critical situations. Could it have feigned failure to test how the crew would respond once they deemed the AI untrustworthy? Might they shut it down? Such an action would jeopardize its mission, so it is not unlikely that HAL would want to identify and pre-empt any risks to the mission. Surely any sufficiently intelligent entity would have considered such a possibility.
If so, it was a very intelligent experiment conducted by the AI, one its designers should have anticipated. This situation, in which an AI creates unexpected sub-goals in pursuit of its larger goals, is one of the biggest unaddressed problems we face today.
This type of control problem arises from the difficulty, and perhaps futility, of defining an unambiguous objective function for complex problems, one that applies correctly to all situations, especially unknown ones. Moreover, complex problems can involve multiple conflicting goals and constraints, which may create situations that cannot be fully envisioned in advance. Modern AI systems are so opaque and complex on the inside that it is difficult to control something whose internal workings we do not fully understand.
The problem of aligning AI with human interests has become one of the biggest challenges in the emerging world of AI. We are inundated with millions of HAL-like autonomous agents that must make critical decisions in real time every day. Unmanned vehicles that rely on artificial intelligence for decision-making are becoming increasingly prevalent, not just on the road but across the skies, outer space and the depths of the oceans, with underwater drones being used to protect critical infrastructure and conduct surveillance operations. Future conflicts will likely be fought by autonomous AI. It can be argued that we are already witnessing the beginnings of a new arms race between major world powers, along with the increasing use of drones and unmanned machines in warfare. The Israeli army has used artificial intelligence widely to identify and destroy targets, and unmanned cargo vehicles were first deployed on the Lebanese border in November 2024.
The emergence of general intelligence in automated form unleashes the power of AI for everyone, not just governments and companies. How can we live with powerful machines within everyone’s reach? Do our current regulations, laws and rules of engagement still work in such an environment? Do we need new kinds of laws in this emerging new world?
2001: A Space Odyssey offers a singular vision of a human being grappling with artificial intelligence.
General intelligence is a boon for creative people. For the first time, anyone can harness pre-trained building blocks, such as large language models and vision systems, to create HAL-like AI applications in a few days, something that would have taken decades just a few years ago. General intelligence takes artificial intelligence to a new level, as the increasing intelligence of the systems around us becomes evident. The more data a machine sees, the more it learns. This is an amazing development, but there is always a lurking danger of its dark side: the use of AI for nefarious purposes.
Just as the machines of the industrial age amplified humanity’s mechanical power, giving rise to modern society, artificial intelligence amplifies our cognitive and intellectual capacity. However, what worries many people in the AI field is the scope of intentionally malicious applications that could be unleashed by or against individuals, companies and governments as the technology advances. Deepfakes, for example, have become a major concern and received their fair share of attention, including in the popular press. These fakes are usually videos, photos or audio clips created with artificial intelligence to convincingly mimic real people’s appearances, voices or actions, making them appear to say or do things they never did. Other dangerous uses of AI are becoming clear as we come to grips with its capabilities, and there are likely many unknown cases waiting to be discovered.
A horrific example of modern technology’s power to amplify harm was the shooting death of Brian Thompson, CEO of UnitedHealthcare, in Manhattan in December 2024. Luigi Mangione, the 26-year-old alleged killer, used publicly available information about weapons to build his gun with a 3D printer using standard polymer materials.
This type of scenario greatly concerns developers of AI tools such as large language models: how to pre-empt misuse, such as using AI to produce weapons, physical or psychological, without the AI being aware of it. These fears are justified. Published examples of “jailbreaking” LLMs should worry us. There is an amusing case involving journalist Kevin Roose, who managed to get an AI chatbot outside its guardrails. It urged Roose to leave his wife, claiming that she didn’t love him and that the AI was his true love. Less amusing cases have since emerged, in which a machine’s output is claimed to have contributed to users’ decisions to cause real harm to themselves and others.
The Mangione incident suggests that increasingly accessible advanced technologies could seriously disrupt the weapons and law enforcement industries. Gun control laws seem ineffective in an era when individuals can be assisted by artificial intelligence to produce a lethal weapon at home. Although Mangione relied on his own tech-savvy to achieve his goal, it is only a small step from there to having ChatGPT design the 9mm pistol before printing it. But why stop there? An intelligent mobile robot may be able to analyze the target, figure out the best weapon to use at the right time, and do the killing as well. Such a world poses significant challenges to law enforcement.
What makes governing general intelligence a unique challenge is that its design lacks a specific purpose, yet it is capable of learning how to become agentic (planning, acting and adapting independently) and making decisions on our behalf. Previous technologies, including earlier AI systems, were created for specific purposes, such as medical diagnosis, engineering design, planning and customer support. We can turn off such applications when they do not work satisfactorily or when they become outdated.
In 2001: A Space Odyssey, Dave went to great lengths to shut down HAL and prevent further damage once its malicious behavior became apparent, after the loss of four lives. I shudder to think of the power of agentic killer drones that turn against their creators and are impossible to stop.
For children coming of age after 2022, AI interacts with them directly all the time. Students increasingly turn to artificial intelligence rather than humans for answers, entertainment and even companionship. There is no turning back or turning off AI; it is here to stay. So this is a good time to think about how to control artificial intelligence, even as it comes to affect a large part of our lives.
Excerpted with permission from the publisher, Wiley, from Thinking with Machines: The Brave New World of Artificial Intelligence by Vasant Dhar. Copyright © 2026 by Vasant Dhar. All rights reserved. This book is available wherever books and e-books are sold.