
Using AI-powered chatbots for even just 10 minutes can have a shockingly negative impact on people’s ability to think and solve problems, according to a new study from researchers at Carnegie Mellon University, MIT, Oxford, and the University of California.
The researchers assigned people to solve various problems, including simple fractions and reading comprehension, through an online platform that paid them for their work. They conducted three experiments, each involving several hundred people. Some participants were given access to an AI assistant capable of solving the problem on its own. When the AI assistant was suddenly taken away, these people were more likely to give up on the problem or get their answers wrong. The study suggests that widespread use of artificial intelligence may boost productivity at the expense of developing basic problem-solving skills.
“The bottom line is not that we should ban AI in education or the workplace,” says Michel Bakker, an assistant professor at MIT who was involved in the study. “It’s clear that AI can help people perform better in the moment, and that can be valuable. But we have to be more careful about what kind of help AI provides, and when.”
I recently met Bakker, who has messy hair and a big smile, on the MIT campus. Originally from the Netherlands, he previously worked at Google DeepMind in London. He told me that a well-known article on how artificial intelligence might gradually disempower humans over time inspired him to think about how technology might actually erode people’s abilities. The article makes for a somewhat bleak read, as it suggests that disempowerment is inevitable. But perhaps, he thought, figuring out how AI can help people develop their mental abilities should be part of aligning models with human values.
“It’s basically a cognitive question, about perseverance, learning, and how people respond to difficulty,” Bakker tells me. “We wanted to take these broader concerns about long-term human-AI interaction and study them in a controlled experimental setting.”
The resulting study is particularly troubling, Bakker says, because a person’s willingness to keep solving problems is crucial to acquiring new skills and predicts their ability to learn over time.
Bakker says it may be necessary to rethink how AI tools work, so that models sometimes prioritize a person’s learning over solving a problem, as a good human teacher would. “Systems that provide direct answers may have very different long-term effects than systems that support, coach, or challenge the user,” Bakker says. However, he admits that striking the right balance with this kind of “paternalistic” approach can be difficult.
AI companies are already thinking about the more subtle effects their models can have on users. The sycophancy of certain models, meaning their tendency to agree with and flatter users, is a known problem that OpenAI has sought to mitigate with newer versions of GPT.
Putting too much trust in AI can be problematic, especially when the tools don’t work as you expect. Agentic AI systems are particularly unpredictable because they perform complex actions autonomously, which can lead to strange errors. It makes you wonder what Claude Code and Codex are doing to the skills of programmers, who may sometimes need to fix the bugs these tools introduce.
I recently learned a lesson myself about the danger of offloading critical thinking to artificial intelligence. I’ve been using OpenClaw (with Codex inside) as a daily assistant and have found it remarkably good at resolving configuration issues on Linux. Recently, however, after my Wi-Fi connection kept dropping out, my AI assistant suggested running a series of commands to modify the driver for the Wi-Fi card. The result was a device that refused to turn on no matter what I did.
Maybe, instead of just trying to solve the problem for me, OpenClaw should have paused and taught me how to solve it myself. I might have ended up with a more capable computer, and a more capable mind.
This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.