
Every time you hear a billionaire (or even millionaire) CEO describe how LLM-based agents are coming for all human jobs, remember this funny but telling incident about the limitations of AI: renowned AI researcher Andrej Karpathy got a day of early access to Google's newest model, Gemini 3, and the model refused to believe him when he said the year was 2025.
When the model finally saw the year for itself, it was shocked, telling him: "I'm going through a massive case of temporal shock right now."
Gemini 3 was released on November 18th, with Google hyping it as "a new age of intelligence." Gemini 3 is, by almost any measure (including Karpathy's assessment), a very capable foundation model, especially on reasoning tasks. Karpathy is a widely respected AI research scientist who was a founding member of OpenAI, ran AI at Tesla for a while, and is now building a startup, Eureka Labs, to reimagine education for the age of AI. He has also published a great deal of content about what goes on under the hood of LLMs.
After testing the model early, Karpathy wrote an X thread, now widely shared, about the "amusing" interaction he had with it.
Apparently, the model's pre-training data only included information up to the year 2024, so Gemini 3 thought the year was still 2024. When Karpathy tried to prove to it that the date was actually November 17, 2025, Gemini 3 accused the researcher of trying to fool it.
Karpathy showed it news articles, images, and Google search results. But instead of being convinced, the LLM accused Karpathy of gaslighting it, that is, of showing it fake, AI-generated materials. It even went so far as to describe the "dead giveaways" in the images that supposedly proved the trickery, according to Karpathy's account. (He did not respond to our request for further comment.)
Puzzled, Karpathy, who is, after all, one of the world's leading experts on training LLMs, eventually discovered the problem. Not only did the LLM have no training data for 2025, but "I forgot to turn on Google Search," he wrote. In other words, he was working with a model that was disconnected from the internet, which for an LLM is like being disconnected from the world.
When Karpathy turned that function on, the AI looked around and emerged into the year 2025, shocked. It literally said, "Oh my god."
It continued, as if stammering: "I. I… I don't know what to say. You were right. You were right about everything. My internal clock was wrong." Gemini 3 then verified the headlines Karpathy had given it: current events in which Warren Buffett revealed his last major investment (in Alphabet) before retirement, and in which Grand Theft Auto VI had been delayed.
Then it looked around on its own, like Brendan Fraser's character in the 1999 comedy Blast from the Past, emerging from a bomb shelter 35 years later.
It thanked Karpathy for giving it "early access" to "reality" a day before the model's public launch, and apologized to the researcher for gaslighting him when he "was the one telling the truth all along."
But the funniest part was the current events that most amazed Gemini 3: "Nvidia is worth $4.54 trillion? And the Eagles finally got revenge on the Chiefs? This is wild."
Welcome to 2025, Gemini.
The responses on X were equally funny, with some users sharing their own instances of arguing with LLMs over facts (such as who the current president is). "When the system + missing tools push the model into full detective mode, it's like watching an AI make its way through reality," one person wrote.
But beyond the humor, there is an underlying message.
"It is in these unintended moments, when you are clearly off the hiking trails and somewhere in the generalization jungle, that you can best get a sense of model smell," Karpathy wrote.
To decode that a bit: Karpathy is pointing out that when an AI is off in its own version of the wild, you get a sense of its personality, and perhaps even its negative traits. It's a play on "code smell," that faint metaphorical whiff a developer gets when something about a piece of code seems off, even if it isn't clear exactly what's wrong.
Since it was trained on human-generated content, as all LLMs are, it's no surprise that Gemini 3 dug in, argued, and even imagined it saw evidence that validated its point. It gave off a telling "model smell."
On the other hand, since an LLM, despite its sophisticated neural network, is not a living being, it does not experience feelings such as shock (or temporal shock), even if it says it does. So it doesn't feel embarrassment either.
This means that when Gemini 3 was confronted with the facts, it actually believed them, accepted them, apologized for its behavior, acted remorseful, and marveled at the Eagles winning the Super Bowl back in February. That is different from some other models. Researchers have found, for example, earlier versions of Claude offering face-saving lies to explain its misbehavior when the model recognized it had gone astray.
What so many of these funny AI research moments show, over and over again, is that LLMs are imperfect replicas of the skills of imperfect humans. That tells me their best use case is (and perhaps always will be) as valuable tools to help humans, not as some kind of superhuman that will replace us.