Google AI Overviews still struggles to answer basic questions and count
Even as the tech world preaches about our AI future, Google's AI Overviews still cannot answer some basic questions correctly.


Remember those old-school sports and action movies (think Billy Bob in Varsity Blues) where they'd ask dazed people simple questions to see if they're concussed? How many fingers am I holding up? Or, what year is it?
Well, even by that low, low standard, Google's AI Overviews may not pass concussion protocol. This week, folks noticed that Google's AI Overviews couldn't reliably discern that the year was, in fact, 2025. (To be clear, we are now about halfway through 2025.)
There were a number of posts about it online.
TechCrunch spotted the issue and noted that Google fixed this particular bug, if not the underlying problem.
"As with all Search features, we rigorously make improvements and use examples like this to update our systems. The vast majority of AI Overviews provide helpful, factual information, and we’re actively working on an update to address this type of issue," a Google spokesperson told the tech outlet.
The whole "what year is it" debacle is far from the only time Google's AI Overviews have tripped up on simple questions. Two staff members at Mashable asked Google other easy ones: "Is it Friday?" and "How many r's are in blueberry?" It answered both incorrectly, spitting out that it was Thursday and that there was only one r in blueberry, respectively. It's worth noting that Google's AI tools previously went viral for flubbing a similar question ("how many r's are in the word strawberry?"). It seems the underlying issue, counting, still has not been remedied.
Google Search expert Lily Ray has called this the "whack-a-mole" approach to fixing AI bugs. In other words, Google patches errors one by one instead of making a wholesale improvement.


The accuracy problem has been a longstanding one for Google's AI Overviews. Mashable tested the feature's accuracy six months after launch, in December 2024, and found major problems persisted, even as the results were improving.
AI Overviews can prove especially faulty when handling incorrect or incomplete queries. The Google tool often makes stuff up or confidently gets info wrong when it doesn't have a clear answer. It became a trend, for instance, to have Google's AI Overviews invent meanings for nonsensical, made-up idioms. Or remember when AI Overviews first launched and told folks that a dog had played in the NBA and that you should add glue to pizza?
Despite these persistent issues and incorrect answers, Google has pushed forward anyway, rolling out AI Mode for all U.S. searchers. And at Google I/O 2025, the company bragged that its AI Overviews were reaching 1.5 billion people per month.
So when you're out there Googling, just be careful — and be sure to know what year it is.