Apple Execs Get Seriously Flustered Trying to Explain the Company's Dismal AI Failure
Apple is facing some intense scrutiny after yet again flubbing its AI tech.
During its Worldwide Developers Conference on Tuesday, the company showed off its latest vision for "Apple Intelligence," a suite of tools that use machine learning to scan images for objects or smush emojis together to form custom Franken-emojis — cutesy ideas that fall vastly short of the ambitions of competitors in the space like OpenAI.
The Wall Street Journal's Joanna Stern sat down with Apple's software chief Craig Federighi and marketing head Greg Joswiak following the event, and the execs squirmed trying to explain why a smarter Siri assistant — once the vanguard of consumer-facing AI and now dustbin junk compared to products like ChatGPT — is still missing in action.
That's despite some big promises. During last year's WWDC, Apple showed off a "more personal Siri" that promised huge new capabilities for the assistant built on generative AI. But by March, the company had pulled an iPhone 16 ad that made huge promises for the tech, admitting that a more advanced Siri was "going to take us longer than we thought."
In the year 2025, Siri is still a barely usable way to start timers or send somebody a quick iMessage, massively overshadowed by AI models that can analyze research papers, write code, or generate photorealistic video.
Federighi and Joswiak had a tough time explaining how the multi-trillion-dollar tech giant has fallen so far behind the competition, which has been running laps around it when it comes to AI implementation.
The two executives seemed awfully flustered as they tried to dodge Stern's piercing questions, highlighting how much pressure has built up over a year of empty promises and walked-back claims.
A purported "V1" of a conversational Siri AI simply "didn't converge in the way, quality-wise, that we needed it to," Federighi said. "We had something working, but then, as you got off the beaten path... and we know with Siri, it's open-ended what you might ask it to do, and the data that might be on your device that would be used in personal knowledge..."
"We wanted it to be really, really reliable," he concluded. "And we weren't able to achieve the reliability in the time we thought."
The situation Apple finds itself in is, in many ways, a perfect snapshot of the current state of the tech. AI chatbots continue to hallucinate with abandon, leaving chaos and confusion in their wake. That's despite billions of dollars being poured into the resource-intensive models that power them.
Whether the issue, which has turned into a major thorn in the side of consumer-facing tech companies, can ever be resolved remains a major subject of debate, with some experts arguing that LLMs and other related AI approaches are a dead end.
The stakes are certainly high, as Federighi points out, given the copious amounts of highly personal information companies like Apple have access to.
For the Cupertino-based firm, which has long been known for taking a slow-and-steady-wins-the-race approach to new tech, that kind of error-prone performance just doesn't cut it.
"Look, we don't wanna disappoint customers," Joswiak told Stern. "We never do. But it would've been more disappointing to ship something that didn't hit our quality standard, that had an error rate that we felt was unacceptable."
"No one's doing it really well right now," Federighi said. "And we wanted to be the first. We wanted to do it best."
It's hard not to detect a sense of doubt. In the days leading up to its developer conference this week, Apple published a damning research paper that poured cold water on the "reasoning" capabilities of the latest, most powerful large language models.
As Apple continues to drag its feet on ChatGPT-like implementations of AI, some interesting questions are starting to calcify. Does Apple know something that we don't? Will the tech ever get to the point where it's good enough, even for Apple? If a company with a virtually infinite amount of resources at its disposal can't do it properly, who can?
Companies have already been embroiled in plenty of controversies thanks to their lying, misleading, and groveling AI chatbots that can cause embarrassing or dangerous problems their creators never anticipated. Just this week, Futurism reported on people developing intense obsessions with ChatGPT, causing them to spiral into severe mental health crises.
Apple has been buying itself plenty of time to iron out the kinks — but whether a chatty Siri that lives on our iPhones will ever see the light of day remains as uncertain as ever.
More on Apple: Apple Researchers Just Released a Damning Paper That Pours Cold Water on the Entire AI Industry