Apple's Upgraded AI Models Underwhelm On Performance
Apple's latest AI models continue to lag behind competitors, according to the company's own benchmark testing disclosed this week. The tech giant's newest "Apple On-Device" model, which runs locally on iPhones and other devices, performed only "comparably" to similarly sized models from Google and Alibaba in human evaluations of text generation quality, not better, despite being Apple's most recent release. The performance gap widens with Apple's more powerful "Apple Server" model, designed for data center deployment. Human testers rated it behind OpenAI's year-old GPT-4o in text generation tasks. In image analysis tests, evaluators preferred Meta's Llama 4 Scout model over Apple Server, a particularly notable result given that Llama 4 Scout itself underperforms leading models from Google, Anthropic, and OpenAI on various benchmarks.

Read more of this story at Slashdot.