Apple's AI Misread a BBC Headline, Rewriting It to Say Luigi Mangione Shot Himself
Apple recently began rolling out its long-anticipated set of AI features for its devices, but problems are surfacing quickly. The BBC complained to Apple after an AI-generated notification summary mangled a BBC headline, altering it to state that Luigi Mangione, the man accused of killing UnitedHealthcare's CEO, had shot himself. In reality, Mangione did not shoot himself and remains in police custody.
Apple's devices incorporate an intelligence feature on iOS that aims to alleviate user fatigue by collecting and condensing notifications from various apps. For instance, multiple text messages from one source are consolidated into a single, concise notification instead of being displayed individually.
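Apple has not published how the feature works internally, but the consolidation step it describes, before any language model gets involved, can be illustrated with a rough sketch. Everything below (the function name, the tuple format, the join strategy) is hypothetical, not Apple's implementation:

```python
from collections import defaultdict

def consolidate(notifications):
    """Hypothetical sketch: group incoming alerts by source app and
    merge each group into one combined notification string.
    `notifications` is a list of (app_name, message_text) tuples."""
    groups = defaultdict(list)
    for app, text in notifications:
        groups[app].append(text)
    # One merged alert per app, messages joined in arrival order.
    return {app: "; ".join(texts) for app, texts in groups.items()}

alerts = [
    ("Messages", "Dinner at 7?"),
    ("News", "Markets close higher"),
    ("Messages", "Running late"),
]
print(consolidate(alerts))
# The two Messages alerts collapse into a single notification.
```

The real feature then paraphrases each merged group with a language model rather than simply concatenating it, which is where the summarization errors described below creep in.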
The flaws of this "intelligence" should not surprise anyone familiar with generative AI: the summaries are, at times, simply wrong. Notification summaries arrived on iOS in October with version 18.1, and this week Apple integrated ChatGPT natively into Siri.
In an article, the BBC published a screenshot of a notification summarizing multiple stories delivered as alerts. The notification reads: "Luigi Mangione shoots himself; Syrian mother hopes Assad pays the price; South Korea police raid Yoon Suk Yeol’s office." The BBC noted that the other summaries were accurate.
The BBC has raised the issue with Apple; it is embarrassing for the tech giant and potentially harmful to the news media's reputation if readers come to believe outlets are spreading misinformation. Apple has yet to respond to the BBC's inquiries about the mishap.
Artificial intelligence holds immense promise, yet language models are arguably one of its least reliable applications. Companies hope the technology will mature enough for enterprises to use it for tasks such as customer-support chat or searching large internal data repositories. However, enterprises already using AI continue to report that its output needs significant editing.
It is rather unusual for Apple to build such an unstable and unpredictable technology into its products. Apple has no control over ChatGPT's outputs; even its developer, OpenAI, struggles to manage the unpredictable behavior of its language models. Summarizing notifications should be among the simplest tasks for AI, yet Apple has proven inept even at this seemingly straightforward job.
At the very least, Apple Intelligence's features suggest practical applications for AI. Enhanced photo editing and a focus mode that decides which notifications should be delivered are welcome additions. But for a company known for polished experiences, faulty notification summaries and ChatGPT's hallucinations make iOS look rough around the edges. It seems Apple has jumped on the hype train mainly to boost new iPhone sales, as the features are only available on the iPhone 15 Pro and newer devices.
Apple's embrace of AI points to a future in which the technology plays a significant role in daily life. But as the BBC incident shows, language-model output still demands careful checking, and bolting unstable AI onto consumer products risks tarnishing Apple's reputation for delivering polished experiences.