Apple is shifting its approach to training artificial intelligence, aiming to improve performance without compromising privacy. According to a Bloomberg report, the company plans to introduce the new method in the beta releases of iOS 18.5 and macOS 15.5. Apple detailed the technique in a post on its Machine Learning Research blog.
In that blog post, Apple explained that it has relied on synthetic data, meaning data generated by algorithms rather than collected from actual users, to develop AI features like text writing tools and email summaries. While this protects user privacy, Apple admits synthetic data has its limitations, especially when attempting to capture patterns in how people write or summarize lengthy messages.
New privacy-focused technique
To tackle this issue, Apple is introducing a method to compare synthetic emails with real ones without reading the actual content of user emails. Here is how it works. Apple begins by creating thousands of fake emails on everyday topics; one example it gives reads: "Would you like to play tennis tomorrow at 11:30 AM?" Each email is transformed into an embedding, a numerical representation that captures properties of its content, such as topic and length.
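Apple has not published the embedding model it uses, but the idea of turning an email into a fixed-length vector can be sketched with a toy hashed bag-of-words embedding. The `embed` function and the 16-dimension vector size below are illustrative assumptions, not Apple's implementation:

```python
import hashlib
import math

DIM = 16  # toy vector size; Apple's real embedding dimension is not public

def embed(text: str) -> list[float]:
    """Toy embedding: hashed bag-of-words, L2-normalized.

    A stand-in for the on-device embedding Apple describes; the actual
    model is not disclosed.
    """
    vec = [0.0] * DIM
    for word in text.lower().split():
        bucket = int(hashlib.sha256(word.encode()).hexdigest(), 16) % DIM
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

# Embed a batch of synthetic emails, as in the step described above.
synthetic_emails = [
    "Would you like to play tennis tomorrow at 11:30 AM?",
    "Your package has shipped and should arrive on Friday.",
]
synthetic_embeddings = [embed(e) for e in synthetic_emails]
```

Normalizing each vector up front makes the later comparison step a simple dot product.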
These embeddings are then sent to a select group of devices whose users have opted in to Apple's Device Analytics program. Each participating device matches the synthetic embeddings against a small selection of the user's recent emails and determines which synthetic message is the closest match. According to Apple, the real emails and the matching results never leave the user's device.
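The on-device matching step can be illustrated with a small nearest-neighbour sketch. Assuming the embeddings are already L2-normalized, the dot product equals cosine similarity; the function name and the toy vectors here are hypothetical:

```python
def closest_synthetic(user_embedding: list[float],
                      synthetic_embeddings: list[list[float]]) -> int:
    """Return the index of the synthetic embedding closest to the
    user's email embedding (dot product on normalized vectors).

    In Apple's scheme this comparison runs entirely on-device; only an
    anonymized signal about the winning index is reported back.
    """
    def similarity(a: list[float], b: list[float]) -> float:
        return sum(x * y for x, y in zip(a, b))
    return max(range(len(synthetic_embeddings)),
               key=lambda i: similarity(user_embedding, synthetic_embeddings[i]))

# Toy example: the user's email embedding sits closest to the first
# synthetic message, so index 0 is the on-device winner.
synthetic = [[1.0, 0.0], [0.0, 1.0]]
user_email = [0.9, 0.1]
winner = closest_synthetic(user_email, synthetic)
```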
The blog post explains that the process relies on a privacy method known as differential privacy: the device returns only anonymous signals, and Apple reviews which synthetic messages were picked most often without identifying which device made any given selection. These widely selected messages then help improve Apple's AI features by more accurately reflecting the kind of content people typically write, without compromising privacy.
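The differential privacy idea can be illustrated with randomized response, a classic local-DP mechanism: each device sometimes reports a random choice instead of its true one, so no single report reveals what was actually matched, yet the most popular synthetic messages still dominate the aggregate counts. This is a minimal sketch, not Apple's production mechanism:

```python
import math
import random
from collections import Counter

def private_report(true_index: int, num_options: int,
                   epsilon: float = 1.0) -> int:
    """Randomized response: report the true choice outright with
    probability e^eps / (e^eps + k - 1); otherwise fall back to a
    uniformly random option (which may again be the true one).

    Illustrative local differential privacy only; Apple's deployed
    mechanism is more sophisticated.
    """
    p_true = math.exp(epsilon) / (math.exp(epsilon) + num_options - 1)
    if random.random() < p_true:
        return true_index
    return random.randrange(num_options)

# Server-side view: aggregate noisy reports from many devices. Every
# device's true pick is index 0, but each individual report is deniable.
random.seed(0)  # deterministic demo
reports = [private_report(true_index=0, num_options=3) for _ in range(2000)]
tally = Counter(reports)
# Index 0 still receives the most votes despite the per-device noise.
```

The server can only see the noisy tallies, which is what lets Apple learn which synthetic messages are representative without learning any one user's choice.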
According to Apple, this new method will help to enhance the training data for features like email summaries, by improving the accuracy and usefulness of AI outputs while maintaining user trust.
The same technique is already in use for features such as Genmoji, which is Apple’s tool for creating custom emojis. Apple clarifies that by anonymously observing which prompts are used most, the company can optimize its AI model to give more accurate responses to real-world requests. Apple ensures that rare or unique prompts are kept hidden, and Apple never links data to specific devices or users.
Apple announced that similar privacy-oriented techniques will be applied to other AI tools, including Image Playground, Image Wand, Memories creation, and Visual Intelligence features.