Deep Dojo

Machine Learning Inside Apple

Apple's machine learning APIs aren't just for third-party developers. Steven Levy wrote this article almost ten months before Apple announced Core ML, and it offers a detailed look at how the company uses machine learning across its own products.

“As the briefing unfolds, it becomes clear how much AI has already shaped the overall experience of using the Apple ecosystem. The view from the AI establishment is that Apple is constrained by its lack of a search engine (which can deliver the data that helps to train neural networks) and its inflexible insistence on protecting user information (which potentially denies Apple data it otherwise might use). But it turns out that Apple has figured out how to jump both those hurdles.”

— Steven Levy

The integration of deep learning into Siri dates back to 2014.

“This was one of those things where the jump was so significant that you do the test again to make sure that somebody didn’t drop a decimal place.”

— Eddy Cue

One detail that keeps getting hinted at, both in the article and elsewhere, is online learning.

“We keep some of the most sensitive things where the ML is occurring entirely local to the device.”

— Craig Federighi

A statement like this is too vague to know for sure. Online learning implies an ability to train a machine learning model on the device, an ability conspicuously absent from the first public version of Core ML.
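
To see what that first public version does offer, here is a minimal inference sketch in Swift. The model and its feature names are hypothetical, but MLModel, MLDictionaryFeatureProvider, and prediction(from:) are the framework's actual entry points.

```swift
import CoreML

// A minimal sketch of Core ML's inference-only surface.
// "SentimentClassifier" and its "text"/"label" features are hypothetical.
do {
    // Load a model that Xcode compiled into the app bundle (.mlmodelc).
    guard let url = Bundle.main.url(forResource: "SentimentClassifier",
                                    withExtension: "mlmodelc") else {
        fatalError("Model not found in bundle")
    }
    let model = try MLModel(contentsOf: url)

    // Wrap raw input values in a feature provider and run a prediction.
    let input = try MLDictionaryFeatureProvider(
        dictionary: ["text": "I love this keyboard"])
    let output = try model.prediction(from: input)

    // Read the predicted feature back out. That is the whole workflow.
    // Nothing here updates the model's weights on the device.
    print(output.featureValue(for: "label") ?? "no prediction")
} catch {
    print("Prediction failed: \(error)")
}
```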

This kind of limitation is understandable. As Gaurav Kapoor said in his introduction to the framework, “Training is a complex subject.”

With deep learning, jumping from inference to training is like jumping from the Preview app into Photoshop. The change in complexity and required expertise is significant. It wouldn't be surprising if Apple holds off on adding training to Core ML until it can flatten that learning curve, making the transition feel more like a jump from Preview to Pages.
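
For a sense of that gap: inference is a single function call, while even the simplest training step means defining a loss, computing gradients, and updating weights in a loop, with a learning rate to tune on top. Here is a toy illustration in plain Swift, fitting one linear neuron with squared-error loss. Nothing like this exists in Core ML today.

```swift
// Toy stochastic gradient descent for y = w * x + b with squared-error loss.
// Purely illustrative; not a Core ML API.
var w = 0.0
var b = 0.0
let learningRate = 0.01
let samples: [(x: Double, y: Double)] = [(1, 3), (2, 5), (3, 7)]  // y = 2x + 1

for _ in 0..<1_000 {
    for (x, y) in samples {
        let prediction = w * x + b
        let error = prediction - y      // d(0.5 * error^2) / d(prediction)
        w -= learningRate * error * x   // gradient with respect to w
        b -= learningRate * error       // gradient with respect to b
    }
}

print("w ≈ \(w), b ≈ \(b)")  // converges toward w = 2, b = 1
```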