When was the last time you opened up a PDF file and edited the design of the document directly?
Never.
PDF is not about making a document. PDF is about making a document easy to view.
With Core ML, Apple has created the equivalent of PDF for machine learning. With the .mlmodel format, the company is not venturing into the business of training models (at least not yet). Instead, it has rolled out a meticulously crafted red carpet for models that are already trained, a carpet that unrolls across its entire lineup of hardware.
As a business strategy, it’s shrewd. As a technical achievement, it’s stunning. It puts complex machine learning technology within reach of the average developer.
To use a trained model in your project, you literally drag and drop the model file into Xcode. A type-safe Swift interface for the model gets synthesized automatically. A hardware-accelerated runtime implementation is created as well. Vast amounts of technical detail that typically encumber machine learning are encapsulated away.
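To make that concrete, here is a rough sketch of the shape of the interface Xcode synthesizes, assuming a model file named FlowerClassifier.mlmodel that takes an image and predicts a flower type. The names mirror the snippet below, and the actual generated source varies by Xcode version; this is only a sketch.

import CoreML
import CoreVideo

// Sketch of an Xcode-synthesized model wrapper; the real generated
// class forwards to an underlying MLModel instance.
class FlowerClassifierOutput {
    /// The label the model predicts for the input image.
    let flowerType: String
    init(flowerType: String) { self.flowerType = flowerType }
}

class FlowerClassifier {
    /// Evaluates the model against a pixel buffer sized to its input.
    func prediction(flowerImage: CVPixelBuffer) throws -> FlowerClassifierOutput {
        // Placeholder: Xcode generates the real, hardware-accelerated body.
        fatalError("Generated by Xcode from FlowerClassifier.mlmodel")
    }
}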
You provide input. The model provides output. You’re done.
// `FlowerClassifier` is the class synthesized from FlowerClassifier.mlmodel;
// `image` is a CVPixelBuffer matching the model's expected input size.
let flowerModel = FlowerClassifier()
if let prediction = try? flowerModel.prediction(flowerImage: image) {
    return prediction.flowerType
}
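The try? in that snippet quietly collapses any inference failure into nil. If you want to know why a prediction failed, the same call works with explicit error handling. A minimal sketch, still assuming the hypothetical FlowerClassifier model from above:

import CoreML
import CoreVideo

/// Returns the predicted flower label, or nil if inference fails.
func classifyFlower(in pixelBuffer: CVPixelBuffer) -> String? {
    let model = FlowerClassifier()
    do {
        let prediction = try model.prediction(flowerImage: pixelBuffer)
        return prediction.flowerType
    } catch {
        // Surfaces the model-evaluation error that try? would swallow.
        print("Prediction failed: \(error)")
        return nil
    }
}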