A Mac lover's guide to the robot apocalypse.
What makes machine learning different? What implications does it have for how software is developed?
We dive into these questions in Episode 2 of “Demystifying Machine Learning”. See an example in action, from training on a Mac to deployment on a mobile phone.
Machine learning attracts a lot of hype these days, some positive, some negative. We don’t often see examples of where it fails, and the mystery surrounding the technology can make it hard to understand.
This is the first video in the “Demystifying Machine Learning” series.
Training a machine learning model on a Mac hasn’t been easy. Open source frameworks will run on a Mac’s CPU, but accelerating training with a GPU has meant using NVIDIA hardware, most likely on a Linux box.
With Create ML, Apple isn’t aiming for parity with these data science packages. Create ML takes the tangle of algorithmic detail that comes with machine learning and abstracts it away. The result is an astonishingly simple training environment that is hardware-accelerated for the Mac.
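To give a sense of just how little code that means, here is a minimal sketch of training an image classifier with Create ML. The folder layout, paths, and model name are placeholders, not anything from Apple’s documentation:

```swift
import CreateML
import Foundation

// Hypothetical paths: a folder of labeled subdirectories
// (e.g. Training/cat/*.jpg, Training/dog/*.jpg) and an output location.
let trainingDir = URL(fileURLWithPath: "/Users/me/Datasets/Training")
let outputURL = URL(fileURLWithPath: "/Users/me/Models/AnimalClassifier.mlmodel")

// Create ML infers the class labels from the subdirectory names and
// handles feature extraction and training, GPU-accelerated, on the Mac.
let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir))

// Basic metrics come back without any extra setup.
let accuracy = 100 * (1 - classifier.trainingMetrics.classificationError)
print("Training accuracy: \(accuracy)%")

// Write out a .mlmodel file ready to drop into an Xcode project.
try classifier.write(to: outputURL)
```

Run in a macOS playground, that is essentially the whole training loop; there is no optimizer, learning rate, or epoch count to configure unless you opt into the parameters API.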
Pete Warden discusses how code, data, and software process interact with machine learning, and why that mix makes it tricky to iterate on results.
“I’m no shrinking violet when it comes to version control. I’ve toughed my way through some terrible systems, and I can still monkey together a solution using rsync and chicken wire if I have to. Even with all that behind me, I can say with my hand on my heart, that machine learning is by far the worst environment I’ve ever found for collaborating and keeping track of changes.”
A list of over 150 machine learning terms, sorted alphabetically.
A small, randomly selected subset of the entire batch of examples run together in a single iteration of training or inference. The batch size of a mini-batch is usually between 10 and 1,000. It is much more efficient to calculate the loss on a mini-batch than on the full training data.
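To make the definition concrete, here is a small Swift sketch of drawing a mini-batch and averaging a loss over it. The dataset, batch size, prediction function, and squared-error loss are all illustrative stand-ins:

```swift
import Foundation

// A labeled example: an input and its target value.
typealias Example = (x: Double, y: Double)

// A toy dataset of 10,000 examples following y = 2x.
let dataset: [Example] = (0..<10_000).map { _ in
    let x = Double.random(in: -1...1)
    return (x: x, y: 2 * x)
}

// A stand-in model and a per-example squared-error loss.
func predict(_ x: Double) -> Double { 1.5 * x }
func loss(_ e: Example) -> Double { pow(predict(e.x) - e.y, 2) }

// One training iteration looks at a small random subset,
// not all 10,000 examples.
let batchSize = 32
let miniBatch = dataset.shuffled().prefix(batchSize)
let miniBatchLoss = miniBatch.map(loss).reduce(0, +) / Double(batchSize)
print("mini-batch loss: \(miniBatchLoss)")
```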
People, it seems, have an emotional reaction to things that look almost, but not quite, human. There are theories as to why, but whatever the reasons, it has been a stubborn barrier to using photo-realistic computer-generated humans, one the entertainment industry has been chipping away at for a while.
The makers of Mug Life have released an iOS app that can look at a 2D image of a face and then animate it in three dimensions. The results are eye-opening.
“This innovative technology featuring deep neural networks marries decades of video game expertise with the latest advances in computer vision.”
Technical advancements in this area are accelerating.
MLModel sits at the heart of Core ML. It's an abstraction focused on input and output features, and MLModelDescription indicates how those features are structured.
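A quick way to see that structure is to load a compiled model and walk its description. The model path below is a placeholder; everything else is standard Core ML API:

```swift
import CoreML
import Foundation

// Hypothetical path to a compiled Core ML model bundle (.mlmodelc).
let modelURL = URL(fileURLWithPath: "/path/to/MobileNet.mlmodelc")
let model = try MLModel(contentsOf: modelURL)

// MLModelDescription exposes the input and output features by name.
let description = model.modelDescription
for (name, feature) in description.inputDescriptionsByName {
    print("input  \(name): \(feature.type)")
}
for (name, feature) in description.outputDescriptionsByName {
    print("output \(name): \(feature.type)")
}
```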
A list of free Core ML models with associated sample code and reference. Curated by Kedan Li.
“Download and use per license. Remember to acknowledge.”
The site currently has eight image filters and roughly twenty classifiers.
CUDA is NVIDIA's toolkit for general-purpose computing on its GPUs, and many deep learning frameworks use it to accelerate training. Apple last offered an NVIDIA GPU in a MacBook Pro in 2014: a GeForce GT 750M, with a CUDA compute capability of 3.0.
At WWDC, Apple announced official support for external GPU enclosures in macOS High Sierra. This week NVIDIA followed suit.
“After skipping the assorted High Sierra betas, NVIDIA has rolled out drivers for its line of PCI-E graphics cards.”
This gives any Mac with Thunderbolt 3 the ability to run CUDA 9.0 with a compute capability of 6.1.