A list of free Core ML models with associated sample code and reference. Curated by Kedan Li.
“Download and use per license. Remember to acknowledge.”
The site currently has eight image filters and roughly twenty classifiers.
CUDA is NVIDIA's parallel computing platform for its GPUs. Many deep learning frameworks use it to accelerate training. Apple last offered an NVIDIA GPU in a MacBook Pro in 2014: a GeForce GT 750M, which provided CUDA compute capability 3.0.
At WWDC, Apple announced official support for external GPU enclosures in macOS High Sierra. This week NVIDIA followed suit.
“After skipping the assorted High Sierra betas, NVIDIA has rolled out drivers for its line of PCI-E graphics cards.”
Coursera launched in 2011, and co-founder Andrew Ng's course on machine learning quickly became popular. Six years later, Professor Ng is offering a new series of courses on deep learning.
In his post, Arvind Nagaraj offers some observations.
“In classic Ng style, the course is delivered through a carefully chosen curriculum, neatly timed videos and precisely positioned information nuggets. Andrew picks up from where his classic ML course left off.”
He also offers some encouragement.
“Everyone starts in this field as a beginner. If you are a complete newcomer to the deep learning field, it’s natural to feel intimidated by all the jargon and concepts. Please don’t give up.”
Apple isn’t just a hardware company. It’s not just a software company either. It’s both. I’ve always admired how clever the company is at leveraging that distinction.
The latest iPhone will unlock itself merely by looking at your face.
Only the nn module from Torch is currently supported, but that still covers a range of different layers.
“In a sensibly organised society, if you improve productivity there is room for everybody to benefit. The problem is not the technology, but the way the benefits are shared out.”
Camilla Dahlstrøm dives under the hood and reveals some dynamic details of Core ML.
“This post will have a look at the case where the developer wishes to update or insert a model into the application, but avoiding the process of recompiling the entire application. … Several approaches will be discussed, from Apple’s preferred method to less conventional tricks which trade speed and storage efficiency alike.”
Matthijs Hollemans dives into even greater detail.
On the surface, Core ML has the appearance of being fairly static. You bring a trained model file into your project and let Xcode turn it into a runtime solution when your app is built. The details, it turns out, are a bit more dynamic.
Some of these details have even made their way up to public API.
let compiledModelURL = try MLModel.compileModel(at: modelURL)
let model = FlowerClassifier(contentsOf: compiledModelURL)
Back in 2015, interest in Torch was peaking. Around that time, I took a look under the hood to try to map out how the framework was put together.
“Around pre-2014, there were three main frameworks. … They all had their niche.
Theano was really good as a symbolic compiler. Torch was a framework that would try to be out of your way if you’re a C programmer. You could write your C programs and then just interface it into Lua’s interpreter language. Caffe was very suited to computer vision models. So if you wanted a conv net and you wanted to train it on a large vision dataset, Caffe was your framework.
All three of these frameworks had aging designs. These frameworks were about six or seven years old. It was evident that the field was moving its research in a certain direction and these frameworks and their abstractions weren’t keeping up.
In late 2015, TensorFlow came out. TensorFlow was one of the first professionally built frameworks designed from the ground up to be open source. … I see TensorFlow as a much better Theano-style framework.”
“… [Before that] DeepMind was using Torch. Facebook. Twitter. Several university labs. The year of 2015 was Torch. The year of 2014 was Caffe. The year of 2016 was TensorFlow in terms of getting the large set of audiences.”
“… Keras is a fantastic front end for TensorFlow and Theano and CNTK. You can build neural networks quickly. … It’s a very powerful tool for data scientists who want to remain in Python and never want to go into C or C++.”
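To illustrate how quickly a network comes together in Keras, here is a minimal sketch of a small classifier; the layer sizes and optimizer are arbitrary choices for illustration, not anything from the interview:

```python
# A minimal Keras sketch: a two-layer classifier for 784-dimensional
# inputs (e.g. flattened 28x28 images) with 10 output classes.
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    Dense(32, activation="relu", input_shape=(784,)),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()
```

A few lines define, wire up, and compile the whole model — which is the ease-of-use point being made above.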
Soumith was a significant contributor to Torch and started working on its successor in July 2016.
“PyTorch is both a front end and a back end. You can think of PyTorch as something that gives you the ease of use of Keras, or probably more in terms of debugging. And power users can go all the way down to the C level and do hand coded optimizations.
It takes the whole stack of a front end calling a back end to create a neural network. And that back end in turn calls some underlying GPU code or CPU code. And we make that whole stack very flat without many abstractions so that you have a superior user experience.”
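As a concrete sketch of that front end, here is the same kind of two-layer network expressed in PyTorch's define-by-run style (again with hypothetical layer sizes, chosen only for illustration):

```python
import torch
import torch.nn as nn

# A small network built from PyTorch's nn building blocks.
net = nn.Sequential(
    nn.Linear(784, 32),
    nn.ReLU(),
    nn.Linear(32, 10),
)

# Running a forward pass is just calling the module; the graph is
# built on the fly, which is what makes line-by-line debugging easy.
x = torch.randn(1, 784)
y = net(x)
```

Because the forward pass is ordinary Python execution, you can set a breakpoint or print intermediate tensors anywhere — the debugging advantage over graph-compiling front ends mentioned above.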