A Guide to Deep Learning

Deep learning is a fast-changing field at the intersection of computer science and mathematics. It is a relatively new branch of a wider field called machine learning. The goal of machine learning is to teach computers to perform various tasks based on given data. This guide is for those who know some math, know a programming language and now want to dive deep into deep learning.
This is not a guide to:
• general machine learning
• big data processing
• data science
• deep reinforcement learning


You must know standard university-level math. You can review those concepts in the first chapters of the book Deep Learning.

You must know programming to develop and test deep learning models. We suggest using Python for machine learning; the NumPy and SciPy libraries for scientific computing are required.

When you are comfortable with the prerequisites, we suggest four options for studying deep learning. Choose any of them or any combination of them. The number of stars indicates the difficulty.

  • Hugo Larochelle's video course on YouTube. The videos were recorded in 2013 but most of the content is still fresh. The mathematics behind neural networks is explained in detail. Slides and related materials are available. ★★
  • Stanford's CS231n (Convolutional Neural Networks for Visual Recognition) by Fei-Fei Li, Andrej Karpathy and Justin Johnson. The course is focused on image processing, but covers most of the important concepts in deep learning. Videos (2016) and lecture notes are available. ★★
  • Michael Nielsen's online book Neural networks and deep learning is the easiest way to study neural networks. It doesn't cover all important topics, but contains intuitive explanations and code for the basic concepts. ★
  • Deep learning, a book by Ian Goodfellow, Yoshua Bengio and Aaron Courville, is the most comprehensive resource for studying deep learning. It covers a lot more than all the other courses combined. ★★★

There are many software frameworks that provide necessary functions, classes and modules for machine learning and for deep learning in particular. We suggest not using these frameworks in the early stages of studying; instead, implement the basic algorithms from scratch. Most of the courses describe the math behind the algorithms in enough detail that they can be implemented easily.

  • Jupyter notebooks are a convenient way to play with Python code. They are nicely integrated with matplotlib, a popular tool for visualizations. We suggest you implement algorithms in such environments. ★

Machine learning basics

Machine learning is the art and science of teaching computers based on data. It is a relatively established field at the intersection of computer science and mathematics, while deep learning is just a small subfield of it. The concepts and tools of machine learning are important for understanding deep learning.

Most of the popular machine learning algorithms are implemented in the Scikit-learn Python library. Implementing some of them from scratch helps with understanding how machine learning works.

  • Practical Machine Learning Tutorial with Python covers linear regression, k-nearest-neighbors and support vector machines. First it shows how to use the algorithms from scikit-learn, then implements them from scratch. ★
  • Andrew Ng's course on Coursera has many assignments in the Octave language. The same algorithms can be implemented in Python. ★★
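In the spirit of these tutorials, here is a minimal from-scratch sketch of linear regression solved with the normal equation in NumPy (the toy data and its dimensions are made up for illustration):

```python
import numpy as np

# Toy data: y = 2x + 1 plus a little noise
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = 2.0 * X[:, 0] + 1.0 + rng.normal(scale=0.01, size=100)

# Add a bias column and solve the normal equation: w = (X^T X)^{-1} X^T y
Xb = np.hstack([X, np.ones((X.shape[0], 1))])
w = np.linalg.solve(Xb.T @ Xb, Xb.T @ y)

print(w)  # approximately [2.0, 1.0]
```

Implementing this once by hand makes the corresponding one-liner in scikit-learn much less mysterious.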

Neural networks basics

Neural networks are powerful machine learning algorithms. They form the basis of deep learning.

Try to implement a single layer neural network from scratch, including the training procedure.
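As a concrete starting point, here is a minimal sketch of such a network: a single sigmoid layer trained with full-batch gradient descent on the OR function (the dataset, learning rate and iteration count are arbitrary choices for illustration):

```python
import numpy as np

# A single-layer network (sigmoid output) trained with gradient descent
# on the OR function, using the cross-entropy loss.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 1.0])

W = rng.normal(scale=0.1, size=2)
b = 0.0
lr = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(1000):
    p = sigmoid(X @ W + b)          # forward pass
    grad = p - y                    # dL/dz for cross-entropy + sigmoid
    W -= lr * X.T @ grad / len(y)   # gradient step on the weights
    b -= lr * grad.mean()           # gradient step on the bias

pred = (sigmoid(X @ W + b) > 0.5).astype(int)
print(pred)  # expect [0 1 1 1]
```

The training loop is the essential part: every course listed above derives exactly this forward pass / gradient / update cycle.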

Improving the way neural networks learn

Training neural networks is not easy. Sometimes they don't learn at all (underfitting); sometimes they learn exactly what you give them and their "knowledge" does not generalize to new, unseen data (overfitting). There are many ways to handle these problems, such as regularization, dropout, better weight initialization and early stopping.
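One standard remedy for overfitting is L2 regularization (weight decay). A minimal sketch of its effect, using ridge regression on a deliberately over-parameterized polynomial fit (the data, degree and regularization strength are made up for illustration):

```python
import numpy as np

# Overfitting demo: 10 samples, degree-9 polynomial features.
# Plain least squares fits the noise; L2 regularization (ridge) tames it.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=10)

Phi = np.vander(x, 10)                        # polynomial features
w_ols = np.linalg.lstsq(Phi, y, rcond=None)[0]

lam = 1e-3                                    # regularization strength
w_ridge = np.linalg.solve(Phi.T @ Phi + lam * np.eye(10), Phi.T @ y)

# The regularized weights are much smaller, giving a smoother curve.
print(np.abs(w_ols).max(), np.abs(w_ridge).max())
```

The same idea carries over to neural networks: the penalty is simply added to the loss, which turns into a small shrinkage term in each gradient update.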

Once you know how the basic learning algorithms are implemented under the hood, it's time to choose a framework to build on. There are many frameworks that provide the standard algorithms and are optimised for good performance on modern hardware. Most of these frameworks have interfaces for Python, with the notable exception of Torch, which requires Lua.

There are also higher-level frameworks that run on top of these:

  • Lasagne is a higher-level framework built on top of Theano. It provides simple functions to create large networks with a few lines of code.
  • Keras is a higher-level framework that works on top of either Theano or TensorFlow.
  • If you need more guidance on which framework is right for you, see Lecture 12 of Stanford's CS231n. ★★

Convolutional neural networks

Convolutional networks ("CNNs") are a special kind of neural network that uses several clever tricks to learn faster and better. ConvNets essentially revolutionized computer vision and are heavily used in speech recognition and text classification as well.

Convolutional networks are implemented in every major framework. It is usually easier to understand the code that is written using higher level libraries.
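The core operation of a convolutional layer is also easy to implement from scratch. Here is a minimal sketch of a valid 2-D cross-correlation (the operation frameworks call "convolution"), applied to a toy edge-detection example:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: slide the kernel over the image."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny image with a vertical step edge between columns 2 and 3
image = np.zeros((5, 5))
image[:, 3:] = 1.0

# A horizontal difference kernel: responds only where intensity changes
edge_kernel = np.array([[1.0, -1.0]])
response = conv2d(image, edge_kernel)
print(response)  # nonzero only in the column where the edge sits
```

A real convolutional layer adds multiple channels, learned kernels and a bias, but the sliding-window sum above is the heart of it.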

Recurrent neural networks

Recurrent networks ("RNNs") are designed to work with sequences. Usually they are used for sentence classification (e.g. sentiment analysis) and speech recognition, but also for text generation and even image generation.
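The defining computation of an RNN, carrying a hidden state across time steps, fits in a few lines. A minimal sketch of the forward pass of a vanilla RNN in NumPy (the dimensions and initialization are arbitrary choices for illustration):

```python
import numpy as np

# Forward pass of a vanilla RNN: the hidden state is updated at every
# time step from the previous state and the current input.
rng = np.random.default_rng(0)
input_dim, hidden_dim, seq_len = 3, 4, 5

W_xh = rng.normal(scale=0.1, size=(hidden_dim, input_dim))  # input-to-hidden
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim)) # hidden-to-hidden
b_h = np.zeros(hidden_dim)

xs = rng.normal(size=(seq_len, input_dim))  # the input sequence
h = np.zeros(hidden_dim)                    # initial hidden state
states = []
for x in xs:
    h = np.tanh(W_xh @ x + W_hh @ h + b_h)  # the recurrence
    states.append(h)

print(len(states), states[-1].shape)  # 5 (4,)
```

Training this with backpropagation through time is what the courses cover; the reuse of the same weights at every step is what makes the network "recurrent".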

Autoencoders

Autoencoders are neural networks designed for unsupervised learning, i.e. when the data is not labeled. They can be used for dimensionality reduction, for pretraining other neural networks, for data generation and so on. Here we also include resources about an interesting hybrid of autoencoders and graphical models called variational autoencoders, although their mathematical basis is not introduced until the next section.

Most autoencoders are pretty easy to implement. We suggest you try to implement one before looking at complete examples.
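As an illustration of how little code that takes, here is a minimal sketch of a linear autoencoder with a one-dimensional bottleneck, trained by gradient descent on toy data lying on a line (all sizes and hyperparameters are made up for illustration):

```python
import numpy as np

# A tiny linear autoencoder: encode 3-D points into 1 dimension and
# decode them back, trained on the mean squared reconstruction error.
rng = np.random.default_rng(0)
t = rng.normal(size=(200, 1))
X = t @ np.array([[1.0, 2.0, -1.0]])        # data lies on a 1-D line in 3-D

W_enc = rng.normal(scale=0.5, size=(3, 1))  # encoder weights
W_dec = rng.normal(scale=0.5, size=(1, 3))  # decoder weights
lr = 0.01

for _ in range(3000):
    code = X @ W_enc                        # encode
    X_hat = code @ W_dec                    # decode
    err = X_hat - X
    # Gradients of the reconstruction error w.r.t. each weight matrix
    W_dec -= lr * code.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)

loss = np.mean((X @ W_enc @ W_dec - X) ** 2)
print(loss)  # close to zero: the 1-D structure is recovered
```

Real autoencoders add nonlinearities and deeper encoders/decoders, but the encode-decode-reconstruct loop is exactly this.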

Probabilistic graphical models

Probabilistic graphical models ("PGMs") form a separate subfield at the intersection of statistics and machine learning. There are many books and courses on PGMs in general. Here we present how these models are applied in the context of deep learning. Hugo Larochelle's course describes a few famous models, while the book Deep Learning devotes four chapters (16-19) to the theory and describes more than a dozen models in the last chapter. These topics require a lot of mathematics.

Higher-level frameworks (Lasagne, Keras) do not implement graphical models, but there is a lot of code for Theano, TensorFlow and Torch.