Deep learning is booming, driven in particular by the use of GPUs (Graphics Processing Units), the availability of large amounts of data, and a better theoretical understanding of how to design neural network architectures that are easier to train. This course introduces students to the basics of neural networks and to the architectural building blocks used to design a network suited to a given prediction problem. The course is divided into modules covering optimization algorithms, parameter initialization, regularization techniques, fully connected architectures, convolutional networks, recurrent networks, and introspection techniques. Practical sessions on GPUs accompany the lectures.
Learning objectives:
- Being able to implement and deploy a deep learning algorithm
- Being able to choose an architecture suited to a particular machine learning problem
- Being able to diagnose the training of a neural network (what is it learning? how is it learning? is it learning? will it be able to generalize?)
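As an illustration of the last objective, a common diagnostic is to track the training and validation losses over iterations: a decreasing training loss shows the model is learning, and a validation loss that stays close to it suggests it will generalize. The toy model and all names below are illustrative, not part of the course material; a minimal NumPy sketch:

```python
import numpy as np

# Toy setup (illustrative): a linear model fit by gradient descent,
# with a held-out validation set to monitor generalization.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)

# Simple train/validation split.
X_train, y_train = X[:150], y[:150]
X_val, y_val = X[150:], y[150:]

w = np.zeros(3)
lr = 0.1
train_losses, val_losses = [], []
for _ in range(100):
    # Mean-squared-error loss and its gradient on the training set.
    err = X_train @ w - y_train
    grad = 2 * X_train.T @ err / len(y_train)
    w -= lr * grad
    train_losses.append(np.mean(err ** 2))
    val_losses.append(np.mean((X_val @ w - y_val) ** 2))

# Diagnostics:
#  - "is it learning?"            -> training loss should decrease
#  - "will it generalize?"        -> validation loss should track training loss
print(f"train loss: {train_losses[0]:.3f} -> {train_losses[-1]:.3f}")
print(f"val   loss: {val_losses[0]:.3f} -> {val_losses[-1]:.3f}")
```

In the practical sessions, the same idea applies to deep networks: a large gap between the two curves signals overfitting, while two flat, high curves signal underfitting.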
The lecture material consists of the lecture slides:
- Lecture 1 (HTML, PDF): Introduction to deep learning, linear networks
- Lectures 2&3 (HTML, PDF): RBF networks, feedforward networks, computational graphs, gradient descent, initialization, regularization
- Lecture 4 (HTML, PDF): Convolutional neural networks
- Lectures 5&6 (HTML, PDF): Applications of convolutional neural networks
- Lectures 7&8 (HTML, PDF): Recurrent neural networks and applications
The video recordings of the lectures are available on the WebTv channel.
The references cited in the lecture notes are available in references.pdf.
The lab sessions are provided on this page: