Lecture Notes in Deep Learning: Introduction – Part 5

Symbolic picture for the article.

Exercise Details & Outlook

These are the lecture notes for FAU’s YouTube Lecture “Deep Learning”. This is a full transcript of the lecture video and matching slides. We hope you enjoy this as much as the videos. Of course, this transcript was created largely automatically with deep learning techniques, and only minor manual modifications were performed. If you spot mistakes, please let us know!

A fast-forward to layer-wise back-propagation. Don’t worry, we will explain all the details. Image under CC BY 4.0 from the Deep Learning Lecture.

Thanks for tuning in again and welcome to deep learning! In this short video, we will look into organizational matters and conclude the introduction. So, let’s look at organizational matters. The module that you can obtain here at FAU consists of a total of five ECTS, which covers the lecture plus the exercises. So, it’s not sufficient to just watch all of these videos; you also have to pass the exercises. In the exercises, you will implement everything that we’re talking about here in Python. We’ll start from scratch, so you will implement perceptrons and neural networks all the way up to deep learning. At the very end, we will even move towards GPU implementations and large deep learning frameworks. So, this is a mandatory part; it’s not sufficient to only pass the oral exam.
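
To give a flavor of this from-scratch style, below is a minimal sketch of a perceptron training loop in NumPy. The function name, the toy data, and the details of the update rule are illustrative assumptions, not the actual exercise code.

```python
import numpy as np

def train_perceptron(X, y, epochs=10):
    """Minimal perceptron: X has shape (n_samples, n_features), y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            # Rosenblatt's rule: update only on misclassified samples.
            if y_i * (np.dot(w, x_i) + b) <= 0:
                w += y_i * x_i
                b += y_i
    return w, b

# Toy, linearly separable 2-D data (hypothetical example).
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))  # reproduces y: [ 1.  1. -1. -1.]
```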

We will also implement max pooling in the exercise. Image under CC BY 4.0 from the Deep Learning Lecture.

The content of the exercise is Python. You’ll get an introduction to Python if you have never used it, because Python is one of the main languages that deep learning implementations use today. You will really develop a neural network from scratch. There will be feed-forward neural networks and there will be convolutional neural networks. You will look into regularization techniques and how you can adjust weights such that they have specific properties. You will see how you can beat overfitting with certain regularization techniques. Of course, we will also implement recurrent networks. Later, we will use a Python deep learning framework and apply it to large-scale classification. For the exercises, you should bring basic knowledge of Python and NumPy. You should know about linear algebra, such as matrix multiplication. Image processing experience is a definite plus. Of course, the requirements for this class are pattern recognition fundamentals, and ideally you have already attended the other lectures on pattern recognition. If you haven’t, you might have to consult additional references to follow this class.
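
As a small taste of the exercise content (and of the max pooling mentioned in the caption above), here is one way a naive 2×2 max-pooling forward pass could look in plain NumPy; the function name and the fixed stride are assumptions for illustration, not the exercise interface.

```python
import numpy as np

def max_pool_2x2(x):
    """Naive 2x2 max pooling with stride 2 on a single (H, W) feature map."""
    h, w = x.shape
    out = np.empty((h // 2, w // 2))
    for i in range(0, h - 1, 2):
        for j in range(0, w - 1, 2):
            # Maximum over each non-overlapping 2x2 window.
            out[i // 2, j // 2] = x[i:i + 2, j:j + 2].max()
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool_2x2(x))
# [[ 5.  7.]
#  [13. 15.]]
```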

You should be passionate about coding for this class’ exercises. Photo by Markus Spiske from Pexels.

You should bring a passion for coding, and you will have to code quite a bit, but you can also learn it during the exercises. If you have not done a lot of programming before this class, you will spend a lot of time on the exercises. But if you complete those exercises, you will be able to implement things in deep learning frameworks, and this is very good training. After this course, you will not just be able to download code from GitHub and run it on your own data, but:

  • you will also understand the inner workings of the networks,
  • how to write your own layers (see the sketch after this list), and
  • how to extend deep learning algorithms on a very low level.
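
To make the second point concrete, here is a minimal sketch of what “writing your own layer” can mean: a ReLU layer with a forward and a backward pass. The forward/backward interface and the cached input are assumptions modeled on common from-scratch designs, not the actual exercise framework.

```python
import numpy as np

class ReLU:
    """A minimal layer with the forward/backward interface that
    from-scratch frameworks typically use (interface assumed here)."""

    def forward(self, x):
        self.x = x                       # cache the input for the backward pass
        return np.maximum(0.0, x)

    def backward(self, grad_output):
        # The gradient flows only where the input was positive.
        return grad_output * (self.x > 0)

layer = ReLU()
out = layer.forward(np.array([-1.0, 0.5, 2.0]))
grad = layer.backward(np.ones(3))
print(out)   # [0.  0.5 2. ]
print(grad)  # [0. 1. 1.]
```
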
In the exercises of the course, you will also have the opportunity to work with really big data sets. Image courtesy of Marc Aubreville. Access full video here.

So, pay attention to detail, and if you are not very used to programming, it will cost you a bit of time. There will be five exercises throughout the semester. There are unit tests for all but the last exercise. These unit tests should help you with the implementations. In the last exercise, there will be a PyTorch implementation, and you will be facing a challenge: you have to solve image recognition tasks in order to pass the exercise. Deadlines are announced in the respective exercise sessions, so you have to register for them on StudOn.
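
To illustrate how unit tests can guide your implementation, here is a hypothetical test in the style of Python’s unittest module, written against the pooling sketch above; the real exercise tests are more thorough, and none of these names come from the actual material.

```python
import unittest
import numpy as np

class TestMaxPool(unittest.TestCase):
    """Hypothetical test; assumes max_pool_2x2 from the sketch above is in scope."""

    def test_forward_shape_and_values(self):
        x = np.arange(16, dtype=float).reshape(4, 4)
        out = max_pool_2x2(x)
        self.assertEqual(out.shape, (2, 2))
        np.testing.assert_array_equal(out, [[5.0, 7.0], [13.0, 15.0]])

if __name__ == "__main__":
    unittest.main()
```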

What we’ve seen in the lecture so far is that deep learning is more and more present in daily life. So, it’s not just a technique that’s used in research. We’ve seen it emerge into many different applications, from speech recognition and image processing all the way to autonomous driving. It’s a very active area of research. If you take this lecture, you will be very well prepared for a research project with our lab, with industry, or with other partners.

More exciting things coming up in this deep learning lecture. Image under CC BY 4.0 from the Deep Learning Lecture.

So far, we looked into the perceptron and its relation to biological neurons. So, next time on deep learning, we will actually start with the next lecture block, which means we will extend the perceptron to a universal function approximator. We will look into gradient-based training algorithms for these models, and then we will also look into the efficient computation of gradients.

Now, if you want to prepare for the oral exam, it’s good to think about a couple of comprehension questions. Questions may be:

  • “What are the six postulates of pattern recognition?”
  • “What is the perceptron objective function?”
  • “Can you name three applications successfully tackled by deep learning?”
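
As a hint for the second question: one common way to write the perceptron objective (often called the perceptron criterion) is a sum over the set of currently misclassified samples; the exact notation on the slides may differ.

```latex
% Perceptron criterion over the set M of misclassified samples,
% with weights \mathbf{w}, features \mathbf{x}_m, and labels y_m \in \{-1, +1\}.
D(\mathbf{w}) = -\sum_{m \in M} y_m \, \mathbf{w}^{\top} \mathbf{x}_m
```

Each misclassified sample satisfies $y_m \mathbf{w}^{\top}\mathbf{x}_m \le 0$ and thus contributes a non-negative term, so minimizing $D$ pushes the weights toward classifying those samples correctly.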

Of course, we also have a lot of further reading. You can find the links on the slides, and we will also post the links and references below this post.

If you have any questions,

  • you can ask the tutors in your exercise sessions,
  • you can email me, or
  • if you’re watching this on YouTube, you can actually use the comment function and ask your questions.

So, there are many options to get in contact, and of course, we have quite a few references for these first five videos. They appear too quickly to read them all now, but you can pause the video and review them, and we will also post them in the reference list below this post. I hope you like this video, and see you next time in deep learning!

Note: FAU’s exercise material is extensive. To get an idea of our exercises, we created these examples on GitHub. The full exercise program is currently only available to FAU’s students.

If you liked this post, you can find more essays here, more educational material on Machine Learning here, or have a look at our Deep Learning Lecture. I would also appreciate a clap or a follow on YouTube, Twitter, Facebook, or LinkedIn in case you want to be informed about more essays, videos, and research in the future. This article is released under the Creative Commons 4.0 Attribution License and can be reprinted and modified if referenced.

References

[1] David Silver, Julian Schrittwieser, Karen Simonyan, et al. “Mastering the game of Go without human knowledge”. In: Nature 550.7676 (2017), p. 354.
[2] David Silver, Thomas Hubert, Julian Schrittwieser, et al. “Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm”. In: arXiv preprint arXiv:1712.01815 (2017).
[3] M. Aubreville, M. Krappmann, C. Bertram, et al. “A Guided Spatial Transformer Network for Histology Cell Differentiation”. In: ArXiv e-prints (July 2017). arXiv: 1707.08525 [cs.CV].
[4] David Bernecker, Christian Riess, Elli Angelopoulou, et al. “Continuous short-term irradiance forecasts using sky images”. In: Solar Energy 110 (2014), pp. 303–315.
[5] Patrick Ferdinand Christ, Mohamed Ezzeldin A Elshaer, Florian Ettlinger, et al. “Automatic liver and lesion segmentation in CT using cascaded fully convolutional neural networks and 3D conditional random fields”. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2016, pp. 415–423.
[6] Vincent Christlein, David Bernecker, Florian Hönig, et al. “Writer Identification Using GMM Supervectors and Exemplar-SVMs”. In: Pattern Recognition 63 (2017), pp. 258–267.
[7] Florin Cristian Ghesu, Bogdan Georgescu, Tommaso Mansi, et al. “An Artificial Agent for Anatomical Landmark Detection in Medical Images”. In: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016. Athens, 2016, pp. 229–237.
[8] Jia Deng, Wei Dong, Richard Socher, et al. “Imagenet: A large-scale hierarchical image database”. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2009). IEEE, 2009, pp. 248–255.
[9] A. Karpathy and L. Fei-Fei. “Deep Visual-Semantic Alignments for Generating Image Descriptions”. In: ArXiv e-prints (Dec. 2014). arXiv: 1412.2306 [cs.CV].
[10] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. “ImageNet Classification with Deep Convolutional Neural Networks”. In: Advances in Neural Information Processing Systems 25. Curran Associates, Inc., 2012, pp. 1097–1105.
[11] Joseph Redmon, Santosh Kumar Divvala, Ross B. Girshick, et al. “You Only Look Once: Unified, Real-Time Object Detection”. In: CoRR abs/1506.02640 (2015).
[12] J. Redmon and A. Farhadi. “YOLO9000: Better, Faster, Stronger”. In: ArXiv e-prints (Dec. 2016). arXiv: 1612.08242 [cs.CV].
[13] Joseph Redmon and Ali Farhadi. “YOLOv3: An Incremental Improvement”. In: arXiv (2018).
[14] Frank Rosenblatt. The Perceptron–a perceiving and recognizing automaton. 85-460-1. Cornell Aeronautical Laboratory, 1957.
[15] Olga Russakovsky, Jia Deng, Hao Su, et al. “ImageNet Large Scale Visual Recognition Challenge”. In: International Journal of Computer Vision 115.3 (2015), pp. 211–252.
[16] David Silver, Aja Huang, Chris J. Maddison, et al. “Mastering the game of Go with deep neural networks and tree search”. In: Nature 529.7587 (Jan. 2016), pp. 484–489.
[17] S. E. Wei, V. Ramakrishna, T. Kanade, et al. “Convolutional Pose Machines”. In: CVPR. 2016, pp. 4724–4732.
[18] Tobias Würfl, Florin C Ghesu, Vincent Christlein, et al. “Deep learning computed tomography”. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer International Publishing, 2016, pp. 432–440.