This video explains hinge loss and its relation to support vector machines. We also show why sub-gradients allow us to optimize functions that are not continuously differentiable. Furthermore, the hinge loss makes it possible to embed optimization constraints into loss functions.
Watch on: FAU TV | FAU TV (no memes) | YouTube
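As a rough illustration of the points above (not taken from the lecture itself), the sketch below implements the hinge loss max(0, 1 − y·f(x)) for labels y ∈ {−1, +1} together with one valid subgradient: at the kink y·f(x) = 1 the loss is not differentiable, but any value in [−y, 0] is a valid subgradient, so we simply pick 0.

```python
import numpy as np

def hinge_loss(y, score):
    """Hinge loss max(0, 1 - y * score) for labels y in {-1, +1}."""
    return np.maximum(0.0, 1.0 - y * score)

def hinge_subgradient(y, score):
    """One valid subgradient of the hinge loss w.r.t. the score.

    Where y * score < 1 the loss is differentiable with gradient -y;
    at the kink (y * score == 1) any value in [-y, 0] is valid, and
    picking 0 also covers the flat region y * score > 1.
    """
    return np.where(y * score < 1.0, -y, 0.0)

y = np.array([1.0, -1.0, 1.0])
score = np.array([2.0, 0.5, 0.3])
print(hinge_loss(y, score))         # [0.  1.5 0.7]
print(hinge_subgradient(y, score))  # [ 0.  1. -1.]
```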
This video explains how to derive L2 Loss and Cross-Entropy Loss from statistical assumptions. Highly relevant for the exam!
Watch on: FAU TV | FAU TV (no memes) | YouTube
Read the Transcript (Summer 2020) at: LME | Towards Data Science
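For reference, a condensed version of the standard maximum-likelihood argument (the usual textbook route, which the video develops in more detail): assuming Gaussian noise on the targets yields the L2 loss, and assuming Bernoulli-distributed labels yields cross-entropy.

```latex
\begin{align*}
% Regression: y_i = f_\theta(x_i) + \epsilon_i,\; \epsilon_i \sim \mathcal{N}(0, \sigma^2)
\hat{\theta} &= \arg\max_\theta \prod_i \mathcal{N}\!\bigl(y_i \mid f_\theta(x_i), \sigma^2\bigr)
              = \arg\min_\theta \sum_i \bigl(y_i - f_\theta(x_i)\bigr)^2 \\
% Classification: y_i \in \{0,1\} with p_i = f_\theta(x_i) = P(y_i = 1 \mid x_i)
\hat{\theta} &= \arg\max_\theta \prod_i p_i^{y_i}\,(1-p_i)^{1-y_i}
              = \arg\min_\theta \, -\sum_i \bigl[\, y_i \log p_i + (1-y_i)\log(1-p_i) \,\bigr]
\end{align*}
```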
This video explains backpropagation at the level of layer abstraction.
Watch on: FAU TV | FAU TV (no memes) | YouTube
Read the Transcript (Summer 2020) at: LME | Towards Data Science
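A minimal sketch of this layer abstraction, assuming the common forward/backward interface (the class and names are illustrative, not the lecture's code): each layer caches what it needs during the forward pass, and its backward pass turns the upstream gradient into parameter gradients plus a gradient for the layer below.

```python
import numpy as np

class Linear:
    """Fully connected layer with a forward/backward interface."""

    def __init__(self, n_in, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, size=(n_in, n_out))
        self.b = np.zeros(n_out)

    def forward(self, x):
        self.x = x                     # cache input for the backward pass
        return x @ self.W + self.b

    def backward(self, grad_out):
        # Parameter gradients, computed from the upstream gradient.
        self.dW = self.x.T @ grad_out
        self.db = grad_out.sum(axis=0)
        # Gradient w.r.t. the input, handed on to the previous layer.
        return grad_out @ self.W.T

layer = Linear(3, 2)
out = layer.forward(np.ones((4, 3)))        # batch of 4 samples
grad_in = layer.backward(np.ones_like(out)) # shape (4, 3)
```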
This video introduces the basics of the backpropagation algorithm.
Watch on: FAU TV | FAU TV (no memes) | YouTube
Read the Transcript (Summer 2020) at: LME | Towards Data Science
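To make the chain-rule bookkeeping behind backpropagation concrete, here is a hand-worked toy example (values chosen for illustration, not from the lecture): a forward pass through f = σ(wx + b) that records intermediates, followed by a backward pass multiplying local derivatives from the output back to the inputs.

```python
import math

# Forward pass through f = sigmoid(w * x + b), recording intermediates.
w, b, x = 2.0, -1.0, 0.5
z = w * x + b                   # z = 0.0
y = 1.0 / (1.0 + math.exp(-z))  # sigmoid(0) = 0.5

# Backward pass: chain rule, from the output back to each input.
dy_dz = y * (1.0 - y)  # local derivative of the sigmoid: 0.25
dy_dw = dy_dz * x      # dz/dw = x
dy_db = dy_dz * 1.0    # dz/db = 1
dy_dx = dy_dz * w      # dz/dx = w
print(dy_dw, dy_db, dy_dx)  # 0.125 0.25 0.5
```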
This video introduces activation functions, loss functions, and the idea of gradient descent.
Watch on: FAU TV | FAU TV (no memes) | YouTube
Read the Transcript (Summer 2020) at: LME | Towards Data Science
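A quick sketch tying the three topics together on a toy problem (the loss and learning rate are invented for illustration): two standard activation functions, and plain gradient descent minimizing L(w) = (w − 3)² by repeatedly stepping against the gradient.

```python
import numpy as np

# Two common activation functions covered in the lecture series.
def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Plain gradient descent on the toy loss L(w) = (w - 3)^2.
w, lr = 0.0, 0.1
for step in range(50):
    grad = 2.0 * (w - 3.0)  # dL/dw
    w -= lr * grad          # step along the negative gradient
print(w)  # converges towards the minimum at w = 3
```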
This video introduces feedforward networks and universal approximation, and shows how to map a decision tree onto a neural network.
Watch on: FAU TV | FAU TV (no memes) | YouTube
Read the Transcript (Summer 2020) at: LME | Towards Data Science
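The decision-tree-to-network mapping can be sketched roughly as follows (the tree, thresholds, and weights here are invented for illustration, not the lecture's example): the first layer fires one neuron per split decision, and the second layer ANDs the decisions along each root-to-leaf path, so exactly one leaf neuron is active per input.

```python
import numpy as np

def step(z):
    """Hard threshold activation."""
    return (z > 0).astype(float)

# Toy depth-2 tree: if x0 > 0: (if x1 > 0: leaf A else leaf B) else leaf C.
def tree_as_network(x):
    # Layer 1: one neuron per split decision.
    h = step(x)                       # h[0] = [x0 > 0], h[1] = [x1 > 0]
    # Layer 2: one neuron per leaf, an AND of the decisions on its path.
    leaf_a = step(h[0] + h[1] - 1.5)  # x0 > 0 AND x1 > 0
    leaf_b = step(h[0] - h[1] - 0.5)  # x0 > 0 AND NOT x1 > 0
    leaf_c = step(0.5 - h[0])         # NOT x0 > 0
    return np.array([leaf_a, leaf_b, leaf_c])

print(tree_as_network(np.array([1.0, -1.0])))  # [0. 1. 0.] -> leaf B
```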