Watch now: Deep Learning: Loss and Optimization – Part 2 (WS 20/21)
This video explains the hinge loss and its relation to support vector machines. We also show why sub-gradients allow us to optimize functions that are not continuously differentiable. Furthermore, the hinge loss lets us embed optimization constraints into loss functions.
FAU TV
Read the transcript (Summer 2020) at Towards Data Science.
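To make the summary above concrete, here is a minimal sketch (my own illustration, not code from the lecture) of the hinge loss and one valid sub-gradient of it. The hinge loss max(0, 1 − y·s) has a kink at the margin boundary, so it is not continuously differentiable there; a sub-gradient simply picks one valid slope at the kink, which is enough for sub-gradient descent:

```python
import numpy as np

def hinge_loss(scores, y):
    # Hinge loss max(0, 1 - y * s) for labels y in {-1, +1}.
    return np.maximum(0.0, 1.0 - y * scores)

def hinge_subgradient(scores, y):
    # Sub-gradient w.r.t. the score s: -y where the margin is
    # violated (1 - y * s > 0), and 0 elsewhere. At the kink
    # (1 - y * s == 0) any value between -y and 0 is a valid
    # sub-gradient; this sketch picks 0 there.
    return np.where(1.0 - y * scores > 0, -y, 0.0)

scores = np.array([2.0, 0.5, -1.0])
labels = np.array([1.0, 1.0, 1.0])
print(hinge_loss(scores, labels))         # [0.  0.5 2. ]
print(hinge_subgradient(scores, labels))  # [ 0. -1. -1.]
```

Note that the correctly classified point with a large margin (score 2.0) contributes zero loss and zero sub-gradient, which is exactly the max-margin behavior that connects the hinge loss to support vector machines.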