Deep Learning WS 20/21

Watch now: Deep Learning: Loss and Optimization – Part 2 (WS 20/21)
This video explains the hinge loss and its relation to support vector machines. We also show why sub-gradients allow us to optimize functions that are not continuously differentiable. Furthermore, the hinge loss enables us to embed optimization constraints into loss functions.

Watch on: FAU TV | FAU TV (no memes) | YouTube

Read the Transcript (Summer 2020) at: LME | Towards Data Science
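As a quick illustration of the two ideas named above, here is a minimal sketch (not from the lecture; the toy data and learning rate are made up for this example) of the per-sample hinge loss and a valid sub-gradient, used in a plain sub-gradient descent loop:

```python
import numpy as np

def hinge_loss(w, x, y):
    # Hinge loss for one sample: max(0, 1 - y * <w, x>).
    return max(0.0, 1.0 - y * np.dot(w, x))

def hinge_subgradient(w, x, y):
    # The hinge loss is not differentiable at the kink y * <w, x> = 1;
    # there, any value between the two one-sided slopes is a valid
    # sub-gradient. We pick -y * x whenever the margin is violated or
    # exactly met, and the zero vector otherwise.
    if y * np.dot(w, x) <= 1.0:
        return -y * x
    return np.zeros_like(w)

# Sub-gradient descent on a tiny, linearly separable toy set
# (hypothetical data, chosen only to make the sketch runnable).
X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.5, -1.0], [-1.0, -2.0]])
Y = np.array([1.0, 1.0, -1.0, -1.0])

w = np.zeros(2)
lr = 0.1
for _ in range(100):
    g = sum(hinge_subgradient(w, x, y) for x, y in zip(X, Y)) / len(Y)
    w -= lr * g

# After training, every sample should sit on the correct side
# of the decision boundary defined by w.
print(all(y * np.dot(w, x) > 0 for x, y in zip(X, Y)))
```

The key point is that the update rule never needs a true gradient: at the kink, any element of the sub-differential makes progress, which is exactly why sub-gradient methods can handle losses that are convex but not continuously differentiable.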