Watch now: Deep Learning: Loss and Optimization – Part 2 (WS 20/21)


This video explains the hinge loss and its relation to support vector machines. We also show why sub-gradients allow us to optimize functions that are not continuously differentiable. Furthermore, the hinge loss allows us to embed optimization constraints into loss functions.
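As a small illustration of the ideas above, here is a pure-Python sketch of the hinge loss max(0, 1 − y·⟨w, x⟩) and one valid sub-gradient for a linear model. The function names and the linear-model setup are illustrative assumptions, not taken from the lecture:

```python
def hinge_loss(w, xs, ys):
    """Mean hinge loss max(0, 1 - y * <w, x>) over samples, labels y in {-1, +1}."""
    total = 0.0
    for x, y in zip(xs, ys):
        margin = y * sum(wi * xi for wi, xi in zip(w, x))
        total += max(0.0, 1.0 - margin)
    return total / len(xs)

def hinge_subgradient(w, xs, ys):
    """One valid sub-gradient of the mean hinge loss.

    The hinge is not differentiable at margin == 1; there, any value between
    -y*x and 0 is a valid sub-gradient. We pick 0 at the kink (by using a
    strict inequality), which is an arbitrary but valid choice.
    """
    g = [0.0] * len(w)
    for x, y in zip(xs, ys):
        margin = y * sum(wi * xi for wi, xi in zip(w, x))
        if margin < 1.0:  # hinge is active: contribute -y * x
            for j, xj in enumerate(x):
                g[j] -= y * xj
    return [gj / len(xs) for gj in g]
```

Because a sub-gradient exists everywhere for this convex loss, plain (sub-)gradient descent with these two functions can be used even though the loss has a kink at margin 1.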

Watch on:
FAU TV
FAU TV (no memes)
YouTube

Read the Transcript (Summer 2020) at:
LME
Towards Data Science