Watch now: Deep Learning: Loss and Optimization – Part 2 (WS 20/21)


This video explains the hinge loss and its relation to support vector machines. We also show why sub-gradients allow us to optimize functions that are not continuously differentiable. Furthermore, the hinge loss enables us to embed optimization constraints into loss functions.
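As a minimal illustration of these ideas (a sketch, not code from the lecture), the hinge loss and one valid sub-gradient could be written in NumPy as follows. The kink at a margin of exactly 1 is where the loss is not differentiable; any value between 0 and -y is a valid sub-gradient there, and we simply pick 0.

```python
import numpy as np

def hinge_loss(scores, labels):
    """Mean hinge loss max(0, 1 - y * f(x)) over all samples.

    scores: model outputs f(x), shape (n,)
    labels: targets y in {-1, +1}, shape (n,)
    """
    margins = 1.0 - labels * scores
    return np.mean(np.maximum(0.0, margins))

def hinge_subgradient(scores, labels):
    """One sub-gradient of the mean hinge loss w.r.t. the scores.

    Where the margin term 1 - y * f(x) is positive, the gradient is -y;
    where it is negative, the gradient is 0. At the kink (margin exactly 1)
    the loss is not differentiable, and we choose the sub-gradient 0.
    """
    margins = 1.0 - labels * scores
    grad = np.where(margins > 0.0, -labels.astype(float), 0.0)
    return grad / len(scores)
```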

Watch on:
FAU TV
FAU TV (no memes)
YouTube

Read the Transcript (Summer 2020) at:
LME
Towards Data Science