Lecture Notes in Deep Learning: Unsupervised Learning – Part 4


Conditional & Cycle GANs

These are the lecture notes for FAU’s YouTube Lecture “Deep Learning“. This is a full transcript of the lecture video & matching slides. We hope you enjoy this as much as the videos. Of course, this transcript was created with deep learning techniques largely automatically and only minor manual modifications were performed. Try it yourself! If you spot mistakes, please let us know!

Need a cover for your new album? I GAN help you. Image created using gifify. Source: YouTube

Welcome back to deep learning! Today we want to talk about a couple of the more advanced GAN concepts, in particular, the conditional GANs and Cycle GANs.

Conditional GANs allow controlling the output with respect to a certain variable. Image under CC BY 4.0 from the Deep Learning Lecture.

So, let’s have a look at what I have here on my slides. It’s part four of our unsupervised deep learning lecture, and we start with conditional GANs. One problem that we had so far is that the generator creates a generic fake image that is not specific to a certain condition or characteristic. For example, in text-to-image generation the image should of course depend on the text, so you need to be able to model this dependency somehow. If you want to generate zeros, you don’t want to generate ones. So, you need to put in some condition that specifies whether you want to generate the digit 0, 1, 2, 3, and so on. This can be done by encoding the condition, as introduced in [15].

The GAN is controlled using a conditioning vector. Image under CC BY 4.0 from the Deep Learning Lecture.

The idea here is that, in addition to the latent vector that carries the random observation, you also have the condition, which is encoded in the conditioning vector y. You concatenate the two and use them to generate something. The discriminator then gets the generated image, but it also gets access to the conditioning vector y. So, it knows what it is supposed to see given the specific generated output of the generator. Both networks receive the conditioning, and this again results in a two-player minimax game that can be described by a loss that depends on the discriminator. The extension here is that you additionally have the conditioning on y in the loss.
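To make this concrete, here is a minimal sketch of what such a conditional generator and discriminator could look like in PyTorch. The class names, layer sizes, and the assumption of flattened MNIST-style images with one-hot labels are illustrative choices, not the exact architecture of [15]:

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Generator that concatenates the latent vector z with a one-hot condition y."""
    def __init__(self, z_dim=100, n_classes=10, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + n_classes, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Tanh(),
        )

    def forward(self, z, y):
        # The generator sees both the noise and the condition.
        return self.net(torch.cat([z, y], dim=1))

class ConditionalDiscriminator(nn.Module):
    """Discriminator that receives the condition y alongside the (real or fake) image."""
    def __init__(self, n_classes=10, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim + n_classes, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),  # logit for "real vs. fake, given y"
        )

    def forward(self, x, y):
        # The discriminator judges the image *given* the same condition.
        return self.net(torch.cat([x, y], dim=1))
```

Training then proceeds with the usual minimax game; the only change is the extra input y on both sides.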

Now, we can steer facial expressions. Image under CC BY 4.0 from the Deep Learning Lecture.

So, how does this work? You add a conditioning feature like smiling, gender, age, or other properties of the image. The generator and the discriminator then learn to operate in those modes. This gives you the ability to generate a face with a certain attribute, and the discriminator learns to judge the face given that specific attribute. Here, you see different examples of generated faces. The first row contains just random samples. The second row is conditioned on old age. The third row is conditioned on old age plus smiling, and you can see that the conditioning vector still produces similar images, but the conditions are added on top.

A GAN conditioned for age. Image created using gifify. Source: YouTube

This then allows you to create very nice things like image-to-image translation. Below, you have several examples of inputs and outputs: you can translate labels to street scenes, aerial images to maps, labels to façades, black & white to color, day to night, and edges to photos.

There are many more applications of conditional GANs. Image under CC BY 4.0 from the Deep Learning Lecture.

The idea here is that we use the label image as the conditioning vector. This leads us to the observation that domain translation is simply a conditional GAN. The positive examples given to the discriminator are real pairs, like the handbag and its edges in the example below. The negative examples are constructed by feeding the edges of the handbag to the generator, which tries to create a handbag that fools the discriminator.
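As a rough sketch, one discriminator update for this edge-to-handbag setting could look as follows. The functions G and D, the optimizer, and the assumption that the discriminator takes the conditioning edge image together with a candidate handbag and outputs a logit are hypothetical stand-ins, not the exact setup of [11]:

```python
import torch
import torch.nn.functional as F

def discriminator_step(G, D, edges, real_bags, d_optimizer):
    """One discriminator update for edge-to-handbag translation."""
    d_optimizer.zero_grad()

    # Positive example: the real handbag together with its edge image.
    real_logits = D(edges, real_bags)
    loss_real = F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))

    # Negative example: the generator turns the edges into a fake handbag
    # that is supposed to fool the discriminator.
    fake_bags = G(edges).detach()
    fake_logits = D(edges, fake_bags)
    loss_fake = F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))

    loss = loss_real + loss_fake
    loss.backward()
    d_optimizer.step()
    return loss.item()
```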

A conditional GAN from edge to handbag. Image under CC BY 4.0 from the Deep Learning Lecture.

You can see that we are able to generate really complex images just by using conditional GANs. A key problem, of course, is that the two images need to be aligned. Your conditioning image, like the edge image here, has to exactly match the respective handbag image. If it doesn’t, you won’t be able to train this. So, for domain translation using conditional GANs, you need exact matches. In many cases, you don’t have access to exact matches. Let’s say you have a scene that shows zebras: you will probably not find a paired data set that shows exactly the same scene with horses. So, you cannot simply use a conditional GAN.

Can we map zebras to horses? Image under CC BY 4.0 from the Deep Learning Lecture.

The key ingredient here is the so-called cycle consistency loss. You couple GANs with trainable inverse mappings. The key idea is that you have one conditional GAN G that takes x as the conditioning image and generates some new output. If you take this new output and use it as the conditioning input of a second GAN F, it should produce x again. So, you use the conditioning inputs to form a loop, and the key requirement is that G and F should essentially be inverses of each other.
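Written out, the cycle consistency loss from [22] enforces exactly this inverse relation via the expected L1 reconstruction errors of both cycles:

```latex
% F should invert G and vice versa: F(G(x)) \approx x and G(F(y)) \approx y.
\mathcal{L}_{\mathrm{cyc}}(G, F)
  = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[ \lVert F(G(x)) - x \rVert_1 \right]
  + \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\!\left[ \lVert G(F(y)) - y \rVert_1 \right]
```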

The cycle consistency loss allows us to train unpaired domain translation. Image under CC BY 4.0 from the Deep Learning Lecture.

So, if you take F(G(x)), you should end up with x again, and likewise G(F(y)) should give you y again. This gives rise to the following concept: you take two generators and two discriminators. One GAN G generates y from x, and one GAN F generates x from y. You still need the two discriminators Dₓ and Dᵧ. The Cycle GAN loss then adds the consistency conditions to the loss. Of course, you keep the typical discriminator losses, i.e., the original GAN losses for Dₓ and Dᵧ, which are coupled with G and F, respectively. On top, you put the cycle consistency loss. The forward cycle translates x to y and back to x again while making sure that the zebra generated in y is not recognized as fake by the discriminator. At the same time, you have the inverse cycle, which translates y into x using F and then x into y using G again while fooling the discriminator on x. So, you need the two discriminators. This gives rise to the cycle consistency loss that we have noted down for you here; you can, for example, use the expected values of L1 norms to enforce these identities. The total loss is then given as the GAN losses that we discussed earlier plus λ times the cycle consistency loss.
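Putting this together, a minimal PyTorch sketch of the generator objective could look like this. The generators G and F and the discriminators D_X and D_Y are hypothetical models; the original paper [22] uses a least-squares adversarial loss, whereas this sketch uses a standard binary cross-entropy for brevity:

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()  # adversarial loss (stand-in for the original GAN loss)
l1 = nn.L1Loss()              # cycle consistency loss

def cyclegan_generator_loss(G, F, D_X, D_Y, x, y, lam=10.0):
    """Generator objective: adversarial terms for both directions plus lambda times the cycle loss."""
    fake_y = G(x)  # e.g. horse -> zebra
    fake_x = F(y)  # e.g. zebra -> horse

    # Adversarial terms: each generator tries to make its discriminator say "real".
    logits_y = D_Y(fake_y)
    logits_x = D_X(fake_x)
    adv = bce(logits_y, torch.ones_like(logits_y)) + bce(logits_x, torch.ones_like(logits_x))

    # Cycle consistency: F(G(x)) should reconstruct x, and G(F(y)) should reconstruct y.
    cycle = l1(F(fake_y), x) + l1(G(fake_x), y)

    return adv + lam * cycle
```

Here `lam` plays the role of λ and trades the adversarial terms against the cycle consistency term; the two discriminators are updated separately with their usual real-vs-fake losses.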

Examples for Cycle GANs. Image under CC BY 4.0 from the Deep Learning Lecture.

This concept is fairly easy to grasp, and it has been applied widely. There are many, many examples: you can translate from Monet paintings to photos, from zebras to horses, from summer to winter, and perform the respective inverse operations. If you couple this with more GANs and more cycle consistency losses, you are even able to take one photograph and translate it into the styles of Monet, Van Gogh, and other artists.

Cycle GANs also find applications in autonomous driving. Image under CC BY 4.0 from the Deep Learning Lecture.

This is, of course, also interesting for autonomous driving, where you can, for example, input a scene and generate the corresponding segmentation masks. So, you can also use this for image segmentation. Here, we have an ablation study for the Cycle GAN that shows the cycle loss alone, the GAN loss alone, the GAN plus forward cycle loss, the GAN plus backward cycle loss, and the complete Cycle GAN loss. You can see that with the full Cycle GAN loss, you get much, much better back-and-forth translations when you compare them to the respective ground truth.

More exciting things coming up in this deep learning lecture. Image under CC BY 4.0 from the Deep Learning Lecture.

Okay, there are a couple more things to say about GANs, and these are the advanced GAN concepts that we’ll talk about next time in deep learning. I hope you enjoyed this video and I’m looking forward to seeing you in the next one. Good-bye!

Cycle GANs make surgical training slightly more realistic. Image created using gifify. Source: YouTube

If you liked this post, you can find more essays here, more educational material on Machine Learning here, or have a look at our Deep Learning Lecture. I would also appreciate a follow on YouTube, Twitter, Facebook, or LinkedIn in case you want to be informed about more essays, videos, and research in the future. This article is released under the Creative Commons 4.0 Attribution License and can be reprinted and modified if referenced. If you are interested in generating transcripts from video lectures try AutoBlog.

Links

Link – Variational Autoencoders
Link – NIPS 2016 GAN Tutorial of Goodfellow
Link – How to train a GAN? Tips and tricks to make GANs work (careful, not everything is true anymore!)
Link – Ever wondered about how to name your GAN?

References

[1] Xi Chen, Xi Chen, Yan Duan, et al. “InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets”. In: Advances in Neural Information Processing Systems 29. Curran Associates, Inc., 2016, pp. 2172–2180.
[2] Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, et al. “Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion”. In: Journal of Machine Learning Research 11.Dec (2010), pp. 3371–3408.
[3] Emily L. Denton, Soumith Chintala, Arthur Szlam, et al. “Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks”. In: CoRR abs/1506.05751 (2015). arXiv: 1506.05751.
[4] Richard O. Duda, Peter E. Hart, and David G. Stork. Pattern classification. 2nd ed. New York: Wiley-Interscience, Nov. 2000.
[5] Asja Fischer and Christian Igel. “Training restricted Boltzmann machines: An introduction”. In: Pattern Recognition 47.1 (2014), pp. 25–39.
[6] John Gauthier. Conditional generative adversarial networks for face generation. Mar. 17, 2015. URL: http://www.foldl.me/2015/conditional-gans-face-generation/ (visited on 01/22/2018).
[7] Ian Goodfellow. NIPS 2016 Tutorial: Generative Adversarial Networks. 2016. eprint: arXiv:1701.00160.
[8] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, et al. “GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium”. In: Advances in Neural Information Processing Systems 30. Curran Associates, Inc., 2017, pp. 6626–6637.
[9] Geoffrey E Hinton and Ruslan R Salakhutdinov. “Reducing the dimensionality of data with neural networks.” In: Science 313.5786 (July 2006), pp. 504–507.
[10] Geoffrey E. Hinton. “A Practical Guide to Training Restricted Boltzmann Machines”. In: Neural Networks: Tricks of the Trade: Second Edition. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012, pp. 599–619.
[11] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, et al. “Image-to-Image Translation with Conditional Adversarial Networks”. In: (2016). eprint: arXiv:1611.07004.
[12] Diederik P Kingma and Max Welling. “Auto-Encoding Variational Bayes”. In: arXiv e-prints, arXiv:1312.6114 (Dec. 2013), arXiv:1312.6114. arXiv: 1312.6114 [stat.ML].
[13] Jonathan Masci, Ueli Meier, Dan Ciresan, et al. “Stacked Convolutional Auto-Encoders for Hierarchical Feature Extraction”. In: Artificial Neural Networks and Machine Learning – ICANN 2011. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011, pp. 52–59.
[14] Luke Metz, Ben Poole, David Pfau, et al. “Unrolled Generative Adversarial Networks”. In: International Conference on Learning Representations. Apr. 2017. eprint: arXiv:1611.02163.
[15] Mehdi Mirza and Simon Osindero. “Conditional Generative Adversarial Nets”. In: CoRR abs/1411.1784 (2014). arXiv: 1411.1784.
[16] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. 2015. eprint: arXiv:1511.06434.
[17] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, et al. “Improved Techniques for Training GANs”. In: Advances in Neural Information Processing Systems 29. Curran Associates, Inc., 2016, pp. 2234–2242.
[18] Andrew Ng. “CS294A Lecture notes”. In: 2011.
[19] Han Zhang, Tao Xu, Hongsheng Li, et al. “StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks”. In: CoRR abs/1612.03242 (2016). arXiv: 1612.03242.
[20] Han Zhang, Tao Xu, Hongsheng Li, et al. “Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks”. In: arXiv preprint arXiv:1612.03242 (2016).
[21] Bolei Zhou, Aditya Khosla, Agata Lapedriza, et al. “Learning Deep Features for Discriminative Localization”. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, June 2016, pp. 2921–2929. arXiv: 1512.04150.
[22] Jun-Yan Zhu, Taesung Park, Phillip Isola, et al. “Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks”. In: CoRR abs/1703.10593 (2017). arXiv: 1703.10593.