Contrastive learning is a framework for learning relative similarity between samples in a distribution.
Contrastive learning frameworks can be either supervised (leveraging human annotations) or self-supervised (requiring no human annotations). In this post we are going to explore a self-supervised variant, SimCLR, introduced in “A Simple Framework for Contrastive Learning of Visual Representations” by Chen et al.
If you just want to run the code, here’s the GitHub repo.
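To give a feel for what the contrastive objective looks like, here is a minimal NumPy sketch of the NT-Xent (normalized temperature-scaled cross-entropy) loss that SimCLR optimizes. The function name, the `(2N, d)` batch layout, and the default temperature are my own illustrative conventions, not code from the paper or the repo:

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """NT-Xent loss over a batch of embeddings.

    z: array of shape (2N, d), two augmented views of N images,
       arranged so that rows i and i + N form a positive pair.
    (Layout and name are illustrative conventions, not SimCLR's code.)
    """
    # L2-normalize so dot products are cosine similarities
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = (z @ z.T) / temperature
    n = z.shape[0] // 2
    # Exclude each sample's similarity with itself from the softmax
    np.fill_diagonal(sim, -np.inf)
    # Index of the positive partner for every row
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Cross-entropy: -log softmax of the positive pair's similarity
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

Intuitively, the loss is low when the two views of the same image are close in embedding space and far from every other sample in the batch; aligned positive pairs therefore score lower than misaligned ones.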
If you are interested in making some digital mnemonics yourself, check out www.memaid.me — the simple tool I created to do this.
Many of us are striving to learn more and read more — how many popular science books did you read or listen to last year? Chances are quite a few. In the last decade we have been inundated by amazing books that simplify and summarize some of the most fascinating findings in academia. Yuval Harari’s “Sapiens” and Kahneman’s “Thinking, Fast and Slow” come to mind, to name a few. And yet older classics may still remain unread, such as Dawkins…
In this post we explore the benefits of applying self-supervised learning to the image classification problem in computer vision.
If you don’t have a clear idea of what self-supervised learning is, see my short introduction to the concept here.
If you just want to access the (dirty) code, you can find the jupyter notebook here.
We create an augmented version of the CIFAR-10 dataset in which every image is randomly rotated by 0, 90, 180, or 270 degrees. Using this rotated dataset we train a…
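The rotation pretext task above can be sketched in a few lines of NumPy: rotate each image by a random multiple of 90 degrees and use the rotation index as the classification label. The helper name and the fixed seed are illustrative assumptions, not the notebook’s actual code:

```python
import numpy as np

def make_rotation_dataset(images, seed=42):
    """Build a self-supervised pretext dataset: each image is rotated
    by a random 0/90/180/270 degrees, and the rotation index (0-3)
    becomes its label. (Helper name and seed are illustrative.)"""
    rng = np.random.default_rng(seed)
    rotated, labels = [], []
    for img in images:               # img: (H, W, C), e.g. 32x32x3 for CIFAR-10
        k = int(rng.integers(0, 4))  # number of 90-degree turns
        rotated.append(np.rot90(img, k))
        labels.append(k)
    return np.stack(rotated), np.array(labels)
```

A network trained to predict these four labels never sees a human annotation, yet it must learn features (edges, object orientation, layout) that transfer to the downstream classification task.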
“If intelligence is a cake, the bulk of the cake is self-supervised learning, the icing on the cake is supervised learning, and the cherry on the cake is reinforcement learning (RL).” — Yann LeCun, head of Facebook AI
If you want to see an easy example of self-supervised learning, with code, check out my other post here.
Self-supervised learning has lately become a hot topic in the field of Machine Learning, with several giants of the field (such as Geoffrey Hinton and Yann LeCun) promoting its importance. …