# Knowledge Distillation: A Good Teacher is Patient and Consistent

INFO:
The optimal training recipe for knowledge distillation is consistency and patience. Consistency means showing the teacher and the student the exact same view of each image, and additionally widening the support of the input distribution with MixUp augmentation. Patience means enduring long training schedules. Exciting to see advances in model compression that make stronger models more widely usable! (A minimal code sketch of this recipe follows after the chapter list.)

Paper Links:
- Knowledge Distillation: A Good Teacher is Patient and Consistent: https://arxiv.org/abs/2106.05237
- Does Knowledge Distillation Really Work? https://arxiv.org/pdf/2106.05945.pdf
- Meta Pseudo Labels: https://arxiv.org/pdf/2003.10580.pdf
- MixUp Augmentation: https://keras.io/examples/vision/mixup/
- Scaling Vision Transformers: https://arxiv.org/pdf/2106.04560.pdf
- Well-Read Students Learn Better: https://arxiv.org/pdf/1908.08962.pdf

Chapters:
- 0:00 Paper Title
- 0:05 Model Compression
- 1:11 Limitations of Pruning
- 2:13 Consistency in Distillation
- 4:08 Comparison with Meta Pseudo Labels
- 5:10 MixUp Augmentation
- 6:52 Patience in Distillation
- 8:53 Results
- 10:37 Exploring Knowledge Distillation

Thanks for watching! Please Subscribe!
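To make the recipe concrete, here is a minimal sketch of one "consistent" distillation step, assuming PyTorch and hypothetical `teacher` and `student` image classifiers that return logits; the exact loss, MixUp variant, and hyperparameters used in the paper may differ.

```python
import torch
import torch.nn.functional as F

def mixup(images, alpha=0.2):
    """Blend each image with a randomly permuted partner from the same batch (MixUp)."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    index = torch.randperm(images.size(0), device=images.device)
    return lam * images + (1.0 - lam) * images[index]

def distillation_step(teacher, student, images, temperature=1.0):
    """KL divergence between teacher and student computed on the *same* MixUp view."""
    mixed = mixup(images)                 # consistency: one shared augmented view for both models
    with torch.no_grad():                 # the teacher is frozen during distillation
        teacher_logits = teacher(mixed)
    student_logits = student(mixed)
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2                  # usual rescaling for temperature-softened targets
    return loss
```

Patience then amounts to repeating this step for a much longer training schedule than ordinary supervised training would use.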