Canonical Correlation Analysis (CCA) is one of the methods used to explore deep neural networks. Methods like SVCCA and CKA give us insight into how a neural network processes its inputs, typically by serving as similarity measures between activation matrices. In this video, we look at a number of papers that compare different neural networks to each other, as well as papers that compare the representations learned at different layers of a single network.
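As a rough sketch of the idea, here is the linear variant of CKA from the "Similarity of Neural Network Representations Revisited" paper listed below. The matrices X and Y stand in for the activations of two layers (or two models) on the same set of inputs; the random data and shapes here are purely illustrative, not from any of the papers.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between activation matrices X (n, p1) and Y (n, p2),
    where each row holds one example's activations at some layer."""
    X = X - X.mean(axis=0)          # center each feature column
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return num / den

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))      # stand-in for one layer's activations
Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))
print(linear_cka(X, X))             # identical representations -> 1.0
print(linear_cka(X, X @ Q))         # invariant to orthogonal transforms -> ~1.0
```

A score near 1 means the two sets of activations encode essentially the same information up to rotation, which is why CKA is useful for comparing layers and models.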
Contents:
Introduction (0:00)
Correlation (0:54)
How CCA is used to compare representations (2:50)
SVCCA and Computer Vision models (4:40)
Examining NLP language models with SVCCA: LSTM (9:01)
PWCCA - Projection Weighted Canonical Correlation Analysis (10:22)
How multilingual BERT represents different languages (10:43)
CKA: Centered Kernel Alignment (15:25)
BERT, GPT2, ELMo similarity analysis with CKA (16:07)
Convnets, Resnets, deep nets and wide nets (17:35)
Conclusion (18:59)
Explainable AI Cheat Sheet: [ Link ]
1) Explainable AI Intro: [ Link ]
2) Neural Activations & Dataset Examples: [ Link ]
3) Probing Classifiers: A Gentle Intro (Explainable AI for Deep Learning): [ Link ]
-----
Papers:
SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability
[ Link ]
Understanding Learning Dynamics Of Language Models with SVCCA
[ Link ]
Insights on representational similarity in neural networks with canonical correlation
[ Link ]
BERT is Not an Interlingua and the Bias of Tokenization
[ Link ]
Similarity of Neural Network Representations Revisited
[ Link ]
Similarity Analysis of Contextual Word Representation Models
[ Link ]
Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth
[ Link ]
-----
Twitter: [ Link ]
Blog: [ Link ]
Mailing List: [ Link ]
------
More videos by Jay:
The Narrated Transformer Language Model
[ Link ]
Jay's Visual Intro to AI
[ Link ]
How GPT-3 Works - Easily Explained with Animations
[ Link ]
Up and Down the Ladder of Abstraction [interactive article by Bret Victor, 2011]
[ Link ]
The Unreasonable Effectiveness of RNNs (Article and Visualization Commentary) [2015 article]
[ Link ]