Probing classifiers are an explainable-AI tool for making sense of the representations that deep neural networks learn for their inputs. A probe tells us whether the numeric representation at the output (or in a middle layer) of a model encodes a property of the input we care about (for example, whether an input token is a verb or a noun). Using probes, machine learning researchers have gained a better understanding of the differences between models and between the layers of a single model.
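To make this concrete, here is a minimal sketch of how a probe is trained (a toy Python/scikit-learn setup of my own, not the code from the video; the random vectors below merely stand in for a frozen model's hidden states):

# A toy probe: given fixed representation vectors and labels for a
# linguistic property (e.g. noun vs. verb), train a small linear
# classifier on them and measure its accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for 768-dimensional token representations from a frozen model.
representations = rng.normal(size=(1000, 768))
labels = rng.integers(0, 2, size=1000)  # e.g. 0 = noun, 1 = verb

X_train, X_test, y_train, y_test = train_test_split(
    representations, labels, test_size=0.2, random_state=0)

# The probe itself: only the probe's weights are trained, never the model's.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe accuracy:", probe.score(X_test, y_test))

With random vectors the accuracy stays near chance; that chance baseline is what the control tasks discussed at 9:48 make rigorous. With real model activations, accuracy well above that baseline suggests the property is linearly decodable from the representation.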
Introduction (0:00)
Motivation for probes in Machine Translation (0:40)
Probing sentence encoders (3:42)
How a probe is trained (5:32)
Probing token representations (8:08)
Size of probes (9:15)
Better metrics using Control Tasks (9:48)
Conclusion (10:32)
Explainable AI Cheat Sheet: [ Link ]
1) Explainable AI Intro: [ Link ]
2) Neural Activations & Dataset Examples: [ Link ]
-----
What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties
[ Link ]
Diagnostic classifiers: revealing how neural networks process hierarchical structure
[ Link ]
Designing and Interpreting Probes with Control Tasks
[ Link ]
Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks
[ Link ]
What do you learn from context? Probing for sentence structure in contextualized word representations
[ Link ]
-----
Twitter: [ Link ]
Blog: [ Link ]
Mailing List: [ Link ]
------
More videos by Jay:
The Narrated Transformer Language Model
[ Link ]
Jay's Visual Intro to AI
[ Link ]
How GPT-3 Works - Easily Explained with Animations
[ Link ]
Up and Down the Ladder of Abstraction [interactive article by Bret Victor, 2011]
[ Link ]
The Unreasonable Effectiveness of RNNs (Article and Visualization Commentary) [2015 article]
[ Link ]