Transfer learning has had an enormous impact on Natural Language Processing. Fine-tuning on a small labeled dataset to build more accurate models has become the norm. But can we build classifiers for arbitrary labels with absolutely zero training labels? We'll explore a few such zero-shot techniques in NLP that deliver decent performance without using any training labels.
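One common family of zero-shot techniques embeds the input text and a natural-language description of each candidate label in a shared vector space, then picks the label whose embedding is closest to the text's. Below is a minimal toy sketch of that idea; the bag-of-words "encoder", function names, and label descriptions are illustrative stand-ins (a real system would use a pretrained sentence encoder or an NLI model), not the specific methods from the talk:

```python
from collections import Counter
import math

def embed(text):
    # Hypothetical stand-in for a pretrained sentence encoder:
    # a lowercase bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def zero_shot_classify(text, labels, label_descriptions):
    # No training examples are needed -- only the label names
    # and their natural-language descriptions.
    text_vec = embed(text)
    scores = {lab: cosine(text_vec, embed(label_descriptions[lab]))
              for lab in labels}
    return max(scores, key=scores.get), scores

# Illustrative labels and descriptions (not from the talk).
label_desc = {
    "sports": "game match team player score sports",
    "politics": "government election policy politics vote",
}
pred, scores = zero_shot_classify(
    "the team won the match with a late score",
    ["sports", "politics"],
    label_desc,
)
print(pred)
```

Swapping the toy `embed` for a real sentence encoder (or reframing each label as an entailment hypothesis for an NLI model) turns this sketch into the kind of zero-shot classifier the talk discusses.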
To connect with Manu: [ Link ]
Powered by Restream [ Link ]