Let’s take a look at how Google Translate’s neural network works behind the scenes! Read the references below for the best introduction to Neural Machine Translation; a small code sketch of the core encoder-decoder idea follows them.
SUBSCRIBE TO CODE EMPORIUM: [ Link ]
To submit your video to CS Dojo Community, please use this link: [ Link ]
REFERENCES
[1] The landmark LSTM paper (Hochreiter & Schmidhuber, 1997): [ Link ]
[2] The landmark Neural Machine Translation (NMT) paper (Kalchbrenner & Blunsom, 2013): [ Link ]
[3] Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation (Cho et al., 2014): [ Link ]
[4] Sequence to Sequence Learning with Neural Networks (Sutskever et al., 2014): [ Link ]
[5] The paper that introduced bidirectional RNNs (Schuster & Paliwal, 1997): [ Link ]
[6] On the Properties of Neural Machine Translation: Encoder-Decoder Approaches (Cho et al., 2014): [ Link ] (see Fig. 4(a))
[7] Neural Machine Translation by Jointly Learning to Align and Translate (Bahdanau et al., 2016): [ Link ] (see Section 5.2.2)
[8] Google Translate's main paper (Wu et al., 2016): [ Link ]
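
For intuition, here is a minimal NumPy sketch of the encoder-decoder-with-attention idea that the papers above develop (refs [3], [4], [7]). This is not Google's actual GNMT code: GNMT uses deep stacked LSTMs and beam search (refs [1], [8]), while this sketch uses a single vanilla RNN layer, additive (Bahdanau-style) attention, random untrained weights, and made-up tiny dimensions, so the output tokens are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, not GNMT's real dimensions.
V, H = 10, 8  # vocabulary size, hidden size

# Randomly initialized parameters stand in for trained weights.
E   = rng.normal(0, 0.1, (V, H))                                      # shared embedding table
W_e = rng.normal(0, 0.1, (H, H)); U_e = rng.normal(0, 0.1, (H, H))    # encoder RNN
W_d = rng.normal(0, 0.1, (H, H)); U_d = rng.normal(0, 0.1, (H, H))    # decoder RNN
W_a = rng.normal(0, 0.1, (H, H)); U_a = rng.normal(0, 0.1, (H, H))    # attention
v_a = rng.normal(0, 0.1, (H,))
W_o = rng.normal(0, 0.1, (H + H, V))                                  # output projection

def rnn_step(tok, h, W, U):
    """One vanilla RNN step (GNMT uses stacked LSTMs instead; refs [1], [8])."""
    return np.tanh(E[tok] @ W + h @ U)

def encode(src):
    """Run the encoder over the source sentence, keeping every hidden state."""
    h, states = np.zeros(H), []
    for tok in src:
        h = rnn_step(tok, h, W_e, U_e)
        states.append(h)
    return np.stack(states)  # shape (source_length, H)

def attend(s, enc_states):
    """Additive (Bahdanau) attention: score each source state against decoder state s (ref [7])."""
    scores = np.tanh(enc_states @ U_a + s @ W_a) @ v_a
    weights = np.exp(scores - scores.max()); weights /= weights.sum()  # softmax
    return weights @ enc_states  # context vector: weighted sum of source states

def translate(src, bos=0, eos=1, max_len=12):
    """Greedy decoding: feed each predicted token back into the decoder."""
    enc_states = encode(src)
    s, tok, out = enc_states[-1], bos, []
    for _ in range(max_len):
        s = rnn_step(tok, s, W_d, U_d)
        context = attend(s, enc_states)
        logits = np.concatenate([s, context]) @ W_o
        tok = int(np.argmax(logits))
        if tok == eos:
            break
        out.append(tok)
    return out

print(translate([2, 5, 7, 3]))  # token IDs in, token IDs out (weights are untrained, so arbitrary)
```

Real systems train all of these weights end to end on parallel text; the greedy argmax here stands in for the beam search that GNMT actually uses at decoding time (ref [8]).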