MIT 18.065 Matrix Methods in Data Analysis, Signal Processing, and Machine Learning, Spring 2018
Instructor: Gilbert Strang
View the complete course: [ Link ]
YouTube Playlist: [ Link ]
In this lecture, Professor Strang presents Professor Sra's theorem, which proves the convergence of stochastic gradient descent (SGD). He then reviews backpropagation, a method for computing derivatives quickly using the chain rule.
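The chain-rule bookkeeping behind backpropagation can be sketched in a few lines. This is a minimal illustration (not code from the lecture): a forward pass through f(x) = sin(x²) stores the intermediate value, and a backward pass multiplies the local derivatives, checked against a finite-difference approximation.

```python
import math

def f_and_grad(x):
    # Forward pass: store intermediates
    u = x * x          # u = x^2
    y = math.sin(u)    # y = sin(u)
    # Backward pass: chain rule dy/dx = (dy/du) * (du/dx)
    dy_du = math.cos(u)
    du_dx = 2.0 * x
    return y, dy_du * du_dx

y, g = f_and_grad(1.5)

# Compare with a central finite difference
h = 1e-6
num = (math.sin((1.5 + h) ** 2) - math.sin((1.5 - h) ** 2)) / (2 * h)
print(abs(g - num) < 1e-5)
```

In a neural network the same idea applies layer by layer: the forward pass caches each intermediate, and the backward pass reuses those caches so all partial derivatives come out in one sweep.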
Note: Videos of Lectures 28 and 29 are not available because those were in-class lab sessions that were not recorded.
License: Creative Commons BY-NC-SA
More information at [ Link ]
More courses at [ Link ]