In this video, we'll learn about K-fold cross-validation and how it can be used for selecting optimal tuning parameters, choosing between models, and selecting features. We'll compare cross-validation with the train/test split procedure, and we'll also discuss some variations of cross-validation that can result in more accurate estimates of model performance.
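To make those uses concrete, here is a minimal scikit-learn sketch of cross-validation for parameter tuning and model comparison. The iris dataset, the KNN and logistic regression models, and the n_neighbors range are illustrative assumptions, not necessarily the exact setup used in the video:

```python
# A minimal sketch: 10-fold cross-validation for selecting a tuning
# parameter (n_neighbors for KNN) and for comparing two models.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Parameter tuning: score each candidate n_neighbors with 10-fold CV
# and keep the value with the highest mean accuracy.
k_range = list(range(1, 31))
k_scores = []
for k in k_range:
    knn = KNeighborsClassifier(n_neighbors=k)
    scores = cross_val_score(knn, X, y, cv=10, scoring='accuracy')
    k_scores.append(scores.mean())
best_k = k_range[k_scores.index(max(k_scores))]
print('best n_neighbors:', best_k)

# Model comparison: cross-validate two different models on the same
# data and compare their mean accuracies.
knn = KNeighborsClassifier(n_neighbors=best_k)
logreg = LogisticRegression(max_iter=1000)
print('KNN:', cross_val_score(knn, X, y, cv=10, scoring='accuracy').mean())
print('LogReg:', cross_val_score(logreg, X, y, cv=10, scoring='accuracy').mean())
```

Because every observation serves in both training and testing across the folds, the mean cross-validated score is a less variable estimate of out-of-sample performance than a single train/test split.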
Download the notebook: [ Link ]
Documentation on cross-validation: [ Link ]
Documentation on model evaluation: [ Link ]
GitHub issue on negative mean squared error: [ Link ]
An Introduction to Statistical Learning: [ Link ]
K-fold and leave-one-out cross-validation: [ Link ]
Cross-validation the right and wrong ways: [ Link ]
Accurately Measuring Model Prediction Error: [ Link ]
An Introduction to Feature Selection: [ Link ]
Harvard CS109: [ Link ]
Cross-validation pitfalls: [ Link ]
WANT TO GET BETTER AT MACHINE LEARNING? HERE ARE YOUR NEXT STEPS:
1) WATCH my scikit-learn video series:
[ Link ]
2) SUBSCRIBE for more videos:
[ Link ]
3) JOIN "Data School Insiders" to access bonus content:
[ Link ]
4) ENROLL in my Machine Learning course:
[ Link ]
5) LET'S CONNECT!
- Newsletter: [ Link ]
- Twitter: [ Link ]
- Facebook: [ Link ]
- LinkedIn: [ Link ]