Topics covered in today's lecture:
- Two resampling methods: (a) cross-validation and (b) the bootstrap.
- The difference between test error and training error: training error is typically much lower than test error.
- Training error is therefore not a good surrogate for test error.
- Understanding overfitting through a plot of training and test error against model flexibility (illustrated in the first sketch after this list).
- Bias: how far, on average, the model's estimate is from the truth.
- Variance: how much the estimate varies around its average.
- Bias and variance together make up the prediction error (the decomposition is written out after this list).
- K-fold cross-validation: a technique for estimating test error.
- The data are divided into K roughly equal-sized parts (folds); each fold in turn serves as the validation set while the model is trained on the remaining K-1 folds (see the K-fold sketch after this list).
- Comparing the true test MSE with the cross-validation estimate of test MSE.
- One issue with cross-validation: the estimate of prediction error is biased upward, because each model is trained on fewer observations than the full dataset.
- This bias can be minimized by taking K = n (leave-one-out CV), but then the variance of the estimate is high; K = 5 or 10 is a common compromise.
- Understanding when cross-validation should be used.
- Right and wrong ways to apply cross-validation: any supervised filtering or feature-selection step must be repeated inside each fold, not done once on the full dataset (see the final sketch after this list).
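
A minimal sketch of the training-versus-test-error point above, in Python with NumPy. The sine data, noise level, and polynomial degrees are assumptions chosen for illustration; the pattern to observe is that training MSE keeps falling as the degree grows while test MSE eventually rises, which is overfitting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: y = sin(x) + Gaussian noise.
x = rng.uniform(-3, 3, 60)
y = np.sin(x) + rng.normal(scale=0.4, size=x.size)
x_test = rng.uniform(-3, 3, 500)
y_test = np.sin(x_test) + rng.normal(scale=0.4, size=x_test.size)

for degree in (1, 3, 9, 15):
    coefs = np.polyfit(x, y, degree)                      # fit polynomial of this degree
    train_mse = np.mean((np.polyval(coefs, x) - y) ** 2)
    test_mse = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```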
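
The bias and variance bullets combine into the standard decomposition of expected prediction error at a fixed point x_0, where f is the true function, f-hat the fitted model, and sigma^2 the irreducible noise variance:

```latex
\mathbb{E}\!\left[\bigl(y_0 - \hat{f}(x_0)\bigr)^2\right]
  = \underbrace{\bigl(\mathbb{E}[\hat{f}(x_0)] - f(x_0)\bigr)^2}_{\text{Bias}^2}
  + \underbrace{\operatorname{Var}\!\bigl(\hat{f}(x_0)\bigr)}_{\text{Variance}}
  + \underbrace{\sigma_\varepsilon^2}_{\text{irreducible error}}
```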
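
A from-scratch K-fold cross-validation sketch in Python/NumPy. The function name and the generic fit/predict hooks are assumptions for illustration; the CV estimate is the average of the K per-fold MSEs.

```python
import numpy as np

def k_fold_cv_mse(x, y, fit, predict, k=5, seed=0):
    """K-fold CV estimate of test MSE (hypothetical generic fit/predict hooks)."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)    # K roughly equal-sized folds
    fold_mse = []
    for i in range(k):
        val = folds[i]                                    # fold i is the validation set
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(x[train], y[train])                   # train on the other K-1 folds
        fold_mse.append(np.mean((y[val] - predict(model, x[val])) ** 2))
    return np.mean(fold_mse)                              # average the K fold MSEs

# Example: 5-fold CV error of a cubic polynomial on toy data.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 60)
y = np.sin(x) + rng.normal(scale=0.4, size=x.size)
cv_mse = k_fold_cv_mse(x, y,
                       fit=lambda xt, yt: np.polyfit(xt, yt, 3),
                       predict=np.polyval)
print(f"5-fold CV estimate of test MSE: {cv_mse:.3f}")
```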
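
Finally, a sketch of the right and wrong ways to apply cross-validation when a filtering step is involved. The data here are pure noise (y is independent of X), so an honest error estimate should sit at or above Var(y) = 1; screening the features on the full dataset before cross-validating leaks validation information and yields a deceptively low estimate. Sizes and helper names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 50, 1000, 5
X = rng.normal(size=(n, p))          # pure-noise predictors
y = rng.normal(size=n)               # response independent of X

def top_corr_features(X, y, m=10):
    """Indices of the m columns most correlated with y (hypothetical screen)."""
    corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return np.argsort(corr)[-m:]

folds = np.array_split(rng.permutation(n), k)

# WRONG: screen features once on the full dataset, then cross-validate the fit.
keep = top_corr_features(X, y)
wrong = []
for i in range(k):
    val = folds[i]
    tr = np.concatenate([folds[j] for j in range(k) if j != i])
    beta, *_ = np.linalg.lstsq(X[tr][:, keep], y[tr], rcond=None)
    wrong.append(np.mean((y[val] - X[val][:, keep] @ beta) ** 2))

# RIGHT: repeat the screening inside every fold, using training data only.
right = []
for i in range(k):
    val = folds[i]
    tr = np.concatenate([folds[j] for j in range(k) if j != i])
    keep_i = top_corr_features(X[tr], y[tr])
    beta, *_ = np.linalg.lstsq(X[tr][:, keep_i], y[tr], rcond=None)
    right.append(np.mean((y[val] - X[val][:, keep_i] @ beta) ** 2))

print(f"wrong-way CV MSE: {np.mean(wrong):.2f} (optimistically low)")
print(f"right-way CV MSE: {np.mean(right):.2f} (honest estimate)")
```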