Validation split (machine learning)
To ensure that a machine learning model can generalize well to data it has not seen before, it is important to split the original dataset into several sets, including training data, cross-validation data, and test data, in order to obtain the best possible predictive model.
Training set
In machine learning, collecting sufficient data is critical to producing algorithms that make accurate predictions. A predictive model is created by training it on a training set of known examples.
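As a minimal sketch of this step (assuming the scikit-learn library and a synthetic dataset, both of which are illustrative choices rather than part of the definition), a model is fitted to a set of known examples:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a collected dataset of known examples:
# feature matrix X and corresponding labels y
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Training: the model learns its parameters from the training examples
model = LogisticRegression(max_iter=1000)
model.fit(X, y)
```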
Test set
A credible method is required to measure the model's accuracy after training. Testing on the same examples used for training is unlikely to reflect the model's true predictive accuracy, as the model is biased towards the training set. The original dataset is therefore usually split to set aside a test set. The test set is often then used to select the algorithm with the best performance.
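For example (a sketch assuming scikit-learn; any comparable library offers the same kind of holdout split), setting aside a test set makes the gap between training accuracy and accuracy on unseen data visible:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Hold out 20% of the original data as a test set
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Accuracy on the training set is typically optimistic;
# the held-out test set gives a less biased estimate
print("training accuracy:", model.score(X_train, y_train))
print("test accuracy:", model.score(X_test, y_test))
```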
Cross-validation set
Selecting an algorithm based on test-set performance can introduce further bias. Because the algorithm is chosen for its performance on that same test set, its score there is no longer an accurate representation of its accuracy on examples it has never seen (a test set is finite and does not necessarily cover the wide variety of real examples). The selected algorithm will therefore tend to carry an optimistic estimate of the generalization error. Consequently, the original dataset is split further to include a cross-validation set. The cross-validation set is used to select the best-performing algorithm, and the test set is used to estimate the generalization error of that algorithm. The roles of the three sets are summarized below, and a sketch of the full workflow follows the list.
- training set: data points used to train the algorithm
- cross-validation set: data points used to select the best algorithm
- test set: data points used to estimate the generalization error/accuracy of the selected algorithm
A typical split of the original dataset is 60% training set, 20% cross-validation set, and 20% test set.
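A sketch of the full workflow under these proportions (again assuming scikit-learn; the two candidate models are arbitrary illustrative choices): the 60/20/20 split is produced by two successive holdout splits, the cross-validation set selects the winning algorithm, and the test set is used only once at the end.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# 60/20/20 split: first hold out 20% for the test set, then take
# 25% of the remaining 80% (i.e. 20% of the original) for cross-validation
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.20, random_state=0)
X_train, X_cv, y_train, y_cv = train_test_split(
    X_rest, y_rest, test_size=0.25, random_state=0)

# Train each candidate on the training set and select the one
# with the best cross-validation-set accuracy
candidates = [LogisticRegression(max_iter=1000), KNeighborsClassifier()]
best = max(candidates,
           key=lambda m: m.fit(X_train, y_train).score(X_cv, y_cv))

# The test set is touched only once, to estimate the generalization
# accuracy of the selected algorithm
print("selected:", type(best).__name__)
print("estimated generalization accuracy:", best.score(X_test, y_test))
```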