Evaluating the Model
Before deploying a model we will want to have some measure of confidence in its predictions. This is the role of evaluation: we evaluate the performance of a model to gain an expectation of how well it will perform on new observations.
We evaluate a model by making predictions on observations that were not used in building the model. These observations will need to have a known outcome so that we can compare the model prediction against the known outcome. This is the purpose of the test dataset as explained in Section 7.12.
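To make the comparisons below concrete, here is a minimal sketch of how the predictions predict_te and the known outcomes actual_te might be obtained. The rpart() decision tree and the iris dataset are illustrative choices only, not taken from the text:

library(rpart)

## Illustrative 70/30 partition of a sample dataset into train and test.
set.seed(42)
ds <- iris
tr <- sample(nrow(ds), 0.7 * nrow(ds))
te <- setdiff(seq_len(nrow(ds)), tr)

## Build the model on the training observations only.
model <- rpart(Species ~ ., data = ds[tr, ])

## Predict on the test dataset and record the known outcomes.
predict_te <- predict(model, ds[te, ], type = "class")
actual_te  <- ds[te, "Species"]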
## Compare the first few predictions against the known outcomes.
head(predict_te) == head(actual_te)
## Count the matches amongst those first few observations.
sum(head(predict_te) == head(actual_te))
## Count the matches across the full test dataset.
sum(predict_te == actual_te)
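A raw count of correct predictions is more informative as a proportion. Continuing the sketch above:

## Accuracy: the proportion of test observations predicted correctly.
acc <- sum(predict_te == actual_te) / length(actual_te)
acc

## The error rate is its complement.
1 - acc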