In this Mage Academy lesson on model evaluation, we’ll learn how to calculate and understand the accuracy of Machine Learning models.
Accuracy is a classification metric used to measure your model’s performance. It is calculated as the number of predictions the model got correct, divided by the total number of predictions it made.
It’s all about hitting the target in the center
A high-accuracy model means its predictions are right most of the time, while low accuracy means its predictions are often wrong. As a metric, you’ll want a high-accuracy model, but beware of imbalanced datasets, where accuracy alone can be misleading.
The number of predictions a model gets correct is the sum of the true positives and true negatives. The total number of predictions is simply the sum of all four quadrants of the confusion matrix.
Confusion Matrix for Remarketing
Following the calculation, we have (515 + 8) / (515 + 5 + 32 + 8), which is 523 / 560. This means the total accuracy of this model is 93.39%.
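The arithmetic above can be checked in a few lines of Python. The mapping of the four counts to confusion-matrix quadrants (which are true positives versus true negatives) is an assumption for illustration; only the four numbers come from the example.

```python
# Confusion-matrix counts from the remarketing example.
# Which count is TP vs. TN is assumed for illustration.
tp, tn = 515, 8      # correct predictions
fp, fn = 5, 32       # incorrect predictions

correct = tp + tn            # 523
total = tp + tn + fp + fn    # 560
accuracy = correct / total
print(round(accuracy * 100, 2))  # → 93.39
```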
How to code
Accuracy is a metric that can be calculated from scratch or with the scikit-learn (sklearn) library.
First, to calculate the total number of predictions, we take the length of y_pred, our predictions.
```python
total = len(y_pred)
```
Then, to count the correct predictions, we go through each value and check whether the prediction matches the true label.
```python
correct = 0
for i in range(total):
    # Prediction matches
    if y_pred[i] == y_true[i]:
        correct += 1
```
Finally, we can calculate the accuracy as the number of matches, divided by the total.
```python
print("Correct:", correct)
print("Total:", total)
print("Accuracy:", correct / total)
```
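Putting the three snippets together, a self-contained run might look like this. The sample labels here are invented for illustration:

```python
# Hypothetical sample labels for illustration
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 0]

total = len(y_pred)
correct = 0
for i in range(total):
    # Prediction matches
    if y_pred[i] == y_true[i]:
        correct += 1

print("Correct:", correct)   # 4 of the 6 predictions match
print("Total:", total)
print("Accuracy:", correct / total)
```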
scikit-learn (sklearn) is a Python library that can calculate the accuracy of a model using the method accuracy_score. You supply it with two things: the set of predicted values and the set of actual values. From there, it finds the number of matches and the total size.
Here there are 2 matches and 4 total, so an accuracy of 0.5.
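A minimal sketch of that call, with example labels chosen so that 2 of the 4 predictions match (the specific label values are an assumption, not from the original example):

```python
from sklearn.metrics import accuracy_score

# Hypothetical labels: positions 0 and 3 match, positions 1 and 2 do not
y_true = [0, 1, 2, 3]
y_pred = [0, 2, 1, 3]

print(accuracy_score(y_true, y_pred))  # → 0.5
```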
As an additional feature of the accuracy_score method, you may set the normalize flag to False to get the count of correct predictions instead.
```python
accuracy_score(y_true, y_pred, normalize=False)
```
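With the same hypothetical 2-of-4 labels as above, normalize=False returns the raw count of correct predictions rather than a fraction:

```python
from sklearn.metrics import accuracy_score

# Hypothetical labels: 2 of the 4 predictions match
y_true = [0, 1, 2, 3]
y_pred = [0, 2, 1, 3]

print(accuracy_score(y_true, y_pred, normalize=False))  # → 2
```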