Accuracy_Score in Sklearn

A crucial stage in the data science workflow is measuring our model's accuracy with an appropriate metric. In this tutorial, we will learn two ways to calculate the predicted class accuracy of a sample: manually and using Python's scikit-learn library. Here is a rundown of the topics discussed in this tutorial.
What is Accuracy?

Accuracy is one of the most widely used metrics for evaluating the performance of classification models. It represents the percentage of labels that our model predicted correctly. For instance, if our model correctly classified 80 of 100 labels, its accuracy would be 0.80.

Creating a Function to Compute the Accuracy Score

Let's create a Python function to compute the accuracy score of the predicted values, given that we already have the sample's true labels and the labels predicted by the model (see the first sketch below). The function accepts the classification model's predicted labels and the sample's true labels as its arguments and computes the accuracy score: it iterates through each pair of true and predicted labels in parallel to count the correct predictions, then divides that count by the total number of labels. Applying the function to a sample (second sketch below) produces:

Output: 0.9777777777777777

We get 0.978 as the accuracy score for the Support Vector Classification model's predictions. Note that using NumPy arrays to vectorize the equality comparison can make the function more efficient (third sketch below).
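The original listing is not preserved in this copy of the tutorial, so the following is a minimal sketch of such a function; the name compute_accuracy and its argument names are our own choices.

Code (reconstructed sketch):

def compute_accuracy(y_true, y_pred):
    """Return the fraction of predictions that match the true labels."""
    correct_predictions = 0
    # Walk through the true and predicted labels in parallel.
    for true_label, predicted_label in zip(y_true, y_pred):
        if true_label == predicted_label:
            correct_predictions += 1
    # Accuracy = correct predictions / total number of labels.
    return correct_predictions / len(y_true)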
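The tutorial's sample is likewise not shown, so this next sketch assumes a plausible setup: the iris dataset, a 70/30 train/test split, and a Support Vector Classification model with default settings. The exact score depends on the dataset, the split, and the random_state, so treat the reported 0.9777777777777777 (44 of 45 test labels correct) as the tutorial's result rather than a guaranteed output of this sketch.

Code (reconstructed sketch):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Assumed sample: the iris dataset with a 70/30 train/test split.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Fit a Support Vector Classification model and predict the test labels.
model = SVC()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

# compute_accuracy is the function sketched above.
print(compute_accuracy(y_test, y_pred))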
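And as a sketch of the NumPy note above: elementwise equality on arrays yields a boolean array whose mean is exactly the fraction of correct predictions.

Code (reconstructed sketch):

import numpy as np

def compute_accuracy_np(y_true, y_pred):
    # The boolean array (y_true == y_pred) holds True for every correct
    # prediction; its mean is the accuracy score.
    return np.mean(np.asarray(y_true) == np.asarray(y_pred))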
Accuracy using Sklearn's accuracy_score()

The accuracy_score() function of sklearn.metrics accepts the sample's true labels and the labels predicted by the model as its parameters and computes the accuracy score as a float value, so it can likewise be used to obtain the accuracy score in Python. The sklearn.metrics module offers several helpful functions for computing typical evaluation metrics. Let's use sklearn's accuracy_score() function to compute the Support Vector Classification model's accuracy score on the same sample dataset as earlier.

sklearn.metrics.accuracy_score(y_true, y_pred, *, normalize=True, sample_weight=None)

We use this for computing the accuracy score of a classification. In multi-label classification, this method calculates subset accuracy: the set of labels predicted for a sample must exactly match the corresponding set of labels in y_true.

Parameters

y_true: 1d array-like, or label indicator array. The ground-truth (correct) labels of the sample.
y_pred: 1d array-like, or label indicator array. The labels predicted by the model.
normalize: bool, default=True. If False, return the number of correctly classified samples; otherwise, return the fraction of correctly classified samples.
sample_weight: array-like of shape (n_samples,), default=None. Optional sample weights.

Returns

score: float. If normalize=True, the fraction of correctly classified samples; if normalize=False, the number of correctly classified samples. The best performance is 1 with normalize=True and the number of samples with normalize=False.
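As a quick illustration of the normalize parameter (the toy labels here mirror the example in scikit-learn's documentation):

Code:

from sklearn.metrics import accuracy_score

y_true = [0, 1, 2, 3]
y_pred = [0, 2, 1, 3]

# Fraction of correctly classified samples (the default behaviour).
print(accuracy_score(y_true, y_pred))                   # 0.5
# Count of correctly classified samples instead of a fraction.
print(accuracy_score(y_true, y_pred, normalize=False))  # 2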
Example of Accuracy_score

Running accuracy_score() on the Support Vector Classification model's predictions (first sketch below) gives the same result as our manual function.

Output: 0.9777777777777777

When using binary label indicators with multiple labels (second sketch below):

Output: 0.5

How scikit-learn's accuracy_score Works

In multi-label classification, the accuracy_score method of the sklearn.metrics package measures subset accuracy: the labels the model has predicted for a given sample must exactly match the sample's true labels. Accuracy describes the model's behaviour across all classes, which makes it helpful when all of the classes are comparably significant. The model's accuracy is the ratio of the count of correct predictions to the total number of samples (equivalently, the total number of predictions):

Accuracy = Correct Predictions / Total Predictions

The third sketch below reconstructs a listing that prints a confusion matrix together with the accuracy computed from the same labels; its output follows the sketch.
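The first listing is not preserved, so this sketch repeats the assumed iris/SVC setup from earlier and scores the predictions with sklearn instead of the manual function.

Code (reconstructed sketch):

from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Assumed setup, as in the earlier sketch.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
y_pred = SVC().fit(X_train, y_train).predict(X_test)

# Same computation as the manual function, now via sklearn.
print(accuracy_score(y_test, y_pred))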
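For the multi-label case, the following sketch (mirroring scikit-learn's documentation) scores two samples with two binary labels each; only the second row matches exactly, so subset accuracy is 0.5.

Code:

import numpy as np
from sklearn.metrics import accuracy_score

# Only the second row of y_true matches the all-ones prediction exactly.
y_true = np.array([[0, 1], [1, 1]])
y_pred = np.ones((2, 2))

print(accuracy_score(y_true, y_pred))  # 0.5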
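Finally, the confusion-matrix listing is also missing; the label vectors below are our own, chosen so that confusion_matrix() and accuracy_score() reproduce the output shown after the sketch.

Code (reconstructed sketch):

from sklearn.metrics import accuracy_score, confusion_matrix

# Hypothetical labels: 2 true negatives, 0 true positives,
# 2 false positives, and 3 false negatives.
y_true = [0, 0, 0, 0, 1, 1, 1]
y_pred = [0, 0, 1, 1, 0, 0, 0]

print("Confusion Matrix:")
print(confusion_matrix(y_true, y_pred))

# Accuracy = (2 + 0) correct predictions out of 7 samples.
print(accuracy_score(y_true, y_pred))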
Output:

Confusion Matrix:
[[2 2]
 [3 0]]
0.2857142857142857

So, in this tutorial, we learned about scikit-learn's accuracy_score in Python and examined some implementation examples.