
accuracy_score in Sklearn

Overview

In Python, the accuracy_score method of the sklearn.metrics module compares a set of predicted labels against the actual (true) labels.

This Python lesson will cover a variety of examples related to sklearn accuracy_score, as well as the scikit-learn accuracy score module in general. We will discuss the following topics:

  • Learning accuracy_score using sklearn
  • Examples of sklearn accuracy_score
  • Working of sklearn accuracy_score

Importing the accuracy_score method :

We'll import the accuracy_score method into our application in order to use it, as demonstrated below :
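from sklearn.metrics import accuracy_score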

accuracy_score in Sklearn :

In Python scikit-learn, the accuracy_score method determines accuracy as either the fraction or the count of correct predictions.

In mathematical terms, the proportion of true positives and true negatives to all positive and negative occurrences is referred to as the model accuracy, which is a performance statistic for machine learning classification models. In other words, accuracy indicates the proportion of times our machine learning model will predict a result accurately out of all the predictions it has made.

For instance : Assume that we used a dataset of 100 records to test our machine learning model, and that it correctly predicted 95 out of those 100 instances. The accuracy in this situation would be (95/100) = 95%. Although the accuracy rate is high, it does not tell us anything about the kinds of mistakes our machine learning model makes or how it will behave on new, unseen data.

Mathematically, accuracy is the ratio of the number of predictions that came true, both positive and negative, to the total number of predictions made :
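Accuracy = (TP + TN) / (TP + TN + FP + FN)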

Where,

  • TP = True Positive : True positives (TP) measure how well the model predicts the positive class: the instance is positive, and the model predicts it as positive. True positives matter when counting how many positives our model predicts correctly. In a binary classification problem with classes "X" and "Y", for example, a true positive would be an instance of class "X" that our model correctly identified as class "X". If the model is intended to determine whether or not an email is spam, a true positive occurs when the model correctly determines that an email is spam. The true positive rate is the percentage of all actual positive instances that are correctly classified as positive. True positives are significant because they show how effectively our model works on positive data. In the example confusion matrix used here, 201 of the 234 actual positives were predicted correctly, so True Positive has a value of 201.
  • FP = False Positive : False positives occur whenever the model predicts that an instance belongs to a class when in fact it does not. False positives can cause problems because they can lead to bad decisions; a high false positive rate in a medical diagnosis model, for instance, could lead to patients receiving needless treatment. False positives hurt classification models because they reduce the model's overall accuracy. One way of measuring them is the false positive rate: the percentage of all actual negative instances that are incorrectly classified as positive. False positives can sometimes be acceptable, despite initially seeming harmful to the model. In medical applications, for instance, it is frequently preferable to err on the side of caution and tolerate some false positives rather than miss a diagnosis entirely. In other applications, such as spam filtering, false positives can be quite expensive. Therefore, when selecting among several classification models, it is crucial to carefully analyse the trade-offs involved. In the example confusion matrix, 3 of the 34 actual negatives were incorrectly predicted as positives, so False Positive has a value of 3.
  • TN = True Negative : True negatives are the outcomes that the model correctly predicts as negative. For instance, if the model is used to determine whether a person has a disease, a true negative occurs when the model indicates that the individual does not have the disease and they really do not. True negatives are one metric used to gauge how effectively a classification model is working; a large proportion of true negatives generally indicates good model performance. True negatives are combined with false negatives, true positives, and false positives to calculate other performance metrics, including accuracy, precision, recall, and F1 score. Although the true negative count offers useful information about the performance of the classification model, it should be read alongside other metrics to obtain a full view of the model's correctness. In the example confusion matrix, 31 of the 34 actual negatives were predicted correctly, so True Negative has a value of 31.
  • FN = False Negative : A false negative occurs when the model predicts that an instance is negative when it is actually positive. False negatives can be extremely expensive, particularly in medicine. For instance, if a tumour screening test concludes that a patient has no tumour when in fact they do, the disease could advance untreated. Other fields, including security and fraud detection, are equally susceptible to false negatives; in some circumstances, a false negative can result in granting access to someone, or allowing a transaction, that should not have been permitted. When assessing the effectiveness of a classification model, it is crucial to take false negatives into consideration because they are frequently more significant than false positives. In the example confusion matrix, 33 of the 234 actual positives were incorrectly predicted as negatives, so False Negative has a value of 33 (a worked accuracy calculation with these counts follows this list).
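Putting the example counts above together (TP = 201, TN = 31, FP = 3, FN = 33, i.e. 234 actual positives and 34 actual negatives; the counts themselves are only illustrative), the accuracy formula gives:

Accuracy = (201 + 31) / (201 + 31 + 3 + 33) = 232 / 268 ≈ 0.866, or about 86.6%.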

Syntax :

Using the accuracy_score method from Sklearn, we can determine accuracy as shown below :
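In recent versions of scikit-learn, the signature of the function (with the optional arguments keyword-only) is:

sklearn.metrics.accuracy_score(y_true, y_pred, *, normalize=True, sample_weight=None)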

Parameters :

The following parameters are accepted by the accuracy_score function :

  • y_true : The ground-truth (correct) labels, given as a 1-D array, label indicator array, or sparse matrix.
  • y_pred : The predicted labels as returned by a classifier, given as a label indicator array or sparse matrix.
  • normalize : If this value is True, the fraction of accurate predictions is returned; otherwise, the total number of accurate predictions is returned. The default value of normalize is True.
  • sample_weight : Optional per-sample weights; when supplied, each prediction contributes its weight to the accuracy rather than counting equally.

Return value :

Based on the value of the normalize parameter, this function returns either the fraction of accurate predictions or the total number of accurate predictions.
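A small sketch of how these parameters change the return value (the label values and weights below are invented for illustration):

from sklearn.metrics import accuracy_score

y_true = [0, 1, 1, 0]            # ground-truth labels (assumed for this sketch)
y_pred = [0, 1, 0, 0]            # predicted labels; only index 2 is wrong

print(accuracy_score(y_true, y_pred))                   # fraction of correct predictions: 0.75
print(accuracy_score(y_true, y_pred, normalize=False))  # count of correct predictions: 3
print(accuracy_score(y_true, y_pred, sample_weight=[1, 1, 2, 1]))  # weighted: (1 + 1 + 0 + 1) / 5 = 0.6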

Working of the Sklearn accuracy_score :

In multilabel classification, the accuracy_score function of scikit-learn calculates subset accuracy (see the sketch after the list below).

  • The set of labels predicted for a sample must exactly match the corresponding set of labels in y_true.
  • Accuracy describes the model's behaviour across all classes; it is helpful when all of the classes are equally important.
  • The model's accuracy is determined as the ratio of the number of accurate predictions to the total number of predictions.
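As a rough sketch of subset accuracy on multilabel data (the label matrices below are invented for illustration), a sample only counts as correct when its entire row of labels matches:

import numpy as np
from sklearn.metrics import accuracy_score

# each row is one sample, each column one label
y_true = np.array([[1, 0, 1],
                   [0, 1, 0]])
y_pred = np.array([[1, 0, 1],
                   [0, 1, 1]])   # the second sample has one incorrect label

print(accuracy_score(y_true, y_pred))   # 0.5 -- only the first sample matches its labels exactly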

Example :

Use of Python's accuracy_score function is seen in the code below.
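A minimal version of the example, assuming label values that are consistent with the explanation and output below:

from sklearn.metrics import accuracy_score

# true (actual) labels and the labels predicted by a classifier -- values assumed for illustration
true_labels = [0, 1, 1, 0, 1]
pred_labels = [0, 0, 1, 0, 1]

print("The percent accuracy of the prediction is : ", accuracy_score(true_labels, pred_labels))
print("The total no. of accurate predictions are : ", accuracy_score(true_labels, pred_labels, normalize=False))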

Output:

The percent accuracy of the prediction is :  0.8
The total no. of accurate predictions are :  4

Explanation :

From the sklearn.metrics library, we first imported the accuracy_score function into our program. Then the true labels and predicted labels were defined. Next, we used the accuracy_score function to get the fraction of labels that were correctly classified. The accuracy_score function returns 0.8, since true_labels and pred_labels disagree on only 1 value and agree on 4 values. When the accuracy_score function is called with normalize set to False, it returns 4, the number of correct predictions.

Example 2 :
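A second sketch, this time with multiclass labels chosen (as an assumption) so that only 4 of the 9 predictions are correct, giving a fraction of 4/9 ≈ 0.444:

from sklearn.metrics import accuracy_score

# multiclass labels -- values assumed for illustration; 4 of the 9 predictions match
true_labels = [0, 1, 2, 0, 1, 2, 0, 1, 2]
pred_labels = [0, 2, 1, 0, 0, 2, 1, 1, 1]

print("The percent accuracy of the prediction is : ", accuracy_score(true_labels, pred_labels))
print("The total no. of accurate predictions are : ", accuracy_score(true_labels, pred_labels, normalize=False))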

Output:

The percent accuracy of the prediction is :  0.4444444444444444
The total no. of accurate predictions are :  4





