## Evaluation Metrics for Object Detection and Recognition

## Introduction

Computer vision applications are now used widely, and image editing and detection are among the most common tasks developers perform. Object detection and recognition are used in e-commerce, CCTV surveillance, medical imaging, and autonomous driving. All of these tasks share the same core activities: finding objects and localizing them within images and videos.

## Evaluation Metrics

Evaluation metrics help assess the performance of object detection and recognition models in computer vision and deep learning tasks. These metrics quantify the accuracy of a model and give valuable insight into its merits and demerits. Terms like IoU, AP, mIoU, and mAP appear frequently in papers and are the most common metrics used in object detection. In this article, we will cover the important metrics and the strengths and weaknesses of each.

## Common Evaluation Metrics

These metrics evaluate the accuracy and performance of most machine learning and computer vision tasks. They are:

- Accuracy
- F1 Score
- Dice Coefficient
- Intersection over Union (IoU)
- Mean Intersection over Union (mIoU)
- Average Precision (AP)
- Mean Average Precision (mAP)
Now, let's dive deep into each evaluation metric.

## 1. Accuracy

Accuracy is the most basic metric; it describes the model's overall correctness in object recognition and detection. Accuracy is mostly used in classification tasks with discrete outcomes such as yes or no. However, it can also be applied to object detection and recognition, since these tasks include a classification component whose correctness can be measured. The formula for Accuracy is:

Accuracy = (TP + TN) / (TP + TN + FP + FN)

Where:

- TP (True Positives): correctly detected objects
- TN (True Negatives): correctly rejected non-objects
- FP (False Positives): incorrect detections
- FN (False Negatives): missed objects
Accuracy gives a simple overall picture of how the model is performing, but it can be misleading on imbalanced datasets, where a dominant class can mask poor performance on rare classes.

## 2. F1 Score

Before learning what the F1 score is, we have to look at two important terms: Precision and Recall.

Precision = TP / (TP + FP)

Recall = TP / (TP + FN)
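As a quick sketch, assuming the detections have already been matched against the ground truth to produce true positive, false positive, and false negative counts, Precision and Recall can be computed directly from those counts:

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Compute precision and recall from raw detection counts."""
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    return precision, recall

# Example: 8 correct detections, 2 false alarms, 2 missed objects
p, r = precision_recall(tp=8, fp=2, fn=2)
print(f"Precision: {p:.2f}, Recall: {r:.2f}")  # Precision: 0.80, Recall: 0.80
```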
Now, the F1 score combines these two metrics, Precision and Recall, into a single value. The F1 score ranges between 0 and 1: a score near 1 means the model performs well, while a score near 0 means it performs poorly. The formula for the F1 score is:

F1 = 2 × (Precision × Recall) / (Precision + Recall)

## 3. Dice Coefficient

Image segmentation tasks frequently use the Dice Coefficient for analyzing images. This metric measures the similarity between the predicted mask and the ground truth mask. The Dice Coefficient ranges from 0 (no overlap) to 1 (perfect overlap) and is also known as the Sørensen-Dice index. It is calculated as follows:

Dice = 2|A ∩ B| / (|A| + |B|)

Where:

- |A ∩ B| is the size of the intersection between the predicted mask (A) and the ground truth mask (B).
- |A| and |B| are the sizes of the individual masks.

## 4. Intersection over Union (IoU)

Intersection over Union is a common metric in object detection and image segmentation tasks. It measures the overlap between the predicted region (a bounding box or segmentation mask) and the ground truth. The IoU metric ranges from 0 (no overlap) to 1 (perfect overlap). IoU can be calculated with the following steps:

- Calculating the Area of Union
- Calculating the Area of Intersection (the overlap between the two boxes)
- Dividing the Area of Intersection by the Area of Union
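The steps above can be sketched for axis-aligned boxes; the `(x1, y1, x2, y2)` coordinate convention used here is an assumption:

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Area of intersection: overlap of the two boxes (zero if disjoint)
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    # Area of union: sum of the two areas minus the intersection
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```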
The formula is as follows:

IoU = Area of Intersection / Area of Union

## 5. Mean Intersection over Union (mIoU)

Mean Intersection over Union (mIoU) is the primary metric for evaluating the accuracy of image segmentation results. It measures how well the predicted segmentation masks match the ground truth masks by averaging the IoU over all classes: the greater the overlap between the predicted and ground truth regions, the higher the mIoU. The steps for calculating mIoU are as follows:

- Calculate the IoU for each class
- Calculate the average (mean) of all IoU values
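A minimal sketch of these two steps, assuming the per-class IoU values have already been computed:

```python
def mean_iou(per_class_iou):
    """Average the per-class IoU values into a single mIoU score."""
    return sum(per_class_iou) / len(per_class_iou)

# Hypothetical per-class IoUs for a 3-class segmentation model
print(f"mIoU: {mean_iou([0.82, 0.64, 0.73]):.2f}")  # mIoU: 0.73
```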
The formula for calculating mIoU is:

mIoU = (1/n) × Σ IoUᵢ

where n is the total number of classes and IoUᵢ is the IoU for class i.

## 6. Average Precision (AP)

Average Precision measures the ability of a model to rank its detections so that correct ones are retrieved first. It is calculated by generating a Precision-Recall curve (PR curve) and then computing the area under that curve. The steps are:

- First, compute the Precision and Recall at different confidence thresholds for every class. This gives a set of precision-recall points. Plot these points on a graph.
- With Precision on the y-axis and Recall on the x-axis, this produces the PR curve.
- Now compute the area under the PR curve. This area is the Average Precision for that specific class.
- If there are multiple classes, compute AP for every class, and then calculate the mean (average) of these individual AP values to obtain the final AP.
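One common way to carry out these steps is to sort detections by confidence, sweep the threshold over them, and numerically integrate the resulting PR curve. A minimal sketch for a single class (the example scores and match labels below are made up):

```python
def average_precision(scores, labels, total_positives):
    """AP for one class: area under the precision-recall curve.

    scores: confidence of each detection; labels: 1 if the detection
    matched a ground truth object, else 0; total_positives: number of
    ground truth objects (needed for recall).
    """
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for i in order:
        if labels[i]:
            tp += 1
        else:
            fp += 1
        precision = tp / (tp + fp)
        recall = tp / total_positives
        # Rectangular integration of the PR curve
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap

print(average_precision([0.9, 0.8, 0.7, 0.6], [1, 0, 1, 1], 3))  # 29/36 ≈ 0.806
```

Real benchmarks often smooth the curve first (for example, the 11-point or all-point interpolation used in the PASCAL VOC and COCO protocols), so the exact number can differ slightly from this plain rectangular sum.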
## 7. Mean Average Precision (mAP)

While Average Precision assesses the model on individual classes, mean Average Precision assesses the model's precision across the whole task. It is mostly used when there are multiple classes. The steps for calculating mAP are as follows:

- Calculate the AP for each separate class
- Now, calculate the mean of all the above AP values
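These two steps reduce to averaging the per-class AP values; the class names and AP numbers below are illustrative:

```python
def mean_average_precision(ap_per_class):
    """mAP: the mean of the per-class AP values."""
    return sum(ap_per_class.values()) / len(ap_per_class)

ap_per_class = {"car": 0.91, "person": 0.78, "bicycle": 0.65}
print(f"mAP: {mean_average_precision(ap_per_class):.2f}")  # mAP: 0.78
```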
## Python Implementation
```text
Precision: 1.00
Recall: 1.00
F1 Score: 1.00
Accuracy: 1.00
Dice Coefficient: 2.00
Average Precision (AP): 1.00
Mean Average Precision (mAP): 1.00
Mean Intersection over Union (mIoU): 0.76
```
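The script that produced the output above is not preserved, but a minimal, self-contained sketch of computing several of these metrics from a pair of flattened binary masks (the example masks are made up) could look like this:

```python
def confusion_counts(pred, gt):
    """TP, FP, FN, TN for two flat binary masks of equal length."""
    tp = sum(1 for p, g in zip(pred, gt) if p == 1 and g == 1)
    fp = sum(1 for p, g in zip(pred, gt) if p == 1 and g == 0)
    fn = sum(1 for p, g in zip(pred, gt) if p == 0 and g == 1)
    tn = sum(1 for p, g in zip(pred, gt) if p == 0 and g == 0)
    return tp, fp, fn, tn

# Hypothetical predicted and ground truth binary masks (flattened)
pred = [1, 1, 0, 0, 1, 0, 1, 1]
gt   = [1, 1, 0, 0, 1, 1, 0, 1]

tp, fp, fn, tn = confusion_counts(pred, gt)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
accuracy = (tp + tn) / (tp + tn + fp + fn)
dice = 2 * tp / (2 * tp + fp + fn)  # equals F1 for binary masks
iou = tp / (tp + fp + fn)           # mask IoU from the same counts

print(f"Precision: {precision:.2f}")   # Precision: 0.80
print(f"Recall: {recall:.2f}")         # Recall: 0.80
print(f"F1 Score: {f1:.2f}")           # F1 Score: 0.80
print(f"Accuracy: {accuracy:.2f}")     # Accuracy: 0.75
print(f"Dice Coefficient: {dice:.2f}") # Dice Coefficient: 0.80
print(f"IoU: {iou:.2f}")               # IoU: 0.67
```

Note that by the definition given earlier, the Dice Coefficient is bounded by 1, so a reported value of 2.00 would indicate a bug in the computation.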