
OpenCV Blob Detection

Blob stands for Binary Large Object and refers to a group of connected pixels in a binary image. The term "large" indicates that only objects of a certain size are of interest, while smaller binary objects are usually treated as noise. BLOB analysis consists of three processes.

BLOB extraction

Blob extraction means separating the BLOBs (objects) in a binary image. A BLOB is a group of connected pixels. Whether two pixels are connected is determined by connectivity, i.e., which pixels count as neighbors of a given pixel. There are two types of connectivity: 8-connectivity and 4-connectivity. 8-connectivity also treats diagonal pixels as neighbors, so it generally groups pixels more faithfully than 4-connectivity.

BLOB representation

BLOB representation simply means converting a BLOB into a few representative numbers. After BLOB extraction, the next step is to classify the various BLOBs. The BLOB representation process has two steps. In the first step, each BLOB is described by several characteristics (features); in the second step, matching methods compare the features of each BLOB.

BLOB classification

Here we determine the type of a BLOB, for example, whether a given BLOB is a circle or not. The question is how to decide which BLOBs are circles based on the features described earlier. For this purpose, we generally build a prototype model of the object we are looking for and compare each BLOB's features against it.

How to perform Background Subtraction?

Background subtraction is widely used for generating a foreground mask: a binary image containing the pixels that belong to moving objects in the scene. Background subtraction calculates this foreground mask by subtracting the current frame from a background model.

There are two main steps in background modeling:

  • Background Initialization- In this step, an initial model of the background is computed.
  • Background Update- In this step, the model is updated to adapt to possible changes in the scene.

Manual subtraction from the first frame

First, we import the libraries and load the video. Next, we take the first frame of the video, convert it to grayscale, and apply a Gaussian blur to remove some noise. We then use a while loop to load the frames one by one. This brings us to the core of background subtraction, where we calculate the absolute difference between the first frame and the current frame.

Example-1

Subtraction using Subtractor MOG2

OpenCV provides the subtractor MOG2, which is more effective than the manual method. Subtractor MOG2 has the benefit of working with a history of frames. The syntax is as follows:

The first argument, history, is the number of previous frames used to build the background model (500 by default).

The second argument, varThreshold, is the threshold used when evaluating the difference between a pixel and the model to decide whether it belongs to the background. A lower threshold detects more variation, at the cost of a noisier image.

The third argument, detectShadows, enables the part of the algorithm that detects and marks shadows, so they can be removed from the mask if desired.

Example-2:

In the above code, cv2.VideoCapture("filename") accepts the full path to the video file, and cv2.createBackgroundSubtractorMOG2() creates the subtractor that separates the foreground from the background of that video.





