Wednesday, December 5, 2012

OpenCV Motion Detection Based Action Trigger - Part 2

This post continues my previous post, where this motion detection application was introduced. Let's examine what the program does in order to detect motion.

Motion detection, in simple terms, involves comparing camera images with past images and detecting whether anything significant changed between them. The program captures frames from the camera and performs the following steps on each frame to detect motion:

1. The first step is to scale the image down to a smaller size. A smaller image needs fewer resources for processing. As long as the object we want to detect still has a reasonable size in the scaled-down image, accuracy does not suffer. We scale the image down to a maximum size of 800 x 600, but in practice even a 400 x 300 frame is good enough.
[Image: Original frame after size reduction]

2. Smooth the image. Smoothing removes some noise and reduces spurious triggering of the motion detector. We use bilateral smoothing, which is slower than the usual Gaussian smoothing but preserves edges, thus retaining the features whose movement we want to detect.

3. Increase the contrast and adjust the brightness of the image. This improves image quality and the separation between different objects. If a dark object is being detected against a light background, increasing contrast and brightness can completely wash out the background, making it easier to detect the object! The contrast and brightness levels are adjustable using the corresponding sliders.

4. Enhance the edges in the image by detecting them and adding the result back to the image. Making the edges prominent aids motion detection by amplifying changes around them. I chose the Laplacian operator for edge detection because it uses derivatives in both the x and y directions, making it better than plain Sobel, and it is less aggressive than Canny edge detection.

[Image: Edges detected in the frame]

[Image: After contrast/brightness adjustment and edge enhancement]

5. Maintain a running average of the frames processed so far. This running average acts as a memory of the past. The default running-average weight is 0.02, which gives it a memory of approximately 1/0.02 = 50 frames; with 30 fps video, that is approximately 2 seconds. The weight is adjustable using the corresponding slider.

6. Subtract the current frame from the running average. The difference roughly shows the blobs that have moved. Convert this difference image to grayscale and then threshold it to make the blobs clear.
[Image: Difference with moving average]

7. Dilate and erode the thresholded image to remove noise and enhance the blobs. First erode twice to remove noise, then dilate a few times to close gaps in the blobs, and finally erode again to restore some of the original proportions. The threshold level and the erosion and dilation amounts are adjustable using the corresponding sliders. Increasing dilation and erosion helps form contiguous blobs, but can cause noise to be detected as blobs in a noisy image.
[Image: Thresholded difference with blobs detected (overlaid on the original)]

8. Find the contours of the blobs and determine the bounding rectangle of each one. The bounding rectangle gives an idea of the blob's size. Blobs smaller than the expected object size are discarded in this step.
[Image: Points of movement marked]

9. Act on the detected motion points. If the program is in configuration mode, draw a marker around each detected point of movement; otherwise, execute the trigger action.

This is an overview of generic motion detection. Applying special processing with knowledge of the specific kind of images or videos can enhance accuracy. To grab the source code of this application, visit my previous post (OpenCV Motion Detection Based Action Trigger - Part 1).
