In this blog post, I am glad to present the evolution of the algorithm we built for detecting black frames in a live stream.
Black detection is one of the quality checks we perform as part of our monitoring service, which verifies that our playout systems are working correctly. Detecting black video in the live stream produced by our playout is a key requirement. Our goal was an algorithm that adds minimal CPU load and processes each frame in less than one frame duration.
Video black detection
Detecting black in a YUV image requires checking that Y, the luma, is close to 0%. We used the root mean square (RMS) of the Y (luma) values of all the pixels in the image. Checking UV (chroma) is unnecessary: when the luminance, i.e., the amount of light, is 0%, the image is black regardless of chroma.
Here’s the equation we used to compute the RMS of Y (luma):
$$\mathrm{RMS} = \sqrt{\frac{1}{N \cdot M}\sum_{i=1}^{N}\sum_{j=1}^{M} f(i, j)^2}$$

where N is the width of the image, M is the height, and f(i, j) is the luma value of the pixel at position (i, j).
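The formula maps directly onto a few lines of NumPy. This is a minimal sketch, not our production code: the function name and the normalization to a percentage of the 8-bit maximum (255) are illustrative choices.

```python
import numpy as np

def luma_rms_percent(y_plane: np.ndarray) -> float:
    """RMS of the Y (luma) plane, expressed as a percentage
    of the maximum 8-bit sample value (255)."""
    y = y_plane.astype(np.float64)
    rms = np.sqrt(np.mean(y * y))
    return 100.0 * rms / 255.0

# A fully black 8-bit frame has an RMS of 0%.
black = np.zeros((1080, 1920), dtype=np.uint8)
print(luma_rms_percent(black))  # 0.0

# A mid-gray frame (Y = 128) sits at roughly 50%.
gray = np.full((1080, 1920), 128, dtype=np.uint8)
print(round(luma_rms_percent(gray), 1))  # 50.2
```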
Once we had the RMS value of luma (Y) for a frame, we had to decide a threshold below which a frame can be termed black. Since the broadcast-safe range for luma is 16-235, and to account for encode-decode losses, we concluded that a frame with an RMS value below 8% can be called black; let's call this rms_threshold.
Now that we have established a black detection mechanism for a single frame, let's extend it to video.
To detect black in video, we chose to use two inputs: the duration for which black must be continuously detected before we call the video black, and the duration of non-black after which we stop calling the video black. Let's refer to them as black_inp_threshold_ms and black_out_threshold_ms. Using these two thresholds, we were able to detect black video.
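The two thresholds amount to a small hysteresis state machine over per-frame decisions. A minimal sketch, assuming a fixed frame rate to convert the millisecond thresholds into frame counts; the class and attribute names are illustrative, not from our codebase:

```python
class BlackDetector:
    """Hysteresis over per-frame black decisions: report 'black' only
    after black_inp_threshold_ms of continuous black, and clear it only
    after black_out_threshold_ms of continuous non-black."""

    def __init__(self, fps: float,
                 black_inp_threshold_ms: float,
                 black_out_threshold_ms: float):
        frame_ms = 1000.0 / fps
        self.in_frames = int(black_inp_threshold_ms / frame_ms)
        self.out_frames = int(black_out_threshold_ms / frame_ms)
        self.black_run = 0      # consecutive black frames seen
        self.nonblack_run = 0   # consecutive non-black frames seen
        self.is_black = False   # currently reported state

    def update(self, frame_is_black: bool) -> bool:
        if frame_is_black:
            self.black_run += 1
            self.nonblack_run = 0
            if not self.is_black and self.black_run >= self.in_frames:
                self.is_black = True
        else:
            self.nonblack_run += 1
            self.black_run = 0
            if self.is_black and self.nonblack_run >= self.out_frames:
                self.is_black = False
        return self.is_black

# At 25 fps, 200 ms = 5 frames in, 120 ms = 3 frames out.
det = BlackDetector(fps=25, black_inp_threshold_ms=200, black_out_threshold_ms=120)
states = [det.update(True) for _ in range(6)]
print(states)  # [False, False, False, False, True, True]
```

The state only flips once a run of identical per-frame results reaches the corresponding threshold, so a single flashed frame in either direction does not toggle the reported state.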
Overcoming a challenge
When a frame contained only small non-black regions, its RMS stayed below rms_threshold, so the frame was wrongly detected as black. For example, the frame in Figure 1 was detected as black:
Figure 1: Frame being detected as black
To overcome this issue, we divide the frame into N full-width horizontal slices, compute the RMS value for each slice, and compare it with rms_threshold. If the RMS of any slice exceeds rms_threshold, the frame is non-black. With this algorithm we eliminated such false positives.
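A sketch of the sliced check in NumPy, continuing the percentage convention from the earlier formula. The slice count of 8 and the example frame are illustrative assumptions; the small bright region slips under the whole-frame threshold but dominates the slice that contains it:

```python
import numpy as np

RMS_THRESHOLD = 8.0  # percent, as derived above
NUM_SLICES = 8       # illustrative slice count, not the production value

def rms_percent(y: np.ndarray) -> float:
    """Luma RMS as a percentage of the 8-bit maximum (255)."""
    y = y.astype(np.float64)
    return 100.0 * np.sqrt(np.mean(y * y)) / 255.0

def is_black_frame_sliced(y_plane: np.ndarray,
                          num_slices: int = NUM_SLICES) -> bool:
    """Black only if every full-width horizontal slice stays below the threshold."""
    return all(rms_percent(s) <= RMS_THRESHOLD
               for s in np.array_split(y_plane, num_slices, axis=0))

# Mostly black frame with a small bright, logo-sized region:
frame = np.zeros((720, 1280), dtype=np.uint8)
frame[20:80, 100:200] = 235

print(round(rms_percent(frame), 2))  # ~7.44: whole-frame RMS is under 8%
print(is_black_frame_sliced(frame))  # False: the slice holding the region fails
```

The whole-frame RMS dilutes the bright region across nearly a million pixels, while within a single slice the same region accounts for a much larger share of the energy, pushing that slice's RMS well past the threshold.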
Figure 2. Frame with slices depicted by dotted white lines
We then optimized our implementation using Intel® Integrated Performance Primitives (IPP). Our black detection implementation can be found on GitHub.
Thanks to Swapnil Dabhade and Viswanath Bathina from Video Engineering Team @Amagi for their inputs.