
Open CV : Real-time video streaming with graphical elements

Updated: Nov 19, 2021

OpenCV (Open Source Computer Vision) is a library of programming functions mainly aimed at real-time computer vision. Originally developed by Intel, it was later supported by Willow Garage, then Itseez (which was later acquired by Intel). The library is cross-platform and free for use under the open-source BSD license. OpenCV supports the deep learning frameworks TensorFlow, Torch/PyTorch and Caffe.

quote from Wikipedia

OpenCV's Python implementation lets you create a video streaming application in a couple of lines of code. You can even add a Haar cascade classifier to detect faces in real time. Here's a really simple example:

# coding: utf-8
import cv2

# create a Haar feature-based cascade classifier for faces
face_classifier = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
# capture video from the default camera (id=0)
video_capture = cv2.VideoCapture(0)
# BGR color code for the rectangle drawn around a detected face
white = (255, 255, 255)
# line thickness used when drawing the rectangle
line_thickness = 2
# Full screen mode
cv2.namedWindow("Video", cv2.WND_PROP_FULLSCREEN)
cv2.setWindowProperty("Video", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
# Main loop (endless video capture)
while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()
    # Convert to gray scale (for better face detection)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    faces = face_classifier.detectMultiScale(gray)

    # Analyze only in case of presence of faces
    if len(faces) > 0:
        # draw a white rectangle around each face
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x + w, y + h), white, line_thickness)
    # Display the resulting frame
    cv2.imshow('Video', frame)

    # press 'q' to quit
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
video_capture.release()
cv2.destroyAllWindows()
But what if you wanted to add some graphical elements to it? One option is to use Tkinter and add buttons to a bottom panel, as in this example.

However, if you add graphical elements that way, you will not be able to process images directly in your main widget. Thus, if you want to add a logo directly under a detected face, you need to process each frame and draw the image onto it yourself.
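The idea can be sketched with plain NumPy slice assignment, since OpenCV frames are just NumPy arrays (a minimal standalone sketch with dummy arrays, not the article's actual frame or logo):

```python
import numpy as np

# A dummy 100x100 BGR "frame" (stands in for a captured video frame)
frame = np.zeros((100, 100, 3), dtype=np.uint8)
# A dummy 20x20 white "logo"
logo = np.full((20, 20, 3), 255, dtype=np.uint8)

# Suppose the detected face occupies the rectangle (x, y, w, h)
x, y, w, h = 30, 30, 40, 40
# Place the logo directly under the face by overwriting that region of the frame
logo_y = y + h
frame[logo_y:logo_y + logo.shape[0], x:x + logo.shape[1]] = logo
```

In the real application, frame would come from video_capture.read() and (x, y, w, h) from detectMultiScale.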

In this article we will create an application that:

  1. Detects a face using a Haar cascade classifier

  2. Analyses the face using TensorFlow

  3. Dynamically adds a logo to a video streaming application according to the classification results

Up we go!

1. First you need to load the images

# load the logos
white_logo = cv2.imread('picto_blanc/picto1.png')
cyan_logo = cv2.imread('picto_cyan/picto1.png')
orange_logo = cv2.imread('picto_orange/picto1.png')
green_logo = cv2.imread('picto_vert/picto1.png')

2. Set full screen mode

# capture video from the default camera (id=0)
video_capture = cv2.VideoCapture(0)
# Full screen mode
cv2.namedWindow("Video", cv2.WND_PROP_FULLSCREEN)
cv2.setWindowProperty("Video", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

3. Then create the video stream and initialize your deep learning model

# These names are part of the model and cannot be changed.
output_layer = 'loss:0'
input_node = 'Placeholder:0'
with tf.Session() as sess:
    prob_tensor = sess.graph.get_tensor_by_name(output_layer)
    # Main loop (endless video capture)
    while True:
        # Capture frame-by-frame
        ret, frame = video_capture.read()
        # flip the frame to avoid a mirror effect
        frame = cv2.flip(frame, 1)
        # convert to gray scale (for better face detection)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # set the header image
        frame[0:header_logo_resized.shape[0], 0:header_logo_resized.shape[1]] = header_logo_resized

At this step we only add a big header to the main window.

4. Implement your analysis and, according to the result, set the appropriate logo.

predictions = sess.run(prob_tensor, {input_node: [augmented_image]})
# get the highest probability label
highest_probability_index = np.argmax(predictions)
predicted_tag = labels[highest_probability_index]
# calculate logo positions
# PADDING is the distance between the face rectangle and a logo
# gowning
gowningX = rectangleXup + PADDING
gowningY = rectangleYdown + PADDING
# blouse (between mask and gowning)
# int() because OpenCV only supports int coordinates
blouseX = int((gowningX + maskX) / 2)
blouseY = rectangleYdown + PADDING
if predicted_tag == 'OK':
    frameColor = green
    cv2.rectangle(frame, (rectangleXup - 1, rectangleYdown),
                (rectangleXdown + 1, rectangleYdown + LOGO_SIZE + PADDING),
                frameColor, -1)
    frame[maskY:maskY + LOGO_SIZE, maskX:maskX + LOGO_SIZE] = white_mask_compact

And so on and so forth. As you may have noticed, the logic is quite simple: for each condition we add a separate logo and compute its size according to the size of the face detected by the application.
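The sizing logic described above can be sketched as a small helper (the ratios here are assumptions for illustration, not the article's actual values):

```python
def logo_layout(x, y, w, h, padding_ratio=0.1):
    """Return (logo_size, padding) proportional to the detected face width."""
    logo_size = int(w * 0.5)          # assumed ratio: half the face width
    padding = int(w * padding_ratio)  # gap between the face box and the logo
    return logo_size, padding

# for a 40-pixel-wide face box, the logo would be 20 px with a 4 px gap
size, pad = logo_layout(30, 30, 40, 40)
```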

The real trick is here:

frame[maskY:maskY + LOGO_SIZE, maskX:maskX + LOGO_SIZE] = white_mask_compact

At this very point you add a logo at the previously defined position.
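Why this works: an OpenCV frame is a NumPy array, so slice assignment overwrites the target region in place. Note that the index order is [y, x] (rows first, then columns). A tiny standalone sketch with made-up sizes:

```python
import numpy as np

LOGO_SIZE = 4
maskX, maskY = 2, 3

# dummy frame and logo standing in for the article's real arrays
frame = np.zeros((10, 10, 3), dtype=np.uint8)
white_mask_compact = np.full((LOGO_SIZE, LOGO_SIZE, 3), 255, dtype=np.uint8)

# rows are the Y axis and columns the X axis, hence [y:y+h, x:x+w]
frame[maskY:maskY + LOGO_SIZE, maskX:maskX + LOGO_SIZE] = white_mask_compact
```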

Now you can create awesome real-time applications using OpenCV, without having to use Tkinter.

Hope you will find it useful!
