How to edit the image stream for video chat (Teams, Zoom)

dannadori
4 min read · Mar 31, 2020

The original version of this article is available here (Japanese):
https://cloud.flect.co.jp/entry/2020/03/31/162537

Hello everyone.

At the request of the Governor of Tokyo, our company is now working from home as a rule, to prevent the spread of the new coronavirus.
I am sure many people are dealing with similar problems, and I hope we can overcome this difficulty together.

If you work from home for a long time, you lose the casual conversations you used to have every day, which can be stressful.
In such a situation I would like to create moments where one can laugh and take a breather, so let me share a small trick.

This article shows how to hook into the webcam stream, process it, and deliver the result to video conferencing tools such as Microsoft Teams and Zoom.
Since I am a Linux user, this article covers Linux; other platforms may be covered at some point.

Also, please choose the time and the occasion carefully so that it really does create “a situation where one can laugh and take a breather”, and try this at your own risk (^_^)/.

Assumptions

It should work fine on most Linux systems, but the environment I worked in is Debian Buster.

$ cat /etc/debian_version
10.3

Also, if you don’t have python3 installed, please install it (for example with apt).

$ python3 --version
Python 3.7.3

Install related software

Virtual Webcam Device

This time, we will use a virtual webcam device called v4l2loopback.
https://github.com/umlaeute/v4l2loopback

We need to identify the virtual webcam device and the actual webcam, so we first check the device file of the actual webcam.
In the example below, it looks like video0 and video1 are assigned to the actual webcam.

$ ls /dev/video*
/dev/video0 /dev/video1

So, let’s install v4l2loopback.
First, git clone, make, and install it.

$ git clone https://github.com/umlaeute/v4l2loopback.git
$ cd v4l2loopback
$ make
$ sudo make install

Next, load the module. In this case, you need to add exclusive_caps=1 so that the device is recognized by Chrome. [https://github.com/umlaeute/v4l2loopback/issues/78]

$ sudo modprobe v4l2loopback exclusive_caps=1

Now that the module is loaded, let’s check the device file. In the example below, video2 has been added.

$ ls /dev/video*
/dev/video0 /dev/video1 /dev/video2
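
By the way, if it is hard to tell which device file is which, the device names under /sys/class/video4linux can help: a real webcam reports its product name, while the v4l2loopback device typically reports something like “Dummy video device”. Here is a small Python sketch of that check (the sysfs path is standard on Linux; the exact name strings are assumptions):

# List each video device together with the name its driver reports.
# A v4l2loopback device usually shows up as "Dummy video device (...)".
from pathlib import Path

for dev in sorted(Path("/sys/class/video4linux").glob("video*")):
    name = (dev / "name").read_text().strip()
    print(f"/dev/{dev.name}: {name}")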

ffmpeg

The easiest way to send data to the virtual webcam device is to use ffmpeg.
You can install it quickly with apt-get (sudo apt-get install ffmpeg) or a similar package manager.
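
As described later, the script starts ffmpeg and feeds the processed frames into it, and ffmpeg writes them to the virtual device. Conceptually the pipeline looks roughly like the sketch below; the device paths, frame size handling, and ffmpeg options here are my own assumptions for illustration, not the script’s exact code.

# Rough sketch: read frames with OpenCV, (optionally) process them, and pipe
# them as raw BGR data into ffmpeg, which writes them to the v4l2loopback
# device. Paths and options are assumptions; adjust them to your environment.
import subprocess
import cv2

REAL_CAM = 0                  # /dev/video0 (assumed to be the real webcam)
VIRTUAL_DEV = "/dev/video2"   # assumed v4l2loopback device

cap = cv2.VideoCapture(REAL_CAM)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

ffmpeg = subprocess.Popen(
    ["ffmpeg", "-loglevel", "error",
     "-f", "rawvideo", "-pix_fmt", "bgr24", "-s", f"{width}x{height}", "-i", "-",
     "-f", "v4l2", "-pix_fmt", "yuv420p", VIRTUAL_DEV],
    stdin=subprocess.PIPE)

try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # ... image processing would go here ...
        ffmpeg.stdin.write(frame.tobytes())
finally:
    cap.release()
    ffmpeg.stdin.close()
    ffmpeg.wait()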

Web camera hooks and video delivery

This time, the image processing reacts to smiles: when a smile is detected, a smile mark is overlaid on the video.

First, clone the following repository files to install the module.

$ git clone https://github.com/dannadori/WebCamHooker.git
$ cd WebCamHooker/
$ pip3 install -r requirements.txt

Next, download the Haar cascade files; you can find out more about them in the official OpenCV repository.
https://github.com/opencv/opencv/tree/master/data/haarcascades

$ wget https://raw.githubusercontent.com/opencv/opencv/master/data/haarcascades/haarcascade_frontalface_default.xml -P models/
$ wget https://raw.githubusercontent.com/opencv/opencv/master/data/haarcascades/haarcascade_smile.xml -P models/
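
For reference, smile detection with these two cascades usually follows the pattern below: detect faces on a grayscale frame first, then run the smile cascade only inside each face region. This is a simplified sketch rather than the exact logic of webcamhooker.py, and the detectMultiScale parameters are assumptions you would tune.

# Simplified smile detection with the downloaded Haar cascades.
# Adjust the cascade paths to wherever you saved the XML files.
import cv2

face_cascade = cv2.CascadeClassifier("models/haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier("models/haarcascade_smile.xml")

def detect_smiles(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    smiles = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
        roi = gray[y:y + h, x:x + w]
        for (sx, sy, sw, sh) in smile_cascade.detectMultiScale(roi, scaleFactor=1.7, minNeighbors=20):
            smiles.append((x + sx, y + sy, sw, sh))  # convert back to frame coordinates
    return smiles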

Let’s borrow the smile mark from Irasutoya.

$ wget https://4.bp.blogspot.com/-QeM2lPMumuo/UNQrby-TEPI/AAAAAAAAI7E/cZIpq3TTyas/s160/mark_face_laugh.png -P images/
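
Pasting this PNG onto a video frame is ordinary alpha blending: read the image with its alpha channel kept, then mix it into the target region. This is only a sketch under the assumption that the PNG has an alpha channel; webcamhooker.py may do it differently (see also the second reference at the end).

# Sketch of overlaying a PNG with an alpha channel onto a camera frame.
import cv2
import numpy as np

mark = cv2.imread("images/mark_face_laugh.png", cv2.IMREAD_UNCHANGED)  # BGRA

def paste_mark(frame, x, y, size=80):
    resized = cv2.resize(mark, (size, size))
    alpha = resized[:, :, 3:4].astype(float) / 255.0       # (size, size, 1)
    roi = frame[y:y + size, x:x + size].astype(float)      # (size, size, 3)
    blended = resized[:, :, :3] * alpha + roi * (1.0 - alpha)
    frame[y:y + size, x:x + size] = blended.astype(np.uint8)
    return frame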

You should end up with a file layout like this.

$ ls -1
haarcascade_frontalface_default.xml
haarcascade_smile.xml
mark_face_laugh.png
webcamhooker.py

Run the script as follows.
“input_video_num” should be the number of the actual webcam device; for /dev/video0, pass 0.
“output_video_dev” must be the device file of the virtual webcam device (here /dev/video2).
Use ctrl+c to terminate it.

$ python3 webcamhooker.py --input_video_num 1 --output_video_dev /dev/video2

When the above command is executed, ffmpeg starts to run and the video is delivered to the virtual camera device.

Let’s have a video chat!

When you start a video chat, you should see a device whose name starts with “Dummy” in the list of video devices, so select it. Here is an example from Teams. Think of the left and right halves as the two participants’ screens: the user on the left is using the virtual camera device, and the right side is the receiving side. When the left user smiles, a smile mark appears on the video. Great success (^_^)/.

Finally

Now that face-to-face communication is difficult, it would be nice to make video chat a bit more fun.
This time I showed an example of processing the video when a smile is detected, but you can do all kinds of processing with OpenCV and other tools.
I’d love to see you try different things!

I am very thirsty!!

Reference

I referred to this article for smile detection in OpenCV.
https://qiita.com/fujino-fpu/items/99ce52950f4554fbc17d

I referred to this article for pasting images in OpenCV.
https://qiita.com/a2kiti/items/9672fae8e90c2da6f352
