Implement a virtual background with an npm package

dannadori
4 min read · Jun 21, 2020


I recently published my first npm package, so in this article I explain how to build a demo with it. The package implements a virtual background, which I wrote about in the previous article.

I'll build a demo like this.

This article assumes version 1.0.19 of the package.

Preparation

We’ll create a demo in React this time.
First, let’s set up the environment.

npx create-react-app demo --template typescript

Install the package

Now install the virtual background package.

cd demo
npm install local-video-effector
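
Since this article assumes version 1.0.19, you can also pin that exact version to keep the demo reproducible (assuming that version is still published on npm):

npm install local-video-effector@1.0.19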

Prepare a background image

Let's prepare an image to use as the virtual background. Place your favorite image in the public directory created by the create-react-app command above.
In this case, I’ve placed an image file named pic1.jpg.

$ ls public/pic1.jpg 
public/pic1.jpg

Code

The whole thing is about 60 lines, so I’ll just paste it all in first.

import * as React from 'react'
import { LocalVideoEffectors, ModelConfigMobileNetV1, ModelConfigResNet, getDeviceLists } from 'local-video-effector'

class App extends React.Component {

    localCanvasRef = React.createRef<HTMLCanvasElement>()
    localVideoEffectors: LocalVideoEffectors | null = null

    componentDidMount() {
        getDeviceLists().then((res) => { console.log(res) })                      // <--------- (2-6')

        const model = new URL(window.location.href).searchParams.get('model')     // <--------- (1-1)
        const blurString = new URL(window.location.href).searchParams.get('blur') // <--------- (1-2)
        const blur = blurString === null ? 0 : parseInt(blurString, 10)
        if (model === 'MobileNetV1') {                                            // <--------- (2-1)
            this.localVideoEffectors = new LocalVideoEffectors(ModelConfigMobileNetV1)
        } else if (model === 'ResNet') {
            this.localVideoEffectors = new LocalVideoEffectors(ModelConfigResNet)
        } else {
            this.localVideoEffectors = new LocalVideoEffectors(null)
        }
        this.localVideoEffectors.cameraEnabled = true                             // <--------- (2-2)
        this.localVideoEffectors.virtualBackgroundEnabled = true                  // <--------- (2-3)
        this.localVideoEffectors.virtualBackgroundImagePath = "/pic1.jpg"         // <--------- (2-4)
        this.localVideoEffectors.maskBlurAmount = blur                            // <--------- (2-5)
        this.localVideoEffectors.selectInputVideoDevice("").then(() => {          // <--------- (2-6)
            requestAnimationFrame(() => this.drawVideoCanvas())                   // <--------- (3)
        })
    }

    drawVideoCanvas = () => {
        if (this.localCanvasRef.current !== null) {
            const width = 640
            const height = 480
            this.localVideoEffectors!.doEffect(width, height)                     // <---------- (4)

            if (this.localVideoEffectors!.outputWidth !== 0 && this.localVideoEffectors!.outputHeight !== 0) {
                this.localCanvasRef.current.width = width
                this.localCanvasRef.current.height = height
                const ctx = this.localCanvasRef.current.getContext("2d")!
                ctx.drawImage(this.localVideoEffectors!.outputCanvas, 0, 0,       // <---------- (5)
                    this.localCanvasRef.current.width, this.localCanvasRef.current.height)
            }
        }
        requestAnimationFrame(() => this.drawVideoCanvas())                       // <---------- (6)
    }

    render() {
        return (
            <div style={{ width: "640px", margin: "auto" }}>
                <canvas ref={this.localCanvasRef} style={{ display: "block", width: "640px", margin: "auto" }} />
            </div>
        )
    }
}

export default App

Now, let’s look at the source code.

First, (1–1) and (1–2) parse the URL query parameters.

The “model” parameter specifies the AI model used to distinguish the person from the background when rendering the virtual background. In general, an AI model is a trade-off between accuracy and processing speed. The virtual background package lets you choose from three models:

  • The default (balanced accuracy and speed)
  • MobileNetV1 (low accuracy, high speed)
  • ResNet (high accuracy, low speed)

The “blur” parameter determines the amount of blur on the border between the person and the background. The higher the number, the greater the extent of the blur; zero means no blur.
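
For example, assuming the dev server runs at the create-react-app default of http://localhost:3000, the two parameters combine like this:

http://localhost:3000/?model=MobileNetV1&blur=3   (fast model, slightly blurred border)
http://localhost:3000/?model=ResNet&blur=0        (accurate model, hard border)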

(2–1) to (2–6) configure this behavior based on those parameters.

The lines starting at (2–1) initialize the virtual background class, LocalVideoEffectors, passing the constructor the model configuration to use.

In (2–2), a flag to enable the camera is set, and in (2–3), a flag to enable virtual backgrounds is set.

In (2–4), the background image to use is set. This is the pic1.jpg placed in the public directory earlier.

In (2–5), the amount of blur for the border between the person and the background is set.

(2–6) specifies the camera device ID to use. If an empty string is passed, the default camera is used. If you want to know the ID of a camera device other than the default, the package provides the utility function shown at (2–6'); see the sketch below.
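
As a side note, the same information is available from the standard browser API that utilities like getDeviceLists typically wrap; the exact shape of getDeviceLists's own result isn't documented here, so inspect it with console.log as in (2–6'). A minimal sketch using the standard API:

// List the available cameras with the standard browser API.
navigator.mediaDevices.enumerateDevices().then((devices) => {
    const cameras = devices.filter((d) => d.kind === 'videoinput')
    cameras.forEach((c) => console.log(c.deviceId, c.label))
    // Once you know the ID, pass it instead of the empty string:
    // this.localVideoEffectors!.selectInputVideoDevice(cameras[0].deviceId)
})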

After completing (2–1) to (2–6), the virtual background is ready to use. We then enter the rendering loop, starting at (3).

In the loop, we will perform steps (4)-(6).

In (4), we generate a video frame with the virtual background applied. The width and height arguments set the resolution at which the effect is computed: larger values increase the processing load and may cause dropped frames, so reduce them if the processing feels too heavy. In (5), the processed frame held in outputCanvas is drawn onto the canvas element on the page, and (6) schedules the next iteration with requestAnimationFrame.
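
If you want the resolution to adjust itself, here is a minimal sketch of a hypothetical variant of drawVideoCanvas (not part of the package) that halves the effect resolution whenever a frame blows a 30 fps budget, assuming doEffect runs synchronously as in the loop above:

// Hypothetical class fields and method, added alongside drawVideoCanvas.
effectWidth = 640
effectHeight = 480

drawVideoCanvasAdaptive = () => {
    const start = performance.now()
    this.localVideoEffectors!.doEffect(this.effectWidth, this.effectHeight)
    // If the frame took longer than ~33 ms, halve the resolution (down to a floor).
    if (performance.now() - start > 33 && this.effectWidth > 160) {
        this.effectWidth = Math.floor(this.effectWidth / 2)
        this.effectHeight = Math.floor(this.effectHeight / 2)
    }
    // ...then draw outputCanvas onto the page canvas as in (5), and loop:
    requestAnimationFrame(() => this.drawVideoCanvasAdaptive())
}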

This is the source code for the demo.

Demo

The demo we created looks like this. You can switch the AI model and change the blur of the border between the person and the background via the query parameters.

Git Repository and NPM

Finally

This time, I created a virtual background demo using the npm package.
It takes only about 60 lines of code to add a virtual background, so it's easy to try.

I am very thirsty!!
