Screen Capture in your browser with ffmpeg.wasm

dannadori
3 min read · Jan 4, 2021


Note:
This article is also available in Japanese:
https://zenn.dev/wok/articles/0009_ffmpeg-wasm-screen-recorder

Introduction

Until a while ago I was a Linux user, and I used ffmpeg to make GIF animations for my blog. I recently moved to Windows and was wondering how I could make GIF animations there. It turns out I could just use ffmpeg for Windows. That was the conclusion I came to, but I don’t like installing lots of software on Windows because it makes the system unstable (the prejudice of someone who has come back to Windows after a long time). So I decided to use ffmpeg.wasm, which went viral a while ago, to build a screen-recording feature that runs in the browser. This saves me from having to install yet another application.

Here is what I created. On the left side, you specify the area to capture and start recording. On the right side, you can view and download the recording.

As a side note, I know that Windows 10 lets you record the screen with Win+G. To be honest, though, I only learned about it after I had built this browser app, so I pretended I hadn’t heard of it. (Apparently it can’t record the full screen or span multiple windows….)

About ffmpeg.wasm

The official page of ffmpeg.wasm is here. As the name suggests, it is a WebAssembly build of ffmpeg, and it lets you use ffmpeg’s functionality in your browser. However, transcoding is extremely slow because it cannot use hardware acceleration such as the GPU; I felt like I had to wait several minutes to process a 10-second 4K video. So you may need to design things so that transcoding is not needed.

Let’s look at an example. Blobs recorded by MediaRecorder can be treated as input. Before converting, the Blob data has to be written to MEMFS so that ffmpeg.wasm can access it as a file. Then the data can be processed with the familiar ffmpeg command-line options. There is a bit more information about MEMFS in the ffmpeg.wasm API documentation and the Emscripten documentation.
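As a rough sketch of that flow (this assumes the 0.x API of @ffmpeg/ffmpeg; the file names recorded.webm / out.mp4 and the recordedBlob variable are just placeholders, not the names used in my app):

import { createFFmpeg, fetchFile } from '@ffmpeg/ffmpeg';

// Load the WebAssembly core once; the first load can take a few seconds.
const ffmpeg = createFFmpeg({ log: true });
await ffmpeg.load();

// Write the recorded Blob into MEMFS so ffmpeg.wasm can see it as a file.
ffmpeg.FS('writeFile', 'recorded.webm', await fetchFile(recordedBlob));

// Run an ordinary ffmpeg command line against the in-memory file.
await ffmpeg.run('-i', 'recorded.webm', 'out.mp4');

// Read the result back out of MEMFS as a Uint8Array.
const mp4Data = ffmpeg.FS('readFile', 'out.mp4');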

Overview

To capture the display in the browser, use getDisplayMedia to get a MediaStream. Normally we would just feed this to the MediaRecorder, but in this case we want to be able to specify the area to record, so we feed the MediaStream to an HTMLVideoElement and then draw just the target area onto an HTMLCanvasElement. We then retrieve a MediaStream from this HTMLCanvasElement and feed it to the MediaRecorder.
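A minimal sketch of that pipeline (the crop rectangle and the 30 fps frame rate below are arbitrary example values, not what my app uses):

// Ask the user which screen/window to share.
const displayStream = await navigator.mediaDevices.getDisplayMedia({ video: true });

// Feed the display stream to a video element so we can read its pixels.
const video = document.createElement('video');
video.srcObject = displayStream;
await video.play();

// Example crop region: 640x360 starting at (100, 100) on the shared screen.
const sx = 100, sy = 100, sw = 640, sh = 360;
const canvas = document.createElement('canvas');
canvas.width = sw;
canvas.height = sh;
const ctx = canvas.getContext('2d')!;

// Copy only the target region from the video to the canvas every frame.
const draw = () => {
  ctx.drawImage(video, sx, sy, sw, sh, 0, 0, sw, sh);
  requestAnimationFrame(draw);
};
draw();

// Record the canvas stream, not the original display stream.
const canvasStream = canvas.captureStream(30);
const recorder = new MediaRecorder(canvasStream, { mimeType: 'video/webm;codecs=vp8' });
const blobs: Blob[] = [];
recorder.ondataavailable = (e) => blobs.push(e.data);
recorder.start(1000); // emit a chunk every second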

After acquiring the blob data with the MediaRecorder, write it into MEMFS, use ffmpeg.wasm to convert it to mp4, read the converted mp4 back out of MEMFS, and make it available for download (set it on an anchor).

Source Code

The part that runs ffmpeg.wasm, together with the code just before and after it, is as follows. (1) writes the Blob data to MEMFS. (2) runs the ffmpeg.wasm command. (3) reads out the converted data. In (2), the `-c copy` option is given to avoid transcoding.

Very easy.

// name / outName are the input and output file names on MEMFS
appInfo.ffmpeg!.FS('writeFile', name, await fetchFile(new Blob(blobs))); // <---(1)
await appInfo.ffmpeg!.run('-i', name, '-c', 'copy', outName); // <---(2)
const data = appInfo.ffmpeg!.FS('readFile', outName); // <---(3)
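The Uint8Array read out in (3) can then be wrapped in a Blob and attached to an anchor so it can be downloaded. Something like this, assuming a hypothetical anchor element with the id download-link (the id and file name are placeholders):

// Wrap the converted bytes in a Blob and expose them via an object URL.
const mp4Blob = new Blob([data.buffer], { type: 'video/mp4' });
const url = URL.createObjectURL(mp4Blob);

// Point an anchor element at the object URL so the user can download the result.
const anchor = document.getElementById('download-link') as HTMLAnchorElement;
anchor.href = url;
anchor.download = 'capture.mp4';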

Demo and Repository

A demo of the recording function is available at the following URL. Since it does not transcode, you may find it surprisingly fast.

And the source code is stored in the repository below.

Summary

I used ffmpeg.wasm to implement a screen recording function in the browser.
It feels surprisingly usable, and I’m personally very happy with it.

I am very thirsty!!
