JavaScript

Recognizing a face using JavaScript

What are the options? Many solutions exist for machine learning.

When you look around for ways to identify faces, you come up with a host of solutions. Many are generic, some are interfaces to existing frameworks. For JavaScript, you have a few popular ones to choose from, and the sheer array of solutions may even confuse you. Even for face recognition alone you have several options. Many, most actually, are for Python, but you can also find a few in JavaScript. Frameworks aimed specifically at face recognition include face.js and face-recognition.js, although the latter is considered obsolete. The smallest, in terms of code, is pico.js: with about 200 lines of code it can detect your face using your webcam. The Pico code comes with a trained set already, which means that it will not improve while you are using it. For the curious, the pre-trained classification cascades are available on its GitHub repository. If you do want to train it yourself, there is a learn function you can use, a C program available on GitHub. Training is a long process, making it an interesting exercise rather than something practical. One of the more interesting APIs is face-api.js, which uses TensorFlow.js for the machine learning part.

How does it work?

The simplest example of machine learning is a data set with a handful of measured parameters, such as the petal and sepal sizes of iris flowers. The iris data set is the most common starting point when you want to learn machine learning, and the data can be summarised in simple tables.

Sepal length (cm)  Sepal width (cm)  Petal length (cm)  Petal width (cm)  Class
5.1                3.5               1.4                0.2               Iris setosa
4.9                3.0               1.4                0.2               Iris setosa
7.0                3.2               4.7                1.4               Iris versicolor
6.4                3.2               4.5                1.5               Iris versicolor
6.9                3.1               4.9                1.5               Iris versicolor
6.3                3.3               6.0                2.5               Iris virginica
5.8                2.7               5.1                1.9               Iris virginica

As you can see from the table, it is now possible to find the measurements that best match a certain species. This is not an absolute truth, but with enough data points it can become very accurate.
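To make that concrete, here is a minimal sketch in plain JavaScript that classifies a new flower by picking the label of the closest known row. The sample data and the nearestNeighbour helper are our own illustration, not part of any library.

// A few rows from the iris table: [sepal length, sepal width, petal length, petal width]
const samples = [
    { features: [5.1, 3.5, 1.4, 0.2], label: 'Iris setosa' },
    { features: [7.0, 3.2, 4.7, 1.4], label: 'Iris versicolor' },
    { features: [6.3, 3.3, 6.0, 2.5], label: 'Iris virginica' }
];

// Squared Euclidean distance between two feature vectors
const distance = (a, b) =>
    a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0);

// Classify by returning the label of the closest known sample
function nearestNeighbour(features) {
    return samples.reduce((best, s) =>
        distance(features, s.features) < distance(features, best.features) ? s : best
    ).label;
}

console.log(nearestNeighbour([6.5, 3.0, 5.0, 1.6])); // prints 'Iris versicolor'

Real classifiers are more sophisticated, but the principle is the same: new measurements are compared against measurements with known labels.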

The question now becomes: how do you represent an image, or a face for that matter, as a long list of values? The short story is that you convert the picture to the intensity value of each pixel. Starting from there, you can decide where the lines and points that depict a face go. What a face actually is has been determined by a pre-trained model. If you apply that model to a number of pictures of the person you are trying to detect, then a table similar to the iris one above can be used to determine which face it is.
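As a rough illustration of the first step, the browser sketch below reads the pixels of an image through a canvas and averages each pixel's colour channels into one intensity value per pixel. The photo element id is an assumption for the example.

// Assumes an <img id="photo"> on the page that has finished loading
const img = document.getElementById('photo');
const cvs = document.createElement('canvas');
cvs.width = img.naturalWidth;
cvs.height = img.naturalHeight;
const ctx = cvs.getContext('2d');
ctx.drawImage(img, 0, 0);

// RGBA bytes for every pixel, left to right, top to bottom
const { data } = ctx.getImageData(0, 0, cvs.width, cvs.height);

// Average the colour channels into one grey intensity value per pixel
const intensities = [];
for (let i = 0; i < data.length; i += 4) {
    intensities.push((data[i] + data[i + 1] + data[i + 2]) / 3);
}
// intensities is now the long list of values a model can learn from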

How it actually works is a bit more complex than that, and to create your own solution you need a library made for the purpose. Fortunately, there are many free and open source solutions available.

What are the options?

There are many machine learning libraries for JavaScript; one of them is face-api.js. The others may be more capable choices, but this one has a very simple demo page. You can download the demo page from GitHub; the repository contains the library and the demo pages. If you want to start at a deeper level, you can check out TensorFlow and dlib. face-api.js uses TensorFlow.js as its machine learning library.

Once you have everything downloaded from GitHub, you can use the examples directory to explore different methods of face recognition.

What are the use cases?

In industry, face recognition is used for access control, attendance checks and other security-related cases. In social media networks, your face can be tagged so that you can search for your face rather than your name. For your own system, you can use it to control access to your computer and even to some of your applications.

What are we developing?

We are making a simple system to detect a face.

To detect a face, you need software, images and a trained model. You can train the model yourself, and for your specific task you probably should, but you can also re-train an existing model. In this example, the model is pre-trained and downloaded.

For the code to work, you need to collect samples. In this case we use a webcam, which is simple enough with HTML5. To do this, add a video tag to the HTML code.

<video id="videoID" width="720" height="560" autoplay muted></video>

Simple, right? But wait, you also need to pick this element up in your JavaScript.

const video = document.getElementById('videoID')

Now you can use the constant to get your stream into the JavaScript code. Create a startVideo function.

function startVideo() {
    // getUserMedia returns a Promise that resolves with the webcam stream
    navigator.mediaDevices.getUserMedia({ video: {} })
        .then(stream => video.srcObject = stream)
        .catch(err => console.error(err))
}

This function does not run by itself, and the models need to be loaded before any detection can start. A way to arrange this is to use Promise functions: load all the models, then start the video.

Promise.all([
    faceapi.nets.tinyFaceDetector.loadFromUri('/models'),
    faceapi.nets.faceLandmark68Net.loadFromUri('/models'),
    faceapi.nets.faceRecognitionNet.loadFromUri('/models'),
    faceapi.nets.faceExpressionNet.loadFromUri('/models')
]).then(startVideo);

The Promise.all statement above runs the startVideo function once all the models have loaded. Finally, the video event listener below runs the functions available from the face API.

video.addEventListener('play', () => {
    // Draw the detection results on a canvas overlaying the video
    const canvas = faceapi.createCanvasFromMedia(video);
    document.body.append(canvas);
    const displaySize = { width: video.width, height: video.height };
    faceapi.matchDimensions(canvas, displaySize);
    setInterval(async () => {
        // Detect all faces, landmarks and expressions in the current frame
        const detections = await faceapi
            .detectAllFaces(video, new faceapi.TinyFaceDetectorOptions())
            .withFaceLandmarks()
            .withFaceExpressions();
        const resizedDetections = faceapi.resizeResults(detections, displaySize);
        canvas.getContext('2d').clearRect(0, 0, canvas.width, canvas.height);
        faceapi.draw.drawDetections(canvas, resizedDetections);
        faceapi.draw.drawFaceLandmarks(canvas, resizedDetections);
        faceapi.draw.drawFaceExpressions(canvas, resizedDetections);
    }, 100);
});
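The listener above only detects faces, landmarks and expressions; it does not yet say whose face it is. One possible way to recognise a specific person with face-api.js is to compare face descriptors, roughly as sketched below. The reference image path /images/me.jpg and the 'me' label are made up for the example, and the models loaded earlier are assumed to be available.

async function recogniseFace() {
    // Build a labelled descriptor from one reference photo (path is made up)
    const refImage = await faceapi.fetchImage('/images/me.jpg');
    const ref = await faceapi
        .detectSingleFace(refImage, new faceapi.TinyFaceDetectorOptions())
        .withFaceLandmarks()
        .withFaceDescriptor();
    if (!ref) return;
    const labeled = new faceapi.LabeledFaceDescriptors('me', [ref.descriptor]);

    // 0.6 is the distance threshold commonly used with face-api.js
    const matcher = new faceapi.FaceMatcher([labeled], 0.6);

    // Compare a face from the live video against the reference
    const live = await faceapi
        .detectSingleFace(video, new faceapi.TinyFaceDetectorOptions())
        .withFaceLandmarks()
        .withFaceDescriptor();
    if (live) {
        console.log(matcher.findBestMatch(live.descriptor).toString());
    }
}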

What do you need in your development environment?

Since we are using JavaScript, we need Node.js and npm (or similar). Your best tactic here is to create your development directory and then clone the repository from GitHub, as sketched below. The examples are in the examples directory, so after cloning, move there.
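If you have not cloned face-api.js before, the commands could look like this; the address is the project's usual upstream, but verify it before use.

$ git clone https://github.com/justadudewhohacks/face-api.js.git
$ cd face-api.js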

$ cd examples/examples-browser/

Inside the directory you need to install the packages using npm.

$ npm install

Since you are inside the face-api.js example directory, npm will find what it needs to install. Next you can start the demo and open it in your browser.

$ npm start

The last line in the output shows the port you need to use in your browser. The examples usually feature the cast of The Big Bang Theory, but you can load your own pictures and even use the webcam to estimate your age.

These demos are fun to play with but the real value is that the code is available to study.

In the files, the JavaScript is kept in a separate directory to make it easy to use. For your pages to work, you need to load the API and all the scripts you are going to use.
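As a minimal sketch, a page of your own could load the bundled library before your own script like this; the file names are illustrative, following the layout of the examples.

<!-- Load the face-api.js bundle first, then your own code -->
<script src="js/face-api.min.js"></script>
<script src="js/script.js"></script>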

Conclusion

This is a very short example of how to use existing APIs to detect faces and recognise them. The really fascinating part is finding useful applications for the technology. What will you use it for? Access to your own machine, or just some specific data or application?

About the author

Mats Tage Axelsson

I am a freelance writer for Linux magazines. I enjoy finding out what is possible under Linux and how we can all chip in to improve it. I also cover renewable energy and the new way the grid operates. You can find more of my writing on my blog.