Clarifai’s new Face Detection model finds faces in images and returns bounding box location coordinates. This model is useful for security camera footage, photo filter apps, dating apps, digital photography, and more. Here’s a copy + paste tutorial on how to use our Face Detection model and JavaScript to build a fun face photo filter!
The brand new Face Detection model has recently been released in alpha and does exactly what you’d expect it to – you provide an image with some lovely faces (or not so lovely faces) and it returns the position of any faces it has found. I immediately thought of Snapchat’s photo/video filters as an application and wondered what would happen if I combined Clarifai’s Face Detection model with the 💩 emoji.
This project isn’t too complex at all, and if you’re happy to just read (commented) code, here is a link to the GitHub repository for this project.
How is this going to work?
We’re going to wait for an image to be selected, and then use the Clarifai JavaScript client to get the position of the faces it’s found. Finally, we’ll take those values, relate them directly to our image and overlay poo emojis on their unsuspecting faces.
Get set up with this project
Firstly, we’ll need a Clarifai account. You can get one here. Create a new application and take note of your Client ID and Client Secret – don’t share these with anyone else.
Next, rename keys.example.js to just keys.js and paste in your Client ID and Client Secret.
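For reference, here’s a minimal sketch of what keys.js might end up looking like (I’m assuming it simply defines the cId and cSec globals we pass to the Clarifai client later, since that’s what the rest of the code reads):
// keys.js – replace the placeholders with your own credentials
// and keep this file out of version control.
var cId = "YOUR_CLIENT_ID";
var cSec = "YOUR_CLIENT_SECRET";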
Next, let’s set up our markup for this project. If you’re starting from scratch, create a file called index.html and make it look the same as mine. If you’re building this into an existing project, just make sure you have jQuery and the Clarifai JavaScript client included, and that the following markup is in the body:
<div class="input-group">
  <label for="image">Choose an image</label>
  <input type="file" id="image">
</div>
<img src="">
<canvas id="canvas"></canvas>
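If you’re wiring up the page yourself, the script includes at the bottom of the body might look something like this. The file paths here are placeholders, so point them at your own copies of jQuery, the Clarifai JavaScript client, and the two files we’re writing in this post:
<!-- Placeholder paths – swap in your own copies of each library -->
<script src="jquery.min.js"></script>
<script src="clarifai.min.js"></script>
<script src="keys.js"></script>
<script src="app.js"></script>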
Let’s build this thing!
If you want to follow along with the finished JavaScript file, here it is.
Right, let’s do some housekeeping and create some initial variables which we will populate later.
var overlay = "💩";
var canvas, ctx;
var imageDetails = {
  clarifaiFaces: [],
  realFaces: []
};
The overlay variable contains the emoji which we wish to paste onto the faces we find in the image. The canvas and ctx variables will hold our canvas element and its 2D drawing context once we grab them later, and imageDetails will collect everything we learn about the chosen image, including the bounding boxes Clarifai returns (clarifaiFaces) and their pixel equivalents (realFaces).
Next up, we’re going to initialize a new application using the Clarifai JavaScript client, passing in the cId and cSec values from keys.js.
var app = new Clarifai.App(cId, cSec);
Now let’s create the code which will be run when the file input has a file selected. I’ve split this application up into three functions, which run one after another. You should hopefully see the flow from one function to the next.
$("input#image").on("change", function() {
if(this.files[0]) {
var reader = new FileReader();
reader.onload = function(e) {
imageDetails.b64 = e.target.result;
$("img").attr("src", imageDetails.b64);
imageDetails.b64Clarifai = imageDetails.b64.replace(/^data:image\/(.*);base64,/, '');
imageDetails.width = $("img").width();
imageDetails.height = $("img").height();
faceDetection(imageDetails.b64Clarifai);
}
reader.readAsDataURL(this.files[0]);
}
});
This is an event listener and is waiting for the file input to change. If a file has been selected, we do the following:
- Convert the image to a base-64 encoded string and store it in imageDetails.
- Create a version of the base-64 encoded string without the data URI metadata (the leading data:image/png;base64, part), which is what Clarifai needs. This is also stored in imageDetails.
- Insert the original string into the <img> src to display it on the screen, and once the image has loaded, store its width and height in imageDetails too, which we’ll use later to determine where the faces are in our image.
This function finishes by firing off our faceDetection() function. Let’s take a look at how it’s put together.
function faceDetection(b64Img) {
  // Call the Face Detection alpha model with our base-64 image
  app.models.predict("a403429f2ddf4b49b307e318f00e528b", {
    base64: b64Img
  }).then(
    function(res) {
      var data = res.outputs[0].data.regions;
      // regions is only present when at least one face was found
      if (data) {
        for (var i = 0; i < data.length; i++) {
          // Each bounding box holds top_row, left_col, bottom_row and right_col
          imageDetails.clarifaiFaces.push(data[i].region_info.bounding_box);
        }
      }
      drawBoxes();
    },
    function(err) {
      console.log(err);
    }
  );
}
This function takes the Clarifai-ready base-64 encoded string and calls the face detection alpha model. It waits for a response and pushes each of the bounding boxes (the location information of each face) into the imageDetails.clarifaiFaces array.
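For reference, the part of the response we care about looks roughly like this. The values are made up, but the field names are exactly the ones the code above reads (exampleRegions is just an illustrative variable name):
// Illustrative shape of res.outputs[0].data.regions – one entry per face found.
var exampleRegions = [
  {
    region_info: {
      bounding_box: {
        top_row: 0.1,    // all four values are fractions of the image size, 0–1
        left_col: 0.25,
        bottom_row: 0.9,
        right_col: 0.75
      }
    }
  }
];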
The important thing to note about these values is that they provide the top-left and bottom-right positions of each box with values ranging from 0 to 1. We will later have to relate these to the actual position on our image using the width and height properties we stored earlier.
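To make that concrete, here’s a tiny sketch of the conversion. The toPixels() helper is hypothetical – drawBoxes() below does the same arithmetic inline:
// Hypothetical helper: turn one 0-to-1 bounding box into pixel values.
function toPixels(boundingBox, imgWidth, imgHeight) {
  return {
    x: boundingBox.left_col * imgWidth,                            // left edge in px
    y: boundingBox.top_row * imgHeight,                            // top edge in px
    w: (boundingBox.right_col - boundingBox.left_col) * imgWidth,  // box width in px
    h: (boundingBox.bottom_row - boundingBox.top_row) * imgHeight  // box height in px
  };
}

// For a 400×300px image, the example box above becomes:
// toPixels({top_row: 0.1, left_col: 0.25, bottom_row: 0.9, right_col: 0.75}, 400, 300)
// → {x: 100, y: 30, w: 200, h: 240}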
But I’m getting ahead of myself. Let’s look at the final function which is called at the end of faceDetection(), and that’s a function called drawBoxes().
function drawBoxes() {
  canvas = document.getElementById("canvas");
  // Size the canvas to match the displayed image
  $(canvas).attr("width", imageDetails.width).attr("height", imageDetails.height);
  ctx = canvas.getContext("2d");
  ctx.textBaseline = "top";
  for (var i = 0; i < imageDetails.clarifaiFaces.length; i++) {
    // Convert Clarifai's 0-to-1 values into pixel positions and dimensions
    var box = {
      x: imageDetails.clarifaiFaces[i].left_col * imageDetails.width,
      y: imageDetails.clarifaiFaces[i].top_row * imageDetails.height,
      w: (imageDetails.clarifaiFaces[i].right_col - imageDetails.clarifaiFaces[i].left_col) * imageDetails.width,
      h: (imageDetails.clarifaiFaces[i].bottom_row - imageDetails.clarifaiFaces[i].top_row) * imageDetails.height
    };
    imageDetails.realFaces.push(box);
    // Draw the emoji a little larger than the face and nudge it up and left
    ctx.font = (box.w * 1.4) + "px monospace";
    ctx.fillText(overlay, box.x - (box.w / 5), box.y - (box.h / 4));
  }
}
The first line of this function stores the canvas element in the canvas variable we set up at the start of this file. The second line makes the canvas the same size as the image, in both its width and its height. Next, we store the canvas’s 2D drawing context in the ctx variable, which is what we use to actually draw onto the canvas, and set its text baseline to "top" so the emoji is positioned from its top edge rather than its bottom.
Next, we iterate over the bounding boxes which we received from Clarifai and do some arithmetic which does the following for each box:
- Takes the top-left values from being between 0 and 1 to the correct pixel values on the image. For example, a box’s top-left position might be (0.124, 0.52), but that assumes the bottom-right of the image is (1, 1); we turn those into pixel-correct values.
- Takes the bottom-right values and changes them into the width and height of the box (if a box starts at 1.2 and ends at 2.5, its width is 1.3, for example).
- Stores the top-left position, width, and height of each box in imageDetails.realFaces.
- Draws the emoji we set at the top of the file at the position of each box, sized a little larger than the box (1.4 times its width) and nudged up and to the left so it covers the whole face.
So wait, how does this work again?
There you have it – a poo emoji photo filter built with nothing more than the Clarifai Face Detection model and a little JavaScript. Swap the overlay variable for any emoji you like and cover faces to your heart’s content.