User-generated content is like a box of chocolates
filled with potentially unwanted nudity - you never know what you're going to get. Whether you're a developer building an app that relies on user-uploaded listings (like Airbnb or eBay) or you're helping a brand run an online contest with user-generated content (like GoPro), you can use this simple and effective method to check user-generated content for Not Safe For Work subject matter and prevent it from being uploaded and shared.
Right, with the disclaimers out of the way, let’s crack on. If you just want to jump ahead and look at the code, the GitHub repository is here, and the readme should be enough to get you started. The code is also heavily commented, so feel free to jump straight in if you’re comfortable doing so.
How is this going to work?
Get set up with this project
Firstly, we’ll need a Clarifai account. You can get one here. Create a new application and take note of your Client ID and Client Secret - don’t share these with anyone else.
Now let’s create a keys.js file. It’s important that this is a separate file, as it will house our Client ID and Secret. If you’re using git, make sure to add this file to your .gitignore so you don’t share this information.
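As a rough sketch, keys.js might look something like this (the variable names here are my own; adapt them to however your Clarifai client setup reads them):

```javascript
// keys.js - hypothetical shape; substitute your own credentials.
// Never commit this file: add it to .gitignore.
var CLARIFAI_CLIENT_ID = 'your-client-id';
var CLARIFAI_CLIENT_SECRET = 'your-client-secret';
```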
Finally, for project setup, create an options.js file. This is where our configuration will happen. Make it look like mine. Here’s a rundown of the options:
- The SFW_LOWER_LIMIT variable sets the lowest acceptable SFW value (on a scale from 0 to 1) which will pass the test.
- The FORM_ID variable is the ID given to the form we’re conducting a test on.
- The FILE_CLASS variable is the class which the specific file input is given.
- The function clarifaiCheckPass() will run if the user’s image passes the test, and clarifaiCheckFail() will run if it does not.
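A hypothetical options.js matching the rundown above (the threshold, IDs, and callback bodies are placeholder values; tune them to your own app):

```javascript
// options.js - placeholder configuration matching the options above.
var SFW_LOWER_LIMIT = 0.85;    // lowest acceptable SFW score (0 to 1)
var FORM_ID = 'upload-form';   // ID of the form we're testing
var FILE_CLASS = 'nsfw-check'; // class given to the file input(s)

function clarifaiCheckPass() {
  // runs when the user's image passes the test
  console.log('Image approved');
}

function clarifaiCheckFail() {
  // runs when the user's image fails the test
  console.log('Image rejected');
}
```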
Let’s build this thing!
Next, a small bit of boilerplate code which will take an image file from an input and convert it to a Base64-encoded string. It’s this string which Clarifai needs to accept in order to run it through the NSFW model.
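A minimal sketch of that boilerplate using the browser’s FileReader (the helper names fileToBase64 and stripDataUrlPrefix are my own):

```javascript
// Read the chosen file and hand back a raw Base64 string. FileReader
// produces a data URL ("data:image/png;base64,...."), so we strip the
// prefix before passing the string on to Clarifai.
function fileToBase64(file, callback) {
  var reader = new FileReader();
  reader.onload = function () {
    callback(stripDataUrlPrefix(reader.result));
  };
  reader.readAsDataURL(file);
}

function stripDataUrlPrefix(dataUrl) {
  // everything after the first comma is the Base64 payload
  return dataUrl.substring(dataUrl.indexOf(',') + 1);
}
```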
Let’s introduce a way to see the state of a file input at a glance. We’re going to do this by adding the class ‘working’ to the input when it has been submitted to Clarifai, then changing it to either ‘approved’ or ‘rejected’ once we get a result. I suggest styling these classes with CSS for quick visual feedback of state.
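One way to manage those state classes (setInputState and nextClassName are hypothetical helpers, not part of any library):

```javascript
// The three state classes described above.
var STATE_CLASSES = ['working', 'approved', 'rejected'];

// Compute a new class string: drop any previous state class,
// keep everything else, and append the new state.
function nextClassName(currentClassName, state) {
  var classes = currentClassName.split(/\s+/).filter(function (c) {
    return c && STATE_CLASSES.indexOf(c) === -1;
  });
  classes.push(state);
  return classes.join(' ');
}

// Apply the state class to an input element.
function setInputState(input, state) {
  input.className = nextClassName(input.className, state);
}
```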
Now for the heavy lifting, although luckily Clarifai will make it a lot easier than you might expect. In this next function, we’re going to make a call to Clarifai. Below is the code, and we’ll run through a detailed explanation afterwards.
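Here is a sketch of what that call might look like, based on the v2 Clarifai JavaScript client (checkImage and isSafeForWork are names of my own choosing; `app` is assumed to be a Clarifai.App instance initialised with the credentials from keys.js):

```javascript
// Find the 'sfw' concept in the returned concepts and compare its
// score against the configured lower limit.
function isSafeForWork(concepts, lowerLimit) {
  for (var i = 0; i < concepts.length; i++) {
    if (concepts[i].name === 'sfw') {
      return concepts[i].value >= lowerLimit;
    }
  }
  return false; // no 'sfw' concept found: fail safe
}

// Send the Base64-encoded image to Clarifai's NSFW model, then pass
// the pass/fail result on to parseResponse().
function checkImage(input, base64Image) {
  app.models.predict(Clarifai.NSFW_MODEL, { base64: base64Image })
    .then(function (response) {
      var concepts = response.outputs[0].data.concepts;
      parseResponse(input, isSafeForWork(concepts, SFW_LOWER_LIMIT));
    }, function (err) {
      console.error(err);
    });
}
```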
We feed this function an image (by this point it would have been converted to a Base64-encoded string). We then use app.models.predict() and tell it which model we’re querying against (NSFW) and give it the image string.
It will then go through the results looking for the SFW concept, returning true if the value is higher than the lower limit and false if it is not. Finally, it passes that true or false result to the parseResponse() function, which we’ll go through below.
parseResponse() is an incredibly simple function. It sets the correct state to the form input, and then calls clarifaiCheckPass() or clarifaiCheckFail(), which we define in the options file.
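As a minimal sketch (the real function may be shaped slightly differently):

```javascript
// Swap the input's 'working' state class for the result, then hand
// off to the callbacks defined in options.js.
function parseResponse(input, passed) {
  if (passed) {
    input.className = input.className.replace('working', 'approved');
    clarifaiCheckPass();
  } else {
    input.className = input.className.replace('working', 'rejected');
    clarifaiCheckFail();
  }
}
```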
Pulling it all together
So far we’ve written all of these useful functions, but we never actually call them. There are two pieces of functionality left to get this application working - the first is to trigger the NSFW checker when the form’s file input changes, and the second is to stop the form from being submitted until the check has passed.
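One way to wire those two pieces up, assuming the helpers sketched earlier (fileToBase64 and checkImage) and the values from options.js (initNsfwChecker and hasApproved are hypothetical names):

```javascript
// True if the element's class list contains the 'approved' state class.
function hasApproved(className) {
  return className.split(/\s+/).indexOf('approved') !== -1;
}

function initNsfwChecker() {
  var form = document.getElementById(FORM_ID);
  var inputs = form.getElementsByClassName(FILE_CLASS);

  // Piece one: run the NSFW check whenever a flagged file input changes.
  Array.prototype.forEach.call(inputs, function (input) {
    input.addEventListener('change', function () {
      input.className = FILE_CLASS + ' working'; // mark as in-flight
      fileToBase64(input.files[0], function (base64Image) {
        checkImage(input, base64Image);
      });
    });
  });

  // Piece two: block submission until every checked input is approved.
  form.addEventListener('submit', function (event) {
    var allApproved = Array.prototype.every.call(inputs, function (input) {
      return hasApproved(input.className);
    });
    if (!allApproved) {
      event.preventDefault(); // hold the form until the check passes
    }
  });
}

// Call initNsfwChecker() once the DOM is ready, e.g. from a
// DOMContentLoaded listener.
```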
So wait, how does this work again?
There you have it - your very own nudity checker for your online forms. You can use this as a lightweight way to solve many problems, like making sure users on your dating app can only submit tasteful, non-nude pics, or making sure the photo contest you're running doesn't sear your eyeballs unexpectedly. Share your particular use case with @Clarifai for a chance to be featured in the blog!