Last month, our friends at Disqus hosted a two-day company hackathon where their internal teams battled it out for bragging rights and the chance to be on the product roadmap. We were totally stoked when one of their winning teams built a hack using Clarifai’s new Not Safe for Work (NSFW) nudity recognition model!

When you’re a company whose business it is to help the denizens of the internet communicate with one another, you’re bound to run into some complications with unwanted content and nudity. We are talking about the internet, after all. So we were thrilled to see the Disqus team put our NSFW model to such good use!


The Disqus nudity moderation app allows sites to filter out NSFW images automatically, ultimately making life better for moderators and communities, one blocked dick pic at a time. Hopefully, we’ll see this feature go live in Disqus’ commenting platform soon!

WHY WE ❤ IT

Obviously, we’re big fans of Disqus’ commenting and publisher products, so this hack was really exciting for us. Imagine you designed a t-shirt and suddenly, out of the blue, you saw Beyoncé wearing it onstage at one of her concerts. That pretty much sums it up.

HOW YOU DO IT

We asked Tony Hue, a product marketer at Disqus, to walk us through how he built this nudity moderation app for Disqus.

Clarifai: What (visually traumatizing event) inspired your idea?

Tony: There’s a lot of bad stuff that can plague a comments section. Spam is the most common case, along with trolls and other bad actors. Disqus combats this across more than 3 million sites.

NSFW content, though, is one category that still requires some form of manual moderation on non-adult content sites. Unlike spam or trolls, it isn’t something you’d expect to wander across while reading comments about the Batman vs. Superman movie.

A couple of months ago, I learned about Clarifai’s new NSFW model and immediately thought: we need this on Disqus! At last month’s HackDay, I teamed up with another engineer to make that dream come true.

What did you use to build it?

Python. We use Django, so we take advantage of post-save hooks to detect whenever an image is created or updated. When we detect a change, we upload the image to Clarifai for classification. We use the nsfw-v1.0 model, but we also experimented with the general classification model for testing purposes (treating dogs and cats as NSFW images, etc.).
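The interview doesn’t include code, but here’s a minimal sketch of how that Django wiring might look. The `CommentImage` model and the `classify_nsfw` helper are hypothetical stand-ins; the post-save signal plumbing itself is standard Django.

```python
from django.db.models.signals import post_save
from django.dispatch import receiver

from myapp.models import CommentImage            # hypothetical image model
from myapp.clarifai_client import classify_nsfw  # hypothetical wrapper around the nsfw-v1.0 model


@receiver(post_save, sender=CommentImage)
def check_image_for_nsfw(sender, instance, created, **kwargs):
    """Runs whenever a CommentImage row is created or updated."""
    # Send the image URL to Clarifai and keep the returned NSFW probability.
    score = classify_nsfw(instance.url)
    # Avoid signal recursion: update just this field instead of calling save() again.
    sender.objects.filter(pk=instance.pk).update(nsfw_probability=score)
```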

When we receive the response, we check whether the NSFW probability is above a defined threshold (depending on a site’s settings, either 0.8 for default or 0.5 for strict). If any image in a comment is above this threshold, we send the comment straight to the moderation queue so commenters won’t see it.
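That gating logic fits in a few lines. This is a sketch using the thresholds Tony mentions, with hypothetical names throughout:

```python
# Thresholds from the interview: 0.8 for default sites, 0.5 for strict ones.
THRESHOLDS = {"default": 0.8, "strict": 0.5}


def needs_moderation(image_probabilities, mode="default"):
    """True if any image in the comment scores above the site's threshold."""
    return any(p > THRESHOLDS[mode] for p in image_probabilities)


# Example: one borderline image and one clearly unsafe image on a strict site.
assert needs_moderation([0.42, 0.97], mode="strict")
```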

What was the best part about using the Clarifai API?

It was simple and straightforward to use. I liked that Clarifai returns a more detailed breakdown for specific time periods in animated GIFs, although the difference in format was surprising at first. It would be nice to have a single standard NSFW number for all media, with the per-time breakdowns as a separate property.
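We don’t have the exact response schema in front of us, but the normalization Tony is asking for is easy to sketch. Assuming a hypothetical shape where animated GIFs come back as a list of (timestamp, probability) pairs and still images as a single float:

```python
def overall_nsfw_score(result):
    """Collapse a per-frame GIF breakdown into one NSFW number.

    Assumes a hypothetical response shape: animated GIFs yield a list of
    (timestamp, probability) pairs, still images a single float.
    """
    if isinstance(result, list):
        # Be conservative: the worst frame decides the overall score.
        return max(prob for _timestamp, prob in result)
    return result
```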

I also found several of Clarifai’s blog posts particularly useful.

Thanks for sharing, Tony!

To learn more, check out our documentation and sign up for a free Clarifai account to start using our API – all it takes is three lines of code to get up and running! We’re super excited to share all the cool things built by our developer community, so don’t forget to tweet @Clarifai to show us your apps.

And give Tony some mad props in the comments below. Until next time!