
Halloween at Clarifai HQ: How we built our Spooky or Not Model

By Natalie Fletcher

Happy Halloween! This year, to celebrate, a few members of the Clarifai team built an app that tells you how spooky (or not) your Halloween photos are. Last week, our Senior Dev Evangelist, Skip Everling, showed how he integrated Clarifai’s API with Twilio to add an SMS component to the app. Before they could build the app though, they first needed to build the model.

So, this week, I interviewed two of the minds behind the model, Keah Lonergan and Karen Herder, to find out what the process was like. As the Client Success Manager, Keah is part of the team that manages and maintains Clarifai’s customer relationships. Karen is one of the Senior Machine Learning Strategists on our Data Strategy team, which sources and optimizes training data for Clarifai’s powerful machine learning API, ensuring the data is clean, unbiased, and high-quality for all Clarifai users.

Want to know how easy it is to build your own custom model? Check out our interview below!

 

What inspired you guys to build the Spooky or Not app?

Keah: I lived abroad for several years, so this is only my second Halloween back in the US! This year, I was thinking about how costumes in the US can be scary but are usually funny, silly, or beautiful rather than frightening. In contrast, people in other countries tend to think of “Halloween” costumes as spooky-only. That sparked the idea, and we talked about how best to implement it. We all thought it would be fun to build a custom model that could decide whether any picture is spooky or not, and we brought in Karen for her expertise!

 

What did you need to have before you could start building?

Keah: The model was really easy to build. All we needed was the data! Karen provided nearly all of it, and we used Google search to find lots of images of Halloween and “spooky” concepts. She also had to source a bunch of images that showed people, places, and things that are not spooky, so the model could learn the difference. For some concepts, like “scary doll” and “creepy hallway,” we needed to provide examples of “normal” dolls and hallways so the model would know that only some dolls are spooky, like those with scary eyes and broken heads, whereas most Barbies and Cabbage Patch dolls are not. (Though what I discovered is that I kind of find all dolls and hallways spooky!)

 

(Jared Lerner, our Senior Support Engineer, as "Tech Startup Engineer Ken")

 

Karen: The first part of the data strategy (DST) process for model-building is to develop a taxonomy (i.e., the classification structure of objects within a particular domain). I decided that a binary model (spooky or not-spooky) would be the best starting point. From there, we could create a more refined model later by splitting the “spooky” concept into multiple visually distinct classes, like “jack-o-lantern,” “cauldron,” “black cat,” and “witch’s broom.”

After establishing the taxonomy, I sourced the training data. Some data was already on hand. Halloween has always been my favorite holiday, so I have a personal collection of Halloween-related imagery, as well as photographs I’ve taken myself. (Editor's note: Karen is also a photographer!) When building ML models, though, you can never have enough data, so I needed to gather more from a mix of online sources (part of the DST magic!).

With a broad concept like “spooky,” it’s important that your data is comprehensive and reflective of the world, not just one individual or group’s viewpoint. For example, even though I personally don’t associate blood and gore with spookiness and Halloween, I know that some people do, so I needed to include that type of data in the training dataset. I sourced thousands of images covering everything people around the world associate with Halloween to get an all-encompassing view, which is the foundation for any successful model.

 

How did you use our API and Explorer, our user interface, to build the model?

Keah: I just signed up for a Clarifai account, opened up Explorer, and created a “Spooky Not Spooky” model. Then, I googled examples of “spooky” things, like “haunted house,” “jack-o-lantern,” and “witch,” and saved all those images in a folder on my desktop. From there, I just uploaded them to Explorer and labeled the images as “spooky.”

Afterward, I did the same for “non-spooky” things, like kittens and flowers. Karen also added a lot of images to the model with this method. When we had enough images in there, we hit “train” and tested the model. The results were pretty good right off the bat!

Karen: What Keah said! :) I also ran our DST scripts from the terminal to upload large datasets to the model, calling the API directly rather than going through Clarifai’s Explorer UI. Scripting against the API is much more efficient when you’re working with thousands of images.
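For readers curious what that scripted workflow might look like, here is a minimal sketch of bulk-uploading labeled images and training a custom model. The client calls follow the 2018-era `clarifai` Python package (the v2 REST client); the folder layout, model ID, and the `CLARIFAI_API_KEY` environment variable are assumptions for illustration, not the team's actual scripts.

```python
import os

def label_folder(folder, concept):
    """Pair every image file in a folder with a single concept label."""
    exts = (".jpg", ".jpeg", ".png")
    return [(os.path.join(folder, name), [concept])
            for name in sorted(os.listdir(folder))
            if name.lower().endswith(exts)]

def upload_and_train(data_root):
    """Upload labeled images and train the model (requires a Clarifai API key)."""
    from clarifai.rest import ClarifaiApp  # pip install clarifai (v2 client)

    app = ClarifaiApp(api_key=os.environ["CLARIFAI_API_KEY"])
    model = app.models.create(model_id="spooky-not-spooky",
                              concepts=["spooky", "not-spooky"])

    # Expecting data_root/spooky/ and data_root/not_spooky/ full of images.
    labeled = (label_folder(os.path.join(data_root, "spooky"), "spooky")
               + label_folder(os.path.join(data_root, "not_spooky"), "not-spooky"))
    for path, concepts in labeled:
        app.inputs.create_image_from_filename(path, concepts=concepts)

    model.train()  # kick off training once all inputs are uploaded
    return model
```

Once trained, the same client can score new photos with `model.predict_by_filename(...)`, which is how you would check whether your own pictures come back “spooky.”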

 

What is the best part about using Clarifai?

Keah: The best part about Explorer is how easy it is to use. It’s great to be able to build a sophisticated computer vision model without having to write a single line of code. Plus, it was really fun to run the model on my own photos and see the predictions straight away. Within minutes of training the model, I uploaded some photos from my own Instagram account, and the model immediately told me which ones were the spookiest! It’s really encouraging and satisfying that Clarifai makes computer vision so accessible, even to people like me who don’t know how to code and have no engineering background at all.

Karen: Right after building the model, I tested it with some photos of pumpkins and skulls in my apartment and the model predicted “spooky” immediately, so I knew I’d done a good job of decorating for the season! It’s satisfying and fun to build a model and then be able to apply it right away.

 

Awesome work to the whole team!

Check out some more examples featuring the Clarifam's costumes:

(Keah as "Bubble Tea")

(Karen as "Morticia Addams")

(Natalie Fletcher as "sleep deprived author")

 

Disagree? Learn more about the difference between visual and human recognition here!

 

Want to build your own model? Sign up for our API to get started. 
