Tech inclusion and diversity in machine learning

We’ve always believed that inclusion is for everyone (ha!). So, we were excited to make our commitment official with this year’s Global Entrepreneurship Summit initiative for tech inclusion.

Inclusion means embracing diversity – people from different backgrounds with different skill sets coming together to make our company stronger. While we believe a culture of inclusion should be table stakes for any tech company, we think it’s particularly salient for machine learning and artificial intelligence (A.I.) companies. After all, we’re building the next generation of smart apps and machines that are supposed to amplify human intelligence – and you can’t amplify human intelligence without including perspectives from all facets of human life.

It’s not news that the tech industry has a diversity problem, but artificial intelligence introduces a new kind of homogeneity problem. A.I. relies on human training to get “smart,” which means human biases can work their way into artificial intelligence and shape how computers see the world. The danger, of course, is that biased views can be amplified through A.I. with far greater reach than any one human being could ever have.

“If everyone teaching computers to act like humans are men, then the machines will have a view of the world that’s narrow by default and, through the curation of data sets, possibly biased.”

With that in mind, we try to build diversity into every part of our business at Clarifai, from hiring to building to shipping product. Diversity in A.I. doesn’t just mean hiring more diverse people. For image recognition, in particular, data scientists need a set of well-labeled images to train a model, which means someone has the manual task of categorizing those images in the first place. Building a diverse data team means looking at the problem as a whole, from the time that someone labels an image to the time a data scientist trains the model. That’s why we source data from men and women across six continents!

We’re excited that the Global Entrepreneurship Summit has given us the opportunity to commit officially to our goal of promoting inclusion and diversity and to serve as an example to others in the machine learning space. As part of our pledge, we will:

  • implement and publish company-specific goals to recruit, retain, and advance diverse technology talent, and operationalize concrete measures to create and sustain an inclusive culture;
  • annually publish data and progress metrics on the diversity of our technology workforce across functional areas and seniority levels;
  • invest in partnerships to build a diverse pipeline of technology talent to increase our ability to recognize, develop, and support talent from all backgrounds.

If you believe in building #RobotsNotBrobots, take the pledge with us and let us know!

I’m joining up with #Clarifai for more diversity + inclusion in machine learning and A.I. #RobotsNotBrobots http://bit.ly/28RS8Og


Clarifai Featured Hack: Block unwanted nudity in blog comments with Disqus

Last month, our friends at Disqus hosted a two-day company hackathon where their internal teams battled it out for bragging rights and the chance to be on the product roadmap. We were totally stoked when one of their winning teams built a hack using Clarifai’s new Not Safe for Work (NSFW) nudity recognition model!

When you’re a company whose business it is to help the denizens of the internet communicate with one another, you’re bound to run into some complications with unwanted content and nudity. We are talking about the internet, after all. So, when Disqus hosted its own internal hackathon last month, we were thrilled to see their team put our Not Safe for Work (NSFW) nudity recognition model to good use!


The Disqus nudity moderation app allows sites to filter out NSFW images automatically, ultimately making life better for moderators and communities, one blocked dick pic at a time. Hopefully, we’ll see this feature go live in Disqus’ commenting platform soon!

WHY WE ❤ IT

Obviously, we’re big fans of Disqus’ commenting and publisher products, so this hack was really exciting for us. Imagine you designed a t-shirt and suddenly, out of the blue, you saw Beyonce wearing it onstage at one of her concerts. That pretty much sums it up.

HOW YOU DO IT

We asked Tony Hue, a product marketer at Disqus, to walk us through how he built this nudity moderation app for Disqus.

Clarifai: What (visually traumatizing event) inspired your idea?

Tony: There’s a lot of bad stuff that can plague a comments section: spam is the most common, and then of course there are trolls and otherwise bad actors. Disqus combats this across 3 million+ sites.

NSFW content, though, is one category that still requires some form of manual moderation on sites with non-adult content. Unlike spam or trolls, it isn’t something you’d expect to stumble across while reading comments about the Batman vs. Superman movie.

A couple of months ago, I learned about Clarifai’s new NSFW model and immediately thought: we need this on Disqus! During a recent HackDay last month, I teamed up with another engineer to make that dream come true.

What did you use to build it?

Python. We use Django, and so take advantage of post save hooks to detect whenever an image is created/updated. When we detect a change, we upload the image to Clarifai for classification. We use the nsfw-v1.0 model, but also experimented with general classification for testing purposes (treating dogs/cats as NSFW images, etc.).

When we receive the response, we check the NSFW probability and see whether it’s above a defined threshold (depending on a site’s settings, we check for either above 0.8 for default or 0.5 for strict). If any image in a comment is above this threshold, we send the comment straight into the moderation queue so commenters won’t see it.
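
In case you want to wire up something similar, here’s a minimal sketch of that moderation logic in Python. The thresholds mirror the ones Tony describes (0.8 for default sites, 0.5 for strict), and get_nsfw_probability is a hypothetical stand-in for the actual call to Clarifai’s nsfw-v1.0 model; this is not Disqus’ production code.

# Illustrative sketch only; not Disqus' actual Django code.
NSFW_THRESHOLDS = {
    "default": 0.8,  # default sites tolerate more borderline images
    "strict": 0.5,   # strict sites send more images to moderation
}

def get_nsfw_probability(image_url):
    """Hypothetical helper: send image_url to Clarifai's nsfw-v1.0 model
    and return the NSFW probability (0.0 to 1.0)."""
    raise NotImplementedError

def should_moderate(image_urls, site_setting="default"):
    """Return True if any image in the comment exceeds the site's threshold."""
    threshold = NSFW_THRESHOLDS[site_setting]
    return any(get_nsfw_probability(url) > threshold for url in image_urls)

# When should_moderate(...) is True, the comment goes straight to the
# moderation queue instead of being shown to commenters.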

What was the best part about using the Clarifai API?

It was straightforward and simple to use. I liked that Clarifai returns a more detailed breakdown for specific time periods in animated gifs, although the difference in format was surprising at first. It would be nice to have a general standard NSFW number for all media, then the time breakdowns as a separate property.
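
If you want that single standard number today, one conservative approach (assuming you’ve already pulled the per-time-period NSFW probabilities out of the gif response yourself) is to score the whole gif by its worst moment:

def overall_nsfw_score(segment_scores):
    """Collapse a gif's per-time-period NSFW probabilities into one number:
    the gif is treated as being as unsafe as its most unsafe segment."""
    return max(segment_scores) if segment_scores else 0.0

# overall_nsfw_score([0.02, 0.11, 0.93]) returns 0.93, so the gif gets moderated.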

I also found several of the Clarifai blog posts particularly useful.

Thanks for sharing, Tony!

To learn more, check out our documentation and sign up for a free Clarifai account to start using our API – all it takes is three lines of code to get up and running! We’re super excited to share all the cool things built by our developer community, so don’t forget to tweet @Clarifai to show us your apps.

And give Tony some mad props in the comments below. Until next time!


Build better travel apps with Clarifai's new travel recognition model

Our new Travel image recognition model automatically identifies travel-related concepts in pictures and video and can be used to build and improve apps in the travel, leisure, and hospitality industries.

We’re excited to announce our new Travel model, which allows you to build all sorts of cool apps in the travel, leisure, and hospitality space. With our new Travel model, you’re now able to auto-tag images and video with travel-related concepts like:

pool

bedroom

hog

The current Travel model is designed to identify specific features of residential, hotel, and travel-related properties. Use it alongside our core general model to add broader travel concepts like romance, relaxation, adventure, scenic, etc.


And remember the Not Safe For Work (NSFW) nudity recognition model we released last month? You can use our Travel model with NSFW to find nude beaches. You know, if you’re into that sort of thing. 

Who should use the Travel model?

We built this model for businesses and developers who rely on user-generated and professionally shot content to power their travel platforms. For example, Trivago uses this model to auto-tag the property images that hotel chains upload to its platform:

[Image: Trivago auto-tagging example]

Whether you’re a travel booking platform like Expedia, Kayak, Orbitz, or Travelocity, or a travel review site like Tripadvisor, Yelp, or Foursquare, you’re probably getting a good stream of user-uploaded images and video every day. Our super accurate Travel model automatically categorizes and tags this media, so you don’t need a team of humans sorting through these uploads manually.

Not only can businesses use the Travel model to curate their content, developers can build cool apps with it, too! One hacker has already created a nifty Chrome extension that allows you to right-click on any image on the web to get recommendations on where to go next from travel sites like Tripadvisor! We’ll be sharing the hack next week, so make sure you follow us on Twitter if you want to try it out.

So, whether or not you’re traveling this summer, you should try our Travel image recognition model. Who knows, you might win an all-expense paid trip to Hawaii* by submitting an app to our travel-themed Clarifai Bounty!

*Or, really, you’ll win a Clarifai skateboard. Same thing.


#CrimesAgainstAI - Win prizes from Clarifai in the ultimate visual recognition test

Image recognition is hard. You know what’s harder? Stumping our image recognition API. But, the internet being the land of the weird and the home of the strange, we sometimes come across images that even our superior artificial (and human!) intelligence struggles to identify. These images are known as #CrimesAgainstAI and we want you to help us find them!

At Clarifai, we’re proud to be known as the universe’s greatest visual recognition API (by a ton of developers and our mothers, of course). Our image and video recognition results are always on point. But, the internet being the land of the weird and the home of the strange, we sometimes come across images that even our superior artificial (and human!) intelligence struggles to identify.

Case in point – is this a Chihuahua or a Muffin?

These are pictures we like to refer to at Clarifai as “Crimes Against A.I.” They’re adversarial images that challenge the understanding of even the most intelligent visual recognition technology. And as a company whose mission it is to teach machines to understand every image and video, we make it a point to train our A.I. on crazy images so that we continue to have the smartest tech on the market.

Because, seriously, what is this even:

[Image: a blobfish]

Or this:

[Image: minions]

Or this:

[Image]

That’s where you come in! You can help us make our technology smarter by sharing all of your Crimes Against A.I. images with us on Twitter or Instagram using #CrimesAgainstAI. We’ll run your suggested images through our many visual recognition models and post the best ones on our social media accounts. And, if you manage to stump our technology, we’ll send you an amazing prize!

Here are the rules of the game:

  1. You must own the rights to the image you send us, OR credit the person who owns it.
  2. The image you send must be clear – no blurry shots, no cropping or zooming in that would obscure the photo.
  3. No nudity (we get enough dick pics, thank you very much).
  4. Full legal mumbo-jumbo terms and conditions here!

How to play:

All you have to do is tweet a photo/link to us or tag us on Instagram @clarifai using the hashtag #CrimesAgainstAI.


What you’ll win:

Every month, we’ll choose one random winner out of all the people who managed to stump our API to win a $25 Amazon gift card. We’ll holla at the winners on social media on the last Friday of every month, so make sure you follow us on Instagram, Twitter, and Facebook.


Clarifai Featured Hack: Make any picture punny with PicAPun

PicAPun is an app that turns any image into a hilarious and groan-worthy pun. Like, if you submit a picture of a field of wheat, you’ll get back a quote about how ‘grainy’ it looks. Get it? So punny!

There are hacks that try to change the world, cure diseases, end world hunger, solve the mysteries of the universe, etc., etc. Then, there are hacks like PicAPun, an app that delivers a relevant and punny witticism every time you feed it an image. Sometimes, the best hacks are the ones that make you laugh! If you’re in need of some levity today, check out the GitHub repo to try PicAPun.


WHY WE ❤ IT

We love it when people ‘build’ punny apps, especially when they’re open to ‘constructive’ criticism. Sometimes when you’re a developer, you just need a <br> from the serious things! You can read all about PicAPun’s development on Devpost.

HOW YOU DO IT

We asked Andrew Sack, Jose Bohorques, Joseph Cacioppo, Marcial Abrahantes, and Carl Amko, all University of Florida students, to explain how they built the PicAPun app.

Clarifai: What inspired your idea for PicAPun?

PicAPun Team: Swamphacks gave us the opportunity to use Clarifai for our 24 hours of hacking. One of our pun-loving members brilliantly thought that we could use the tags that Clarifai produces to create puns about pictures. Then, we made it happen!

What did you use to build your app?

Using one of our amazing resources, Clarifai, we created a Python API hosted on DigitalOcean that makes requests to the Clarifai API to analyze a picture and return its most confident tags. The same Python API then submits some of these tags to a pun generator website. The Python script obtains that website’s source code and, with BeautifulSoup, parses the HTML content to pull out the different generated puns, then places the submitted image, along with one of the puns chosen at random, on our own PicAPun.github.io domain.
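
For anyone who wants to recreate the flow, here’s a rough Python sketch of that pipeline. The pun-generator URL and the .pun selector are placeholders (the team doesn’t name the site they scraped), and get_top_tags stands in for the Clarifai call.

import random
import requests
from bs4 import BeautifulSoup

def get_top_tags(image_url):
    """Hypothetical helper: ask the Clarifai API for the image's most confident tags."""
    raise NotImplementedError

def puns_for_tag(tag):
    """Submit a tag to a pun-generator site and scrape the puns it returns."""
    html = requests.get("https://example-pun-generator.test", params={"q": tag}).text
    soup = BeautifulSoup(html, "html.parser")
    return [el.get_text(strip=True) for el in soup.select(".pun")]  # placeholder selector

def pun_for_image(image_url):
    """Tag the image, generate puns for its tags, and pick one at random."""
    tags = get_top_tags(image_url)
    puns = [pun for tag in tags for pun in puns_for_tag(tag)]
    return random.choice(puns) if puns else "No pun intended."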

What was the best part about working with Clarifai’s image recognition API?

This is such a remarkable API. It wasn’t too difficult to use, and its results were impressively correct. We had a great time using it.

Thanks for sharing, pun-dits!

To learn more, check out our documentation and sign up for a free Clarifai account to start using our API – all it takes is three lines of code to get up and running! We’re super excited to share all the cool things built by our developer community, so don’t forget to tweet @Clarifai to show us your apps.

And give Andrew, Jose, Joseph, Marcial, and Carl some props in the comments below. Until next time!


Clarifai Featured Hack: Classifi.me predicts your Myers-Briggs type based on your Instagram photos & videos

Classifi.me is an app that predicts your Myers-Briggs (MBTI) personality type based on your photos and videos. It also identifies your happiest photo and tells you whether you had an overall positive or negative year, so you can compare your actual feelings to the lies you’ve been telling on social media, hooray!

Classifi.me is a cool app that gleans insights about someone based on their social media photos and videos. Social media can tell you a lot about a person – usually that they’re big fat liars posting photos that make their lives seem more fab than they actually are (yes, we’re cynical). Through social media, Classifi.me can predict someone’s Myers-Briggs personality type, happiest photo, and overall sentiment and mood. It’s great for learning something new about yourself and even better for stalking that girl/guy you just matched with on Tinder.


Give it a try at Classifi.me, read more about it on Devpost, or check out the Github repo!

WHY WE ❤ IT

This app is scary accurate in predicting your Myers-Briggs (MBTI) personality type based on your Instagram photos. We took turns trying the app and then taking an actual Myers-Briggs test online and our results matched! But best of all, we now know the Myers-Briggs personality type of our office dog based on her Instagram pics – ENTJ, just like Voldemort and Tywin Lannister! Um, maybe we should back away slowly from the dog now …

HOW YOU DO IT

We asked Isabella Moreira, Tyler Leonhardt, and Eric Smithson, creators of Classifi.me and students at the University of Central Florida, to talk about their inspiration for the Classifi.me app.

Clarifai: How did you come up with the idea for Classifi.me?

Classifi.me Team: The idea hit us at 1 am at SwampHacks (UF’s hackathon) after pivoting from two previous poor ideas. We really wanted to use Clarifai and we were curious if people’s actual personalities matched what they portray themselves as on social media.

How did you build the app?

For the tech stack, we had a vanilla HTML/CSS/JS front-end (with a dash of jQuery) and a Node.js with Express backend. We used so many amazing APIs. Instagram, of course, for our source of pictures and videos. Clarifai, to help us take these pictures and videos and turn them into words (tags). And, a bunch of really awesome Machine Learning APIs by the folks over at uClassify (highly suggest checking it out) that would help compute the sentiment, mood, and Myers-Briggs personality types. The biggest challenge was being flooded with a bunch of raw data and making sense out of it. Thankfully, that’s what I love to do, so it was a lot of fun to work on! Am I magic? Duh.

We were working with a lot of different APIs, so when something broke on the server, we knew we could rule out Clarifai because Clarifai’s tagging API was very reliable and didn’t give us any trouble.
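
Just to make the data flow concrete: the team built Classifi.me in Node.js, but the idea boils down to photos in, tags out, tags in, personality out. Here’s a hypothetical Python sketch of that flow, with stand-in helpers for the Instagram, Clarifai, and uClassify calls.

def fetch_media_urls(instagram_handle):
    """Hypothetical: pull the user's recent photo and video URLs from Instagram."""
    raise NotImplementedError

def tags_for(media_url):
    """Hypothetical: ask Clarifai for descriptive tags for one photo or video."""
    raise NotImplementedError

def classify_text(text):
    """Hypothetical: send the combined tags to uClassify and get back
    sentiment, mood, and a Myers-Briggs type."""
    raise NotImplementedError

def profile(instagram_handle):
    # Turn every photo and video into words, then classify the words.
    tags = [tag for url in fetch_media_urls(instagram_handle) for tag in tags_for(url)]
    return classify_text(" ".join(tags))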

What personality type works best with Clarifai?

I really really liked how easy Clarifai was to use. Provide it a url or a bunch of urls and BOOM, words. I used the Node.js client and it was so simple! But seriously, Clarifai was a lot of fun to work with. I’m really excited to see what’s next for Clarifai and use your API more – you are the best!

Thanks for sharing, smarty pants!

To learn more, check out our documentation and sign up for a free Clarifai account to start using our API – all it takes is three lines of code to get up and running! We’re super excited to share all the cool things built by our developer community, so don’t forget to tweet @Clarifai to show us your apps.

And give Isabella, Tyler, and Eric some props in the comments below. Until next time!


Bring Digital Asset Management (DAM) into the future with automatic image tagging and keywording

Industry

Digital Asset Management (DAM)

Use case

Automatic keywording and image tagging

Result

2 successful pilot apps launched in less than a month

Asset Bank is a leading Digital Asset Management (DAM) solution from Bright Interactive, used by 20 of the FTSE 100 companies and over 500 organizations around the world.

Martin Wilson, co-founder of Bright Interactive and founder of DAM Plus, has over 20 years of experience in the software development industry. He conceived Asset Bank and managed its growth into one of the leading enterprise DAM systems on the market.

Challenge

Asset Bank is a leading Digital Asset Management (DAM) solution that provides enterprises with highly-configurable solutions to manage their media libraries. As one of the industry’s innovators, Bright Interactive wanted to build artificial intelligence into their DAM product and leverage image auto-tagging to create a better product and user experience for their customers.

1. FAST DEVELOPMENT: What’s the best way to quickly build visual recognition into your product without breaking the bank?

Getting started with visual recognition and machine learning can be both challenging and expensive. Bright Interactive needed a cost-effective and fast way to build very advanced artificial intelligence technology into its product without incurring technical or financial debt.

2. IMPROVE ACCURACY: How do you ensure that your visual recognition results are accurate, relevant, and get better over time?

Organizations that use DAM solutions are very particular about their keywords and taxonomies. One brand’s “trash” keyword could be another brand’s “treasure,” so to speak. Bright Interactive needed a visual recognition solution that was not only accurate but also adaptable to its customers’ needs.

“DAM applications will need to start adopting machine learning and visual recognition technologies in order to stay innovative and competitive in the market.” – Martin Wilson, Bright Interactive Co-Founder

Solution

Bright Interactive uses Clarifai in their DAM products to automatically tag and keyword images, adding value and saving time for clients.

Bright Interactive launched two pilot apps incorporating Clarifai’s visual recognition technology to auto-tag and keyword images. The first app is built into Bright Interactive’s flagship Asset Bank DAM product and integrated with Asset Bank’s REST API to add tags automatically to assets as they are uploaded to the platform. The second app is a standalone product with its own user interface called Quicktagger, which is designed to be used with any DAM solution and allows clients to change, accept, or reject automatically suggested tags. With these two pilots in place, Bright Interactive can deliver added value to their clients and save time traditionally spent manually keywording images.

Implementation

Minimum costs, maximum results

Building and testing a new product feature can be costly, both in terms of time and money, particularly when it comes to cutting edge technologies that require a lot of infrastructure and expertise. Bright Interactive worked with the Clarifai API to avoid technical debt and launch two pilot apps in less than a month, bringing their new products to market in a cost-effective and expedient way.

“Clarifai’s API is dead simple – here’s an image, give me some tags, done. It only took a day or two to finish the first prototype of our apps, and less than a month for both apps to be ready for clients to start using.”

Personalized results and tags

Clarifai’s core model can recognize over 11,000 general concepts, which is more than enough for many clients to extract value from the results. For example, the general tags for “outdoors” and “indoors” are very meaningful for a travel and tourism client. In Bright Interactive’s Asset Bank DAM product, these general tags are automatically added via the Clarifai API as users upload images.

However, many brands have unique taxonomies and keywords they want to use. For example, a major brand might wish to identify their headquarters by name, rather than the general “building” tag. The obvious next step for Bright Interactive’s pilot is to incorporate Clarifai’s custom training feature, which will be made widely available in 2016.

“Today, image recognition technology is at a point where many of our clients should find its results useful, particularly in the travel and tourism industry. The next evolution is to make the technology even more personalized for clients’ unique needs through custom models and tags.”
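
In the meantime, a lightweight way to get closer to a client’s vocabulary (a generic sketch, not Bright Interactive’s implementation) is to map the general tags Clarifai returns onto a brand’s own taxonomy; the mapping below is made up for the example.

CLIENT_TAXONOMY = {
    "building": "Acme HQ",           # hypothetical brand-specific keyword
    "outdoors": "Outdoor lifestyle",
    "indoors": "Interior",
}

def personalize(general_tags, taxonomy=CLIENT_TAXONOMY):
    """Replace general tags with client keywords where a mapping exists;
    tags without a mapping pass through unchanged."""
    return [taxonomy.get(tag, tag) for tag in general_tags]

# personalize(["building", "sky", "outdoors"]) -> ["Acme HQ", "sky", "Outdoor lifestyle"]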

Getting smarter, together

In its Quicktagger app, Bright Interactive built a user interface that allows clients to change, accept, or reject recommended tags. This feedback can then be returned to Clarifai and used to make the algorithm deliver even better results. While Bright Interactive is not yet sending feedback from its pilot apps, they believe that feedback loops like this are essential to get better results for their clients.

“The feedback resource that Clarifai currently supports and the upcoming custom training feature provide the potential for us to teach the algorithm to deliver more accurate results as we use the API. It makes Clarifai a more compelling technology for us than the other APIs that don’t have the ability to get smarter the more you use them.”

DIY with Clarifai

Now that you’ve been inspired by Bright Interactive’s innovation, it’s time to build your own app. Clarifai’s core model includes tags for over 11,000 concepts you can apply to your business. All it takes is three simple lines of code – sign up for a developer API account to get started for free!

Once you’ve signed up for a developer account, head over to Applications and make a new one.  Make sure you nab that Client ID and Client Secret:

[Screenshot: your application’s Client ID and Client Secret]

Now, head over to https://github.com/clarifai/clarifai-nodejs. There, you’ll find our Node.js client, which makes this process even easier. To set up your environment, download the clarifai_node.js file and stick it in your project.

Boo yah. You’re set up. Now head over to your Node project and just require the Clarifai client:

var Clarifai = require('./YOUR_PATH_HERE/clarifai_node.js');

Remember that Client ID and Client Secret you nabbed earlier? We’re gonna use those now. You can either paste them in this function directly, or save them in an environment variable.

Clarifai.initAPI('YOUR_CLIENT_ID', 'YOUR_CLIENT_SECRET');

Now for the fun part. You can easily tag an image with just 3 lines of code:

var imageURL = 'MY_IMAGE_URL';
var ourId = 'my great image'; // any string that identifies the image to your system
Clarifai.tagURL(imageURL, ourId, handler); // “handler” is your basic error handler function

You’re all set! Now you can easily make like Yelp and tag and sort your images to your heart’s desire. If you’d like to see a more in-depth example, check out clarifai_sample.js in the GitHub repo.


How visual recognition is used to augment doctors and diagnose patients

Industry

Healthcare

Use case

Medical imaging and assisted diagnosis

Result

99% accuracy in detection, 86.6% accuracy in diagnosis

i-Nside is disrupting the healthcare industry, building a suite of cutting-edge new tools to help doctors provide patients with the best medical care. Through its patent portfolio, applied artificial intelligence systems, and sustainable mobile technology, i-Nside is bringing reliable and cost-efficient medical devices to physicians and patients all over the world, whether they’re in the biggest city or the most remote desert.

Dr. Laurent Schmoll is the founder and CEO of i-Nside. He is an ear, nose, and throat (ENT) specialist with over 25 years of experience in the medical field. His company won the Health Pitch Challenge at Hacking Health Strasbourg in March 2015.


Challenge

i-Nside is a worldwide leader in endoscopic technology. With a small device you can attach to any smartphone, i-Nside can take professional-grade medical images of the human ear and use them to diagnose problems. With so many medical images on file, i-Nside wanted to build a diagnostic platform that would be able to assist doctors in identifying ear problems.

1. AVOID TECHNICAL DEBT: What’s the best way to build custom visual recognition into your product?

Getting started with visual recognition and machine learning can be both challenging and expensive. As a small team, i-Nside needed a cost-effective and easy way to build very advanced artificial intelligence technology into its product without incurring technical debt.

2. TRUSTWORTHY RESULTS: How do you ensure that your visual recognition results are accurate?

The stakes are pretty high when it comes to visual recognition and something as life-changing as a medical diagnosis. i-Nside needed a solution that would not only provide accurate results for a very esoteric data set (pictures of the insides of ears) but would also be able to improve with more training.

“Clarifai differentiates itself by providing tools and solutions that businesses and healthcare specialists can use right now, not in ten years. With Clarifai built into our product, we’re achieving 99% accuracy with our visual diagnostic tool!” – Dr. Laurent Schmoll, i-Nside CEO

Solution

i-Nside uses Clarifai’s visual recognition solution to build an accurate medical diagnosis platform that helps doctors all over the world provide the best medical care to their patients.

Diagnosing ear problems is a very specialized field of expertise within medicine. General practitioners usually refer people with ear problems to Ear, Nose, and Throat (ENT) specialists. i-Nside wanted to build a diagnostic tool that would assist general practitioners and nurses in identifying and treating ear problems accurately, thereby making the best medical care accessible to anyone in the world.

With over 100,000 ear images collected from their widely distributed endoscopic tool, i-Nside asked Clarifai to build a custom visual recognition model especially for ear pictures and video. Now, Clarifai’s visual recognition technology powers the software layer in i-Nside’s line of endoscopic hardware, enabling the tool to not only take pictures of the ear but also to analyze the results – all in one small, affordable, mobile package that anyone can use!

Implementation

Minimum costs, maximum results

i-Nside had to prove that an assisted diagnosis tool could work before they could get the funding and approvals to put it into production. However, like many startups, they faced a “chicken or egg” problem – they didn’t have the funding to build an expensive A.I. visual recognition product, but they wouldn’t receive more support unless they proved visual recognition worked.

i-Nside partnered with Clarifai to create a cost-effective custom visual recognition model that they built into a beta product to demonstrate the power and accuracy of visual recognition without breaking the bank.

“Even though we work with the best hospitals in the world, we are still a small startup. Clarifai helped us prove our idea in an affordable and speedy way.”

A custom model for unique needs

While Clarifai’s core model can recognize over 11,000 general concepts, ear diseases unsurprisingly are not among those core tags. i-Nside needed a special custom model built for the sole purpose of analyzing ear patterns.

Clarifai’s team of data scientists used their expertise to train a custom model on i-Nside’s batch of ear images. It only took a couple of weeks for the custom model to be fully trained to recognize ear problems with near perfect accuracy. The i-Nside team was then able to access the custom model through Clarifai’s API with just a few lines of code.

“We looked into IBM Watson, but they were not a cost-effective option. Clarifai gave us stellar customer support and a simple, well-documented API that allowed us to plug-and-play without incurring financial or technical debt.”
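
To give a feel for how small that integration can be, here’s a hypothetical sketch of the flow: send an endoscopic image to the custom model, get back per-condition probabilities, and surface the most likely finding for the clinician to review. The model name, helper, and labels are all made up; this is not i-Nside’s actual code.

def classify_with_custom_model(image_url, model="ear-conditions"):
    """Hypothetical stand-in for a Clarifai API call against i-Nside's custom
    ear model; expected to return a dict of {condition_name: probability}."""
    raise NotImplementedError

def suggest_diagnosis(image_url):
    """Return the most likely condition and its confidence for review."""
    scores = classify_with_custom_model(image_url)
    top = max(scores, key=scores.get)
    return top, scores[top]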

Changing the world, one image at a time

Now that Clarifai’s custom model is powering their endoscopic diagnosis tool, i-Nside can deliver accurate diagnoses to doctors in every corner of the world. Traditionally underserved markets like some parts of Africa, Asia, and South America now have access to the best specialist knowledge in medical care. As i-Nside continues to collect more endoscopic imagery, Clarifai’s model gets smarter and delivers even more accurate results by learning from the feedback.

And endoscopic diagnoses are just the start. i-Nside is hoping to expand both their imaging hardware and their artificially intelligent diagnosis tool to other fields of medicine like oncology (cancer) and radiology (medical imaging like x-rays).

“Our diagnosis tool is meant to augment doctors, not replace them. We decided to work with Clarifai because our philosophies are very aligned – we believe that artificial intelligence can amplify human intelligence, but it’s not a substitute.”

DIY with Clarifai

Now that you’ve been inspired by i-Nside’s mission to change the world, it’s time to build your own app. Clarifai’s core model includes tags for over 11,000 concepts you can apply to your business. All it takes is three simple lines of code – sign up for a developer API account to get started for free!

Once you’ve signed up for a developer account, head over to Applications and make a new one.  Make sure you nab that Client ID and Client Secret:

[Screenshot: your application’s Client ID and Client Secret]

Now, head over to https://github.com/clarifai/clarifai-nodejs. There, you’ll find our Node.js client, which makes this process even easier. To set up your environment, download the clarifai_node.js file and stick it in your project.

Boo yah. You’re set up. Now head over to your Node project and just require the Clarifai client:

var Clarifai = require('./YOUR_PATH_HERE/clarifai_node.js');

Remember that Client ID and Client Secret you nabbed earlier? We’re gonna use those now. You can either paste them in this function directly, or save them in an environment variable.

Clarifai.initAPI('YOUR_CLIENT_ID', 'YOUR_CLIENT_SECRET');

Now for the fun part. You can easily tag an image with just 3 lines of code:

var imageURL = 'MY_IMAGE_URL';
var ourId = 'my great image'; // any string that identifies the image to your system
Clarifai.tagURL(imageURL, ourId, handler); // “handler” is your basic error handler function

You’re all set! Now you can easily make like Yelp and tag and sort your images to your heart’s desire. If you’d like to see a more in-depth example, check out clarifai_sample.js in the GitHub repo.


Clarifai Featured Hack: Claridrobe is your new personal stylist A.I.

Claridrobe is a fashion app that takes clothing from your very own wardrobe and recommends outfits for you to wear based on weather, date, time, and style preference. You know you’ve made it in life when someone else dresses you every day.

Choosing your own outfit on the daily is so last season. Everyone who’s anyone knows that personal stylist A.I.s are all the rage this spring. That’s why you need the latest hack built with Clarifai – Claridrobe. It takes all your clothes and mixes and matches them to make recommendations on what you should wear based on time, date, weather, and style preference. 

WHY WE ❤ IT

Not only does this app save you precious “hit snooze on the alarm five times before you finally get out of bed” time, but it also helps you explore your closet in new and unexpected ways! They say people only wear 20% of their wardrobe regularly. This app helps you incorporate the other 80%, so you no longer have a closet full of regrets. The interface is really sleek, too – exactly what you’d expect from a fashion app! Read more about Claridrobe on Devpost or check out the GitHub repo and unleash your inner fashionista.

HOW YOU DO IT

We asked Rivu Khoda and Marvin Wu, creators of Claridrobe, to explain who they’re wearing right now (Clarifai hoodies by Karl Lagerfeld, of course) and how they built the Claridrobe app.

Clarifai: What inspired your idea for Claridrobe?

Rivu: We realized that there were a lot of clothes in our wardrobe that we didn’t use.  Whenever we go shopping, it’s important for us to know how the clothes we buy fit together with everything we have. Also, instead of always having to wear the “usual” combination of clothing, we thought, “Why not add some spontaneity and randomness?” Oftentimes, we’re just afraid to try out new things before we even give our set of clothing a chance.

Marvin: I have a passion for web development, and I want to create apps that harness the power of technology to make our lives a little bit easier. Waking up and getting dressed is a challenge that everyone faces in the morning. As students, we know the struggle of getting the day started quickly, and so we wanted to make something that could help people be as efficient as possible.

What did you use to build your app?

Rivu: We used Node.js, HTML, CSS, MongoDB, Express.js, Angular.js, JavaScript. We had a hard time understanding Angular.js since we were learning it on the fly. Connecting the RESTful API and the front end was also definitely another major challenge because we had never really done it before.

Marvin: We used the MEAN stack to power the app. I worked on the front end to design each page. My biggest challenge was developing the UI using Angular.js for the first time.

What was the best part about working with Clarifai, other than the chic and fashionable swag we gave out?

Rivu: The best part about Clarifai was that it was simple enough to use and gave us great results. It was even able to tag the picture of the clothes that I was wearing.

Marvin: I liked how you could take any picture and Clarifai would just KNOW what you meant.

Thanks for sharing, fashionistas!

To learn more, check out our documentation and sign up for a free Clarifai account to start using our API – all it takes is three lines of code to get up and running! We’re super excited to share all the cool things built by our developer community, so don’t forget to tweet @Clarifai to show us your apps.

And give Rivu and Marvin some props in the comments below. Until next time!


Moderate, filter, or curate adult content with Clarifai’s NSFW model

Have you ever been innocently browsing the interwebs when suddenly you see something you can’t unsee? Yes, we’ve been there, too. That’s why we created the Not Safe for Work (NSFW) adult content recognition model, available via the Clarifai API. Our NSFW model is the ultimate cockblock … in a good way!

With the Not Safe For Work (NSFW) adult content moderation model, you’re able to determine if an image or video contains nudity or semi-nudity and take appropriate action – whether that means filtering out the image or saving it to your spank bank. We don’t judge.

When an image is passed through the NSFW model, it receives a probability rating for Safe for Work (SFW) or Not Safe for Work (NSFW). Not only does the model handle safe and explicit content, but it also delivers sensible intermediate responses for bikini and lingerie photos. That way, you can decide how you want to treat images across a wide spectrum of nude and semi-nude states.
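
In practice, deciding how to treat images usually just means picking probability cutoffs that match your tolerance. Here’s a minimal sketch; the cutoffs are illustrative examples, not Clarifai recommendations.

def action_for(nsfw_probability, block_at=0.85, review_at=0.50):
    """Map an NSFW probability to an action; thresholds are examples only."""
    if nsfw_probability >= block_at:
        return "block"
    if nsfw_probability >= review_at:
        return "send to human review"  # bikini/lingerie-type borderline content
    return "allow"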

You’d be surprised at the accuracy with which our model can distinguish NSFW from SFW content. Take this sexy pair of lamps, for instance:

[Image: a pair of lamps]

We know what you thought at first glance – get your minds outta the gutter! Even though this picture kind of looks NSFW, our model can detect that it’s not at all. In fact, our model can tell that this photo is of lighting! Pretty impressive, huh?

Now, take this pair of actual boobs … Just kidding, we’re not posting a pic of boobs in our company blog. Who do you think we are, [insert terrible, less awesome competitor name here]?! Just go ahead and sign up for a free Clarifai account and test your own sketchy photos with our API.

You’ll notice that it’s tough to fool our NSFW model. Remember when Facebook thought an elbow was a boob?

[Image: the elbow photo Facebook flagged]

Or when Instagram thought this cake was a bunch of nipples?

[Image: the cake photo Instagram flagged]

Our NSFW model doesn’t make such rookie boob mistakes. And we’re particularly good at recognizing butts and dicks, though I’ll spare you the visual demonstration for now. Ok, maybe just one dick pic:

[Image]

We’ve already had quite a few developers and businesses build cool applications with our NSFW model – here are some of the  more common use cases we’ve seen:

Protecting Marketplaces

The NSFW model comes in especially handy if you’re running a site that gets a lot of user-generated content, like auction sites or stock photography marketplaces. When users upload an image or video, you can use the NSFW model to check for nudity and then filter or flag the unwanted content for removal.

Protecting Users

We’ve also seen a number of social media companies and dating sites using our NSFW model to protect their users from unwanted (and usually unsolicited) nudity. For instance, one dating app uses the NSFW model to say no to dick pics and prevent sexual harassment on their platform.

Curating Adult Content

While most people who use our NSFW model want to moderate and filter out adult content, some people want to find and curate it instead! To each their own, of course. Some pr0n sites have used our NSFW model to find the best moments in videos to show as thumbnails to entice their viewers, while other adult sites highlight user-generated adult content with our model.

Now that you know all about NSFW, it’s time to get started and browse our documentation for NSFW, or better yet, read this great tutorial Cassidy wrote on how to create your own nudity filter!


Clarifai Featured Hack: How healthy (or unhealthy) is your meal? Foodifai knows!

Foodifai is an app that tells how healthy (or, let’s face it, more like unhealthy) your food is and gives you points proportional to the “health factor.” You can compete with your friends and totally judge their eating habits, yay!

Staying healthy is hard, especially when you work at a tech company that gives you infinite free snacks. Sure, there are apples and stuff, but who’s going to reach for that when there’s a Butterfinger bar on the shelf next to them? Ok, our CEO Matt would choose the apple, but the rest of us mere mortals need a little more motivation to stay healthy. And motivation is clearly best when it’s in the form of heated competition among your friends!


That’s why the Foodifai app is so cool. It’s an app that helps you stay in shape by allowing you to take a picture of your food to earn health points dependent on your food’s nutritional value. You can compete with your friends and family to gain the top spot and quietly judge one another. The more unhealthy food you eat, the more points you lose.

But perhaps best of all, the app is also equipped with a cool “trash talk” feature, which delivers a heaping dose of snark every time you eat unhealthy food. It’s the verbal equivalent of that Amy Schumer diet sketch.

WHY WE ❤ IT

There’s so much to love about this app – it combines our favorite elements of “being better than our friends” and “not eating ourselves into early diabetes.” You know, the usual. You can check out the app on Devpost and GitHub while you’re chomping down on that donut.

HOW YOU DO IT

We caught up with Parikshit Tiwari, actual marathoner AND hackathoner, of the Foodifai team to ask him how he maintains that gorgeous, chiseled Adonis physique and how he built such a fun and useful app.

Clarifai: What inspired your idea for Foodifai?

Parikshit: The idea was conjured up at SwampHacks when last-minute creativity struck my team. We’ve been gaining some weight lately, and we wondered, what are we missing? Probably some six-pack abs. But most importantly, some inspiration to eat healthy. So we thought, let’s build an app that will motivate us to get back in shape and stay fit.

What did you use to build your app?

We built a native Android app, which uses Clarifai APIs for image tagging. We also used Parse to implement the login and register functionality, along with storing the data for the leaderboard and groups. The biggest challenge was connecting the Parse and Clarifai APIs – after that, it was smooth sailing.

Currently, we’re using mock nutrition data. However, we look forward to connecting the application to more authentic nutrition data from sources like NSDA API, Food API, etc.
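
To show how simply the pieces can fit together, here’s an illustrative Python sketch of the scoring idea; the real app is native Android, and the nutrition table below is mock data, just like the current version of the app uses.

MOCK_NUTRITION = {"salad": 8, "apple": 6, "pizza": -4, "donut": -7}  # mock values

def health_points(food_tags):
    """Sum points for the recognized foods: healthy tags earn points,
    unhealthy tags lose them, and unknown tags score zero."""
    return sum(MOCK_NUTRITION.get(tag, 0) for tag in food_tags)

# health_points(["donut", "coffee"]) returns -7, plus a side of trash talk.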

What’s better, working with the Clarifai API or eating a tub of ice cream?

The Clarifai API was so easy to use! And I am still amazed at the accuracy of the tags, although it does come up with some funny tags sometimes.

Thanks for sharing, Parikshit!

To learn more, check out our documentation and sign up for a free Clarifai account to start using our API – all it takes is three lines of code to get up and running! We’re super excited to share all the cool things built by our developer community, so don’t forget to tweet @Clarifai to show us your apps.

And give Parikshit some props in the comments below. Until next time!


Uncover buried stock photography treasure with automatic image tagging & keywording

Industry

Stock photography, marketplace, e-commerce

Use case

User-generated content tagging & site search

Result

30% more tags per image, improved user experience

Foap is a photography marketplace that empowers mobile photographers from all over the world to make money from their pictures. Smartphones have been a great equalizer in photography, making high-quality shots attainable for every aspiring photographer. Now, Foap is connecting “everyday photographers” to brands who value authentic content, like Pernod Ricard, Hyatt, Intercontinental, TD Bank, and Chobani.

David Los is the founder and CEO of Foap. He is a technology entrepreneur from Stockholm University with a ton of experience launching successful startups.

Challenge

Foap is a fast-growing photography marketplace that receives thousands of new user-uploaded photos every day. In order to connect brands with the images they’re looking for, Foap had to find a way to consistently tag all these photos and make them easily searchable.

1. HIGH VOLUMES: How do you know what you’re getting from user-generated content?

Because most of Foap’s photos come from individual photographers, the tags and metadata attached are often inconsistent. For example, an image of a dog might be tagged “dog” or it might be tagged “fuzzy wuzzy waggy tail fur baby”. They needed a solution that could “see” each image and apply the appropriate tags in an accurate and consistent way.

2. VALUE EXTRACTION: How do you ensure your users find what they’re looking for?

Foap is a marketplace, so it serves both buyers and sellers. With image recognition, Foap wanted to come up with new and exciting ways to add value to both categories of their users.

“Clarifai helps increase the chances for people to sell their photos by surfacing the right image to the right buyer at the right time. Clarifai is now the absolute core of Foap’s tagging mechanism, making images findable in the Foap marketplace.” – David Los, Foap Founder & CEO

Solution

Foap uses Clarifai’s visual recognition solution to scale its marketplace and improve the user experience for both buyers and sellers.

When a user uploads an image to Foap, the app automatically suggests relevant tags for every image using the Clarifai API. Users still have the option to add their own tags as well. That way, Foap ensures consistency across user-generated content from different creators but also allows for creativity and flexibility in the tags.

Clarifai’s tags are also baked into the Foap app’s search so that brands and photographers are able to find exactly what they’re looking for. Now, not only is it easier for photographers to manage their portfolios and make their offerings more attractive, it’s also easier to surface the right content to buyers so they make more purchases.
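
Conceptually, the upload flow looks something like the sketch below: suggest tags, let the user add their own, and index the combined set for search. Here, suggest_tags is a hypothetical stand-in for the Clarifai call, and the plain dictionary stands in for Foap’s real search backend.

SEARCH_INDEX = {}  # tag -> set of photo ids (stand-in for the real search backend)

def suggest_tags(image_url):
    """Hypothetical: return Clarifai's suggested tags for the uploaded photo."""
    raise NotImplementedError

def ingest_photo(photo_id, image_url, user_tags):
    """Combine suggested and user-supplied tags, then index the photo for search."""
    tags = set(suggest_tags(image_url)) | set(user_tags)
    for tag in tags:
        SEARCH_INDEX.setdefault(tag, set()).add(photo_id)
    return sorted(tags)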

Implementation

Improving the user experience

Taking photos is fun – filling in the metadata? Not so much. Using Clarifai to suggest tags to users not only helps categorize and curate user-generated content, it also creates a great user experience. Foap’s users want to earn money from the photos on their phone in as seamless a way as possible, and suggested keywords are an easy and quick way for users to make sure their photos are found by the right buyers. And not only do the suggested tags connect users with buyers, they also enable users to search through their own archives and organize their libraries.

“In a stock photo marketplace, missing or incorrect keywords means sellers missing opportunities to connect with buyers. With Clarifai, our users get relevant suggested tags that make it easy for them to keyword their photos. People are adding 30% more tags to each image with greater accuracy than ever before.”

Building a two-way marketplace

A traditional stock photo marketplace allows users to upload images and buyers to browse and purchase those images. But Foap’s marketplace is especially innovative because it implemented a fun and creative way, called “Foap Missions,” for buyers to source images directly from creators. Missions are prompts given by brands to encourage submissions from creators. These uploads are also tagged with the Clarifai API, making them easy to search and sort.

Quick and easy implementation

Foap’s founder, David Los, implemented Clarifai as a cornerstone of his product for its accuracy, flexibility, and affordability. With a team of five developers, David was able to get Clarifai’s core model up and running in only a couple hours and a few lines of code. By investing in Clarifai, David’s team saved 40 hours per week in manual keywording and moderation!

“Clarifai saves us time and money, eliminates manual keywording, and creates a user experience that our customers love. It’s amazing how Clarifai’s solution ‘just works.’ Plus, their customer service is amazing!”

DIY with Clarifai

Now that you’ve been inspired by Foap’s solution, it’s time to build your own. Clarifai’s core model includes tags for over 11,000 concepts you can apply to your business. All it takes is three simple lines of code – sign up for a developer API account to get started for free!