How MRM//McCann built a smart banner ad that sees what you see

Industry

Advertising Technology

Use case

Real-time adaptable digital advertising

Result

Created a banner ad that could adapt its message to millions of different images in real time

MRM//McCann is one of the world’s largest global digital marketing agencies. The agency provides original content creation and digital strategy for customers like Mastercard, the U.S. Army, Nikon, and Microsoft. As a top digital marketing agency, MRM//McCann innovates with the latest technologies and uses them to power marketing campaigns and deliver more value to clients.

Marcus Johansson is a technical director at the digital agency MRM//McCann. He runs technical development for MRM//McCann’s creative campaigns and is at the forefront of designing technology-driven marketing campaigns.

Challenge

How can you serve an ad that sees what a customer sees?

Contextual relevance is key to standing out among the vast range of products advertised online every second. MRM//McCann’s client Vattenfall, one of Europe’s biggest energy companies, needed a way to show potential customers the right product at the right time – in real time.

Besides their core business as an energy producer, Vattenfall also offers consumers smart solutions for their homes. Their online store sells a wide range of products that customers can control through their smartphones as well as products that help reduce energy consumption. In order to get these products in front of potential customers, MRM//McCann came up with a creative, interesting, and relevant way to showcase Vattenfall’s offerings through online banner advertising.

“Using Clarifai’s visual recognition solution, we were able to create a banner ad which could adapt its messages to millions of different images, in real-time.” – Marcus Johansson, Technical Director of MRM//McCann

Solution

MRM//McCann used Clarifai to create a banner ad that can adapt its message to millions of images in real time, connecting potential customers with relevant products to purchase.

MRM//McCann first identified a target site on which they could run “smart” banner ads for Vattenfall’s vast offering of energy-friendly consumer products. Hemnet, Sweden’s largest real estate website, offered the perfect platform for Vattenfall to target potential customers. In order to get Hemnet visitors to consider Vattenfall’s smart home products with relevant ads, MRM//McCann combined real-time user data from Hemnet.se with Clarifai’s real-time image recognition API.

Implementation

Automatically “see” and understand images

Two million unique visitors browse Hemnet every week to look at images of interior and exterior design for purchasing decisions or inspiration. Every day, thousands of new images are posted. In order for MRM//McCann to build a banner ad that “sees” what a visitor sees, they needed to first automatically understand what was in every image.

Using Clarifai’s image recognition technology, MRM//McCann was able to combine real-time user data from Hemnet.se with over 11,000 image recognition concepts like “indoor,” “outdoor,” “pool,” and “lighting.” They then cross-linked these concepts to a list of banner ads for products relevant to the image a Hemnet visitor was looking at. The result was a banner ad that could adapt its message in real time according to what the visitor sees.
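Conceptually, that cross-linking step is just a lookup from recognized concepts to product banners. Here’s a minimal sketch of the idea in JavaScript – every name below is hypothetical, not MRM//McCann’s production code:

// Hypothetical mapping from Clarifai concepts to Vattenfall product banners.
var bannersByConcept = {
  'pool': { product: 'Pool heat pump', creative: 'banner-pool.html' },
  'lighting': { product: 'Smart lighting kit', creative: 'banner-lighting.html' },
  'outdoor': { product: 'Outdoor energy monitor', creative: 'banner-outdoor.html' }
};

// Given the concepts Clarifai returned for the listing photo the visitor is
// viewing (ordered by confidence), serve the first banner we have a match for.
function pickBanner(concepts) {
  for (var i = 0; i < concepts.length; i++) {
    if (bannersByConcept[concepts[i]]) {
      return bannersByConcept[concepts[i]];
    }
  }
  // Fall back to a generic creative when nothing matches.
  return { product: 'Smart home starter kit', creative: 'banner-default.html' };
}

Calling pickBanner(['indoor', 'pool']) would serve the pool heat pump banner.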

“Through the Clarifai API we were able to create a truly relevant and innovative solution for one of our clients.”

Connecting buyers with products at the right time

MRM//McCann crafted a truly relevant ad experience for Vattenfall by enabling them to connect buyers with products based on their current online behavior. With Clarifai’s Visual Search product, MRM//McCann was able to understand millions of images on Hemnet.se, interpret consumer behavior in real time, and serve relevant ads based on that behavior.

Traditional banner ads and product recommendations are served based on either past online behavior or similar customer behavior. For example, if someone browsed or purchased an item in the past, they are advertised the same item in the future. Or, if someone of a certain age, location, and gender bought something in the past, a new visitor with a similar profile will be served that advertisement in the future. These two means of serving ads and recommendations are not the most relevant; nothing is a better indicator of a consumer’s current interests than their current browsing behavior.

“For the first time ever, we were able to use image recognition in a large-scale ad campaign. This is truly a leap into the next era of banner advertising.”

Quick and easy implementation

MRM//McCann’s Technical Director, Marcus Johansson, chose Clarifai for its accuracy, flexibility, and ease of use, as well as the helpful support Clarifai provided during setup. With a team of just one developer, Marcus launched this innovative ad campaign in a little over two weeks from start to finish.

“We chose Clarifai for their accuracy in dealing with image recognition and for their API, which was very easy to work with right from the start.”

DIY with Clarifai

Now that you’ve been inspired by MRM//McCann’s innovative banner ads, it’s time to build your own app. Clarifai’s core model includes tags for over 11,000 concepts you can apply to your business. Or, you can use Clarifai’s Custom Training solution to teach our AI new concepts. All it takes is three simple lines of code – sign up for a developer API account to get started for free!

Once you’ve signed up for a developer account, head over to Applications and make a new one. Make sure you nab that Client ID and Client Secret – you’ll need them in a moment.


Now, head over to https://github.com/clarifai/clarifai-nodejs. There, you’ll find our Node.js client, which makes this process even easier. To set up your environment, download the clarifai_node.js file and stick it in your project.

Boo yah. You’re set up. Now head over to your Node project and just require the Clarifai client:

var Clarifai = require('./YOUR_PATH_HERE/clarifai_node.js');

Remember that Client ID and Client Secret you nabbed earlier? We’re gonna use those now. You can either paste them in this function directly, or save them in an environment variable.

Clarifai.initAPI('YOUR_CLIENT_ID', 'YOUR_CLIENT_SECRET');
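If you’d rather keep credentials out of your source code, read them from environment variables instead – a minimal sketch, assuming you’ve exported CLARIFAI_CLIENT_ID and CLARIFAI_CLIENT_SECRET in your shell:

// Same call as above, but with credentials pulled from the environment.
Clarifai.initAPI(process.env.CLARIFAI_CLIENT_ID, process.env.CLARIFAI_CLIENT_SECRET);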

Now for the fun part. You can easily tag an image with just 3 lines of code:

var imageURL = 'MY_IMAGE_URL';
var ourId = 'my great image'; // any string that identifies the image to your system
Clarifai.tagURL(imageURL, ourId, handler); // “handler” is your basic error handler function
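Curious what handler might look like? Here’s a minimal sketch, assuming the Node-style (err, res) callback used in clarifai_sample.js – the response field names below are an assumption, so check the sample for the exact shape:

// Logs the tags Clarifai returned, or the error if the request failed.
function handler(err, res) {
  if (err) {
    console.error('Clarifai request failed:', err);
    return;
  }
  // Field names assumed from the v1 API response; see clarifai_sample.js.
  var tag = res.results[0].result.tag;
  console.log('Tags: ' + tag.classes.join(', '));
}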

You’re all set! Now you can easily tag and sort your images to your heart’s desire. If you’d like to see a more in-depth example, check out clarifai_sample.js in the GitHub repo.


Clarifai Featured Hack: Upload a photo to this app to find a lost pet

FindALostPet is a website and app where people can, well, find their lost pets. Next time your dog gets lost chasing squirrels, just upload Fido’s photo to FindALostPet and Clarifai’s image recognition technology can help connect you to sightings of dogs similar to yours.

Do you ever have one of those absent-minded days where you misplace everything – your keys, your wallet, your cell phone, your exotic longhair Persian cat? Well, we have a solution to your lost pet problem (you’re on your own with the other stuff). FindALostPet is a free online lost pet database where you can help bring pets home. The app and website allow members to post pets they have lost or report sightings of stray pets with pictures and location. Clarifai helps categorize these sightings and route them to the correct pet owners.


WHY WE ❤ IT

Any app that can reunite us with our fur babies has a special place in our hearts. Seriously, have you seen how cute our office dog is? Not only that, but the developer who created the app is only fifteen years old – talk about fresh, young talent! Check out the live app here.

HOW YOU DO IT

We asked Laef Kucheran, the Canadian programmer behind FindALostPet, to explain his inspiration for the project.

Clarifai: What inspired your idea for FindALostPet?

Laef: I really can’t remember. I remember that I proposed it to five or so friends and all but one shot it down. Then, a couple months later, my friend messaged me saying that some of the other projects I was working on were cool, but he really wanted to know when I would start on the lost pet thing.

What did you use to build your app?

The back-end is written in PHP (my favourite) using several different tools and APIs. We use Stamen Design for map tiles, LeafletJS to stitch the tiles together, and the Google Maps API for geocoding. And of course, we use Clarifai for image recognition.

Really, the biggest problem has not been the programming, which comes naturally, but spreading the project. We crowd-funded it on Indiegogo and got some media attention for that, but since then, we’ve been at a bit of a loss for what to do next. I expect most programmers and app developers come up against this problem, and like a good programmer, my first thought is to just make the service better, cooler, and easier – for example, making sightings easier to post by using Clarifai to route them behind the scenes to the correct pet owners.
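To make that routing idea concrete, matching a new sighting to lost-pet reports could be as simple as scoring by shared Clarifai tags and distance. A hedged sketch – hypothetical names, not FindALostPet’s actual code:

// Great-circle distance between two {lat, lon} points, in kilometers.
function distanceKm(a, b) {
  var R = 6371, toRad = Math.PI / 180;
  var dLat = (b.lat - a.lat) * toRad;
  var dLon = (b.lon - a.lon) * toRad;
  var h = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(a.lat * toRad) * Math.cos(b.lat * toRad) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Score lost-pet reports against a sighting: more shared tags first, then
// nearest; drop anything with no tag overlap or more than 25 km away.
function matchSighting(sighting, reports) {
  return reports
    .map(function (report) {
      var shared = report.tags.filter(function (t) {
        return sighting.tags.indexOf(t) !== -1; // tags come from Clarifai, e.g. 'dog'
      });
      return { report: report, score: shared.length, km: distanceKm(sighting, report) };
    })
    .filter(function (m) { return m.score > 0 && m.km < 25; })
    .sort(function (a, b) { return b.score - a.score || a.km - b.km; });
}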

What was the best part about working with Clarifai?

Really, it’s just extraordinarily easy to use, easy to learn and understand, and quite useful for projects like these. There’s nothing to complain about – it’s got good support, and, for a programmer, no complaints is pretty much the best compliment that can be given.

Thanks for sharing, Laef!

To learn more, check out our documentation and sign up for a free Clarifai account to start using our API – all it takes is three lines of code to get up and running! We’re super excited to share all the cool things built by our developer community, so don’t forget to tweet @Clarifai to show us your apps.

Give Laef some props in the comments below and hit up the FindALostPet team on Twitter: @FindALostPet, @betterstefan, @brandonmayu, @betteralin, and @betterlaef. Until next time!


Clarifai Featured Hack: AiPet is the robot pet you can build yourself

AiPet is an artificially intelligent robot pet that can see and interpret its environment using the Clarifai API and interact and follow commands using the Amazon Alexa API. Who knew the harbinger of the robot apocalypse would be so cute?

AiPet, or “Pi” as its creator so lovingly nicknamed it, is an artificially intelligent GoPiGo robot pet that can see, interpret, and interact with its environment. With the holidays coming up, it’s pretty much the perfect gift for the person in your life who really wants a pet but also really shouldn’t be entrusted with an actual living creature. Rather than tell you about all the cool pet tricks “Pi” can do, we’ll show you instead!

“Pi” can engage in basic conversation:

“Pi” can follow commands:

“Pi” can see and explain the world it sees:

WHY WE ❤ IT

We love it when hackers pair our visual recognition API with other cool tech like Amazon’s Alexa and the GoPiGo robot. The more, the merrier! Bonus points – Uday also documented his AiPet adventure in an informative Medium post.

HOW YOU DO IT

We asked Uday Sandhar, creator of AiPet, to talk about his inspiration for building a lovable little smart robot and how he did it. Uday is an ex-founder and mobile and web product leader who loves building and growing products that make users’ lives 100X better through simple, intuitive design and built-in intelligence.

Clarifai: What inspired your idea for AiPet?

Uday: I came across the GoPiGo robot and Clarifai, and I thought it would be cool if my son could have a super-smart pet.

What did you use to build your app?

I used Python and the Python modules shipped with GoPiGo to control the robot and take pictures. I then programmatically uploaded those pictures to S3 and created an AWS Lambda function to call the Clarifai API with the picture from S3. I also created custom skills, along with intents and utterances, for Amazon Alexa so I could interact with the robot verbally.

Since I was utilizing the Python modules shipped with the robot, I used the Weaved service to expose the robot over SSH, so the Python Lambda function could remotely SSH into the robot and run its Python modules to control its movement and camera.
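Uday’s build is Python end to end, but the Clarifai step itself is the same call as in the tutorial above. A hedged Node.js sketch of tagging a snapshot once it lands in S3 – the bucket, key, and file names are hypothetical:

var Clarifai = require('./clarifai_node.js');
Clarifai.initAPI(process.env.CLARIFAI_CLIENT_ID, process.env.CLARIFAI_CLIENT_SECRET);

// A public (or pre-signed) S3 URL is enough, since Clarifai fetches the image itself.
var snapshotUrl = 'https://s3.amazonaws.com/my-robot-bucket/latest.jpg';
Clarifai.tagURL(snapshotUrl, 'aipet-snapshot', function (err, res) {
  if (err) return console.error('Tagging failed:', err);
  // Hand the tags to the Alexa response so "Pi" can say what it sees.
  console.log(res);
});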

What was the best part about working with Clarifai?

The Clarifai API was easy to understand, quick to implement, and free to use for my prototype.

Thanks for sharing, Uday!

To learn more, check out our documentation and sign up for a free Clarifai account to start using our API – all it takes is three lines of code to get up and running! We’re super excited to share all the cool things built by our developer community, so don’t forget to tweet @Clarifai to show us your apps.

And give Uday some props in the comments below. Until next time!


Clarifai Featured Hack: ClownSpotter is an app that helps you spot (and avoid) killer clowns

By now, you’ve probably heard of the spate of killer clowns cropping up everywhere around the world. And if you haven’t, don’t worry – we’ve got you covered with ClownSpotter, a clown-tracking web app that keeps tabs on creepy unwanted guests. Find out where hostile clowns are lurking and avoid their funny business … or die.

If you’ve been following the news lately, you might have noticed there’s a worldwide infestation of killer clowns biding their time to murder you in your sleep. Yes, it’s true. This horrifying trend is the inspiration for ClownSpotter, a web app that lets you track, and thus avoid, killer clowns.

Users submit sightings of clowns in their area by uploading pictures along with the date and location. Of course, Clarifai’s visual recognition API is used to verify that the sighting is indeed a clown and not just some other horrifying beast in the wild. Check out the GitHub repo to look at where clowns have been spotted all across the globe!


WHY WE ❤ IT

We thoroughly enjoy hacks that help us avoid things we don’t like. We hate clowns, as does anyone with a soul and a healthy regard for human life, so naturally, we loved ClownSpotter. 

HOW YOU DO IT

In the spirit of spooky Halloween stories, we asked the ClownSpotter team to tell their not-so-terrifying tale of clown-tracking adventure in narrative prose instead of our usual Q&A format. Enjoy!

ClownSpotter Team: We got to the hackathon bright-eyed, with a laundry list of ideas to make a reality. We originally went with an idea that involved matching physical symptoms to a database of conditions using the image tag matching capabilities of the Clarifai API. At around midnight, however, we realized this project was not feasible, as we did not have the resources that we needed. However, we were set on completing a project by the end of the event.

With only eleven hours, less than half of the event itself, left until our projects were due, we did the only thing we could do:

We panicked.

We took breathers and spent some time thinking about it. We walked around the building, ate some cookies, got some free T-shirts, and browsed through Facebook. We noticed a trend on our Facebook timelines — scary clowns roaming around parks and backyards. The more videos we saw of people getting chased by scary clowns, the harder we laughed. One of our team members, Jose, revealed to us that he was afraid of clowns. We could not believe that this trend had made it outside of the United States, to places like Great Britain and Australia.

At one point during the night, we went for cupcakes at the Women’s session and decorated a cupcake for each member of our team. You could guess what we made our team member Jose …

A CUPCAKE WITH THE FACE OF A CLOWN. Needless to say, he was not amused.

Then, Susana convinced us that we should just do the funny idea of tracking clown sightings, and ClownSpotter was born.

We honestly did not think that our project would win four awards. We were building it for the exact same reason that people go to these kinds of events – to have fun. It was something we had never done before; for school, we were always assigned projects that were practical, yet boring. This was a chance for us to work on something that was relevant and fun to us. Along with that, we all learned new technologies and how to work as a team. We hope to get the site running soon and show the world something that we are proud to have made!

After some hard work, we created a website that allows users to report creepy clown sightings and later see them on a map! Our website takes information like the location and the time to generate a database of clown sightings, which is then used to create a visual representation on a world map. This will allow our users to see where others in their area have spotted hostile clowns.

Our project is awesome because, while it has a silly premise, it will still help alleviate the public’s worry. Concerned parents can look at our map to find out where clowns have been sighted in their area. They can rest assured knowing that other individuals can submit reports, which will help them steer clear of the clowns.

Our team did not expect our project to do as well as it did. We got prizes for Most Topical Project, Best Hack Design, #HackHarassment, and Best Use of MongoDB! We were thrilled at this outcome and could not contain our excitement! We definitely learned a lot about technology, but we also learned a lot about ourselves and how to work well with the rest of our team members.

What did each person do?

Ana: Helped the team brainstorm ideas and get organized, set up the Flask environment, and showed other team members how to use git commands. Currently working on the data visualization part of the site.

Susana: Designed the user interface, created a database using Eve and MongoDB, and wrote PHP to translate the data between the form and the database.

Jose: Learned git commands, HTML, and Flask. Currently working on getting images to transfer from the upload site to the database.

Sandra: Learned git commands and is currently working on the graphic design portions.

Diana: Worked on the front end and used Bootstrap to create the front page of the web app.


Thanks for sharing, clowns!

To learn more, check out our documentation and sign up for a free Clarifai account to start using our API – all it takes is three lines of code to get up and running! We’re super excited to share all the cool things built by our developer community, so don’t forget to tweet @Clarifai to show us your apps.

And give Ana, Susana, Jose, Diana, and Sandra some props in the comments below. Until next time!


Clarifai Featured Hack: FireAcc is an emergency notification app that might save your life

When emergencies arise, a matter of seconds can mean the difference between life and death. FireAcc is an app that detects accidents and other emergencies in CCTV or other surveillance camera feeds and sends instant messages in real time, notifying authorities of the location and time of the incident.

Live streaming video feeds are a huge source of actionable open-source intelligence. With more and more of our lives being filmed and monitored in real time, visual recognition has the ability to save lives by recognizing something as it’s happening, like spotting threatening gestures or weapons in live streams before they turn into something worse.

FireAcc is an app that recognizes accidents like car crashes, fires, and other emergency events as they happen. Using Clarifai, FireAcc monitors CCTV or other surveillance camera feeds and sends a notification to the relevant authorities when an emergency occurs. This notification is location-based and instantaneous, greatly reducing the delay before an emergency response.


WHY WE ❤ IT

We always love applications where the Clarifai API can be used to save lives. After all, our mission is to “understand every image and video to improve life” and we think “staying alive” is a huge improvement on “being dead,” don’t you? Check out the live demo on Heroku!

HOW YOU DO IT

We caught up with Vishwashri Sairam, Harsh Jadav, Darshit Patel, and Smit Thakkar, third-year computer science students from SVNIT Surat, to talk about their inspiration for the FireAcc app.

Clarifai: How did you come up with the idea for FireAcc?

FireAcc Team: We actually saw a road accident the morning we were meant to hack with Clarifai. We discussed it with mentors and brainstormed a bit, then started hacking on it. Usually, when an accident happens, emergency services are notified by a person, and that notification often comes too late for people whose lives could have been saved by earlier action. To solve this problem, we came up with FireAcc, an app that removes such delays by making use of real-time monitoring.

How did you build the app?

The hack assumes there are CCTV or other surveillance cameras in the vicinity to detect the incident. We used the OpenCV libraries available for Java to simulate a CCTV camera: it converts a laptop’s webcam into a security camera that captures images at a set interval and sends them to the server. This footage goes to the backend server we built with Flask. The backend calls the Clarifai API, which processes each image and extracts the features and tags relevant to the picture. The returned tags are then compared with our pre-trained data (tags) to select whom to notify. For example, road accidents are reported to the police and ambulance services, while fires notify the fire brigade and ambulance services.

A log of all the data is maintained in a MongoDB database, which can also be used for future analysis. Notifications go out through the Twilio API immediately after an incident is detected and include its time and location; authorities are notified within one minute of the event. We also designed a web dashboard for displaying the logs and deployed it on Heroku.
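The routing step the team describes boils down to a lookup from returned tags to responders. A minimal JavaScript sketch of that decision – the names and thresholds are hypothetical, and the real backend is Flask/Python:

// Which Clarifai tags should trigger which emergency responders.
var responders = [
  { tags: ['car', 'crash', 'accident'], notify: ['police', 'ambulance'] },
  { tags: ['fire', 'smoke', 'flame'],   notify: ['fire brigade', 'ambulance'] }
];

// Require at least two matching tags per frame to cut down on false alarms.
function routeIncident(frameTags) {
  for (var i = 0; i < responders.length; i++) {
    var hits = responders[i].tags.filter(function (t) {
      return frameTags.indexOf(t) !== -1;
    });
    if (hits.length >= 2) return responders[i].notify;
  }
  return null; // nothing alarming in this frame
}

// routeIncident(['road', 'car', 'crash']) -> ['police', 'ambulance']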

What was the best part about using Clarifai?

Clarifai has easy-to-use API calls and responses, well-written documentation, and most importantly, amazingly accurate tags and results returned by the API. We also liked that Clarifai can process both video and image.

Thanks for sharing, life savers!

To learn more, check out our documentation and sign up for a free Clarifai account to start using our API – all it takes is three lines of code to get up and running! We’re super excited to share all the cool things built by our developer community, so don’t forget to tweet @Clarifai to show us your apps.

And give Vishwashri, Harsh, Darshit, and Smit some props in the comments below. Until next time!


Train your own visual recognition model and search any image with Custom Training & Visual Search

We believe that the same AI technology that gives big tech companies a competitive edge should be available to developers or businesses of any size or budget. That’s why we built our new Custom Training and Visual Search products – to make it easy, quick, and inexpensive for developers and businesses to innovate with AI, go to market faster, and build better user experiences.

We’re excited to announce two new products that will empower developers and businesses everywhere to personalize AI to fit their specific needs and build AI into their workflows and apps. Custom Training allows you to build a Custom Model where you can “teach” AI to understand any concept, whether it’s a logo, product, aesthetic, or Pokemon. Visual Search lets you use these new Custom Models, in conjunction with our existing pre-built models (general, color, food, wedding, travel, NSFW), to browse or search through all your media assets using keyword tags and/or visual similarity.

Read on to get the full rundown of our new products, or go straight to our docs for more information!

Custom Training: Completely personalized AI just for you

Custom Training allows you to quickly and easily “teach” our visual recognition technology to understand any new concept relevant to you or your business. New concepts can be trained with high accuracy orders of magnitude faster than before, requiring fewer than ten images per concept compared to the thousands per concept that traditional AI training required. And with each new example image, the AI gets smarter and better at recognizing your concept!


Features
  • Create Custom Models containing unlimited concepts
  • Tag images with any concept you define
  • Train using the API or our new UI

Visual Search: Connect users with exactly what they’re looking for

Visual Search allows any user to easily organize, access, or recommend their images or products by keyword or visual similarity. Perform powerful searches across your own collection of images using our pre-built models or your own custom model. You can also build our Visual Search capability into your website, allowing a drag-and-drop search by image and/or tag, or enabling better real-time image-based recommendations.


Features
  • Search your own image collection by tag
  • Advanced search using AND/OR/NOT operators
  • Search your own image collection using reverse image search
  • Group your own image collection by visual similarity
  • Search using the API or our new UI
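To give a flavor of those operators, here’s what a combined tag query might look like as a JSON request body – the exact shape is an assumption based on the docs of the time, so verify there before relying on it:

// POSTed to the searches endpoint: find images our models tagged 'dog'
// AND NOT 'cat' (a value of 0 expresses negation).
var searchByTag = {
  query: {
    ands: [
      { output: { data: { concepts: [{ name: 'dog', value: 1 }] } } },
      { output: { data: { concepts: [{ name: 'cat', value: 0 }] } } }
    ]
  }
};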

Preview your data with our new UI

With our two new products, you’ll also have access to a brand new UI if you like looking at images as images instead of code. You can use this UI to get a quick view of your image collections, train or search your images without writing code, and check in on what people are posting to your site.


Lastly, with all these new products comes a brand new way to interact with our API. You can now replace your image database with Clarifai – it works with either URLs or image bytes!

50,000 free API credits per month for a limited time

We’re giving customers 50,000 free API credits to try out our new products. Pretty nifty, huh? This is the perfect time to explore our new products and see if you like them without paying a cent – sign up now!


Clarifai Featured Hack: Shop the world around you with Intellivision’s augmented reality app

Do you ever walk down the street, see someone way cooler than you, and wonder where you can get the outfit they’re wearing? Wouldn’t it be neat (and not super creepy at all) if you could just look at them and know? Intellivision works with Google Glass and MYO band to give you a rich shopping experience based on what you see in the world around you.

Intellivision is a live, augmented-reality, image-tagging Android application that uses Google Glass and the MYO armband to let you “shop” the world around you. Whatever you see through Google Glass is automatically recognized and added to a list using the Clarifai API. If you see a tag pop up that you’re interested in, you can “click” it, and you’ll be taken to a link to buy the item on Amazon.

WHY WE ❤ IT

Augmented reality opens up a whole new way for people to interact with their surroundings, and it’s a perfect fit for Clarifai’s visual recognition technology. We love the idea of being able to shop on the go, whether we’re commuting to work or at a baseball game. Plus, it’s always cool to discover some swag new gear and look like Iron Man while we’re buying it! Check out the GitHub repo for this app and try it yourself.

HOW YOU DO IT

We asked Ron Wright, a Ph.D. student in Computer Engineering at the University of Illinois at Urbana-Champaign and one of the creators of Intellivision, to explain his inspiration behind the app and how he built it.

Clarifai: What inspired your idea for Intellivision?

Ron: The inspiration for this project centered around Google Glass and the Clarifai API. We knew that the image-capturing hardware and software built into Google Glass was a natural fit for the Clarifai API and that this combination could have very broad applications if implemented correctly.

What did you use to build your app?

We wrote the Android application in Java using Android Studio. We also built a web frontend that displays a histogram of every instance of a word that was collected, written as a NodeJS web application in HTML/JavaScript. We interfaced the Android application and the web frontend using Firebase. As for challenges, there was very little developer support available for Google Glass, so we had to seek help from mentors and find as many examples as possible to understand what we needed to do and what issues needed to be resolved. Finding the right SDK to get development on Google Glass started was another big challenge.

What did you enjoy most about hacking with the Clarifai API?

The best part of the Clarifai API is its incredible ease of use. You can send a stream of bytes representing a JPEG image and receive back the words that best fit the image. Overall, the Clarifai API is not that challenging to use; the most challenging part is understanding how to pass the image data correctly to the API call.

Thanks for sharing, Ron!

To learn more, check out our documentation and sign up for a free Clarifai account to start using our API – all it takes is three lines of code to get up and running! We’re super excited to share all the cool things built by our developer community, so don’t forget to tweet @Clarifai to show us your apps.

And give Ron some props in the comments below. Until next time!


Clarifai Featured Hack: Create color palettes from images with Hued

Hued is an app that recommends stylish color schemes based on images. Whether you’re looking for design inspiration or what the exact shade of beige is in your selfie (bisque, we’re told), you’ll love seeing what this image-to-color palette generator comes up with!

Color is a big part of how we silly humans perceive the world, so it’s only natural that we gravitate toward perfect combinations of color in neatly packaged palettes. Enter Hued, an app that takes any image and returns a color palette (using the Color Model from the Clarifai API) that developers can copy or download to use in their design projects.


You can try the live Hued app yourself – just make sure you have your Clarifai API credentials handy (sign up for free if you haven’t already) and some images you want to upload. The app is super fast, so get ready to be amazed by more than just all the pretty colors!

WHY WE ❤ IT

Hued helps developers take the inspiration from their favorite images and apply it to their projects in a fun and colorful way. In the wise words of Disney’s Pocahontas, the app basically lets you “paint with all the colors of the wind” – literally.


HOW YOU DO IT

We asked Cassidy Williams, creator of Hued and resident #Clarifool, to talk about her inspiration for the Hued app.

Clarifai: How did you come up with the idea for Hued?

Cassidy: At hack day at Clarifai, I really wanted to make something that would help developers. I thought it’d be cool to make something where a photo could “inspire” your own projects. So, initially, this is just a theme/color palette generator.  What I’m hoping to do after this is to build a full-on site generator based on a photo.

How did you build the app?

First, I used the Clarifai JavaScript starter kit. Building off of that, it was just pure JavaScript, CSS, and HTML with a touch of jQuery. Initially, I had the project set up the same way as the starter kit, where you had to have a separate keys.js file with your credentials, but that’s a hassle for a person to set up just to get the site running. So, I tweaked it so that someone could input their credentials directly into the web app. What’s fun about it is that everything is saved in localStorage, so you don’t have to add a new Client ID and Client Secret every single time you run it (so if you try out the project once on your computer, then go back to it later, the only input you’ll need to stick in there is the image you want to use). Also, I’m magic.
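That localStorage trick is only a few lines of browser JavaScript. A sketch with hypothetical key names – not Hued’s actual code:

// Save credentials the first time the user enters them...
function saveCredentials(id, secret) {
  localStorage.setItem('clarifai_client_id', id);
  localStorage.setItem('clarifai_client_secret', secret);
}

// ...and on later visits, skip the form if they're already stored.
function loadCredentials() {
  var id = localStorage.getItem('clarifai_client_id');
  var secret = localStorage.getItem('clarifai_client_secret');
  return (id && secret) ? { id: id, secret: secret } : null;
}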

What do you love most about Clarifai?

Clarifai is so easy to use. It’s LITERALLY the best thing ever. Also, I work there. Double whammy.

Thanks for sharing, Cassidoo!

To learn more, check out our documentation and sign up for a free Clarifai account to start using our API – all it takes is three lines of code to get up and running! We’re super excited to share all the cool things built by our developer community, so don’t forget to tweet @Clarifai to show us your apps.

And give Cassidy some props in the comments below. Until next time!


Clarifai Featured Hack: Generate better song lyrics with SubWoofer

SubWoofer is an app that generates relevant lyrics for songs using image recognition and a library of song lyrics. You can add lyrics to lyric-less songs or you can generate better lyrics to accompany a song’s music video.

Have you ever listened to a song without words and wondered what words would be sung if the song had words? Have you ever seen the music video for a song and wondered why the video is showing whipped cream bras while the lyrics are talking about California Gurls? Have you ever wanted to replace nonsensical “work work work work work” lyrics with something that actually makes sense?

Well, we have. That’s why we’re so excited about SubWoofer, an app that uses image recognition and a library of song lyrics to generate suitable lyrics for whatever song you choose. Just copy and paste a YouTube URL into the box and go!


WHY WE ❤ IT

Besides being a super polished and usable hack, we also love that SubWoofer lets us experience music in a whole new way. Check out SubWoofer’s GitHub repo to try the app for yourself!

HOW YOU DO IT

We asked trap-loving JavaScript devs from Florida – Juan Alvarado, Jacob Jenkins, and Jess Smith – to explain exactly how much Darude “Sandstorm” they listened to before deciding to build the SubWoofer app.

Clarifai: What inspired your idea for SubWoofer?

To those in the know, Darude’s “Sandstorm” might go down in history as the greatest musical composition of the 21st century. Despite its repute, its lack of lyrics could leave it untouched by future generations. What if we could generate lyrics for songs using image recognition and a library of other song lyrics? This is what inspired us to make SubWoofer at HackFSU.

What did you use to build your app?

We built this at HackFSU 16 using Node.js on the backend and Vue.js on the frontend. We used FFmpeg from within Node.js to chop up the videos and generate freeze frames. We compiled a lot of rap and pop lyrics and removed all the adjectives and nouns from them to create templates. Then we used WordPOS to classify the tags into parts of speech and Sentencer.js to put together sentences from the templates and the now-categorized tags.
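To make the template step concrete, here’s a minimal sketch of the fill-in idea – the data is hypothetical, and the real app leans on WordPOS for the part-of-speech buckets and Sentencer.js for the templating:

// Lyric templates with the nouns and adjectives stripped out...
var templates = [
  'I got that {{ adjective }} {{ noun }}, yeah',
  'All night we chase the {{ adjective }} {{ noun }}'
];

// ...and video-frame tags from Clarifai, bucketed by part of speech.
var words = {
  noun: ['sandstorm', 'desert', 'crowd'],
  adjective: ['golden', 'electric']
};

function pick(list) {
  return list[Math.floor(Math.random() * list.length)];
}

// Fill each {{ part-of-speech }} slot with a random tag from that bucket.
function generateLine() {
  return pick(templates).replace(/{{\s*(\w+)\s*}}/g, function (_, pos) {
    return pick(words[pos]);
  });
}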

We ran into some issues generating the subtitle files. Initially, we generated SRT-format subtitle files and attempted to embed them into the video itself via FFmpeg. When that failed, we modified the process to generate VTT subtitle files instead, so we could serve them on the web using the HTML5 video element’s closed-captioning feature. We then ran into an issue with tracks from previous videos being stacked on the video element. The solution was a single line of jQuery, but we were far too exhausted to figure it out after being awake 20+ hours with the submission deadline looming.

We also ran into the Clarifai request limit for the free tier by testing a bit too hard (initially, we uploaded the entire video rather than just a few freeze frames). We solved this by swapping keys a couple of times and by asking for a temporary limit increase.

What was the best part about working with Clarifai?

It was pretty easy to use. Tags were pretty solid for the most part. The fact that it takes video is pretty cool. But it does eat through the requests because it’s video.

Thanks for sharing, musical prodigies!

To learn more, check out our documentation and sign up for a free Clarifai account to start using our API – all it takes is three lines of code to get up and running! We’re super excited to share all the cool things built by our developer community, so don’t forget to tweet @Clarifai to show us your apps.

And give the Triple J’s (Juan, Jacob, and Jess!) some props in the comments below. Until next time!


What food is this? Clarifai’s food recognition technology can tell you!

We’re excited to release our new Food Recognition model into the wild, so you can start building incredible (and tasty) apps that recognize over a thousand types of food down to the ingredient level!

At Clarifai, we spend a lot of time feeding yummy data to our artificial intelligence algorithm, so it’s only natural that we built a visual recognition model dedicated solely to food. With our Food Recognition model, you can now automatically recognize more than a thousand different foods in images and video, all the way down to the ingredient level. Check out these mouth-watering examples:

[Example images: dishes tagged down to the ingredient level by the Food Recognition model]

Who should use the Food Recognition model?

We built this model with health & fitness, lifestyle, travel, and food-centric apps and businesses in mind. For example, some developers have already created the Foodifai app which tells you how healthy (or, let’s be real, unhealthy) your meal is based on a photo. We’ll also be releasing a fun hack next week that recommends a recipe for you based on a picture of the contents of your refrigerator … follow our Facebook to get notified when it goes live!

Not only is the Food Recognition model great for building diet, cooking, and recipe apps, it’s also perfect for businesses in the travel and hospitality industry! Restaurant review platforms like Yelp, Foursquare, OpenTable, Caviar, GrubHub, Seamless, etc. can use the Food Recognition model to categorize user-generated content automatically or manage a library of photos submitted by restaurants. The Food Recognition model can make media management much, much easier and help surface the right content to users!

As for consumer brands like Pizza Hut, McDonald’s, Domino’s, Starbucks, and Chipotle, our Food Recognition model can provide deeper analytical insights into which dishes are most often photographed and shared by customers. So, if you’ve ever been curious whether pepperoni pizza or cheese pizza gets shared more on Twitter, here’s your chance to find out.

How do you use the Food Recognition model?

It’s all here in the docs, my friend. The Food Recognition model can be used alone or with any of our other visual recognition models to enhance food-related results. For example, you could use the Travel Recognition model we released last month with the Food Recognition model to categorize which foods are eaten where. And, remember our Not Safe for Work (NSFW) Nudity Recognition model? You could use the NSFW Recognition Model with the Food Recognition Model to find #foodporn in the most literal sense. You know, if you’re into that kind of thing.

So, if you’re ready to nom nom nom on some delicious visual recognition technology, try our Food Recognition model and build something awe-inspiring. Be sure to share with us by emailing your hack or GitHub repo to bounties@clarifai.com to qualify to win a Clarifai skateboard for our food-themed Clarifai Bounty!


Clarifai Featured Hack: Never let a reCAPTCHA question your humanity again

Google reCAPTCHA Breaker does exactly what it sounds like it does – it helps you automatically solve those pesky reCAPTCHAs with image recognition from Clarifai. Never again will nine images bar the way to your online destination!

You know when you go to a website, and you’re asked to prove that you aren’t a robot by solving a puzzle? One of the most potentially annoying forms of these tests is Google’s reCAPTCHA, which asks you to identify which of a set of images contains a particular object, like a tree or a car. Sometimes, these images are pretty hard to read and probably the last thing you want to do before buying that ticket to the nearly sold out Justin Bieber concert, amirite?

Fear not, a group of geniuses has created the ultimate solution to your reCAPTCHA woes. Google reCAPTCHA Breaker uses Clarifai’s image recognition to bypass image-based tests automatically, so you never have to guess again!


WHY WE ❤ IT

CAPTCHAs and reCAPTCHAs are meant to provide greater security to websites, even if they are sometimes quite the hassle. We couldn’t in good conscience break reCAPTCHA as Clarifai Inc. even though we knew our technology was capable of the feat. Luckily, a group of independent developers has done it for us, and it is every bit as awesome as we imagined! Read more on DevPost or check out the GitHub repo.

HOW YOU DO IT

We caught up with Dennis Chiang, Roni Tu, Eric Liang, and John Hu to ask them about their Google reCAPTCHA Breaker app.

Clarifai: What inspired your idea for Google reCAPTCHA Breaker?

Team: Google image reCAPTCHA is sometimes annoying because the images are really tiny (130x130px). Through testing, we found that Clarifai was extremely accurate, so we just thought to combine the two.

What did you use to build your app?

The app basically ‘solves’ Google reCAPTCHA images using Clarifai. The program breaks the reCAPTCHA up into nine parts and passes them to Clarifai. We then take the output from Clarifai and process it accordingly.

A C# program takes a picture of the screen and cuts up the reCAPTCHA. We then had a Python script that passed the images to the clarifai-python client. Based on the results, the Python script also attempted to ‘programmatically’ click the correct positions.
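The per-tile decision is simple once Clarifai returns its tags. A hedged JavaScript sketch of that last step – illustrative only, since the team’s version was C# plus Python:

// Given the challenge word (e.g. 'car') and the tags Clarifai returned for
// each of the nine tiles, return the indices of the tiles to click.
function tilesToClick(target, tileTags) {
  var clicks = [];
  for (var i = 0; i < tileTags.length; i++) {
    if (tileTags[i].indexOf(target) !== -1) {
      clicks.push(i); // tile i appears to show the target object
    }
  }
  return clicks;
}

// tilesToClick('car', [['car', 'road'], ['tree'], ['car', 'street']]) -> [0, 2]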

What was your favorite part of working with Clarifai, besides breaking Google’s stuff?

It was easy to use, and we thought it was pretty interesting how accurate it was.

Thanks for sharing, team!

To learn more, check out our documentation and sign up for a free Clarifai account to start using our API – all it takes is three lines of code to get up and running! We’re super excited to share all the cool things built by our developer community, so don’t forget to tweet @Clarifai to show us your apps.

And give Dennis, Roni, Eric, and John some props in the comments below. Until next time!


Clarifai Featured Hack: Explore the world through gestures and images with Iris

Iris is an application that helps people who can’t see experience vision in a new way, using gesture and image recognition to help blind people explore the world around them.

The first instinct many people have when they see our image recognition technology is, “Hey, this could be really helpful to blind people!” We’ve encountered a lot of apps that are meant to help the visually impaired “see” the world around them, but a lot of them fail to take into account the visually impaired user experience. For instance, we’ve seen apps require users to find and press a button on their (flat!) smartphone screens!

Iris is a new app that delivers a fresh way to see the world and really takes the user experience into account. With a simple gesture, Iris captures an image and gathers data about it. That information is then spoken directly to the user, without any physical interaction with the smartphone.

WHY WE ❤ IT

We always try to deliver a smooth user experience that makes sense intuitively to our customers. That’s why we love the Iris team’s approach – from the beginning, they really thought about what would feel most natural and comfortable for a blind person to use. Read more about Iris on Devpost or check out the GitHub repo!

HOW YOU DO IT

We asked Chaitya Shah, a computer science student at Georgia Tech, to explain his team’s inspiration for Iris and how they built the app.

Clarifai: What inspired your idea for Iris?

Chaitya: My teammates Nick, Will and I came across the idea of Iris while exploring the unique ability of the Clarifai API to analyze images and parse them for descriptive tags. We broke Clarifai down into an equation: Picture + Clarifai = Tags. So why not flip that equation on its head and use tags to “create” a picture? Through the use of descriptive tags, Iris aurally illustrates a scene in the minds of the blind. After all, a picture is worth a thousand words.

What did you use to build your app?

Iris is an Android application coded entirely in Java. We also used the Clarifai API, the Myo SDK, and Android’s TextToSpeech engine. A lot of the challenges we faced came from the many asynchronous threads required for Iris to function. We got around this by reorganizing our project so that the correct functions were executed in the correct order.
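Iris itself is Java, but the ordering fix the team describes is easy to picture in any language. A hedged JavaScript sketch – the stub functions are hypothetical stand-ins for the real capture, tagging, and speech steps:

// Stubs simulating the app's asynchronous steps.
function captureImage(done) { setTimeout(function () { done('photo.jpg'); }, 10); }
function tagWithClarifai(image, done) { setTimeout(function () { done(['person', 'street']); }, 10); }
function speak(text) { console.log('TTS: ' + text); }

// Chaining the callbacks guarantees capture -> tag -> speak runs in order,
// which is essentially the reorganization the team describes.
captureImage(function (image) {
  tagWithClarifai(image, function (tags) {
    speak('I see ' + tags.join(', '));
  });
});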

What did you like about building an app with the Clarifai API?

In my opinion, the best part about creating Iris was showing people how it could really make a difference in the world. Demoing our app really made me feel proud of what we had done and proud of where my team’s knowledge has brought us.

Thanks for sharing, Iris team!

To learn more, check out our documentation and sign up for a free Clarifai account to start using our API – all it takes is three lines of code to get up and running! We’re super excited to share all the cool things built by our developer community, so don’t forget to tweet @Clarifai to show us your apps.

And give Chaitya, Nicholas Petosa, and William Ma of the Iris team some props in the comments below. Until next time!