Women in Machine Learning & Data Science @Clarifai

At Clarifai, we’re always striving to be a model of diversity and inclusion in the tech world. We were super excited to have the chance to host New York City’s Women in Machine Learning & Data Science meetup last week, where over fifty people crowded into our humble startup abode to chow down on pizza and listen to some Clarifools (catchy, huh) talk about machine learning and visual recognition.

Our CEO Matt Zeiler gave an enlightening talk on his PhD work in machine learning and convolutional neural networks – for those of you who missed it, we’ve uploaded his presentation to Slideshare so you can relive the glory.

Then, our magnificent developer evangelist Cassidy Williams shared some cool projects our hackers have built with Clarifai’s API:

  • Remember – an app that helps you remember where you put your stuff, silly forgetful human.
  • Reminisc – an app that lets users rediscover their old photos and drown in nostalgia.
  • Netra – a mobile app that helps blind people see their surroundings.
  • Hued – a photo-based color palette generator for the interior designer or fashionista in you.

Whether you’re a man or a woman, if you’re in NYC and interested in machine learning and data science, you should join WiMLDS … it’s a great group of people! Or, if you’re already part of a club or group, let us know – we host events and meetups for the developer community almost every week. Just tweet us @Clarifai!


Clarifai Featured Hack: Make sweet music with your photos using Photoverse

Photoverse is a web application that turns images into music by recommending 8tracks internet radio playlists to fit the mood and feel of any photo. Time to find the soundtrack to your life!

Have you ever heard of synesthesia? It’s a condition in which one sense is simultaneously perceived as if by another sense. You know, when people “taste” music or “hear” colors. Anyway, Photoverse is kind of like that – an app that lets you “hear” images!

Leveraging the photo tagging power of the Clarifai API, Photoverse is able to search through 8tracks‘ internet radio library and show the user the top playlists to fit the mood created by their photos. So, if you ever wondered what Donald Trump’s soundtrack would be, here’s your chance to find out. Spooky accurate, right?

WHY WE ❤ IT

Photoverse is cool because it’s a perfect blend of art and science! As the saying goes, a picture is worth a thousand words … but when you add in Clarifai, it can be worth a thousand songs! You can check out the live demo of Photoverse or the GitHub repo and try it for yourself.

HOW YOU DO IT

We asked the Photoverse team – Zackery Harley, Colin MacLeod, Jake Alsemgeest, and Andrés Kebe – to tell us how they came up with the app and what they used to build it. Here’s what they had to say!

Clarifai: What inspired your idea for Photoverse?

Zackery: Before coming to ConUHacks, Colin and I were talking about creating some sort of website or web app and were trying to brainstorm ideas. We both love music and technology so I suggested making something with the Spotify API. After we read through the list of sponsor APIs for ConUHacks, I came up with the idea to suggest playlists based off of tags generated by Clarifai’s API. We ended up joining up with Jake and Andrés at the team building session.

Andrés: I always end up spending more time finding a perfect playlist for the moment rather than actually listening to it. With Photoverse, all that extra time can finally be used to enjoy the music.

What sorcery is this? Or, how did you build your app?

Zackery: We used a Node.js server along with some dependencies (body-parser, express, js-sha512, request, clarifai, stylus, cheerio, socket.io) to run the backend for the project. We used this server to query Clarifai’s API, which would then return tags based on the image we supplied. Because 8tracks no longer supplies API keys, we had to use a workaround to generate playlists. This involved sending a GET request to 8tracks’ search page using the generated tags, then extracting the playlist IDs from the results. We use this info to create iframes which show the embedded players on our page.
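For the curious, the scraping step Zackery describes can be sketched in a few lines of Node-flavored JavaScript. Note that the `data-mix-id` attribute and the embed URL format below are illustrative guesses, not the actual 8tracks markup – the real page structure may differ:

```javascript
// Sketch of the 8tracks workaround: extract playlist IDs from a search
// results page and turn them into embeddable player URLs.
function extractPlaylistIds(html) {
  // Assume each playlist link carries a data-mix-id attribute with the
  // numeric playlist ID (a hypothetical attribute for this example).
  var ids = [];
  var re = /data-mix-id="(\d+)"/g;
  var match;
  while ((match = re.exec(html)) !== null) {
    ids.push(match[1]);
  }
  return ids;
}

function buildEmbedUrls(ids) {
  // Hypothetical embed URL format for the iframe players.
  return ids.map(function (id) {
    return 'https://8tracks.com/mixes/' + id + '/player_v3_universal';
  });
}

// Trying it on a fake search-results snippet:
var sampleHtml =
  '<a data-mix-id="101" href="/dj/rainy-day">Rainy Day</a>' +
  '<a data-mix-id="202" href="/dj/sunny-vibes">Sunny Vibes</a>';
var urls = buildEmbedUrls(extractPlaylistIds(sampleHtml));
console.log(urls);
```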

Andrés: I personally worked on the front end of the app, designing both the interface and the logo. The real magicians were Zach and Jake, who reverse engineered a way to use 8tracks inside our web app.

What was the best part about working with the Clarifai API?

Zackery: Unlike a lot of other companies’ APIs (or lack thereof), the Clarifai API is well documented and easy to use. It made getting the info we required very simple and helped us reach our goal of generating playlists for photos.

Jake: Ease of use was definitely cool. The tags it would generate were actually really interesting, as they weren’t always something I would have thought of myself. Sometimes they weren’t the best for our given uses; however, we found ways to work with them all.

Andrés: It was incredible to see how accurate the Clarifai API is. It accurately tagged all the pics we applied it to.

Thanks for sharing, guys!

To learn more, check out our documentation and sign up for a free Clarifai account to start using our API – all it takes is three lines of code to get up and running! We’re super excited to share all the cool things built by our developer community, so don’t forget to tweet @Clarifai to show us your apps.

And give Zackery, Colin, Jake, and Andres some props in the comments below. Until next time!


Introducing Clarifai Bounties - earn rewards and recognition with your hacks

Love the Clarifai API but stuck on what to build next? Take a look at our new Clarifai Bounties page, where we give away money, swag, and fleeting internet fame to people who complete our missions. Get inspired by our wacky prompts and show us your creativity!

Clarifai Bounties are a bunch of fun ideas that we want you, our developer community, to help us build. Your mission, should you choose to accept it, is to complete a Clarifai Bounty, turn it in to bounties@clarifai.com, and get some rewards, recognition, and even some cash money!

We’re launching with twelve bounties to choose from – some of them are zany, some of them are practical, but all of them need you to put your own personal stamp on them. Every month, we’ll be adding more bounties to the list, so make sure you check back often or subscribe to our blog newsletter to be notified when there is a new bounty available.

Good luck, godspeed, and have fun hacking!


Clarifai Featured Hack: Real Estate Genius tells you what your home is really worth

Real Estate Genius is an app that uses a ton of data to tell you how much a house (or apartment or abode or mansion or shack or hovel) is really worth. The results may surprise you!

Anyone who’s ever lived in New York City can attest to the frustrating real estate market and the racket that is the broker industry (15%? Are you serious?!). Enter Real Estate Genius, an app that takes the expertise traditionally owned by a few people and makes it widely available to the general public!

Real Estate Genius analyzes publicly available data provided on a real estate property and compares it to all the data from past transactions. This includes analyzing images of homes to help evaluate their price! The tool can use all this data to estimate the price of a property within a certain margin, thereby helping the general public understand the “real” worth of the home they wish to sell, purchase, or rent.

You can try the app live or download all the code from Github.

WHY WE ❤ IT

As resident New Yorkers, we’re always concerned with skyrocketing real estate prices so Real Estate Genius is an app that hits close to our hearts. I mean, I voted for “The rent is too damn high” party, didn’t you? (You think I’m joking.)

HOW YOU DO IT

The geniuses behind Real Estate Genius – Felix La Rocque Carrier, Mathieu Gamache, Anthony Garant, and Sam Snow – are a bunch of engineering students from Polytechnique Montreal who love hackathons and entrepreneurship. Here’s what they had to say about hacking with the Clarifai API!

Clarifai: What inspired your idea for Real Estate Genius?

Real Estate Genius Team: We are always searching for the “Next Big Thing” in the software world to help us start our own company. We wanted to work on developing an A.I. that could analyze a problem way better than a human. The real estate industry was an awesome gateway to this challenge because it’s a market where the expertise is owned by few people and the general public is easily intimidated by it. We were inspired by the quantity of data available for real estate and the fact that price approximations were still done by hand.

How did you build your app?

We used the Clarifai API to help us process pictures of homes and get tags on certain characteristics that help evaluate the price of each home. All of that information is then processed with machine learning to correlate between existing selling prices and the resulting approximate price.

We used Azure machine learning features with an AWS backend to provide the machine learning and the web interface to a potential client. We also mined data from various sites to get the real estate information.
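To make the “correlate prices” idea concrete, here’s a toy single-feature version in JavaScript. The real app used Azure ML over many features (including Clarifai tags); this least-squares fit on made-up numbers just illustrates the principle:

```javascript
// Fit a line through past (feature, price) pairs with ordinary least
// squares, then use it to estimate a new listing's price.
function fitLine(xs, ys) {
  var n = xs.length;
  var meanX = xs.reduce(function (a, b) { return a + b; }, 0) / n;
  var meanY = ys.reduce(function (a, b) { return a + b; }, 0) / n;
  var num = 0, den = 0;
  for (var i = 0; i < n; i++) {
    num += (xs[i] - meanX) * (ys[i] - meanY);
    den += (xs[i] - meanX) * (xs[i] - meanX);
  }
  var slope = num / den;
  return { slope: slope, intercept: meanY - slope * meanX };
}

// Square footage of past sales vs. sale price (made-up numbers):
var sqft = [800, 1000, 1200, 1500];
var price = [200000, 250000, 300000, 375000];
var model = fitLine(sqft, price);
var estimate = model.intercept + model.slope * 1100;
console.log(Math.round(estimate)); // → 275000
```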

What was your favorite thing about the Clarifai API?

The API is one of the simplest we worked with. Usually with APIs, you need to search for a platform-specific SDK or else the communication is bloody hard. The Clarifai API is literally just two API calls, and you get the result you want.

Thanks for sharing, geniuses!

To learn more, check out our documentation and sign up for a free Clarifai account to start using our API – all it takes is three lines of code to get up and running! We’re super excited to share all the cool things built by our developer community, so don’t forget to tweet @Clarifai to show us your apps.

And give Felix, Mathieu, Anthony, and Sam some props in the comments below. Until next time!


Clarifai Featured Hack: Pocket Hipster, because you thought it was cool before everyone else

Pocket Hipster is a cheeky and hilarious app that generates hipster poetry for every image you feed it. Just put on your Warby Parkers, get a piping hot mug of free-range, farm-to-table, gluten-free, fairtrade coffee, and enjoy!

Pocket Hipster is one of those apps that will have you alternating between fits of giggles and existential ennui. How, you might ask? Upload any image and Pocket Hipster will spit out some lines of (usually nonsensical but always hilarious) prose worthy of the hippest hipster that’s ever hipped. You can try the app live here!

With the help of some works from Edgar Allan Poe and Sigmund Freud, as well as the ever-entertaining posts under Tumblr’s #poetry, the Pocket Hipster team was able to build this brilliant app using something called Markov chains and the Clarifai API.

WHY WE ❤ IT

Obviously, Pocket Hipster is a post-modernist masterpiece with a healthy infusion of sarcasm and wit. We love it because it showcases a creative application of our API while also demonstrating some great technical chops.

HOW YOU DO IT

We caught up with the masterminds of Pocket Hipster – Jordan Hand, Logan Girvin, Katrina Ezis, and Oliver Stanley (all undergrads from the University of Washington Seattle) – to talk about what inspired them to make this #fetch (stop trying to make “fetch” happen, it’s not going to happen) app.

Clarifai: What inspired your idea for Pocket Hipster?

Pocket Hipster Team: At DubHacks 2015, Microsoft’s James Whittaker started his keynote talk by telling us all that we’re essentially worthless. He went on to talk about the value of sarcasm, creativity, and specific expertise, emphasizing the value of “making epic shit.” We decided to make something cheeky rather than academic or practical and started brainstorming. Ultimately, we came up with something you can pull out to make you seem much deeper than you actually are – a pocket-sized hipster.

We saw the Clarifai API and realized the possibilities were endless.

How did you build such an ingenious thing?

Our web app takes a user-uploaded image and produces free verse poetry using a Markov chain generator trained on Sigmund Freud’s “Dream Psychology: Psychoanalysis for Beginners”, volumes 1 and 2 of Widger and Traverso’s “The Works of Edgar Allan Poe”, and 2000 Tumblr text posts which had been tagged with ‘poetry’. Tags related to the user’s image are used to seed each line of the poem, sorted in descending order of the algorithm’s confidence, resulting in increasingly abstract lines.
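If you’ve never met a Markov chain, here’s a stripped-down JavaScript sketch of the idea. It always picks the first follower so the output is repeatable – the team’s real generator picks at random, which is where the delightful nonsense comes from:

```javascript
// Map each word in the corpus to the words that follow it.
function buildChain(corpus) {
  var words = corpus.toLowerCase().split(/\s+/);
  var chain = {};
  for (var i = 0; i < words.length - 1; i++) {
    if (!chain[words[i]]) chain[words[i]] = [];
    chain[words[i]].push(words[i + 1]);
  }
  return chain;
}

// Grow a line from a seed word (an image tag, in Pocket Hipster's case)
// by repeatedly hopping to a follower of the current word.
function generateLine(chain, seed, maxWords) {
  var line = [seed];
  var current = seed;
  while (line.length < maxWords && chain[current]) {
    current = chain[current][0]; // deterministic; a real generator randomizes
    line.push(current);
  }
  return line.join(' ');
}

var chain = buildChain('the raven dreams of the sea and the raven sleeps');
console.log(generateLine(chain, 'raven', 5));
// → "raven dreams of the raven"
```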

We built the site on a Node server coupled with Express to write a custom REST API to handle poem generation. In order to prevent our website from looking too mainstream, we used jQuery and Bootstrap to create a simple and sufficiently aesthetic UI. This was on top of the usual HTML, CSS, and JavaScript that goes into any dynamic web application. Our text aggregation relied on interaction with the Tumblr API.

The most difficult components were creating a Markov chain generator we were happy with and getting our data back from the site through Clarifai to the generator and back to the user.

On a scale of “mild hipster ennui” to “extreme hipster apathy,” how much did you enjoy working with the Clarifai API?

Despite our best efforts to immerse ourselves in hipsterish apathy, we had a ton of fun making this application.

Working with the Clarifai API and getting all of the pieces of our project to line up and interface correctly was a challenge. Getting to bother Cassidy with questions during the hackathon was a big help, as were her inspiring lip sync skills. We’re still a little shocked that we not only produced a functioning website, but went on to take third place out of over one hundred teams and win Clarifai’s sponsor prize despite our inexperience.

Attending DubHacks was an amazing introduction to the world of hackathons and we are so grateful to Clarifai for helping to make it happen.

Thanks for sharing, hipsters!

To learn more, check out our documentation and sign up for a free Clarifai account to start using our API – all it takes is three lines of code to get up and running! We’re super excited to share all the cool things built by our developer community, so don’t forget to tweet @Clarifai to show us your apps.

And give Katrina, Jordan, Logan, and Oliver some props in the comments below. Until next time!


Clarifai Featured Hack: Find lost people with the Found app

Found is a webapp that acts as a smart “lost and found” for people. It’s one of those apps that you always hope you never have to use, but you’re sure glad exists.

Found is a platform for connecting people who have “lost” someone with people who have “found” someone.

Imagine there is a large-scale disaster and you can’t get in contact with a loved one in the affected area. You go on Found, post pictures of your “lost” loved one along with some basic information about them, and provide your contact information. The Clarifai API memorizes your loved one’s face as you upload photos of them.

At the disaster site, early responders take photos of everyone they’ve “found.” When they upload these photos to Found and a face is recognized as a match to your “lost” loved one, you get an email notification with their whereabouts and status.

WHY WE ❤ IT

Found is one of those apps that could make a big difference to people in crisis all over the world. You can check out the GitHub repo and try it for yourself. Plus, it was created by one of our talented dev evangelists, so we’re biased.

HOW YOU DO IT

We asked Cassidy Williams of the Found team to tell us how she came up with the app and what she used to build it. Here’s what she had to say:

Clarifai: What inspired your idea for Found?

Cassidy: The hackathon I entered, Bluemixathon, prompted us to create something that could be useful in disastrous situations – I wanted to do something that used Clarifai’s image recognition API to help people in disasters.

What part does the Clarifai API play in your app?

Recognizing a human face is something every person can do in varying degrees. However, recognizing and memorizing hundreds and thousands of faces is very hard if not impossible for a single person. With machine learning capabilities like Clarifai, it has become easier and easier to train machines to detect minute and subtle changes, particularly in faces. We thought that it would make sense to use this facial recognition technology to help reconnect people to their loved ones.

What’s the magic sauce behind your Clarifai implementation?

I used pure JavaScript, baby. Found was built using the Clarifai API for machine learning and image recognition, Imgur API for image uploading, Mandrill for emails, and BlueMix to deploy it all.

What else do you hope to do with Found?

We would like to incorporate GPS locations as well as a build a phone app to help people assisting in rescue missions take pictures on the go.

Thanks for sharing, Cassidy!

To learn more, check out our documentation and sign up for a free Clarifai account to start using our API – all it takes is three lines of code to get up and running! We’re super excited to share all the cool things built by our developer community, so don’t forget to tweet @Clarifai to show us your apps.

And give Cassidy some props in the comments below. Until next time!


How StyleMePretty uses visual recognition to grow their business from publication to platform

Industry

Media/publishing

Use case

User-generated content tagging

Result

10x images tagged, time saved

StyleMePretty.com is a premier online destination for wedding inspiration, garnering over 25 million views a month. It began in 2005 as a blog featuring gorgeous, style-savvy, real weddings. Now, Stylemepretty is making the leap from wedding blog to wedding platform with features like image search, portfolios, vendors, and resources for weddings.

Tait Larson is the fearless founder and technical guru of stylemepretty.com and runs development, operations, and all things technical. His background is in Computer Science, with an undergraduate degree from Vanderbilt and a master’s from Stanford (NBD!).

Challenge

With over 2.5 million images, including tens of thousands of user submissions every day, Stylemepretty needed a way to handle the constant flow of inbound images. But, there are two major challenges with understanding and organizing user-generated content:
1. HIGH VOLUMES: How do you know what you’re getting from user-generated content?

Stylemepretty receives six hundred weddings a week from professional photographers, wedding vendors, and brides, with anywhere from 150 to 250 photos per wedding! Some of the images they receive have captions, but most come with no information or unreliable metadata attached. At such volumes, it’s nearly impossible to manually go through each image and categorize it appropriately.

2. VALUE EXTRACTION: Once you know what the content is, what do you do with it?

Once you have an understanding of your content, you need to put your knowledge into action. For Stylemepretty, that meant finding a way to organize and curate relevant photos to their users across all their marketing channels, not just their website.

“Using Clarifai’s visual recognition solution, we’re now able to collect and analyze 10x as many images as we did using manual tagging. Clarifai scales with us as we grow, so the returns will be even higher in the future.” – Tait Larson, founder of Stylemepretty.com

Solution

Stylemepretty uses Clarifai’s visual recognition solution to scale their business from a wedding blog to a wedding platform.

When a user uploads an image to Stylemepretty’s platform, Clarifai automatically applies relevant tags to the image. Not only does this save time and resources, it also allows Stylemepretty to scale its content operations and improve its visual search solution.

In a week, Stylemepretty receives around 600 weddings and 100,000 images from wedding vendors. Out of all the user submissions, they are able to publish about 50 weddings or 10,000 images with manual tagging. With Clarifai, Stylemepretty is able to tag the full 100,000 uploads to build a bigger, better visual search solution in the weddings space. Automatic tagging is also a building block to increase editorial capacity.

Implementation

Getting smarter together

Stylemepretty uses two layers of review for user-generated content – Clarifai does the initial tagging and then passes the results onto a human curator to vet. The human curator classifies tags as relevant or not relevant, which then feeds back into Clarifai’s algorithm. Because of this feedback loop, Clarifai’s accuracy and results are constantly improving to fit Stylemepretty’s unique needs.

“Clarifai is a way for man and machine to work together to achieve the best results.”

Enhancing a multi-channel content strategy

With automated tagging from Clarifai, Stylemepretty has the opportunity to collect a lot more images and curate on top of them. A deeper understanding of their content allows Stylemepretty to enhance search curation by returning more relevant results to visitors, and also help their human editors develop more content supported by better visuals.

“Clarifai has allowed us to explore our own data very well and push it to other channels like social media.”

But, a publisher’s website isn’t the only channel they have to feed with great content. Stylemepretty also maintains popular Instagram, Facebook, Twitter, YouTube, Tumblr, and Pinterest accounts. In order to find the right content for each of these social media channels, Stylemepretty uses Clarifai to explore their collection of images and choose exactly the right image for the right campaign.

Quick and easy implementation

Stylemepretty’s founder, Tait Larson, implemented Clarifai for its accuracy, flexibility, and affordability after exploring several different machine learning options.

He evaluated out-of-the-box solutions like the Alchemy API and found them to be “nowhere near accurate.” He also looked at Amazon’s Mechanical Turk but found it to be very expensive for his desired workflow – three to five times as expensive as Clarifai – with limited capabilities.

“There are a lot of hidden costs associated with implementing a machine learning pipeline. I was looking for a clean API like Clarifai that would give me a flexible solution with the accuracy and specificity that we needed out-of-the-box, and where we would avoid technical debt.”

With a team of four developers, Tait wrote the initial PHP for both Clarifai’s feedback API and the regular API. He and his team also built an internal tool that provided a user interface for their needs. It took only a couple hours of developer time to get Stylemepretty.com up and running with Clarifai!

DIY with Clarifai

Now that you’ve been inspired by Stylemepretty’s solution, it’s time to build your own. Clarifai’s core model includes tags for over 11,000 concepts you can apply to your business. All it takes is three simple lines of code – sign up for a developer API account to get started for free!

Once you’ve signed up for a developer account, head over to Applications and make a new one. Make sure you nab that Client ID and Client Secret.

Now, head over to https://github.com/clarifai/clarifai-nodejs. There, you’ll find our Node.js client, which makes this process even easier. To set up your environment, download the clarifai_node.js file and stick it in your project.

Boo yah. You’re set up. Now head over to your Node project and just require the Clarifai client:

var Clarifai = require('./YOUR_PATH_HERE/clarifai_node.js');

Remember that Client ID and Client Secret you nabbed earlier? We’re gonna use those now. You can either paste them in this function directly, or save them in an environment variable.

Clarifai.initAPI('YOUR_CLIENT_ID', 'YOUR_CLIENT_SECRET');

Now for the fun part. You can easily tag an image with just 3 lines of code:

var imageURL = 'MY_IMAGE_URL';
var ourId = 'my great image'; // any string that identifies the image to your system
Clarifai.tagURL(imageURL, ourId, handler); // "handler" is your basic error handler function
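Curious what `handler` might look like? Here’s a hedged sketch – the response shape below (`results[0].result.tag.classes`) is modeled on the v1 tag endpoint, but treat it as an assumption and check clarifai_sample.js in the repo for the exact fields:

```javascript
// A sketch of a tagging callback: log errors, otherwise pull out the
// tag words. The response shape is an assumption, not gospel.
function handler(err, res) {
  if (err) {
    console.error('Tagging failed:', err);
    return;
  }
  var tags = res.results[0].result.tag.classes;
  console.log('Tags:', tags.join(', '));
  return tags;
}

// Trying it on a canned response:
var fakeResponse = {
  results: [{ result: { tag: { classes: ['wedding', 'flowers', 'outdoors'] } } }]
};
console.log(handler(null, fakeResponse));
```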

You’re all set! Now you can easily make like Yelp and tag and sort your images to your heart’s desire. If you’d like to see a more in-depth example, check out clarifai_sample.js in the GitHub repo.


How to tag, organize, and find photos on your iPhone with Forevery app

We designed the Forevery photo app to be intuitive and easy to use, but that doesn’t mean we won’t post some video tutorials for you anyway! Watch these two-minute video screencasts for a full walkthrough of our amazing app.

We’re pretty proud of building Forevery – not just the futuristic technology that makes it tick, but also the clean and intuitive user interface that makes it such a pleasure to use. When you open our app, you can pretty much figure out what to do right off the bat. But, we still want to make resources available to help you navigate all the cool features our app has to offer, so take a look at these short and sweet video tutorials on how to Forevery!

Part 1: Getting started

Download Forevery from the iTunes store and create an account using your phone number (it’s more secure than using a username/password to log in). The Forevery app will sync with all the photos on your camera roll, so let it run in the foreground or background on your phone until the import is complete. If you have privacy concerns about importing your photos, you can view our privacy policy in the app or right here.

Part 2: Searching your photos

You can find any photo on your phone using Forevery. Our app automatically applies tags to all of your photos – no more dealing with crowded folder hierarchies and difficult-to-maintain albums. So if you’re looking for that picture of your #hubby on the #beach playing #frisbee, you can find it in an instant!

Part 3: Tagging people

You can teach Forevery to recognize the important people in your life in your photos. All you have to do is select a few pictures of the person you are trying to tag, and our accurate facial recognition technology does the rest.

Part 4: Tagging places and things

Our app also automatically applies tags for places and things to your photos. All of your photos will be tagged by location according to your phone’s geodata. Your photos will also be tagged with over 11,000 objects, ideas, themes, and feelings that apply to each picture. And if 11,000 concepts aren’t enough, you can personalize the app by teaching it to recognize custom concepts like “Pikachu” or “Louis Vuitton.” Each new tag you create is automatically applied to relevant photos on your camera roll and also any future photo you take!

Part 5: Using the camera

If you use our in-app camera, tags will immediately be applied to every photo you take. It takes less than a second to see the magic happen!

Part 6: Sharing photos and stories

Forevery automatically generates photo stories with charming titles for you to share with your friends. You can share them at the push of a button to anyone in your contact list or the social media platform of your choice.

Part 7: App settings and stuff

You can send us feedback in-app via the settings menu. Tell us whether you think Forevery is excellent or excrement – we’d love to hear your honest feedback. You can also review the privacy policy and terms and conditions within the settings menu, and you can delete your account (nooooooo whyyyyyy).

We hope these tutorials have been helpful! Keep in touch with us on Facebook or Twitter and let us know how we can make Forevery even better!


Clarifai Featured Hack: Automatically organize photos on your computer into different folders with ImgSort

One of the top requests we’ve gotten since the launch of our mobile photo discovery app Forevery has been a desktop version that will automatically tag and sort photos on a hard drive. While we’re hard at work making that request into a real product, we’ve had some eager developers hacking away at their own versions of a desktop photo organizer using Clarifai’s API.

ImgSort was created by Nirupama Suneel and Nathan Wong at Dubhacks 2015. Their app automatically sorts images on your computer into different folders based on categories that you set … so, if you ever wanted a folder of only cat photos, your dream is about to come true. Check out the GitHub repo to try it for yourself!

WHY WE ❤ IT

ImgSort essentially tackles the same problem as our mobile app Forevery. It’s such a hassle to sort and categorize photos manually no matter what device you’re storing your images on, so the functionality of this hack really resonated with us.

HOW YOU DO IT

We asked Nathan Wong of the ImgSort team to share how they created their app. Here’s what they had to say!

Clarifai: What inspired your idea for such a handy tool?

Nathan: Our team didn’t have a clue of what to build when we arrived at Dubhacks 2015. We attended Clarifai’s machine learning workshop where we got to see your machine learning and image recognition APIs in action. We felt that the image recognition aspect of the Clarifai API was incredibly powerful and easy to use, and we decided to design a project based around that functionality.

How does the app work?

After tinkering around with the Clarifai Java client, we had the idea of creating a sorting algorithm that could sort image files by content rather than lexicographically. When you have a massive folder of images on your computer, it’s such a hassle to sort them into categories by hand! To solve this problem, we built a program called ImgSort that can automatically sort a folder of images on a computer’s local file system into categories that the user defines.

What’s the magic sauce behind your Clarifai implementation?

We accomplished this by generating tags for each image using Clarifai’s API and cross-referencing them with the names of the categories and related words such as synonyms and hyponyms. The program works pretty well! However, it generally works best with generic category names such as “food”, “people”, and “buildings”. We hope to improve the program by using Clarifai’s machine learning concept models to find a way to sort the images so that the user doesn’t even have to declare new category names.
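Here’s a toy JavaScript version of that matching step, with made-up synonym lists (the real app pulled in synonyms and hyponyms programmatically):

```javascript
// Each category maps to its name plus a few related words.
var categories = {
  food: ['food', 'meal', 'dinner', 'cuisine', 'pizza'],
  people: ['people', 'person', 'portrait', 'crowd', 'face'],
  buildings: ['buildings', 'building', 'architecture', 'skyline', 'city']
};

// Pick the category whose word list overlaps most with the image's tags.
function classify(tags) {
  var best = null;
  var bestScore = 0;
  Object.keys(categories).forEach(function (name) {
    var score = tags.filter(function (t) {
      return categories[name].indexOf(t) !== -1;
    }).length;
    if (score > bestScore) {
      bestScore = score;
      best = name;
    }
  });
  return best; // null if nothing matched
}

console.log(classify(['pizza', 'dinner', 'table'])); // → "food"
console.log(classify(['skyline', 'city', 'night'])); // → "buildings"
```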

Thanks for sharing, Nathan!

To learn more, check out our documentation and sign up for a free Clarifai account to start using our API – all it takes is three lines of code to get up and running! We’re super excited to share all the cool things built by our developer community, so don’t forget to tweet @Clarifai to show us your apps.

And give Nathan and Nirupama some props in the comments below. Until next time!


We launched the Forevery Photo app a week ago … so now what?

Thank you all for downloading our Forevery Photo app! And thanks in advance to everyone who is about to click this link right here to get our app and see what all the buzz is about (you know you want to).

Launching your first app is like raising a lion cub in your backyard and then finally releasing it into the wild. You worry that maybe you didn’t prepare your cub enough for the dangers of the savannah. You wonder if the other lions will accept him … or if they’ll eat him. And you hope that your lion cub grows into the biggest, baddest, boss-est lion that all the dentists want as trophies for their walls (too soon?).

With that said, it’s been a week since we launched Forevery, a personalized photo discovery and organization app, into the wild. We couldn’t be more thrilled with the traction we’re getting so far and we hope to keep it going into 2016! We rounded up some of the nice things our customers and the press said about our app so we could humblebrag a little (ok, a lot).

When we first shared Forevery with the world, we knew that people would be amazed by how accurate and fun it is. What we didn’t know was just how creative people would get with the tags. Check out some of the best custom tags our users have shared with us:

[Image: users’ custom tags]

We’ll continue to add more features and functionality to Forevery in 2016, so make sure you get your product feature requests to us! Forevery is just the first of many new apps and integrations we’ll be launching over the next year. We can’t wait to share them with you and hear your feedback, so follow us on Facebook or Twitter and keep in touch!


Clarifai Featured Hack: How to build an artificially intelligent, 3D-printed Lightsaber

What happens when you combine the power of Clarifai’s API with an accelerometer and a 3D-printed lightsaber? If your answer is SPACE MAGIC then you are correct, young Padawan.

Chad Ramey, a Georgia Institute of Technology computer science undergrad, created an awesome 3D-printed Lightsaber that recognizes motion gestures (insert swishy Lightsaber sound here). Using a custom-trained Clarifai classifier and an Inertial Measurement Unit (IMU) to record movement data, Chad was able to build an artificially intelligent Lightsaber in a single weekend.


WHY WE ❤ IT

Not only is Chad’s use of our image recognition API for gesture recognition a great demonstration of out-of-the-box thinking, it’s also a testament to the power of our technology. Chad’s project shows that Clarifai’s artificial intelligence API can be used to recognize any non-visual data that can be represented graphically (for example, a song’s visual spectrogram).

HOW YOU DO IT

We had the chance to catch up with Chad about his epic Lightsaber project and how he managed to build it. If you want to learn from a real Jedi hackathon master, read on!

Clarifai: What inspired you to build this project for the UGAHacks hackathon?

Chad: My teammate Josh Marsh and I wanted to take sci-fi objects and translate the crazy things that happen in the movies to things that happen in real life. For example, if you were to wave Harry Potter’s magic wand in real life, we wanted to be able to recognize that gesture and make something happen as a result of it, like turn on a light. We ended up going with a Lightsaber because it was the only thing big enough to put the accelerometer and microcontroller inside.

Using our visual recognition API for gesture recognition is pretty innovative. How did you come up with this idea?

I have a background in machine learning so I was immediately interested in using the Clarifai API. Raw motion data is very noisy and difficult to translate into meaningful patterns. It traditionally involves a lot of complex, difficult math – I hoped that Clarifai’s AI would be able to pick up patterns in the data for us.

How did you implement Clarifai in your project?

“With Clarifai, you don’t have to worry about any of the hard math to detect patterns.”

We took Clarifai’s custom binary classifiers and trained them on a small data set of graphs to identify four different gestures – right, left, up, and down. We used a Python library called matplotlib to take all the data we were getting from our Lightsaber sensor, plot it as a chart, and save it as a .jpg image. We custom trained Clarifai to recognize which plot meant left, right, etc. Super, super simple – you only need an image and none of the math.


If someone else wanted to build a similar project, what technical skill level would they need?

“The really awesome thing that I got super excited about was how easy it was to work with the Clarifai API. It was an hour or two of work – just 100 lines of Python code for me to write.”

Unlike open source toolsets, you don’t have to understand what’s going on under the Clarifai hood in order to use the technology properly. For example, the best open source tool available for motion right now is the Gesture Recognition Toolkit (GRT). But, you need a lot of machine learning knowledge to know what to do with it.

“With the Clarifai API, you don’t need to be a data scientist or machine learning expert. Any developer can use it.”

What did it take to 3D print and assemble the actual Lightsaber?

It took about 10 hours for the Lightsaber parts to print from our two 3D printers. We took a strip of individually adjustable RGB lights and put it on the inside of the saber for the glow. In the hilt, we stored the accelerometer and a microcontroller to collect data and control the lights. We used an Arduino microcontroller – a really awesome tool that you can build robots with. It’s very popular in the maker community and only costs $10-12.


Are there real-world applications for how you used Clarifai in your Lightsaber project?

“Our project essentially proves that Clarifai can be used for gesture recognition, so there are limitless real-world motion-based applications that could be built with Clarifai.”

You could make your own activity tracker (like a Fitbit) or smartwatch using a bracelet with a motion sensor and Clarifai as the backend. You could train the model to understand whether you’re doing a push-up or a pull-up, or whether you’re running or walking, and track your fitness that way. Or, you could control Wi-Fi-connected things through gestures – draw the letters “TV” in the air to turn on your Wi-Fi-connected TV.

Thanks for sharing, Chad!

For everyone inspired by Chad’s epic project, sign up for a free Clarifai account to start using our API – all it takes is three lines of code to get up and running! We’re super excited to share all the cool things built by our developer community, so don’t forget to tweet @Clarifai to show us your apps.


Meet Forevery, our new AI-powered mobile app for every photo in your camera roll

We’re excited to announce the launch of Forevery, a new photo organization and sharing app powered by Clarifai’s visual recognition platform.

At Clarifai, we’ve spent years developing artificial intelligence software used by companies around the world to better understand their image and video data. Along the way, we realized that the same technology that allows Trivago to enhance their hotel search and Style Me Pretty to curate wedding albums could also be applied to the personal photos we carry around on our smartphones every day.

So, we stepped back, looked at what people did with their photo collections, and built an app around making those experiences better. The result is Forevery, a new app focused on three themes: rediscovering memories, sharing beautiful content, and making photo discovery personal.

Rediscovering Memories

Forevery automatically recognizes the people, places, things, and times in your photos and makes it simple to search for them however you want. But it doesn’t stop there. It digs deeper into the pictures to find emotions like “love” or “happiness” and concepts like “adventure” or “celebration”. It looks through your camera roll to identify your best shots, building an ever-changing home screen full of your favorite memories that you can use as a starting point to explore your entire collection.


Sharing Beautiful Content

Forevery automatically packages your photos into stories, each with a cover and a title. It lets you know when there’s something new to share and suggests who to share it with based on the people in the story and who you’ve shared with in the past. But you don’t have to wait for Forevery to create the story for you: any search result or selection of photos can be shared as a story with the touch of a button. Privately sharing your memories with friends and family has never been easier.


Making Photo Discovery Personal

What gives our photos meaning is often deeply personal: the people we care about, our pets, or objects of special significance. It’s easy to teach Forevery about these. Visit the “People” section to teach Forevery to recognize the people you know. After you select a few images of a person, Forevery will learn to automatically recognize them in all of your photos, even new ones you take with the camera. Similarly, you can add new concepts and objects to Forevery in the “Things” section.
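Forevery’s internals aren’t public, but “teach by example” features like this are often built on nearest-centroid matching over embedding vectors. Here’s a toy sketch of that general idea (every name and number below is invented; real embeddings have hundreds of dimensions and come from a trained network):

```python
# Toy nearest-centroid person matcher: average each person's example
# embeddings into a centroid, then assign new photos to the closest one.
def centroid(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_person(embedding, people):
    """Return the known person whose example centroid is closest."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(people, key=lambda name: dist2(embedding, people[name]))

# Pretend 2-D embeddings built from a few tagged example photos
people = {
    "Alice": centroid([[0.9, 0.1], [0.8, 0.2]]),
    "Bob":   centroid([[0.1, 0.9], [0.2, 0.8]]),
}
print(nearest_person([0.85, 0.15], people))  # prints "Alice"
```

With that structure, adding a new person to recognize is just computing one more centroid – no retraining of the underlying network.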


Once you add a person or thing to Forevery, it becomes instantly searchable, both from the app and in Spotlight, and can be combined with other tags to help you find exactly the photos you’re looking for.

We think these features make Forevery a great way to revisit memories from your photo collection with the people you care about, and we’re excited to have the opportunity to share Forevery with you today.

Forevery is available for free on the App Store. Download it now, thank us later!