Side-by-side guide: How to transition from Clarifai v1 to v2 API

We’re excited to announce that our v2 API (including the new Custom Training and Visual Search features) is graduating from developer preview to general availability. If you were previously using our v1 API, here’s a guide to help you transition to v2 easily and seamlessly so you can take advantage of our new usage-based pricing and the full range of our new features.

Thanks to all you early testers and adopters, our v2 API is graduating from developer preview to general availability! What does that mean for everyone who’s already been using v1 of our API? In short, you can do everything in v2 that you could in v1, and more:

  • Teach our API to recognize any new concept with Custom Training
  • Search images by visual similarity, tag, or a combination of both with Visual Search
  • Add any custom information (like price, SKU, etc.) to images with custom metadata
  • Explore and manage your models and media assets with our clean and intuitive user interface
  • Get the full picture of your account activity with better usage reporting

Read on for a walkthrough on how to easily and seamlessly transition your projects from v1 to v2 so you can take advantage of our new usage-based pricing and access the full range of features in our v2 API!

“Tag” vs. “Predict” Endpoints

Let’s first cover some important details. As you may recall, in v1 the ‘Tag’ endpoint was used to tag the contents of your images or videos. In v2, we renamed this endpoint to ‘Predict’ to more accurately reflect what our technology is doing. In v1, when you ‘Tag’ an image, you tell the API which model to use; in v2, ‘Predict’ works with models in exactly the same way.

Models

We have a variety of visual recognition models you can choose from. The default is our General model, which understands more than 11,000 concepts in images, like objects, themes, emotions, and more. Other models are available when you want to focus on more specific content: Not Safe For Work (NSFW), Weddings, Travel, Food, and Color. Furthermore, you can train your own models in v2 using Custom Training, which you can learn more about in this post.

In this guide, we will use the General model in our examples. If you need a refresher, you can find v1 documentation here. For v2, all the code below can be found through our client installation instructions and quickstart examples.

JavaScript

Let’s start with the JavaScript client.

Both v1 and v2 of the JavaScript client are installed from npm. Just run:

$ npm install clarifai

In the JavaScript client, you can replace the model id with any of the id aliases defined in the v2 client, such as:

Clarifai.GENERAL_MODEL, Clarifai.NSFW_MODEL, Clarifai.WEDDING_MODEL, Clarifai.TRAVEL_MODEL, Clarifai.FOOD_MODEL, and Clarifai.COLOR_MODEL.
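For example, once you have a v2 app instance (created as in the v2 snippet below), pointing the same predict call at the Food model is just a matter of swapping the alias:

// Same predict call as below, aimed at the Food model alias instead of the General model
app.models.predict(Clarifai.FOOD_MODEL, 'https://samples.clarifai.com/metro-north.jpg').then(
  handleResponse,
  handleError
);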

v1 API

// Import the client
var Clarifai = require('clarifai');

// Initialization with CLIENT_ID and CLIENT_SECRET
Clarifai.initialize({
  'clientId': '{clientId}',
  'clientSecret': '{clientSecret}'
});

// Get tags for an image via url
Clarifai.getTagsByUrl('https://samples.clarifai.com/metro-north.jpg', {model: 'general-v1.3'}).then(
  handleResponse,
  handleError
);

// Get tags for an image via image bytes
Clarifai.getTagsByImageBytes('G7p3m95uAl...', {model: 'general-v1.3'}).then(
  handleResponse,
  handleError
);

v2 API

// Import the client
var Clarifai = require('clarifai');

// Instantiate a new Clarifai app, passing in your clientId and clientSecret
var app = new Clarifai.App(
  '{clientId}',
  '{clientSecret}'
);

// Predict the contents of an image via url 
app.models.predict(Clarifai.GENERAL_MODEL, 'https://samples.clarifai.com/metro-north.jpg').then(
  handleResponse,
  handleError
);

// Predict the contents of an image via image bytes
app.models.predict(Clarifai.GENERAL_MODEL, {base64: 'G7p3m95uAl...'}).then(
  handleResponse,
  handleError
);

Python

For the Python client, installing v2 is exactly the same as installing v1. Just run the following command:

$ pip install clarifai

Now, let’s get to the code.

v1 API

# Import the client
from clarifai.client import ClarifaiApi

# Instantiation
api = ClarifaiApi("{clientId}", "{clientSecret}")

# Get tags for an image via url
results = api.tag_image_urls('https://samples.clarifai.com/metro-north.jpg', model='general-v1.3')

# Get tags for an image via image bytes
results = api.tag_images(open('/home/user/image.jpeg', 'rb'), model='general-v1.3')

v2 API

# Import the client
from clarifai.rest import ClarifaiApp

# Instantiation
app = ClarifaiApp("{clientId}", "{clientSecret}")

# Predict the contents of an image via url
results = app.models.get('general-v1.3').predict_by_url('https://samples.clarifai.com/metro-north.jpg')

# Predict the contents of an image via image bytes
results = app.models.get('general-v1.3').predict_by_filename('/home/user/image.jpeg')

Java

For the v2 Java client, if you are an Android developer, you can call executeAsync() at the end of your ‘Predict’ call to run the request asynchronously instead of writing AsyncTask boilerplate. Anywho, here’s how you can transition from v1 to v2 in Java.

v1 API

// Installation
// Maven
<dependency>
  <groupId>com.clarifai</groupId>
  <artifactId>clarifai-api-java</artifactId>
  <version>1.2.0</version>
</dependency>
// Or Gradle
compile "com.clarifai:clarifai-api-java:1.2.0"

// Instantiation
ClarifaiClient clarifai = new ClarifaiClient("{clientId}", "{clientSecret}");

// Get tags with the model via url
List<RecognitionResult> results = clarifai.recognize(new RecognitionRequest("https://samples.clarifai.com/metro-north.jpg")
        .setModel("general-v1.3"));

// Get tags with the model via image bytes
List<RecognitionResult> results = clarifai.recognize(new RecognitionRequest(new File("/home/user/image.jpeg"))
        .setModel("general-v1.3"));

v2 API

// Installation
// Maven
<dependency>
  <groupId>com.clarifai.clarifai-api2</groupId>
  <artifactId>core</artifactId>
  <version>2.2.0</version>
</dependency>
// Or Gradle
compile "com.clarifai.clarifai-api2:core:2.2.0"

// Instantiation
ClarifaiClient client = new ClarifaiBuilder("{clientId}", "{clientSecret}").buildSync();

// Predict the contents of an image via url
List<ClarifaiOutput<Concept>> results = client.predict("general-v1.3")
        .withInputs(ClarifaiInput.forImage(ClarifaiImage.of("https://samples.clarifai.com/metro-north.jpg")))
        .executeSync().getOrNull();

// Predict the contents of an image via image bytes
List<ClarifaiOutput<Concept>> results = client.predict("general-v1.3")
        .withInputs(ClarifaiInput.forImage(ClarifaiImage.of(new File("/home/user/image.jpeg"))))
        .executeSync().getOrNull();

Objective-C

For installation in Objective-C, add:

pod 'Clarifai'

to your Podfile first. Then install the dependencies and generate the workspace by running:

pod install

Now, it’s time to open YOUR_PROJECT_NAME.xcworkspace to start coding!

v1 API

// Import the client
#import "ClarifaiClient.h"

// Instantiation
ClarifaiClient *client = [[ClarifaiClient alloc] initWithAppID:@"{clientId}"
                                                     appSecret:@"{clientSecret}"];

// Get tags for an image via url
NSString *imageAsUrl = @"https://samples.clarifai.com/metro-north.jpg";
[client recognizeURLs:@[imageAsUrl] completion:^(NSArray *results, NSError *error) {
  NSLog(@"results: %@", results);
}];

// Get tags for an image via image bytes
UIImage *image = [UIImage imageNamed:@"dress.jpg"];
NSData *imageAsJpeg = UIImageJPEGRepresentation(image, 0.9);
[client recognizeJpegs:@[imageAsJpeg] completion:^(NSArray *results, NSError *error) {
  NSLog(@"results: %@", results);
}];

v2 API

// Import the client
#import "ClarifaiApp.h"

// Instantiation
ClarifaiApp *app = [[ClarifaiApp alloc] initWithAppID:@"{clientId}"
                                            appSecret:@"{clientSecret}"];

// Predict the contents of an image via url
ClarifaiImage *image = [[ClarifaiImage alloc] initWithURL:@"https://samples.clarifai.com/metro-north.jpg"];
[app getModelByName:@"general-v1.3" completion:^(ClarifaiModel *model, NSError *error) {
  [model predictOnImages:@[image] completion:^(NSArray<ClarifaiSearchResult *> *outputs, NSError *error) {
    NSLog(@"outputs: %@", outputs);
  }];
}];

// Predict the contents of an image via image bytes
UIImage *image = [UIImage imageNamed:@"dress.jpg"];
ClarifaiImage *clarifaiImage = [[ClarifaiImage alloc] initWithImage:image];
[app getModelByName:@"general-v1.3" completion:^(ClarifaiModel *model, NSError *error) {
  [model predictOnImages:@[clarifaiImage] completion:^(NSArray<ClarifaiSearchResult *> *outputs, NSError *error) {
    NSLog(@"outputs: %@", outputs);
  }];
}];

v1 vs. v2 Response

Now, you might be wondering whether you will get the same response by calling ‘Predict’. The answer is no. Here is a visual illustration of how the v2 response differs:

[Illustration: how the Clarifai v2 API predicts]

Make sure that you are accessing the outputs correctly by comparing the JSON formats below; a short sketch of how to read each one in JavaScript follows the v2 response.

v1 API

{
  "status_code": "OK",
  "status_msg": "All images in request have completed successfully. ",
  "meta": {
    "tag": {
      "timestamp": 1451945197.398036,
      "model": "general-v1.3",
      "config": "34fb1111b4d5f67cf1b8665ebc603704"
    }
  },
  "results": [
    {
      "docid": 15512461224882631443,
      "url": "https://samples.clarifai.com/metro-north.jpg",
      "status_code": "OK",
      "status_msg": "OK",
      "local_id": "",
      "result": {
        "tag": {
          "concept_ids": [
            "ai_HLmqFqBf",
            ...
          ],
          "classes": [
            "train",
            ...
          ],
          "probs": [
            0.9989112019538879,
            ...
          ]
        }
      },
      "docid_str": "31fdb2316ff87fb5d747554ba5267313"
    }
  ]
}

v2 API

{
  "status": {
    "code": 10000,
    "description": "Ok"
  },
  "outputs": [
    {
      "id": "ea68cac87c304b28a8046557062f34a0",
      "status": {
        "code": 10000,
        "description": "Ok"
      },
      "created_at": "2016-11-22T16:50:25Z",
      "model": {
        "name": "general-v1.3",
        "id": "aaa03c23b3724a16a56b629203edc62c",
        "created_at": "2016-03-09T17:11:39Z",
        "app_id": null,
        "output_info": {
          "message": "Show output_info with: GET /models/{model_id}/output_info",
          "type": "concept"
        },
        "model_version": {
          "id": "aa9ca48295b37401f8af92ad1af0d91d",
          "created_at": "2016-07-13T01:19:12Z",
          "status": {
            "code": 21100,
            "description": "Model trained successfully"
          }
        }
      },
      "input": {
        "id": "ea68cac87c304b28a8046557062f34a0",
        "data": {
          "image": {
            "url": "https://samples.clarifai.com/metro-north.jpg"
          }
        }
      },
      "data": {
        "concepts": [
          {
            "id": "ai_HLmqFqBf",
            "name": "train",
            "app_id": null,
            "value": 0.9989112
          },
          ...        
          {
            "id": "ai_VSVscs9k",
            "name": "terminal",
            "app_id": null,
            "value": 0.9230834
          }
        ]
      }
    }
  ]
}
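For instance, pulling the top tag and its score out of each response looks roughly like this in JavaScript (v1Response and v2Response are placeholder names for the parsed JSON shown above):

// v1: tag names and probabilities live in parallel arrays
var v1Tag  = v1Response.results[0].result.tag.classes[0];  // "train"
var v1Prob = v1Response.results[0].result.tag.probs[0];    // 0.9989...

// v2: each concept is an object carrying both the name and the score
var v2Concept = v2Response.outputs[0].data.concepts[0];
var v2Tag  = v2Concept.name;   // "train"
var v2Prob = v2Concept.value;  // 0.9989...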

That’s it! Hopefully, this guide helped you transition your application to the Clarifai v2 API successfully. Please let us know if you run into any trouble by shooting us an email at support@clarifai.com.


How to create an iOS app with React Native and the Clarifai API

Want to build an iOS application but don’t know Swift? Well, React Native lets you build a native mobile application with JavaScript and React so you don’t have to learn a whole new language in order to create an iOS app. Here’s a tutorial on how to create an iOS app with Clarifai’s API and React Native!

As you might already know, Swift is a programming language created by Apple for building apps for iOS, Mac, Apple TV, and Apple Watch. Whether or not you’re familiar with Swift, it may not be your favorite programming language to work with. So, today I will walk you through how to start an iOS app using React Native, a framework that lets you build mobile apps using only JavaScript, together with the Clarifai API, so you can easily build not just iOS apps, but iOS apps powered by machine learning and visual recognition!

Let’s set up our dev environment first!

  • npm
    Install Node.js if you don’t have npm installed.
    npm is a robust and consistent package manager for JavaScript. It helps us specify and install package dependencies, and its consistency means teams can quickly collaborate or set up projects with the same build using a simple npm install.
$ npm install -g react-native-cli

Add sudo to the front if you get a permission error. The -g flag installs the package globally.

Getting started

  • Initialize your project by running the react-native command line:
$ react-native init <YOUR_PROJECT_NAME> 
$ cd <YOUR_PROJECT_NAME>

react-native init runs npm install react react-native --save and npm init for you, so after the command finishes you’ll have a new React Native project.

$ npm install clarifai --save

To access the Clarifai API, we first need to install the JavaScript client, which we’ll use to call the ‘Predict’ endpoint and get the contents of images or videos.

--save writes dependencies into the package.json file for you.

$ npm install react-native-image-picker@latest --save

react-native-image-picker is a React Native module that provides a native UI component for accessing the camera or the photo gallery.
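At this point, the dependencies section of your package.json should contain entries roughly like the following (the version strings are placeholders for whatever npm installed for you):

"dependencies": {
  "clarifai": "<installed version>",
  "react": "<installed version>",
  "react-native": "<installed version>",
  "react-native-image-picker": "<installed version>"
}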

Running the application

First, choose your favorite text editor. I use Sublime, but you can use Atom, Vim, or anything you prefer.

Once you open your project, your project folder should look like this:

[Screenshot: the generated project folder structure]

Now you must be curious about what the application looks like! You can either run the application through Xcode or through the following command:

$ react-native run-ios

If you run into “couldn’t find preset ‘es2015’ TransformError”, install the babel-preset-es2015:

$ npm install babel-preset-es2015 --save

Here is a screenshot of the application after you run the command through the terminal:

[Screenshot: the app running in the iOS simulator]

Or, if you are wondering how to run the application through Xcode:

[Screenshot: running the app from Xcode]

Modifying the application

Great! Now that that’s done, it’s time to add some components! This application will let the user take a picture or select one from the gallery. The Clarifai API will then analyze the image and tell you what’s in it.

To keep things easy to follow, this tutorial focuses on modifying index.ios.js only. Once you’ve mastered React Native, separating components into their own files is ideal.

  • Adding a clickable text

Update the return value of the render() function to the following:

<View style={styles.container}>
  <TouchableHighlight onPress={this.selectImage.bind(this)}>
    <Text>Select an image</Text>
  </TouchableHighlight>
</View>

Now, when a user taps ‘Select an image’, the selectImage() function will be triggered. Calling bind(this) binds our YOUR_PROJECT_NAME component instance to this when selectImage() is called, so we won’t get a TypeError later on.
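If you prefer to skip the explicit bind, an arrow function works just as well, since arrow functions keep the surrounding this (purely a stylistic alternative):

// Equivalent to the snippet above, binding via an arrow function
<TouchableHighlight onPress={() => this.selectImage()}>
  <Text>Select an image</Text>
</TouchableHighlight>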

TouchableHighlight is one of React Native’s built-in components; import it from 'react-native':

import {
  ...,
  TouchableHighlight
} from 'react-native';
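
Once you do start splitting components into their own files, the ‘Select an image’ button could look something like this. This is just a minimal sketch, and the file name SelectButton.js is an assumption:

// SelectButton.js -- a hypothetical standalone component
import React, { Component } from 'react';
import { TouchableHighlight, Text } from 'react-native';

export default class SelectButton extends Component {
  render() {
    // The parent passes its handler down as a prop instead of binding here
    return (
      <TouchableHighlight onPress={this.props.onPress}>
        <Text>Select an image</Text>
      </TouchableHighlight>
    );
  }
}

You would then import it with import SelectButton from './SelectButton'; and render <SelectButton onPress={this.selectImage.bind(this)} /> inside render().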
  • Importing image picker

In order to import the downloaded react-native-image-picker package, we need to link it using the following steps:

In Xcode, right-click your project folder, choose Add Files to “YOUR_PROJECT_NAME”, and select RNImagePicker.xcodeproj under node_modules/react-native-image-picker/ios.

[Screenshot: adding RNImagePicker.xcodeproj in Xcode]

Scroll down to Linked Frameworks and Libraries and add libRNImagePicker.a from the Workspace.

[Screenshot: Linked Frameworks and Libraries with libRNImagePicker.a added]

For iOS 10+, add NSPhotoLibraryUsageDescription and NSCameraUsageDescription keys to your Info.plist for permissions.

[Screenshot: Info.plist with the photo library and camera usage description keys]

After setting up project libraries, let’s go back to our index.ios.js file.

Import the react-native-image-picker package first:

import ImagePicker from 'react-native-image-picker';

Add an options variable to customize ImagePicker. We set a title, skip the iCloud backup, and cap maxWidth for better performance when we send the image to the Clarifai service.

var options = {
  title: 'Select an Image',
  storageOptions: {
    skipBackup: true,
  },
  maxWidth: 480
};
  • Implementing selectImage()

Paste the following lines above your render() function, inside the class:

selectImage(){
  ImagePicker.showImagePicker(options, (response) => {
    console.log('Response = ', response);
    if (response.didCancel) {
      console.log('User cancelled image picker');
    }
    else if (response.error) {
      console.log('ImagePicker Error: ', response.error);
    }
    else {
      // Do something with the selected image
    }
  });
}

Now run your application again; you should have an image picker available:

[Screenshot: the image picker]

  • Let’s show the image that the user selected

Display the image in the render() function with:

<View style={styles.container}>
  ...
  <Image
    source={{uri: this.state.imageSource}}
    style={styles.image}
  />
</View>

Image is another built-in component. Let’s import it from 'react-native':

import {
  ...,
  Image
} from 'react-native';

Update the style sheet for image display:

const styles = StyleSheet.create({
  ...,
  image: {
    width: 200,
    height:200 
  }
});

Set the initial state in the constructor inside the class:

constructor() {
  super();
  this.state = {imageSource:'https://community.clarifai.com/uploads/default/_emoji/clarifai.png'};
}

Update selectImage():

// Do something with the selected image
this.setState({imageSource: response.uri.replace('file://', '')});

Your application should look like this after selecting an image from the gallery:

[Screenshot: the app displaying the selected image]

The Clarifai API

Now we are getting to the most exciting part – getting visual content using the Clarifai API!

First, if you don’t have a Clarifai account yet, create one through developer.clarifai.com. Next, create your first application. Then copy the Client Id and Client Secret values and use them to replace {CLIENT_ID} and {CLIENT_SECRET} below, outside of your class:

var app = new Clarifai.App(
  '{CLIENT_ID}',
  '{CLIENT_SECRET}'
);

Don’t forget to import the Clarifai API:

import Clarifai from 'clarifai';

With the Clarifai API, we can get the visual contents of the image a user selects. A simple way to show what Clarifai returns is to add a text element in the render() function:

<View style={styles.container}>
  ...
  <Text>{this.state.tagText}</Text>
</View>

First, we can initialize the state tagText as an empty string:

constructor() {
  ...
  this.state = {imageSource:'https://community.clarifai.com/uploads/default/_emoji/clarifai.png', tagText: ''};
  ...
}

One last thing is to update the selectImage() function and add the following code under the comment:

// Do something with the selected image
this.setState({imageSource: response.uri.replace('file://', '')});
app.models.predict(Clarifai.GENERAL_MODEL, {base64: response.data}).then(
  (res) => {
    console.log('Clarifai response = ', res);
    let tags = '';
    for (let i = 0; i < res.data.outputs[0].data.concepts.length; i++) {
      tags += res.data.outputs[0].data.concepts[i].name + ' ';
    }
    this.setState({tagText: tags});
  },
  (error) => {
    console.log(error);
  }
);
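As a purely optional refinement, the same space-separated list of tag names can be built with map and join instead of a for loop:

// Builds the same tag string as the loop above
let tags = res.data.outputs[0].data.concepts
  .map((concept) => concept.name)
  .join(' ');
this.setState({tagText: tags});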

Hooray! Let’s test our application. You should get results by selecting images from your iOS device or simulator:

[Screenshot: predicted tags displayed under the selected image]

Please check out the GitHub repo for this tutorial if you need more instructions and resources! Exponent, NativeBase, and Material Design are great tools for designing, and Deco is a nice React Native IDE for development.



Inside Clarifai HQ: Life as a Developer Evangelist intern

Now that the summer is grinding to an abrupt and unwelcome halt, we’ve asked our summer interns to share their experiences at Clarifai and help usher in our new class of Fall and Winter interns with their hard-earned wisdom.

This summer, I had the distinct honor to be a Developer Evangelist intern at Clarifai. A lot of people ask me what exactly a developer evangelist does. I tell them, “It’s like being a celebrity, except without the money or the fame or the good looks.” But really, a developer evangelist acts as the liaison between a company and the developer community. That’s me!

I wanted to share a little bit about my internship – what I did this summer, what I learned, and what I’m up to next – to help future interns decide if being a Developer Evangelist is their calling. Read on if you want me to drop some knowledge on you.


What does a developer evangelist intern do day-to-day?

During my Clarifai Developer Evangelist internship, I helped the front-end team with integrations, attended meetups to show off our demos, mentored at a women’s hackathon to help teach young students how to code, and wrote blog posts about the awesomeness that is Clarifai!


What is the most important skill a developer evangelist can have?

While not exactly a “skill” per se, I think being really self-motivated and believing in the company and technology you’re representing is super important. Basically, if you can’t (or won’t) drink the Kool-Aid, you shouldn’t be a dev evangelist. The core values at Clarifai are community, learning, astonishment, responsibility, innovation, fire, appreciation, and impact. What keeps me motivated is how I’ve been able to apply our core values to my work every single day.

How has this internship differed compared to other experiences?

I learned how to build a technical project from scratch while also helping the team add features to our code base. Learning to do this has helped me explain things better when I mentor others. This is very exciting and different from other software engineering internships because I am not just an engineer, but also an advocate.


Ok, but how fun can an internship really be?

Ohhh yeah, this internship was a blast! I know that there are large companies out there that bring all interns together, host a massive signature event, and give gifts to every single one of their interns at the end of the event. It certainly sounds exciting, and I have been there. However, at the big companies I’ve worked for, I was only really able to make one close friend at work through the summer. At Clarifai, it’s totally different! We are a team of around 30 people. We eat lunch together, we stretch and share together (yeah, it’s exactly what it sounds like – stretch and share about our day), we paint together, and we sing (badly) together! We have a tight-knit company, and it really shows when we work together.

What is the startup culture like?

It’s great! We have an open office in which I can ‘bother’ my manager more than a few times a day just because she sits right next to me. During company meetings, every conversation is open to the floor and our CEO, Matt Zeiler, attends all of them. Because of the openness of the environment, everyone can talk to him about pretty much anything.


What happens after the end of an internship?

All sorts of great things! Through this internship, I managed to gain some new skills and make the ones I already had even better.

As someone who speaks English as a second language, this internship definitely helped improve my public speaking and mentorship skills. Also, I now have more experience in technical communication. Being surrounded by intelligent people creating state-of-the-art work allowed me to gain more domain expertise and A.I. knowledge.

Lastly, my favorite part of my internship was living that #HackathonLife! I was really pumped to be able to go to different hackathons and meet interesting people. It was always exciting to see the cool and creative things they were building with the Clarifai API! I loved leading hackathons so much that it helped me realize what I wanted to do with my life full-time … SPOILER ALERT: After my internship, I was lucky to have the Clarifai team offer me a full-time Developer Evangelist job … and I took it!

Does this internship give you major FOMO? Do you want to be part of the #Clarifam? Maybe you should take a look at our careers page – we’re hiring!


Building YUMMIFai: A recipe recommendation app powered by Clarifai’s Food Recognition model

Remember that Food Recognition model we released last week? Well, we wanted to build something fun to showcase the power of our technology so we created YUMMIFai, an Android app that lets you take a photo of your fridge (or any food, really) to get a relevant and appetizing recipe based on your pic.

Have you ever taken a picture of a delicious meal at a restaurant and wondered how you could make it on your own? Have you ever looked at your #basic cooking and dreamed that you could make something better with similar ingredients? Have you ever had problems deciding what to make for dinner, given that you only have a bottle of ketchup and two slices of stale bread in your fridge?

Well, I have. After too many nights spent eating ketchup sandwiches, I decided to use the Clarifai API to build YUMMIFai, an app that recommends recipes based on pictures!


With YUMMIFai, you first take a photo or select one from your gallery. The Clarifai API will then analyze the image and identify the food items present. Once food items are recognized, you can select one or more of them to get a recipe recommendation from Pinterest. Then, you can save the recipe and create a reminder in Calendar to make the meal at your desired time and date!
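YUMMIFai itself is an Android app built with our Java client, but to give a feel for the middle of that flow, here is a rough JavaScript sketch: predict food concepts with the Food model, then turn the picks into a Pinterest recipe search. The confidence cutoff and the query-building details here are illustrative assumptions, not code from the app:

// Rough sketch only: recognize food items, then build a recipe search query
app.models.predict(Clarifai.FOOD_MODEL, {base64: imageData}).then(function (response) {
  // Keep reasonably confident food concepts (0.9 is an arbitrary cutoff);
  // depending on the client version, outputs may be nested under response.data
  var foods = response.outputs[0].data.concepts
    .filter(function (concept) { return concept.value > 0.9; })
    .map(function (concept) { return concept.name; });

  // Hand the picked ingredients to a Pinterest search for recipes
  var query = encodeURIComponent(foods.join(' ') + ' recipe');
  console.log('https://www.pinterest.com/search/pins/?q=' + query);
});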

While this isn’t one of our traditional Copy + Paste guides, you can still create your own recipe recommendation app using the GitHub repo for YUMMIFai! And, if you’ve always wondered how to build an Android app with the Clarifai API, we have a Clarifai Java client and a Clarifai Android Starter. Get building!



Inside NYC’s first A.I. Hackathon, hosted by Clarifai x General Assembly

The robots have won, thanks to the geniuses at this year’s Clarifai x General Assembly Artificial Intelligence (A.I.) Hackathon! Last week, 150 developers gathered at General Assembly’s swanky Flatiron office to build, hack, and make creative, robot apocalypse-inducing, A.I.-based smart apps. 35 teams came together, and over 20 of them used the Clarifai image and video recognition API.

The theme of our A.I. hackathon was to build something to advance the robot apocalypse – in a tongue-in-cheek way, obviously. Nothing sinister, we swear. 


The winning hacks were incredibly creative and seriously devious, but everyone who participated went above and beyond any other event we’ve ever hosted. Eleven hours of coding with hackers from different backgrounds, schools, and industries really made this hackathon a success! So, a huge thank you to you hackers and builders, creators and makers, evil geniuses and criminal masterminds, for coming out and joining in the fun. We’re so appreciative of everyone taking an entire Saturday to hack with us, especially those who traveled from outside of New York just to be a part of it!

Here’s a recap of some awesome projects that came out of the A.I. hackathon!

Grand Prize

We awarded the Grand Prize based on creativity, technical difficulty, and usability. You might be surprised at how the winner fits into our hackathon theme. Well, we all know that robots are coming for our jobs, but most people think that humans will still be better at emotional and creative things like composing music, making art, and being able to feel all the feels.

Silly humans! Hide yo’ kids, hide yo’ wife, hide yo’ husband, too, because A.I. is coming for everything.

Tessellize is an iOS app that generates art-based picture filters in real time using convolutional neural networks. With Tessellize, anyone can feel like an artistic genius like Picasso or Rembrandt. Meanwhile, human culture falls by the wayside as we rely on machines instead of our own creativity to make art. Sneaky, sneaky robots.

 


Most Likely to Make Your Smart Home Kill You

We wanted our hackers to brainstorm the ways that a smart home could go horribly, horribly wrong and build something to incite a smart home’s murderous tendencies. You know, for science.

SafeHouse is an app that manages the Nest technology in your home and is capable of changing temperatures on thermostats and ovens for you. However, the app requires your full attention: once you stop looking at the app, it will automatically set your appliances to max temperature, or as we like to call it, “kill mode.” How does the machine know you are not paying attention to it? Clarifai’s facial expression recognition technology, of course!

Most Likely to Distract Humans from Inevitable Oncoming Danger

The world is a dangerous place, even without sentient computers. People are constantly walking off cliffs while trying to catch Pokemon, or getting hit by cars as they’re taking selfies. We challenged our hackers to build something that would take advantage of humans’ tiny attention spans.

M-Lua-rio, a team of three high school students, made an app that teaches itself to pass levels of a Super Mario game. Watch this Mario speed run video if you’re not totally convinced that this hack is engrossing enough to distract you from more important things in life.

Most Likely to Slow Human Progress

Progress in the machine learning space has advanced in leaps and bounds in the past few years. We asked our hackers to build something that would make human progress slow to a crawl.

Miss-direction is a Chrome extension that fools you when you’re navigating, so you end up in a place close to where you want to go, but not exactly there. Subtle, yet effective. Also, can we just note that the same team who created Miss-direction created Wikisneaks, so basically the world should watch out for these two evil masterminds. As the wise prophet Beyonce once said, “Who run the world? Girls.” Except, in this case, it’s more like “Who ruin the world?” Better stay on their good side, ‘cuz their evil side is +100!

Most Secretly Evil

The worst evils are the ones that are hidden deep in our dark souls. We wanted our hackers to channel their inner demons to build a sneakily evil app.

Wikisneaks is a Chrome extension that loads web pages faster. Neat, huh! However, it also secretly changes pictures, dates, and statistics on the page. Not in an obvious way, but just enough that the information you’re getting is totally plausible yet totally incorrect. Pretty damn sneaky. 

Most Not Secretly Evil

Some evils come in like a wrecking ball, leaving a wake of devastation and destruction while whistling a happy tune. We challenged our hackers to build something with EVIL practically stamped on its forehead.

TrumpBot is a crazy Facebook Messenger bot that responds to anything you say the way Donald Trump would – with vim and vigor, and a healthy dose of verbal abuse. The cool twist that makes this chatbot different than others is that you can send it images, and it uses visual recognition to send you a relevant (and scathing) Trump reply! Because one Trump in this world isn’t evil enough …

Didn’t make it to this event? No worries! You can check out all the hacks from this weekend on Devpost. Also, follow @Clarifai on Facebook to see all the pics from this weekend’s hackathon and get notified the next time we host an event!

Last but not least, we especially want to thank General Assembly, who helped make this event possible, and MuleSoft, eBay, and Ziggeo for the prizes!



Compete for glory and prizes at the Clarifai A.I. hackathon

Join us this Saturday, August 13th, for a day of merriment, hacking, and sweet prizes at the Clarifai x General Assembly AI hackathon. The hackathon theme is using AI to build something that will advance the inevitable robot apocalypse … you know, the usual.

I am pumped to announce our first Artificial Intelligence Hackathon, hosted in collaboration with General Assembly, this Saturday 8/13! Our hackathon theme is to make something with AI that will advance the inevitable robot apocalypse. Because the only way to prevent robots from killing us all someday is to imagine all the ways they could kill us, obviously.


We’ll be giving away one mysterious and amazing Grand Prize, in addition to five more super awesome prizes for these special categories:

  • Most Likely to Make Your Smart Home Kill You
  • Most Likely to Distract Humans from Inevitable Oncoming Danger
  • Most Likely to Slow Human Progress
  • Most Secretly Evil
  • Most Not-So-Secretly Evil

Take your pick! Here are a few ideas to get you thinking – just remember to be unique and have FUN:

  • A smart home that watches your every movement and body temperature, and even uses your identity without you knowing about it
  • A self-driving car AI that discreetly lures its users into inevitable danger with bad navigation
  • A smart thermostat hack that sets your house to really, really uncomfortable temperatures

Anywho, come join us this Saturday and unleash your AI skills. Remember to get your ticket here by Friday afternoon – we already have 160 signups, so make sure you reserve a spot now!