We're excited to announce that our v2 API (including the new Custom Training and Visual Search features) is graduating from developer preview to general availability. If you've been using our v1 API, here's a guide to help you transition smoothly to v2 so you can take advantage of our new usage-based pricing and the full range of new features.

Thanks to all you early testers and adopters, our v2 API is graduating from developer preview to general availability! What does that mean for everyone who's already been using v1 of our API? In short, you can do everything in v2 that you could in v1, and more:

  • Teach our API to recognize any new concept with Custom Training
  • Search images by visual similarity, tag, or a combination of both with Visual Search
  • Add any custom information (like price, SKU, etc.) to images with custom metadata
  • Explore and manage your models and media assets with our clean and intuitive user interface
  • Get the full picture of your account activity with better usage reporting

Read on for a walkthrough of how to transition your projects from v1 to v2 so you can take advantage of our new usage-based pricing and access the full range of features in our v2 API!
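To give you a quick taste before the walkthrough, here's a rough sketch of a few of the v2-only features using the v2 Python client (set up later in this guide). The model name, concept names, and metadata fields are purely illustrative, and the exact helper methods may vary slightly between client versions, so treat this as a sketch rather than copy-paste code:

# Custom Training and custom metadata: add a labeled image with arbitrary fields attached
app.inputs.create_image_from_url('https://samples.clarifai.com/metro-north.jpg',
                                 concepts=['train'],
                                 metadata={'price': 19.99, 'sku': 'ABC-123'})

# Custom Training: create a model on that concept and train it
model = app.models.create('my-custom-model', concepts=['train'])
model.train()

# Visual Search: search your inputs by predicted concept or by visual similarity
results_by_tag = app.inputs.search_by_predicted_concepts(concept='train')
results_by_image = app.inputs.search_by_image(url='https://samples.clarifai.com/metro-north.jpg')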

"Tag" vs. "Predict" Endpoints

Let's first cover some important details. As you may recall, in v1 the 'Tag' endpoint was used to tag the contents of your images or videos. In v2, we renamed this endpoint to 'Predict' to more accurately reflect what our technology is doing. Just as you told 'Tag' which model to use in v1, you pass a model to 'Predict' in v2 in exactly the same way.

Models

We have a variety of visual recognition models you can choose from. The default model is our General model, which understands more than 11,000 concepts in images, like objects, themes, emotions, and more. Other models are available when you want to focus on more specific content: Not Safe For Work (NSFW), Weddings, Travel, Food, and Color. Furthermore, you can train your own models in v2 using Custom Training, which you can learn more about in this post.

In this guide, we will use the General model in our examples. If you need a refresher, you can find v1 documentation here. For v2, all the code below can be found through our client installation instructions and quickstart examples.
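If you want to target one of the more specific models instead of General, a minimal sketch with the v2 Python client (set up later in this guide) might look like the following. The model name is an assumption, so check the model gallery for the exact names available to your account:

# Fetch a specific public model by name and run a prediction with it
food_model = app.models.get('food-items-v1.0')  # assumed public model name
results = food_model.predict_by_url('https://samples.clarifai.com/metro-north.jpg')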

JavaScript

Let's start with the JavaScript client.

Both the v1 and v2 clients are available on npm. Just run:

$ npm install clarifai

In the v2 JavaScript client, you can replace the model ID with any of the other model aliases defined in the client, such as:

Clarifai.GENERAL_MODEL, Clarifai.NSFW_MODEL, Clarifai.WEDDING_MODEL, Clarifai.TRAVEL_MODEL, Clarifai.FOOD_MODEL, and Clarifai.COLOR_MODEL.

v1 API

// Initialization with CLIENT_ID and CLIENT_SECRET
Clarifai.initialize({
  'clientId': '{clientId}',
  'clientSecret': '{clientSecret}'
});

// Get tags for an image via url
Clarifai.getTagsByUrl('https://samples.clarifai.com/metro-north.jpg', {model: 'general-v1.3'}).then(
  handleResponse,
  handleError
);

// Get tags for an image via image bytes
Clarifai.getTagsByImageBytes('G7p3m95uAl...', {model: 'general-v1.3'}).then(
  handleResponse,
  handleError
);

v2 API

// Instantiate a new Clarifai app passing in your clientId and clientSecret
var app = new Clarifai.App(
  '{clientId}',
  '{clientSecret}'
);

// Predict the contents of an image via url 
app.models.predict(Clarifai.GENERAL_MODEL, 'https://samples.clarifai.com/metro-north.jpg').then(
  handleResponse,
  handleError
);

// Predict the contents of an image via image bytes
app.models.predict(Clarifai.GENERAL_MODEL, {base64: 'G7p3m95uAl...'}).then(
  handleResponse,
  handleError
);

Python

For the Python client, v2 installation is exactly the same as v1! Both are installed by running the following command:

$ pip install clarifai

Now, let’s get to the code.

v1 API

# Import the client
from clarifai.client import ClarifaiApi

# Instantiation
api = ClarifaiApi("{clientId}", "{clientSecret}")

# Get tags for an image via url
results = api.tag_image_urls('https://samples.clarifai.com/metro-north.jpg', model='general-v1.3')

# Get tags for an image via image bytes
results = api.tag_images(open('/home/user/image.jpeg', 'rb'), model='general-v1.3')

v2 API

# Import the client
from clarifai.rest import ClarifaiApp

# Instantiation
app = ClarifaiApp("{clientId}", "{clientSecret}")

# Predict the contents of an image via url
results = app.models.get('general-v1.3').predict_by_url('https://samples.clarifai.com/metro-north.jpg')

# Predict the contents of an image via image bytes
results = app.models.get('general-v1.3').predict_by_filename('/home/user/image.jpeg')
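If you already have the raw image bytes in memory (as in the v1 tag_images example above), the v2 Python client also exposes byte-oriented helpers. Here's a minimal sketch, assuming your version of the 2.x client includes predict_by_bytes:

# Predict the contents of an image from raw bytes already loaded in memory
model = app.models.get('general-v1.3')
with open('/home/user/image.jpeg', 'rb') as f:
    results = model.predict_by_bytes(f.read())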

Java

One note for Android developers: with the v2 Java client, you can call executeAsync() at the end of your 'Predict' request to run it asynchronously, saving you from writing AsyncTask boilerplate. Here's how to transition from v1 to v2 in Java.

v1 API

// Installation
// Maven
<dependency>
  <groupId>com.clarifai</groupId>
  <artifactId>clarifai-api-java</artifactId>
  <version>1.2.0</version>
</dependency>
// Or Gradle
compile "com.clarifai:clarifai-api-java:1.2.0"

// Instantiation
ClarifaiClient clarifai = new ClarifaiClient("{clientId}", "{clientSecret}");

// Get tags with the model via url
List<RecognitionResult> results = clarifai.recognize(new RecognitionRequest("https://samples.clarifai.com/metro-north.jpg")
        .setModel("general-v1.3"));

// Get tags with the model via image bytes
List<RecognitionResult> results = clarifai.recognize(new RecognitionRequest(new File("/home/user/image.jpeg"))
        .setModel("general-v1.3"));

v2 API

// Installation
// Maven
<dependency>
  <groupId>com.clarifai.clarifai-api2</groupId>
  <artifactId>core</artifactId>
  <version>2.2.0</version>
</dependency>
// Or Gradle
compile "com.clarifai.clarifai-api2:core:2.2.0"

// Instantiation
ClarifaiClient client = new ClarifaiBuilder("{clientId}", "{clientSecret}").buildSync();

// Predict the contents of an image via url
List<ClarifaiOutput<Concept>> results = client.predict("general-v1.3")
        .withInputs(ClarifaiInput.forImage(ClarifaiImage.of("https://samples.clarifai.com/metro-north.jpg")))
        .executeSync().getOrNull();

// Predict the contents of an image via image bytes
List<ClarifaiOutput<Concept>> results = client.predict("general-v1.3")
        .withInputs(ClarifaiInput.forImage(ClarifaiImage.of(new File("/home/user/image.jpeg"))))
        .executeSync().getOrNull();

Objective-C

For installation in Objective-C, add:

pod 'Clarifai'

to your Podfile first. Then install the dependencies and generate the workspace by running:

pod install

Now, it’s time to open YOUR_PROJECT_NAME.xcworkspace to start coding!

v1 API

// Import the client
#import "ClarifaiClient.h"

// Instantiation
ClarifaiClient *client = [[ClarifaiClient alloc] initWithAppID:@"{clientId}"
                                                     appSecret:@"{clientSecret}"];

// Get tags for an image via url
NSString *imageAsUrl = @"https://samples.clarifai.com/metro-north.jpg";
[client recognizeURLs:@[imageAsUrl] completion:^(NSArray *results, NSError *error) {
  NSLog(@"results: %@", results);
}];

// Get tags for an image via image bytes
UIImage *image = [UIImage imageNamed:@"dress.jpg"];
NSData *imageAsJpeg = UIImageJPEGRepresentation(image, 0.9);
[client recognizeJpegs:@[imageAsJpeg] completion:^(NSArray *results, NSError *error) {
  NSLog(@"results: %@", results);
}];

v2 API

// Import the client
#import "ClarifaiApp.h"

// Instantiation
ClarifaiApp *app = [[ClarifaiApp alloc] initWithAppID:@"{clientId}"
                                            appSecret:@"{clientSecret}"];

// Predict the contents of an image via url
ClarifaiImage *image = [[ClarifaiImage alloc] initWithURL:@"https://samples.clarifai.com/metro-north.jpg"];
[app getModelByName:@"general-v1.3" completion:^(ClarifaiModel *model, NSError *error) {
  [model predictOnImages:@[image] completion:^(NSArray<ClarifaiOutput *> *outputs, NSError *error) {
    NSLog(@"outputs: %@", outputs);
  }];
}];

// Predict the contents of an image via image bytes
UIImage *image = [UIImage imageNamed:@"dress.jpg"];
ClarifaiImage *clarifaiImage = [[ClarifaiImage alloc] initWithImage:image];
[app getModelByName:@"general-v1.3" completion:^(ClarifaiModel *model, NSError *error) {
  [model predictOnImages:@[clarifaiImage] completion:^(NSArray<ClarifaiOutput *> *outputs, NSError *error) {
    NSLog(@"outputs: %@", outputs);
  }];
}];

v1 vs. v2 Response

Now, you might be wondering whether calling 'Predict' returns the same response you got from 'Tag'. It doesn't. Here is a visual illustration of how the v2 response is different:

[Diagram: how the Clarifai v2 API predicts]

Make sure that you are accessing the outputs correctly by comparing the JSON formats below (a short Python example of reading the v2 response follows the samples).

v1 API

{
  "status_code": "OK",
  "status_msg": "All images in request have completed successfully. ",
  "meta": {
    "tag": {
      "timestamp": 1451945197.398036,
      "model": "general-v1.3",
      "config": "34fb1111b4d5f67cf1b8665ebc603704"
    }
  },
  "results": [
    {
      "docid": 15512461224882631443,
      "url": "https://samples.clarifai.com/metro-north.jpg",
      "status_code": "OK",
      "status_msg": "OK",
      "local_id": "",
      "result": {
        "tag": {
          "concept_ids": [
            "ai_HLmqFqBf",
            ...
          ],
          "classes": [
            "train",
            ...
          ],
          "probs": [
            0.9989112019538879,
            ...
          ]
        }
      },
      "docid_str": "31fdb2316ff87fb5d747554ba5267313"
    }
  ]
}

v2 API

{
  "status": {
    "code": 10000,
    "description": "Ok"
  },
  "outputs": [
    {
      "id": "ea68cac87c304b28a8046557062f34a0",
      "status": {
        "code": 10000,
        "description": "Ok"
      },
      "created_at": "2016-11-22T16:50:25Z",
      "model": {
        "name": "general-v1.3",
        "id": "aaa03c23b3724a16a56b629203edc62c",
        "created_at": "2016-03-09T17:11:39Z",
        "app_id": null,
        "output_info": {
          "message": "Show output_info with: GET /models/{model_id}/output_info",
          "type": "concept"
        },
        "model_version": {
          "id": "aa9ca48295b37401f8af92ad1af0d91d",
          "created_at": "2016-07-13T01:19:12Z",
          "status": {
            "code": 21100,
            "description": "Model trained successfully"
          }
        }
      },
      "input": {
        "id": "ea68cac87c304b28a8046557062f34a0",
        "data": {
          "image": {
            "url": "https://samples.clarifai.com/metro-north.jpg"
          }
        }
      },
      "data": {
        "concepts": [
          {
            "id": "ai_HLmqFqBf",
            "name": "train",
            "app_id": null,
            "value": 0.9989112
          },
          ...        
          {
            "id": "ai_VSVscs9k",
            "name": "terminal",
            "app_id": null,
            "value": 0.9230834
          }
        ]
      }
    }
  ]
}
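For example, with the v2 Python client (which returns the JSON above as a plain dictionary), pulling the predicted concepts and their scores out of the response might look like this:

# Each output holds its predicted concepts under data -> concepts
for concept in results['outputs'][0]['data']['concepts']:
    print(concept['name'], concept['value'])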

That's it! Hopefully, this guide helped you transition your application to the Clarifai v2 API successfully. Please let us know if you run into any trouble by shooting us an email at support@clarifai.com.