Clarifai-powered Adobe Lightroom plugin automatically sorts your photos by image quality

With our recently released Landscape Quality and Portrait Quality models, we built an Adobe Lightroom plugin that lets photographers automatically separate high quality photos from low quality ones. This helps make the post-processing workflow more efficient by filtering out blurry, over-exposed, and otherwise low quality images!

Advances in digital photography have made it easier than ever to take more and more photos. On the one hand, this helps photographers capture the perfect shot. On the other, it means there are a whole lot more bad photos to sort through, too. We built an Adobe Lightroom plugin from our recently released Landscape Quality and Portrait Quality visual recognition models that will automatically sort your high quality photos from your low quality ones and save you tons of time and effort in selecting the best pics for post-processing!

Adobe Lightroom Clarifai Photo Quality Plugin Instructions

Make sure you have Adobe Lightroom Classic installed.

Download the Plugin from: https://drive.google.com/drive/folders/1-hPYNh6eJihAyAAhyix8bDaEZX_gWsS_?usp=sharing and unzip the file.

To install the Clarifai plugin, go to File and open the Plugin Manager.

In the Plugin Manager Window, click on the Add button as shown below.

Select the unzipped file, which should be a directory labeled "ClarifaiLandscapeQuality.lrdevplugin". This will install the plugin.

You should see the following screen.

You just need to enter a Clarifai API Key to begin. If you don’t have one, sign up for an account over here: https://clarifai.com/developer/account/signup.

After signing up click on your name to access your account dashboard.

We have already created your first application for you. Click on the name to get your API Key.

Copy your API Key and enter it in the Lightroom Clarifai plugin. (Note: the API Key in this image is fake. Please create your own for this to work.)

Enter the API Key in the Lightroom Plugin and click Done.

Select the photos you want to run the plugin on in Lightroom.

Click on “Library > Plug-in Extras > Check Landscape Photo Quality with Clarifai”

You will see a dialog window listing each selected photo and whether it is high or low quality. Click Save to tag each photo with the corresponding keyword; you can then sort your photos by quality.

You can sort photos by typing “high” or “low” in the text filter section of Lightroom.


Build a command line app that sorts your photos by quality

With our new Landscape Quality and Portrait Quality models, photographers can now quickly identify which images are of high quality and prioritize working on them while leaving the rest for later. In this post, we’ll walk through building a command line application that will sort all of your photos by quality using these models.

We’ve recently introduced two new public models that specialize in recognizing photo quality: Landscape Quality and Portrait Quality. With camera memory ever increasing, the quantity of photos being taken is skyrocketing, and these models give photographers a fast way to identify which images are worth prioritizing and which can wait.

Before we begin, let’s break down how our visual recognition technology makes inferences about photo quality. Our Data Strategists trained these models to judge photos against strict criteria:

Poor quality photo attributes:

  • Severe chromatic aberration
  • Red eyes
  • Extremely backlit
  • Unnatural vignetting, often digitally added
  • Check out the Model Gallery for more photo quality attributes!

Good quality photo attributes:

  • Good lighting
  • Sharp and in focus
  • If retouching is present, it is not obvious (no completely airbrushed skin)
  • Not too much grain/noise (unless it’s the artist’s intention)

What you need to begin

  1. A Clarifai API Key. Sign up for free if you don’t have one.
  2. A dataset of images to sort. The ones I’m using are available here.
  3. The Python CLI library called Click. Install it by running `pip install click` from your terminal.

Let’s get started!

Our app will consist of two files. The first will be isort.py where our logic lives and the second will be setup.py which will help us use the program we’re writing as a command line tool.

Let’s start by writing a simple CLI that just outputs “hello world”. Once we have that set up, we’ll add logic for it to accept parameters and call Clarifai.

isort.py

import click

@click.command()
def quality():
    click.echo("Hello World")

if __name__ == '__main__':
    quality()

setup.py

from setuptools import setup

setup(
    name="isort",
    version='0.1',
    py_modules=['isort'],
    install_requires=[
        'Click',
        'Clarifai',
    ],
    entry_points='''
        [console_scripts]
        isort=isort:quality
    ''',
)

The first file, isort.py, is fairly straightforward. We start by importing the Click library, then wrap our function in the `@click.command()` decorator. This turns `quality()` into a command that can be invoked from the command line and adds it to Click’s help interface.

In setup.py we specify the name of our app along with its version and requirements, which is useful when packaging your program for distribution on PyPI. The `entry_points` field tells setuptools to create an `isort` console script that calls our `quality()` function.

After saving the two files you can run the program by entering the following in your terminal:

$ virtualenv tutorial
$ source tutorial/bin/activate
$ pip install --editable .
$ isort

You should see Hello World as the response!

Adding Computer Vision with Clarifai

Next we’ll modify isort.py to add the Clarifai Python library. We’ll also add the ability to accept the image directory path as an argument.

import click
from clarifai.rest import ClarifaiApp
from clarifai.rest import Image as ClImage
import os
import json

app = ClarifaiApp(api_key='your_api_key')

@click.command()
@click.argument('path', type=click.Path(exists=True), required=True)
def quality(path):
    """Organizes photos based on quality."""
    click.echo("Hello World")
if __name__ == '__main__':
    quality()

We’ve created an instance of `ClarifaiApp` which will be used to make API requests. We also added Click’s argument handler which will ensure that a valid file path is passed in.
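Because we installed the package in editable mode, the updated module is picked up automatically. Running the command now requires a directory path (using a hypothetical `./photos` folder here) and, for the moment, still prints the placeholder output:

$ isort ./photos
Hello World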

Adding logic to sort images with Clarifai

Finally let’s add the code for interacting with Clarifai and sorting the images by quality.

import click
from clarifai.rest import ClarifaiApp
from clarifai.rest import Image as ClImage
import os
import json

app = ClarifaiApp(api_key='your_api_key')

@click.command()
@click.argument('path', type=click.Path(exists=True), required=True)
def quality(path):
    """Organizes photos based on quality."""
    # Get the public Landscape Quality model
    model = app.models.get('Landscape Quality')

    # Create the destination folders if they don't exist yet
    if not os.path.exists(os.path.join(path, 'good')):
        os.mkdir(os.path.join(path, 'good'))
    if not os.path.exists(os.path.join(path, 'bad')):
        os.mkdir(os.path.join(path, 'bad'))

    for filename in os.listdir(path):
        if filename.lower().endswith(('.png', '.jpg', '.jpeg')):
            # Send the image to the Landscape Quality model
            with open(os.path.join(path, filename), 'rb') as f:
                jsonResp = model.predict([ClImage(file_obj=f)])
            # The top concept is either 'high quality' or 'low quality'
            topconcept = jsonResp['outputs'][0]['data']['concepts'][0]['name']
            if topconcept == 'high quality':
                os.rename(os.path.join(path, filename), os.path.join(path, 'good', filename))
                print(filename + ": Good Photo")
            elif topconcept == 'low quality':
                os.rename(os.path.join(path, filename), os.path.join(path, 'bad', filename))
                print(filename + ": Bad Photo")

if __name__ == '__main__':
    quality()

We traverse all of the image files in the given path and send them through the Landscape Quality model. Then we grab the top concept from the response and check whether it is 'high quality'. Finally, we move the file into the respective folder and print a message to the terminal.
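For reference, here is roughly the shape of the response the code indexes into for each image. This is a trimmed sketch with made-up values, not real output; we only rely on the name of the top concept:

# Trimmed, illustrative sketch of a prediction response (values are made up)
jsonResp = {
    'outputs': [{
        'data': {
            'concepts': [
                {'name': 'high quality', 'value': 0.97},  # top concept comes first
                {'name': 'low quality', 'value': 0.03},
            ]
        }
    }]
}
# jsonResp['outputs'][0]['data']['concepts'][0]['name'] -> 'high quality'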

We’ve built a command line application using Click that will sort images by photo quality in a flash. For next steps, you can swap the model we’re using for the Portrait Quality model, any pre-built model from our model gallery, or any model you’ve custom trained. Email me or hackers@clarifai.com about what you’ve built. We’ll even pay you $200 to write about what you’ve built using Clarifai!


Build your own visual similarity app with Clarifai's Face Embedding model

We’re excited to release our new Face Embedding model which you can use for organizing, filtering, and ranking images of faces according to visual similarity. Follow along with this guide to learn more about how embeddings work and build your own “Which Parent Do I Look Like?” app!

We recently released a new public model called Face Embedding, which was developed from our Face Detection model. If you’re not familiar with the term ‘embedding’, you can think of it as the numerical representation of a model’s input in 1024-dimensional space. Don’t worry if it’s your first time hearing this terminology — we’ll dive a little deeper in this tutorial. For now, just know we’ll be using embeddings to create a simple program that will let you compare how visually similar two people’s faces are! More specifically, this program will use the embedding model to see which one of my parents I look more visually similar to.

Dataset:

Here are the photos I will be using for this tutorial. Feel free to replace my photos with those of your own! You can also make your guess now on which parent I look more similar to. The bottom of this post will have Clarifai’s result for you to compare against.

Mom (photo: http://imageshack.com/a/img922/2448/tvuLfa.jpg)

Me (photo: http://imageshack.com/a/img922/6780/2ceUHj.jpg)

Dad (photo: http://imageshack.com/a/img923/1862/G1VINZ.png)

Our Dependencies:

  1. You need a Clarifai account and can sign up for free here: https://developer.clarifai.com/signup/.
  2. We’ll be using our Python Client. The installation instructions are here: https://developer.clarifai.com/quick-start/. Make sure to click on ‘Py’ to see the python instructions.
  3. Install Numpy with the instructions for your environment here: https://www.scipy.org/install.html

Deeper Dive into Embeddings

You can think of an embedding as a low-dimensional representation of a model’s input that has rich semantic information. That means the Face Embedding model will receive an image as an input, detect all faces in the image, and output a vector of length 1024 for each detected face. The response returned from the API will contain all the vectors generated from the image. These vectors can be thought of as coordinates in a 1024-dimensional space. Images that are visually similar will have embeddings that are “closer” to each other in that space.

Getting Visual Similarity

We can find how visually similar two faces are by calculating the distance between their corresponding embeddings. In this example, we will be using the Numpy library.
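To make “distance” concrete, here is a toy sketch that uses 3-dimensional vectors in place of the model’s 1024-dimensional embeddings. The numbers are made up purely for illustration:

from numpy import array, linalg

# Made-up 3-dimensional "embeddings" purely for illustration
face_a = array([0.12, 0.80, 0.33])
face_b = array([0.10, 0.79, 0.35])   # nearly identical to face_a
face_c = array([0.90, 0.05, 0.60])   # very different from face_a

print(linalg.norm(face_a - face_b))  # small distance -> visually similar
print(linalg.norm(face_a - face_c))  # larger distance -> visually different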

Let’s Write Some Code!

Start by setting an environment variable for your API Key so our Python module can use it to authenticate.

export CLARIFAI_API_KEY=your_API_key

Paste the following into a file named `faceEmbed.py`.

import json
from clarifai.rest import ClarifaiApp
from math import sqrt
from numpy import linalg
from numpy import array

# Initialize Clarifai and get the Face Embedding model
app = ClarifaiApp()
model = app.models.get("d02b4508df58432fbb84e800597b8959")

# Dataset
kunalPhoto = "http://imageshack.com/a/img922/6780/2ceUHj.jpg"
momPhoto = "http://imageshack.com/a/img922/2448/tvuLfa.jpg"
dadPhoto = "http://imageshack.com/a/img923/1862/G1VINZ.png"

# Function to get embedding from image
def getEmbedding(image_url):
    # Call the Face Embedding Model
    jsonTags = model.predict_by_url(url=image_url)

    # Storage for all the vectors in a given photo
    faceEmbed = []

    # Iterate through every person and store each face embedding in an array
    for faces in jsonTags['outputs'][0]['data']['regions']:
        for face in faces['data']['embeddings']:
            embeddingVector = face['vector']
            faceEmbed.append(embeddingVector)
    return faceEmbed[0]

# Get embeddings and put them in an array format that Numpy can use
kunalEmbedding = array(getEmbedding(kunalPhoto))
momEmbedding = array(getEmbedding(momPhoto))
dadEmbedding = array(getEmbedding(dadPhoto))

# Get distances using Numpy
momDistance = linalg.norm(kunalEmbedding - momEmbedding)
print("Mom Distance: " + str(momDistance))

dadDistance = linalg.norm(kunalEmbedding - dadEmbedding)
print("Dad Distance: " + str(dadDistance))

# Print results
print("")
print("**************** Results are In: ******************")
if momDistance < dadDistance:
    print("Kunal looks more similar to his Mom")
elif momDistance > dadDistance:
    print("Kunal looks more similar to his Dad")
else:
    print("Kunal looks equally similar to both his mom and dad")
print("")

Run the file with:

python faceEmbed.py

How does this work?

The simple program above calls our Face Embedding model and gets the vectors for the three photos in our dataset. It then calculates the distance between my vector and each of my parents’ vectors, and prints out who I look more visually similar to. The smaller the distance, the more visually similar our faces are, according to the Face Embedding model.

Moment of Truth


The results are in, and my mom is the winner!


To recap, we walked through Embeddings and the use case for finding Visual Similarity. Try out the code above with your own images and let us know the results!

You can also try out some other interesting use cases, such as:

  • Creating your own search based on facial recognition
  • Authenticating using the face embeddings
  • Deduplicating similar photos in a large dataset

…and much more!

If you have any questions from this post or Clarifai in general, please let me know in the comments section below or chirp @Clarifai anytime!


Introducing Model Evaluation - test and improve the accuracy of Custom Training models

Last December, we announced a new feature that lets Clarifai users build their own custom trained models to predict against, aptly called Custom Training. Now, we’re excited to build on that functionality with a new tool called Model Evaluation. You now have the ability to test your model’s performance before using it in a production environment.

Since we launched Custom Training, we’ve seen a ton of interesting use cases and custom visual recognition models being built on our platform, from flower recognition models, to shoe recognition models, and even muppet recognition models! But the number one question we’ve gotten from users is, “How do I make my custom model more accurate?” Well, we’ve been hard at work coming up with a solution, and we’re proud to introduce the Model Evaluation tool for Custom Training. Here’s how it works!

How it works

Model Evaluation does a 5-split cross validation on the data used to train your custom model. You might be wondering what that means, so here is a graphic to illustrate it.

[Diagram: 5-split cross validation]

We take all the training data you’ve given us for your custom trained model and split it into 5 equal parts. We set aside 1 part as a test set and use the remaining 80% of the data to train a new model.

Once that model is created, we use it to make predictions on the test set and compare those predictions against the actual labels given for those inputs. We then repeat this process so that each of the 5 parts serves as the test set once.
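For intuition, here is a minimal sketch of the idea in plain Python. This is not Clarifai’s internal implementation; `train_fn` and `predict_fn` are hypothetical stand-ins for the training and prediction steps:

def five_split_cross_validation(inputs, labels, train_fn, predict_fn):
    """Rough sketch of 5-split cross validation (illustration only)."""
    n = len(inputs)
    fold = n // 5
    accuracies = []
    for i in range(5):
        start = i * fold
        end = (i + 1) * fold if i < 4 else n
        # Hold out ~20% of the data as the test set for this split
        test_x, test_y = inputs[start:end], labels[start:end]
        # Train a new model on the remaining ~80%
        train_x = inputs[:start] + inputs[end:]
        train_y = labels[:start] + labels[end:]
        model = train_fn(train_x, train_y)
        # Predict on the held-out split and compare to the actual labels
        predictions = predict_fn(model, test_x)
        correct = sum(p == y for p, y in zip(predictions, test_y))
        accuracies.append(correct / float(len(test_y)))
    return accuracies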

Let’s try it out

What better way to introduce this feature than creating a simple model and using Model Evaluation to make it better! Let’s begin by creating a model using our Preview UI, a visual way to build and train models. If you’re not familiar with it, you can get more info here.

The first couple of steps below will just quickly walk us through creating a model. If you already have one, feel free to skip to Step Four.

Step One: Sign up with Clarifai and create an application

If you haven’t already done so, you can quickly sign up here: https://developer.clarifai.com/signup/

After the signup process, go to your dashboard and create an application.


I’ll be calling my application “modelEvaluation”. This application will house our model.

Step Two: Go to Our Preview UI.

Next, click on the eye icon to open up our custom model builder, the Preview UI. (Note: this step is only needed if you don’t already have your own custom-built model.)


Step Three: Custom Build and train a model.

For this blog post, we’re going to quickly build a custom model to run our evaluation tool against. The model I’ll be building will look at a photo of a garage door and tell us whether it’s open or closed.

1. Add Inputs (Images)

In the interface above, I’ll drag in some photos of open and closed garage doors.


2. Create Model & Concepts

After that, make sure you create a model and concepts with the menu on the left side. In this case, I created the model “garage_eval” and concepts “open” and “closed”.


3. Label each image and train model


Next, let’s go through each photo and label it with one of the concepts. If you don’t see the concepts you just created, make sure to click on the gear icon and select show all concepts. Once completed, click Train. You now have a simple custom trained model that can tell you whether it sees an open or closed garage door.

Step Four: Run Model Evaluation

If you click on the model name on the menu on the left hand side of the screen, it will take you to a new page that will showcase the model details. On this page, make sure to click on the versions tab.

1. Click on Versions Tab


2. Click on the Evaluate button


In this screen, we showcase all the versions of the model you have trained. You can evaluate any version by clicking on the Evaluate button.

Step Five: Interpreting Results

Once the evaluation completes, the “Evaluate” button will turn into a “View” button. Click to view the evaluation results, which should look similar to this:

[Screenshot: Model Evaluation results]

The results are shown in 3 main parts: the Evaluation Summary Table, the Concept by Concept Matrix, and the Selection Details. The Evaluation Summary Table shows how the model performed when predicting against the test set in one split, which is why the total number of labeled inputs in this table is around 20% of the size of the original training set you used. Feel free to adjust the threshold bar to find the right balance between recall and precision for your use case.
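If it helps to see the threshold trade-off in code, here is a small sketch with made-up scores and labels (not real output from the tool). Lowering the threshold generally raises recall at the cost of precision, and raising it does the opposite:

# Hypothetical prediction scores for the "open" concept and the true labels
scores = [0.95, 0.80, 0.60, 0.40, 0.20]
actual = [True, True, False, True, False]   # True = the door really is open

def precision_recall(threshold):
    predicted = [s >= threshold for s in scores]
    tp = sum(p and a for p, a in zip(predicted, actual))        # true positives
    fp = sum(p and not a for p, a in zip(predicted, actual))    # false positives
    fn = sum((not p) and a for p, a in zip(predicted, actual))  # false negatives
    precision = tp / float(tp + fp) if (tp + fp) else 0.0
    recall = tp / float(tp + fn) if (tp + fn) else 0.0
    return precision, recall

print(precision_recall(0.5))   # roughly (0.67, 0.67)
print(precision_recall(0.9))   # roughly (1.0, 0.33): higher precision, lower recall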

In this example, you can quickly see from the results above that our model will need more data to give stronger predictions for open garage doors in certain instances. The model predicted that a picture with a slightly open garage door has a low probability score for the “open” concept. You can see this particular input in the Selection Details section. To improve the model, we would want to add more images of a partially open door labeled as ‘open’ so the model can start to recognize those images as the “open” concept.

For a detailed breakdown on how to interpret the results and best practices around building your custom model, check out our docs.

We want to hear from you!

If you have any other questions or thoughts on this blog post, the Model Evaluation tool, or Clarifai in general, feel free to reach out to feedback@clarifai.com! We look forward to hearing from you.