OpenCV in Android – An Introduction (Part 1/2)

Hello world! I am very excited to write this particular blog on the setup of OpenCV in Android Studio. There are many solutions online that involve setting up OpenCV using Eclipse, the Android NDK, etc., but I didn't find a single reliable source for doing the same setup using Android Studio. So we (Avinash V. and I) finally came up with a feasible solution with which you can set up a native development environment in Android for designing computer vision applications using OpenCV and C++!

A quick intro about me: I am a computer vision enthusiast with nearly 4 years of theoretical and practical experience in the field, and I am quite comfortable implementing CV algorithms in MATLAB and Python. Over the years, though, the field has been moving rapidly from mere academic interest to industrial interest, yet most of the standard algorithms in this field are not optimized to run in real time (60 FPS), nor designed specifically for mobile platforms. This caught my interest, and I have been working on it since the summer of 2016. In my free time from being a research assistant, I think about techniques and hacks for optimizing existing algorithms for mobile platforms, and about how to acquire (and play with) 3D data from a 2D camera.

Before starting this project, I am assuming that you already have a basic setup of Android Studio up and running on your machine and that you have decent experience working with it.

  • If you don’t already have Android Studio, you can download and install it from the following link.
  • Once you have Android Studio up and running, you can download OpenCV for Android from the following link. After downloading, extract the contents of the zip file and move them to a specific location, say ‘/Users/user-name/OpenCV-android-sdk’. I am currently using Android Studio v2.2.3 and OpenCV v3.2.
  • Now start the Android Studio and click on ‘Start a new Android Studio project’. This will open a new window. Specify your ‘Application Name’, ‘Company Domain’ and ‘Project Location’. Make sure you select the checkbox ‘Include C++ Support‘. Now click Next!
  • In the ‘Targeted Android Devices’ window, select ‘Phone and Tablet’ with Minimum SDK: ‘API 21: Android 5.0 (Lollipop)’. Click Next.
  • In the Activity selection window select ‘Empty Activity’ and click Next.
  • In the Activity customization window leave everything as it is without any edits and click Next.
  • In the Customize C++ Support, select C++ Standard: Toolchain Default and leave all the other checkboxes unchecked (for now, but you are free to experiment) and click Finish!
  • Android Studio will take some time to load the project with the necessary settings. Since you are developing an app that depends on the camera of your mobile, you can’t test it on an emulator. You need to connect your Android phone (with developer options enabled) to the computer and select the device when you press the Run/Debug option. After running the application, you should see the following on your mobile if everything works fine!
  • At this point of the project you have your basic native-development (C++ support) enabled in your app. Now let us start integrating OpenCV into your application.
  • Click on File -> New -> Import Module. In the pop-up window, give the path to your ‘OpenCV-android-sdk/sdk/java’ directory and click on OK. The module name appears as ‘openCVLibrary320’; click Next, then Finish to complete the import.
  • Now, go to “openCVLibrary320/build.gradle” and change the following variables to match those in “app/build.gradle”: compileSdkVersion, buildToolsVersion, minSdkVersion, and targetSdkVersion. Sync the project after editing the gradle files. My “openCVLibrary320/build.gradle” file looks like this!
apply plugin: 'com.android.library'

android {
    compileSdkVersion 25
    buildToolsVersion "25.0.2"

    defaultConfig {
        minSdkVersion 21
        targetSdkVersion 25
    }

    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.txt'
        }
    }
}
  • Add a new folder named ‘jniLibs’ to “app/src/main/” by right click -> New -> Directory. Copy the directories in ‘OpenCV-android-sdk/sdk/native/libs/’ into the jniLibs folder in your app. After the copy, remove all *.a files from the imported directories. At the end, you should have 7 directories (one per ABI) with files in them.
  • Now go to ‘app/CMakeLists.txt’ and link OpenCV by making the following changes (the lines marked with [EDIT] are the quick changes):
# Sets the minimum version of CMake required to build the native
# library. You should either keep the default value or only pass a
# value of 3.4.0 or lower.

cmake_minimum_required(VERSION 3.4.1)

# [EDIT] Set the path to OpenCV and include its header directories.
# pathToOpenCV below is just an example of how to write it on a Mac.
# General format: /Users/user-name/OpenCV-android-sdk/sdk/native
set(pathToOpenCV /Users/sriraghu95/OpenCV-android-sdk/sdk/native)
include_directories(${pathToOpenCV}/jni/include)

# Creates and names a library, sets it as either STATIC
# or SHARED, and provides the relative paths to its source code.
# You can define multiple libraries, and CMake builds it for you.
# Gradle automatically packages shared libraries with your APK.

add_library( # Sets the name of the library.
             native-lib

             # Sets the library as a shared library.
             SHARED

             # Provides a relative path to your source file(s).
             # Associated headers in the same location as their source
             # file are automatically included.
             src/main/cpp/native-lib.cpp )

# [EDIT] Similar to above lines, add the OpenCV library
add_library( lib_opencv SHARED IMPORTED )
set_target_properties(lib_opencv PROPERTIES IMPORTED_LOCATION /Users/sriraghu95/Documents/Projects/ComputerVision/OpenCVAndroid-AnIntroduction/app/src/main/jniLibs/${ANDROID_ABI}/libopencv_java3.so)

# Searches for a specified prebuilt library and stores the path as a
# variable. Because system libraries are included in the search path by
# default, you only need to specify the name of the public NDK library
# you want to add. CMake verifies that the library exists before
# completing its build.

find_library( # Sets the name of the path variable.
              log-lib

              # Specifies the name of the NDK library that
              # you want CMake to locate.
              log )

# Specifies libraries CMake should link to your target library. You
# can link multiple libraries, such as libraries you define in the
# build script, prebuilt third-party libraries, or system libraries.

target_link_libraries( # Specifies the target library.
                       native-lib

                       # Links the target library to the log library
                       # included in the NDK.
                       ${log-lib} lib_opencv ) # [EDIT]

  • Edit ‘app/build.gradle’: set the cppFlags, point the jniLibs source directory at your folder, and make a few other minor changes. You can refer to the code below and replicate the same in your project. All changes to the pre-existing code are followed by the comment “//EDIT”.
apply plugin: 'com.android.application'

android {
    compileSdkVersion 25
    buildToolsVersion "25.0.2"
    defaultConfig {
        applicationId "com.example.sriraghu95.opencvandroid_anintroduction"
        minSdkVersion 21
        targetSdkVersion 25
        versionCode 1
        versionName "1.0"
        testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
        externalNativeBuild {
            cmake {
                cppFlags "-std=c++11 -frtti -fexceptions" //EDIT
                abiFilters 'x86', 'x86_64', 'armeabi', 'armeabi-v7a', 'arm64-v8a', 'mips', 'mips64' //EDIT
            }
        }
    }
    sourceSets {
        main {
            jniLibs.srcDirs = ['/Users/sriraghu95/Documents/Projects/ComputerVision/OpenCVAndroid-AnIntroduction/app/src/main/jniLibs'] //EDIT: Use your custom location to jniLibs. Path given is only for example purposes.
        }
    }
    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
    }
    externalNativeBuild {
        cmake {
            path "CMakeLists.txt"
        }
    }
}

dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar'])
    androidTestCompile('com.android.support.test.espresso:espresso-core:2.2.2', {
        exclude group: 'com.android.support', module: 'support-annotations'
    })
    compile 'com.android.support:appcompat-v7:25.1.0'
    testCompile 'junit:junit:4.12'
    compile project(':openCVLibrary320') //EDIT
}
  • Once you are done with all the above steps, sync Gradle and go to src/main/cpp/native-lib.cpp. To make sure the project setup is done properly, start including OpenCV headers in native-lib.cpp; they should not raise any errors.

extern "C"
JNIEXPORT jstring JNICALL
Java_com_example_sriraghu95_opencvandroid_1anintroduction_MainActivity_stringFromJNI(
        JNIEnv *env,
        jobject /* this */) {
    std::string hello = "Hello from C++";
    return env->NewStringUTF(hello.c_str());
}
  • Now make sure all your gradle files are in sync and rebuild the project once to check that there are no errors in your setup.
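As an aside, the long JNI function name in native-lib.cpp is not arbitrary: per the JNI specification it is derived mechanically from the Java package, class, and method name, where a dot becomes an underscore and a literal underscore in a Java name is escaped as "_1". A small sketch of that rule (jniSymbol is a hypothetical helper, not part of the NDK; the package name below is the one from this project's applicationId):

```cpp
#include <string>

// Builds the native symbol JNI looks up for a Java native method:
// "Java_" + mangled fully-qualified class name + "_" + method name.
// Mangling: '.' -> '_', and a literal '_' is escaped as "_1".
std::string jniSymbol(const std::string& fqClass, const std::string& method) {
    std::string out = "Java_";
    for (char c : fqClass) {
        if (c == '.') out += '_';
        else if (c == '_') out += "_1";
        else out += c;
    }
    out += '_';
    out += method;
    return out;
}
```

Running it on this project's MainActivity reproduces the symbol used above, which is handy when the linker complains about an unresolved native method.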

By the end of this blog, we have finished setting up OpenCV in your Android project. This is a prerequisite for any type of Android application you want to build using OpenCV. There are two main ways of using OpenCV in your application: (a) processing images from your own photo library on the mobile, and (b) real-time processing of the live feed from the camera. With that distinction in mind, I think this is the best place to stop this part of the blog.

In my next post, I will be focusing on how to use camera in your application and do some simple processing on the data that you acquire from it.

Next: OpenCV in Android – An Introduction (Part 2/2)

Source Code: Link

[New] Android Application: Link

Wanna say thanks?

Like this blog? Found this blog useful and you feel that you learnt something at the end? Feel free to buy me a coffee 🙂 A lot of these blogs wouldn’t have been completed without the caffeine in my veins 😎


OpenCV in iOS – Face Detection

Hi, after quite a break I am back on my blog to post some new things related to optimized computer vision algorithms on mobile platforms. I have been experimenting with Android recently to come up with the easiest setup for OpenCV development (I will be posting about it in my next blog). In this post, I will explain how to do face detection in almost real time using OpenCV’s Haar cascades. This is not an advanced tutorial on detection/object recognition, but it will help you start working on your custom classification problems. Let us dive in!

A quick note before diving in: this blog expects that you have already read my previous blogs on OpenCV in iOS (An Introduction, The Camera) so that you have the starter code up and running.

In this blog post, we are going to detect the faces and eyes from live video stream of your iOS device’s camera. Now start following the steps mentioned below!

  1. Import necessary frameworks into the project: opencv2, AVFoundation, Accelerate, CoreGraphics, CoreImage, QuartzCore, AssetsLibrary, CoreMedia, and UIKit frameworks.
  2. Rename ViewController.m to ViewController.mm to start coding in Objective-C++.
  3. Add the necessary haarcascade files from the ‘<opencv-folder>/data/haarcascades/’ directory into the Supporting Files group of your project. You can do this by right-clicking on Supporting Files and selecting ‘Add files to <your-project name>’.
  4. Open ViewController.mm and start adding the following lines of code to enable Objective-C++, and let us also define some colors to draw with when identifying faces and eyes in the image.
  5. Now you need to edit the ViewController interface to initialise the parameters for the live view, the OpenCV wrappers that get camera access through AVFoundation, and the cascade classifiers.
  6. In the ViewController implementation’s viewDidLoad method write the following code to set up the OpenCV view.
  7. The tricky part is reading the cascade classifiers inside the project. Follow the steps suggested below to do the same and start the videoCamera!
  8. Once the videoCamera is started, each image has to be processed inside the processImage method!
  9. Now the code is complete! Please note that I am not covering the specific math behind Haar cascade detection, as I feel there are many blogs out there that explain it really well. For code related to this blog, you can contact me via e-mail (Contact). The screenshot of the execution of my code is placed below!

    Screen Shot

How to publish your game online?

This blog is intended for aspiring game designers who want to create an online portfolio of their games. I have recently experimented a lot with developing games in Unity. But what is the point in developing a game if others (at least your friends) don’t get a chance to play it? I searched online for ways to host my game for free. All the techniques I found either involve spending some money from your pocket, or involve uploading your game to Dropbox/Google Drive and sharing the link for people to play it. None of these attracted me much. I wanted a way for people to play my game right in their browser. After discussing with some of my friends and searching online to see whether GitHub could support it, I finally came up with a feasible solution!

For those who are familiar with Unity 3D, you probably know that through Unity you can build a game for any platform you want. You will also need a GitHub account for hosting your game.

  1. After completing your game, go to File -> Build Settings…
  2. This will take you to a new window named ‘Build Settings’. Select WebGL -> Select all the scenes that you want in your build -> and click on ‘Build’.
  3. Type the name you want to call your game (say <gameNameFolder>) in the ‘Save As’ section and click the Save button. By default this folder will be saved under the folder where your game is saved.
  4. Open a terminal and navigate to the <gameNameFolder> location.
    • cd <gameNameFolder>/
    • open index.html
  5. The above commands should open your game in a new tab in your default browser. Check if your game is working properly and if it does, let us move to next step.
  6. Upload the <gameNameFolder> as a new repository into github (refer github docs).
  7. After successfully uploading your project on github, open terminal and write the following commands.
    • cd <gameNameFolder>/
    • git checkout -b gh-pages
    • git push origin gh-pages
  8. That’s it… You can find your game at ‘https://<your-github-username>.github.io/<gameNameFolder>/’.
  9. Happy gaming! Share the link with your friends anywhere in the world so they can play in their web browser 🙂

OpenCV in iOS – The Camera

Hello everyone, this is my second blog post in the ‘OpenCV in iOS’ series. Before starting this tutorial, it is recommended that you complete the ‘OpenCV in iOS – An Introduction‘ tutorial. In this blog post, I will be explaining how to use the camera inside your iOS app. To set up the application in Xcode, please complete up to step 6 of the ‘OpenCV in iOS – An Introduction‘ tutorial before you proceed to the steps below!

  1. In this app, we need some additional frameworks to include in our project. They are listed as follows –
    • Accelerate.framework
    • AssetsLibrary.framework
    • AVFoundation.framework
    • CoreGraphics.framework
    • CoreImage.framework
    • CoreMedia.framework
    • opencv2.framework
    • QuartzCore.framework
    • UIKit.framework
  2. We already know how to add ‘opencv2.framework‘ from the previous blog post. I will go through the process of adding one of the above-mentioned frameworks (e.g. AVFoundation.framework); likewise you can add the rest. To add ‘AVFoundation.framework‘, go to ‘Linked Frameworks and Libraries‘ and click on the ‘+’ sign. Choose ‘AVFoundation.framework‘ and click on ‘Add‘.

    Screen Shot 2016-07-23 at 11.36.53 pm

  3. Now your project navigator area should look like this.

    Screen Shot 2016-07-24 at 10.36.20 pm

  4. It’s time to make our hands dirty! 🙂 Open ‘ViewController.h‘ and write the following lines of code.

    Screen Shot 2016-07-24 at 10.38.20 pm

  5. Now go to the ‘ViewController.mm‘ file and add some lines to include C++ code along with the Objective-C code.

    Screen Shot 2016-07-24 at 10.55.23 pm

  6. Let us initialise some variables for getting the camera access and for live output from camera.

    Screen Shot 2016-07-24 at 10.58.52 pm

  7. Now setup the live view such that it fills the whole app screen.

    Screen Shot 2016-07-24 at 5.36.54 pm.png

  8. Initialise the Camera parameters and start capturing! 

    Screen Shot 2016-07-24 at 11.02.01 pm.png

  9. But wait! We still have to do one more step before actually testing our app. If you observe the line “@implementation ViewController”, you will find a warning: “Method ‘processImage:’ in protocol ‘CvVideoCameraDelegate’ not implemented”. To know more about CvVideoCameraDelegate, refer to this link. Coming back to our tutorial, we have to add the following lines of code to resolve that warning.

    Screen Shot 2016-07-24 at 11.19.11 pm

  10. And now we are ready to run the app! For this application, we have to use an actual iPad/iPhone to run and test, because we need access to the camera of the device. Now we can see the live view of our camera! 🙂


  11. Let’s give our app some basic instaTouch 😉. Add the following lines of code in the ‘processImage‘ method and run the application on your device.


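Whatever effect you choose, processImage ultimately comes down to a per-pixel transform over each frame the camera delivers. As a plain C++ illustration (a sketch of the idea, not the OpenCV API), a negative/invert effect can be written like this; inside processImage you would apply the same loop to the cv::Mat's pixel data:

```cpp
#include <cstdint>
#include <vector>

// Inverts an 8-bit image buffer in place: each pixel value v becomes
// 255 - v, producing the classic "negative" look. Per-frame effects in
// processImage follow exactly this per-pixel pattern.
void invert(std::vector<uint8_t>& pixels) {
    for (auto& v : pixels) v = 255 - v;
}
```

Because the transform touches every pixel of every frame, keeping the inner loop branch-free like this is what keeps the live preview smooth.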
With this we are coming to the end of this tutorial! 🙂 We have learnt how to access the camera inside the app and apply some live operations on the video. Though this is a basic tutorial, it acts as a precursor to many augmented-reality-style applications! 😀 We will try to get into the next level of computer vision app development in the next tutorial. Until then, stay tuned… 🙂 Feel free to comment with your suggestions/doubts related to this tutorial.

The SOURCE CODE for the following tutorial is available at the following GITHUB LINK.


OpenCV in iOS – An Introduction

Hello World! This is my first official blog post related to Computer Vision. I guess the title gives you a glimpse of what we are trying to achieve in this session.

What is OpenCV?

OpenCV is an open-source, BSD-licensed computer vision library which is available on all major platforms (Android, iOS, Linux, Mac OS X, Windows) and is primarily written in C++ (with bindings available for Python, Java, and even MATLAB). You can check the documentation of OpenCV at docs.opencv.org.

In this session, we are going to design a basic iOS app that uses OpenCV for the image-processing part. I will be using Xcode v7.2.1 and OpenCV v2.4.13.

  1. Download opencv2.framework from the following link. Unzip the downloaded file and keep it in your workspace.
  2. Now open Xcode and start the new project by clicking on ‘Create a new Xcode project‘ in the left column of the following image.

    Screen Shot 2016-07-21 at 6.04.27 pm

  3. This will take you to a new window where you can select the template for your new project. Make sure that you select the ‘Single View Application‘ under the iOS -> Application and click ‘Next‘.

    Screen Shot 2016-07-21 at 6.11.46 pm

  4. Choose the name you want to keep to the application and fill it in the ‘Product Name‘. Choose the ‘Language‘ as Objective-C and ‘Devices‘ as Universal. Now click ‘Next‘ and choose the location of your workspace and click on ‘Create‘. From now onwards, I will refer to the project folder location as <project_folder>.
  5. Now move the unzipped version of opencv2.framework to “<project_folder>/” . Now go to settings of your Xcode project and select General -> Linked Frameworks and Libraries -> click on ‘+’ sign (to add a new framework to the project) -> Click on ‘Add Other…’ -> browse to “<workspace>/<project_folder>/opencv2.framework” -> Select it and click ‘Open’. Now your project navigator area will look like this.

    Screen Shot 2016-07-23 at 10.51.06 am.png

  6. In this tutorial, we will be using Objective-C and C++ to design the app. To achieve this, we have to rename the ‘ViewController.m‘ file to ‘ViewController.mm‘. This simple naming convention notifies Xcode that we will be mixing Objective-C and C++.

    Screen Shot 2016-07-23 at 11.01.45 am.png

  7. Now let’s start the coding part 🙂 ! Right now, your ‘ViewController.mm‘ file should look like this.

    Screen Shot 2016-07-23 at 11.08.36 am

  8. Now add the following code, so that we can mix C++ code in here.

    Screen Shot 2016-07-23 at 12.42.24 pm

  9. Let us setup the view.

    Screen Shot 2016-07-23 at 12.44.33 pm

  10. Now let us implement the ViewController part. We have to setup the imageView such that it takes the entire App screen. Let us also add a small part of code to correct the aspect ratio of the image.

    Screen Shot 2016-07-23 at 12.55.33 pm

  11. Now add our imageView as a sub view.

    Screen Shot 2016-07-23 at 1.04.24 pm

  12. Before moving to next part, copy an image of your choice to “<workspace>/<project_folder>/” location and go to project navigator area in Xcode and select ‘Supporting Files‘ and click on ‘Add Files to “<your_project_name>”… ‘. Navigate to <project_folder> location and select the image and click on ‘Add‘ button. You can find the image under the ‘Supporting Files‘ folder.

    Screen Shot 2016-07-23 at 1.07.06 pm

  13. Now, let us write code to read the image and display it on the screen. If the image file is not present, let us display some error message.

    Screen Shot 2016-07-23 at 1.20.16 pm

  14. Let us run the application and see the results. I am using the iPhone 6s Plus simulator to check my results. Voila, it worked! 😀 (I am attaching the screenshot of the simulator.) If nothing shows up on the screen, check the messages in the Debug area of Xcode.

    Simulator Screen Shot 23-Jul-2016 1.25.01 pm

  15. But wait, we didn’t use any OpenCV functionality till now! Before applying any OpenCV functions we have to convert the image from UIImage to OpenCV’s Mat datatype, and to display the result on the screen we have to convert the Mat back to a UIImage.
    1. Convert the image from RGB to Grayscale:


    2. Apply Gaussian Blur to the above Grayscale image: (you can observe the blurred version of the above result here)


    3. Apply Canny Edge detection on the above result:


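The three steps above correspond to cv::cvtColor, cv::GaussianBlur, and cv::Canny applied to the converted Mat. The grayscale step, for instance, is just a weighted sum of the RGB channels; a minimal sketch of that formula in plain C++ (illustrative only, not OpenCV's code):

```cpp
#include <cstdint>

// RGB -> grayscale using the standard Rec.601 luma weights, the same
// weighting cv::cvtColor applies for CV_RGB2GRAY. The +0.5 rounds to
// the nearest integer before truncation.
uint8_t rgbToGray(uint8_t r, uint8_t g, uint8_t b) {
    return static_cast<uint8_t>(0.299 * r + 0.587 * g + 0.114 * b + 0.5);
}
```

Green dominates the weighting because the eye is most sensitive to it, which is why a pure green patch comes out much brighter in grayscale than a pure blue one.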

With this we are coming to an end of the very first tutorial of ‘OpenCV in iOS’. The SOURCE CODE for the following tutorial is available at the following GITHUB LINK. I hope you enjoyed this tutorial. 🙂

Where to go next?
After this tutorial, you can check my next blog post about how to use the camera inside your app using OpenCV’s CvVideoCameraDelegate at OpenCV in iOS – The Camera.