Computer Vision in iOS – Swift+OpenCV

Hello all, I realised that it has been quite a while since I posted my last blog – Computer Vision in iOS – Core Camera. In that post, I discussed how to set up the camera in our app without using OpenCV. Since the app was written in Swift 3, it is easy for budding iOS developers to follow what is going on in the code. I thought of going a step further and designing some basic image processing algorithms from scratch, but after building a few I realised it is quite hard to explain even a simple RGB-to-grayscale conversion without scaring off readers. So I decided to take a few steps back and integrate OpenCV into the Swift version of our Computer Vision app, in the hope that it helps you speed-prototype proofs-of-concept. Many people have already discussed how to integrate OpenCV into Swift-based apps, so the main purpose of this post is to introduce you to the data structure of an image and to explain why we implement certain things the way we do.

Before starting, it is advisable to read my earlier blog on setting up Core Camera using Swift.

  • Start by creating a new Xcode project and select Single View Application. Name your project and organisation, and set the language to Swift.
  • Since most real-time Vision apps fix themselves to Portrait or Landscape Left/Right orientation throughout their usage, and to remove some UI/UX-related constraints, go to General -> Deployment Info and uncheck all unnecessary orientations for the app.

[Screenshot: Deployment Info orientation settings]

  • Go to Main.storyboard and add an Image View to your app by dragging it from the Object Library onto the storyboard.

[Screenshot: Object Library with Image View]

  • Go to “Show the Size Inspector” in the top-right corner and make the following changes.

[Screenshot: Size Inspector settings]

  • Now add some constraints to the Image View.

[Screenshot: Image View constraints]

  • After the above settings, you can observe that the Image View fills the whole screen of the app. Now go to ‘Show the Attributes Inspector’ in the top-right corner and change ‘Content Mode’ from ‘Scale To Fill’ to ‘Aspect Fill’.

[Screenshot: Attributes Inspector, Content Mode setting]

  • Now add an IBOutlet for the Image View in the ViewController.swift file. Also add a new Swift file named ‘CameraBuffer.swift’, copy in the code shown in the previous blog, and change your ViewController.swift file as shown there. If you run your app now, you should see a portrait-mode camera app running at ~30 FPS. (Note: don’t forget to add the camera-usage permission to Info.plist.)
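For reference, the camera permission mentioned in the note above is a single entry in Info.plist. A minimal sketch (the description string is just a placeholder; iOS shows it to the user in the permission prompt):

```xml
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to process live video frames.</string>
```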
  • Let us dive into adding OpenCV to our app. First, add the OpenCV framework to the project. If you have been following my blogs from the start, this should be easy for you.
  • Let us get into some theoretical discussion. (Disclaimer: it is totally fine to skip this bullet point if you only want the app working.) What is an image? From a signals-and-systems perspective, an image is a 2D discrete signal where each pixel holds a value between 0 and 255 representing a specific gray level (0 represents black and 255 corresponds to white). To understand this better, refer to the picture shown below (PC: Link). Now you might wonder what adds colour to the image if each pixel stores only a gray value. If you look at any documentation online, you will see a colour image referred to as an RGB or RGBA image. The R, G and B in an RGB image refer to its Red, Green and Blue channels, where each channel is a 2D grayscale signal with values between 0 and 255. The A channel in an RGBA image is the alpha channel, i.e. the opacity of each pixel. In OpenCV, an image is generally represented as a matrix in BGR or BGRA order. In our code, we receive every frame captured by the camera as a UIImage; hence, to do any image processing on these frames we have to convert them from UIImage to cv::Mat, do the processing, and convert the result back to UIImage to view it on the screen.



  • Add a new file -> ‘Cocoa Touch Class’, name it ‘OpenCVWrapper’ and set the language to Objective-C. Click Next and select Create. When prompted to create a bridging header, click the ‘Create Bridging Header’ button. You can now observe that three files were created: OpenCVWrapper.h, OpenCVWrapper.m, and a bridging header ending in -Bridging-Header.h. Open the bridging header and add the following line: #import “OpenCVWrapper.h”
  • Go to the ‘OpenCVWrapper.h’ file and add the following lines of code. In this tutorial, let us do a simple RGB-to-grayscale conversion.
#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>

@interface OpenCVWrapper : NSObject

- (UIImage *) makeGray: (UIImage *) image;

@end

  • Rename OpenCVWrapper.m to “OpenCVWrapper.mm” for C++ support (the .mm extension tells Xcode to compile the file as Objective-C++) and add the following code.
#import "OpenCVWrapper.h"

// import necessary headers
#import <opencv2/core.hpp>
#import <opencv2/imgcodecs/ios.h>
#import <opencv2/imgproc/imgproc.hpp>

using namespace cv;

@implementation OpenCVWrapper

- (UIImage *) makeGray: (UIImage *) image {
    // Convert UIImage to cv::Mat. Note that UIImageToMat produces an RGBA Mat.
    Mat inputImage; UIImageToMat(image, inputImage);
    // If the input image already has only one channel, return it as-is.
    if (inputImage.channels() == 1) return image;
    // Convert the RGBA Mat to grayscale.
    Mat gray; cvtColor(inputImage, gray, CV_RGBA2GRAY);
    // Convert the grayscale cv::Mat back to UIImage and return it.
    return MatToUIImage(gray);
}

@end

  • Now make some final changes to the ViewController.swift file to see the grayscale image on the screen.
import UIKit

class ViewController: UIViewController, CameraBufferDelegate {

    var cameraBuffer: CameraBuffer!
    let opencvWrapper = OpenCVWrapper()
    @IBOutlet weak var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        cameraBuffer = CameraBuffer()
        cameraBuffer.delegate = self
    }

    func captured(image: UIImage) {
        // Convert every captured frame to grayscale and show it on screen.
        imageView.image = opencvWrapper.makeGray(image)
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
}
  • And that is it: run the app and you will see the live grayscale camera feed. Hope you enjoyed this blog post. 🙂

OpenCV in Android – Native Development (C++)

Hello World!

In my previous blogs, I introduced you to setting up OpenCV for computer vision applications on the Android platform. In this post, let us do some C++ coding to develop our own custom filters to apply to the images we capture. If you are visiting this blog for the first time and want to learn how to set up the development pipeline for Android, you can visit my previous blogs (links provided below):

  1. OpenCV in Android – An Introduction (Part 1/2)
  2. OpenCV in Android – An Introduction (Part 2/2)

If you would rather skip those blogs and dive straight into developing computer vision applications, you can download the source code for this blog from the following link:

Get ready to code!

  • Let us first warm up by implementing a very simple edge-detection filter.
  • Create a new empty activity named ‘EdgeDetection’ and add most of the code from the MainActivity.java file into it. At the end, the file should look like this:
package com.example.sriraghu95.opencvandroid_anintroduction;

import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.util.Log;
import android.view.SurfaceView;
import android.view.WindowManager;

import org.opencv.android.BaseLoaderCallback;
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.LoaderCallbackInterface;
import org.opencv.android.OpenCVLoader;
import org.opencv.core.Mat;

public class EdgeDetection extends AppCompatActivity implements CameraBridgeViewBase.CvCameraViewListener2 {

    private static final String TAG = "EdgeDetection";
    private CameraBridgeViewBase cameraBridgeViewBase;

    private BaseLoaderCallback baseLoaderCallback = new BaseLoaderCallback(this) {
        @Override
        public void onManagerConnected(int status) {
            switch (status) {
                case LoaderCallbackInterface.SUCCESS:
                    cameraBridgeViewBase.enableView();
                    break;
                default:
                    super.onManagerConnected(status);
                    break;
            }
        }
    };

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
        setContentView(R.layout.activity_edge_detection);
        cameraBridgeViewBase = (CameraBridgeViewBase) findViewById(R.id.camera_view);
        cameraBridgeViewBase.setVisibility(SurfaceView.VISIBLE);
        cameraBridgeViewBase.setCvCameraViewListener(this);
    }

    @Override
    public void onPause() {
        super.onPause();
        if (cameraBridgeViewBase != null)
            cameraBridgeViewBase.disableView();
    }

    @Override
    public void onResume() {
        super.onResume();
        if (!OpenCVLoader.initDebug()) {
            Log.d(TAG, "Internal OpenCV library not found. Using OpenCV Manager for initialization");
            OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_3_1_0, this, baseLoaderCallback);
        } else {
            Log.d(TAG, "OpenCV library found inside package. Using it!");
            baseLoaderCallback.onManagerConnected(LoaderCallbackInterface.SUCCESS);
        }
    }

    @Override
    public void onDestroy() {
        super.onDestroy();
        if (cameraBridgeViewBase != null)
            cameraBridgeViewBase.disableView();
    }

    @Override
    public void onCameraViewStarted(int width, int height) {
    }

    @Override
    public void onCameraViewStopped() {
    }

    @Override
    public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
        return inputFrame.gray();
    }
}
  • The only change we made so far in the above code is replacing ‘return inputFrame.rgba()’ in the onCameraFrame method with ‘return inputFrame.gray()’. Now add the following code to the ‘activity_edge_detection.xml’ file to set the layout of the window in our app.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    xmlns:opencv="http://schemas.android.com/apk/res-auto"
    android:id="@+id/activity_opencv_camera"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context="com.example.sriraghu95.opencvandroid_anintroduction.EdgeDetection">

    <org.opencv.android.JavaCameraView
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:visibility="gone"
        android:id="@+id/camera_view"
        opencv:show_fps="true"
        opencv:camera_id="any" />

</RelativeLayout>

  • Add the following lines of code to ‘AndroidManifest.xml’ to specify the theme of our activity.
<activity android:name=".EdgeDetection"
    android:screenOrientation="landscape"
    android:theme="@style/Theme.AppCompat.Light.NoActionBar.FullScreen" />
  • Now just add a button to the MainActivity that opens the EdgeDetection activity, then run the app on your mobile and test it. You should see a grayscale stream at ~30 FPS running on your mobile 🙂
  • But wait, we didn’t write any C++ yet! Let us write our first custom function in the ‘native-lib.cpp’ file. At a high level, this function should take an image, do some processing on it, and return it to be shown on the screen. The general skeleton of C++ native code looks like this:
extern "C"
JNIEXPORT void JNICALL Java_com_example_sriraghu95_opencvandroid_1anintroduction_EdgeDetection_detectEdges(
        JNIEnv*, jobject /* this */,
        jlong gray) {
    cv::Mat& edges = *(cv::Mat *) gray;
    cv::Canny(edges, edges, 50, 250);
}
  • It starts with the linkage specification extern “C”, the JNIEXPORT & JNICALL macros, the return type (here, void), the method name (here, the long Java_…_detectEdges symbol), and the input types (here, jlong). We pass the memory address of the image rather than the image itself to avoid unnecessary duplication, then apply cv::Canny to do the edge detection on the image. Feel free to browse through the hyperlinks and read more about them; explaining those concepts is beyond the scope of this blog and I might explain them in detail in my future blogs.
  • We need to add a few lines of code inside the onCameraFrame method of EdgeDetection.java to apply edge detection to every frame. Also add, below the onCameraFrame method, a declaration of the native detectEdges method.
    public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
        Mat edges = inputFrame.gray();
        // Pass the Mat's native address to C++ so the frame is edited in place.
        detectEdges(edges.getNativeObjAddr());
        return edges;
    }

    public native void detectEdges(long matGray);
  • Now, build the project and test it on your mobile! The results should look like this!


  • With this, you have the basic setup for C++ development in Android using OpenCV. Let us go a step further and design a simple filter that produces a cartoon effect on the image. Without going into much detail, the C++ code for it looks like this.
extern "C"
JNIEXPORT void JNICALL Java_com_example_sriraghu95_opencvandroid_1anintroduction_EdgeDetection_cartoonify(
        JNIEnv*, jobject /* this */,
        jlong gray, jlong rgb) {
    const int MEDIAN_BLUR_FILTER_SIZE = 7;
    const int LAPLACIAN_FILTER_SIZE = 5;
    const int EDGES_THRESHOLD = 30;
    int repetitions = 5;
    int kSize = 9;
    double sigmaColor = 9;
    double sigmaSpace = 7;

    cv::Mat& edges = *(cv::Mat *) gray;
    cv::medianBlur(edges, edges, MEDIAN_BLUR_FILTER_SIZE);
    cv::Laplacian(edges, edges, CV_8U, LAPLACIAN_FILTER_SIZE);
    cv::Mat mask; cv::threshold(edges, mask, EDGES_THRESHOLD, 255, CV_THRESH_BINARY_INV);

    cv::Mat& src = *(cv::Mat *) rgb;
    cv::Size size = src.size();
    cv::Size smallSize;
    smallSize.width = size.width/4;
    smallSize.height = size.height/4;
    cv::Mat smallImg = cv::Mat(smallSize, CV_8UC3);
    resize(src, smallImg, smallSize, 0, 0, CV_INTER_LINEAR);

    cv::Mat tmp = cv::Mat(smallSize, CV_8UC3);

    for (int i = 0; i < repetitions; i++) {
        // Two small-image bilateral passes per iteration: filtering the
        // downscaled image repeatedly approximates one strong (and much more
        // expensive) bilateral filter on the full-size image.
        bilateralFilter(smallImg, tmp, kSize, sigmaColor, sigmaSpace);
        bilateralFilter(tmp, smallImg, kSize, sigmaColor, sigmaSpace);
    }

    cv::Mat bigImg;
    resize(smallImg, bigImg, size, 0, 0, CV_INTER_LINEAR);
    cv::Mat dst; bigImg.copyTo(dst, mask);
    cv::medianBlur(dst, src, MEDIAN_BLUR_FILTER_SIZE - 4);
}
  • After writing the above piece of code in the native-lib.cpp file, you can call it from your own custom class and see the results. Here is what the above code produces:


  • The above filter creates a cartoon effect by combining the smoothed colour image with the inverted edge mask.

Things to ponder:

In this blog, you have seen how to integrate custom C++ code into your application. But if you observe carefully, the simple cartoon filter consumes a lot of computation time, and its frame-rate is only ~1.2 FPS. Can you think of how to speed up the algorithm, or come up with a better one that does the same task in real time? Think about it 😉