OpenCV in Android – Native Development (C++)

Hello World!

In my previous blogs, I showed how to set up OpenCV for building computer vision applications on the Android platform. In this post, let us write some C++ code to develop our own custom filters and apply them to the images we capture. If you are visiting this blog for the first time and want to learn how to set up the development pipeline for Android, you can visit my previous blogs (links provided below):

  1. OpenCV in Android – An Introduction (Part 1/2)
  2. OpenCV in Android – An Introduction (Part 2/2)

If you would rather skip those blogs and dive straight into developing computer vision applications, you can download the source code for this blog from the following link:

Get ready to code!

  • Let us first warm up by implementing a very simple edge-detection filter.
  • Create a new empty activity named ‘EdgeDetection’ and copy most of the code from the file into it. At the end, the file should look like this:
package com.example.sriraghu95.opencvandroid_anintroduction;

import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.util.Log;
import android.view.SurfaceView;
import android.view.WindowManager;

import org.opencv.android.BaseLoaderCallback;
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.LoaderCallbackInterface;
import org.opencv.android.OpenCVLoader;
import org.opencv.core.Mat;

public class EdgeDetection extends AppCompatActivity implements CameraBridgeViewBase.CvCameraViewListener2 {

    private static final String TAG = "EdgeDetection";
    private CameraBridgeViewBase cameraBridgeViewBase;

    private BaseLoaderCallback baseLoaderCallback = new BaseLoaderCallback(this) {
        @Override
        public void onManagerConnected(int status) {
            switch (status) {
                case LoaderCallbackInterface.SUCCESS:
                    cameraBridgeViewBase.enableView();
                    break;
                default:
                    super.onManagerConnected(status);
                    break;
            }
        }
    };

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
        setContentView(R.layout.activity_edge_detection);
        cameraBridgeViewBase = (CameraBridgeViewBase) findViewById(R.id.camera_view);
        cameraBridgeViewBase.setVisibility(SurfaceView.VISIBLE);
        cameraBridgeViewBase.setCvCameraViewListener(this);
    }

    @Override
    public void onPause() {
        super.onPause();
        if (cameraBridgeViewBase != null)
            cameraBridgeViewBase.disableView();
    }

    @Override
    public void onResume() {
        super.onResume();
        if (!OpenCVLoader.initDebug()) {
            Log.d(TAG, "Internal OpenCV library not found. Using OpenCV Manager for initialization");
            OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_3_1_0, this, baseLoaderCallback);
        } else {
            Log.d(TAG, "OpenCV library found inside package. Using it!");
            baseLoaderCallback.onManagerConnected(LoaderCallbackInterface.SUCCESS);
        }
    }

    @Override
    public void onDestroy() {
        super.onDestroy();
        if (cameraBridgeViewBase != null)
            cameraBridgeViewBase.disableView();
    }

    @Override
    public void onCameraViewStarted(int width, int height) {
    }

    @Override
    public void onCameraViewStopped() {
    }

    @Override
    public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
        return inputFrame.gray();
    }
}
  • The only change we made so far is replacing ‘return inputFrame.rgba()’ with ‘return inputFrame.gray()’ in the onCameraFrame method. Now add the following code to the ‘activity_edge_detection.xml’ file to set the layout of the window in our app.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    xmlns:opencv="http://schemas.android.com/apk/res-auto"
    android:id="@+id/activity_opencv_camera"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context="com.example.sriraghu95.opencvandroid_anintroduction.EdgeDetection">

    <org.opencv.android.JavaCameraView
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:visibility="gone"
        android:id="@+id/camera_view"
        opencv:show_fps="true"
        opencv:camera_id="any" />

</RelativeLayout>

  • Add the following lines of code to ‘AndroidManifest.xml’ to specify the theme of our activity.
<activity
    android:name=".EdgeDetection"
    android:screenOrientation="landscape"
    android:theme="@style/Theme.AppCompat.Light.NoActionBar.FullScreen" />
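Note that ‘Theme.AppCompat.Light.NoActionBar.FullScreen’ is not one of AppCompat’s built-in themes, so your project needs to declare it in ‘styles.xml’. A minimal sketch, assuming the usual fullscreen flags (the exact items are an assumption; adjust to your project):

```xml
<!-- Hypothetical styles.xml entry; the name must match the manifest reference -->
<style name="Theme.AppCompat.Light.NoActionBar.FullScreen" parent="@style/Theme.AppCompat.Light.NoActionBar">
    <item name="android:windowFullscreen">true</item>
    <item name="android:windowContentOverlay">@null</item>
</style>
```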
  • Now, just add a button to the MainActivity that opens the EdgeDetection activity, run the app on your mobile, and test it. You should now see a grayscale image at ~30 FPS running on your mobile 🙂
  • But wait, we haven’t written any C++ yet! Let us write our first custom function in the ‘native-lib.cpp’ file. At a high level, this function should take an image, do some processing on it, and return it for display on the screen. The general skeleton of C++ native code looks like this:
extern "C"
JNIEXPORT void JNICALL Java_com_example_sriraghu95_opencvandroid_1anintroduction_EdgeDetection_detectEdges(
        JNIEnv*, jobject /* this */,
        jlong gray) {
    // Interpret the jlong as the address of the native cv::Mat and run Canny in place
    cv::Mat& edges = *(cv::Mat *) gray;
    cv::Canny(edges, edges, 50, 250);
}
  • It starts with the linkage specification extern “C”, the JNIEXPORT & JNICALL macros, the return type (here, void), the method name, and the parameter types (here, jlong). In this scenario, we pass the memory location of the image to avoid unnecessary duplication. We then apply cv::Canny to perform edge detection on the image. Feel free to browse through the hyperlinks and read more about them; explaining those concepts is beyond the scope of this blog, and I might cover them in detail in future posts.
  • We need to add a few lines of code inside the onCameraFrame method to apply edge detection to every frame. Also, add a line below the onCameraFrame method declaring the native detectEdges method.
    public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
        Mat edges = inputFrame.gray();
        detectEdges(edges.getNativeObjAddr());  // pass the Mat's native address to C++
        return edges;
    }

    public native void detectEdges(long matGray);
  • Now, build the project and test it on your mobile! The results should look like this!


  • With this, you have the basic setup for C++ development in Android using OpenCV. Let us go a step further and design a simple filter that produces a cartoon effect of the image. Without going into much detail, the C++ code for it looks like this:
extern "C"
JNIEXPORT void JNICALL Java_com_example_sriraghu95_opencvandroid_1anintroduction_EdgeDetection_cartoonify(
        JNIEnv*, jobject /* this */,
        jlong gray, jlong rgb) {
    const int MEDIAN_BLUR_FILTER_SIZE = 7;
    const int LAPLACIAN_FILTER_SIZE = 5;
    const int EDGES_THRESHOLD = 30;
    int repetitions = 5;
    int kSize = 9;
    double sigmaColor = 9;
    double sigmaSpace = 7;

    // Build an edge mask: blur, detect edges with a Laplacian, invert-threshold
    cv::Mat& edges = *(cv::Mat *) gray;
    cv::medianBlur(edges, edges, MEDIAN_BLUR_FILTER_SIZE);
    cv::Laplacian(edges, edges, CV_8U, LAPLACIAN_FILTER_SIZE);
    cv::Mat mask;
    cv::threshold(edges, mask, EDGES_THRESHOLD, 255, CV_THRESH_BINARY_INV);

    // Work on a quarter-size copy to keep the bilateral filtering affordable
    cv::Mat& src = *(cv::Mat *) rgb;
    cv::Size size = src.size();
    cv::Size smallSize;
    smallSize.width = size.width / 4;
    smallSize.height = size.height / 4;
    cv::Mat smallImg = cv::Mat(smallSize, CV_8UC3);
    cv::resize(src, smallImg, smallSize, 0, 0, CV_INTER_LINEAR);

    cv::Mat tmp = cv::Mat(smallSize, CV_8UC3);

    // Repeated bilateral filtering flattens colour regions while keeping edges
    for (int i = 0; i < repetitions; i++) {
        cv::bilateralFilter(smallImg, tmp, kSize, sigmaColor, sigmaSpace);
        cv::bilateralFilter(tmp, smallImg, kSize, sigmaColor, sigmaSpace);
    }

    // Upscale, paint the smoothed image through the edge mask, smooth once more
    cv::Mat bigImg;
    cv::resize(smallImg, bigImg, size, 0, 0, CV_INTER_LINEAR);
    cv::Mat dst;
    bigImg.copyTo(dst, mask);
    cv::medianBlur(dst, src, MEDIAN_BLUR_FILTER_SIZE - 4);
}
  • After writing the above piece of code in the native-lib.cpp file, you can call it from your own custom class and see the results. Here is a screenshot of the above code’s result:


  • The above filter creates a cartoon effect from the captured image.

Things to ponder:

In this blog, you have seen how to integrate custom C++ code into your application. But if you observe carefully, the simple cartoon filter consumes a lot of computation time: its frame rate is only ~1.2 FPS. Can you think of a way to speed up the algorithm, or come up with a better one, to do the same task in real time? Think about it 😉

