OpenCV in iOS – Face Detection

Hi, after a quiet break I am back on the blog to post some new things about optimized computer vision algorithms on mobile platforms. I have recently been experimenting with Android to come up with the easiest OpenCV setup for getting started (I will post about it in my next blog). In this post, I will explain how to do face detection in near real time using OpenCV's Haar cascades. This is not an advanced tutorial on detection or object recognition, but it will help you start working on your own classification problems. Let us dive in!

A quick note before diving in: this blog expects that you have already read my previous posts on OpenCV in iOS (An Introduction, The Camera) so that you have the starter code up and running.

In this blog post, we are going to detect faces and eyes in the live video stream from your iOS device's camera. Now let's follow the steps below!

  1. Import the necessary frameworks into the project: opencv2, AVFoundation, Accelerate, CoreGraphics, CoreImage, QuartzCore, AssetsLibrary, CoreMedia, and UIKit.
  2. Rename ViewController.m to ViewController.mm to start coding in Objective-C++.
  3. Add the necessary Haar cascade files from the ‘<opencv-folder>/data/haarcascades/’ directory to the Supporting Files group of your project. You can do this by right-clicking on Supporting Files and selecting ‘Add Files to <your-project name>’.
  4. Open ViewController.mm and add the lines below to enable Objective-C++; let us also define some colors for drawing boxes around the detected faces and eyes on the screen.
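
    The original code appeared as a screenshot; below is a minimal sketch of this step. The header paths assume the OpenCV 2.4 iOS framework, and the color values are my own illustrative choices.

```objc
#import "ViewController.h"
#import <opencv2/highgui/cap_ios.h>        // CvVideoCamera / CvVideoCameraDelegate
#import <opencv2/objdetect/objdetect.hpp>  // cv::CascadeClassifier

// Colors (BGRA) used to mark the detections on each frame.
static const cv::Scalar FACE_COLOR(0, 255, 0, 255);  // green boxes for faces
static const cv::Scalar EYE_COLOR(255, 0, 0, 255);   // blue boxes for eyes
```
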
  5. Now edit the ViewController interface to declare the parameters for the live view, the OpenCV wrapper that gets camera access through AVFoundation, and the cascade classifiers.
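
    A sketch of what the interface might look like; the variable names are my own and are reused in the later snippets.

```objc
@interface ViewController () <CvVideoCameraDelegate>
{
    UIImageView *liveView;              // live camera output on screen
    CvVideoCamera *videoCamera;         // OpenCV's wrapper around AVFoundation
    cv::CascadeClassifier faceCascade;  // Haar cascade for faces
    cv::CascadeClassifier eyeCascade;   // Haar cascade for eyes
}
@end
```
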
  6. In the ViewController implementation’s viewDidLoad method, write code along the following lines to set up the OpenCV view.
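
    A minimal viewDidLoad sketch; the capture preset, device position, and FPS are illustrative values, not necessarily the original ones.

```objc
- (void)viewDidLoad {
    [super viewDidLoad];

    // Full-screen live view that CvVideoCamera will render into.
    liveView = [[UIImageView alloc] initWithFrame:self.view.bounds];
    [self.view addSubview:liveView];

    // Hook the camera wrapper up to the live view.
    videoCamera = [[CvVideoCamera alloc] initWithParentView:liveView];
    videoCamera.delegate = self;
    videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionFront;
    videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPreset352x288;
    videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortrait;
    videoCamera.defaultFPS = 30;
}
```
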
  7. The tricky part is reading the cascade classifiers inside the project. Follow the steps sketched below to do so and start the videoCamera!
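
    A sketch of the loading step, assuming the stock OpenCV cascade file names (haarcascade_frontalface_alt.xml and haarcascade_eye.xml) are the ones you added in step 3.

```objc
// Still inside viewDidLoad: resolve the cascade XML files in the app bundle
// and hand their paths to cv::CascadeClassifier::load.
NSString *facePath = [[NSBundle mainBundle] pathForResource:@"haarcascade_frontalface_alt"
                                                     ofType:@"xml"];
NSString *eyePath  = [[NSBundle mainBundle] pathForResource:@"haarcascade_eye"
                                                     ofType:@"xml"];
if (facePath == nil || eyePath == nil ||
    !faceCascade.load([facePath UTF8String]) ||
    !eyeCascade.load([eyePath UTF8String])) {
    NSLog(@"Error: could not load the Haar cascade files");
}

[videoCamera start];   // begin delivering frames to processImage:
```
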
  8. Once the videoCamera is started, each frame has to be processed inside the processImage method.
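
    A sketch of a typical Haar cascade processImage: implementation; the detectMultiScale parameters are illustrative and worth tuning for your device.

```objc
// CvVideoCamera calls this for every frame (a BGRA cv::Mat by default).
- (void)processImage:(cv::Mat &)image {
    cv::Mat gray;
    cv::cvtColor(image, gray, cv::COLOR_BGRA2GRAY);
    cv::equalizeHist(gray, gray);

    // Detect faces first, then search for eyes only inside each face.
    std::vector<cv::Rect> faces;
    faceCascade.detectMultiScale(gray, faces, 1.1, 2, 0, cv::Size(60, 60));
    for (size_t i = 0; i < faces.size(); i++) {
        cv::rectangle(image, faces[i], FACE_COLOR, 2);

        cv::Mat faceROI = gray(faces[i]);
        std::vector<cv::Rect> eyes;
        eyeCascade.detectMultiScale(faceROI, eyes, 1.1, 2, 0, cv::Size(20, 20));
        for (size_t j = 0; j < eyes.size(); j++) {
            // Eye rectangles are relative to the face ROI; shift them back.
            cv::rectangle(image, eyes[j] + faces[i].tl(), EYE_COLOR, 2);
        }
    }
}
```
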
  9. Now the code is complete! Please note that I am not covering the specific math behind Haar cascade detection, as I feel there are many blogs out there that explain it really well. For the code related to this post, you can contact me via e-mail (Contact). A screenshot of the code in action is placed below!

    [Screenshot: faces and eyes detected in the live camera view]

OpenCV in iOS – The Camera

Hello everyone, this is my second blog post in the ‘OpenCV in iOS’ series. Before starting this tutorial, it is recommended that you complete the ‘OpenCV in iOS – An Introduction‘ tutorial. In this blog post, I will explain how to use the camera inside your iOS app. To set up the application in Xcode, please complete everything up to step 6 of ‘OpenCV in iOS – An Introduction‘ before you proceed to the steps below!

  1. In this app, we need to include some additional frameworks in our project. They are listed as follows:
    • Accelerate.framework
    • AssetsLibrary.framework
    • AVFoundation.framework
    • CoreGraphics.framework
    • CoreImage.framework
    • CoreMedia.framework
    • opencv2.framework
    • QuartzCore.framework
    • UIKit.framework
  2. We already know how to add ‘opencv2.framework‘ from the previous blog post. I will go through the process of adding one of the above-mentioned frameworks (e.g. AVFoundation.framework); you can add the rest likewise. To add ‘AVFoundation.framework‘, go to ‘Linked Frameworks and Libraries‘ and click on the ‘+’ sign. Choose ‘AVFoundation.framework‘ and click on ‘Add‘.

    [Screenshot: adding AVFoundation.framework under Linked Frameworks and Libraries]

  3. Now your project navigator area should look like this.

    [Screenshot: project navigator with all frameworks added]

  4. It’s time to get our hands dirty! 🙂 Open ‘ViewController.h‘ and write the following lines of code.

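    The original header was shown as a screenshot; a minimal sketch, assuming the OpenCV 2.4 framework layout, would be:

```objc
#import <UIKit/UIKit.h>
#import <opencv2/highgui/cap_ios.h>   // CvVideoCamera lives here in OpenCV 2.4

@interface ViewController : UIViewController

@end
```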

  5. Now go to the ‘ViewController.mm‘ file and add a few lines so that C++ code can live alongside the Objective-C code.

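    A sketch of the usual guard: the .mm extension makes the file Objective-C++, and __cplusplus hides the C++ headers from any plain Objective-C compilation.

```objc
#ifdef __cplusplus
#include <opencv2/opencv.hpp>
#endif

#import "ViewController.h"
```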

  6. Let us initialise some variables for getting camera access and for the live output from the camera.

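    A sketch with illustrative variable names (reused in the steps that follow):

```objc
@interface ViewController () <CvVideoCameraDelegate>
{
    CvVideoCamera *videoCamera;   // wraps the AVFoundation capture session
    UIImageView *liveView;        // each captured frame is rendered here
}
@end
```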

  7. Now set up the live view so that it fills the whole app screen.

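    A minimal sketch:

```objc
- (void)viewDidLoad {
    [super viewDidLoad];

    // A UIImageView sized to the screen bounds acts as the live view.
    liveView = [[UIImageView alloc] initWithFrame:self.view.bounds];
    [self.view addSubview:liveView];
}
```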

  8. Initialise the camera parameters and start capturing!

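    A sketch, still inside viewDidLoad; the preset, device position, and FPS are illustrative values.

```objc
videoCamera = [[CvVideoCamera alloc] initWithParentView:liveView];
videoCamera.delegate = self;
videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionBack;
videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPreset640x480;
videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortrait;
videoCamera.defaultFPS = 30;
[videoCamera start];   // start capturing
```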

  9. But wait! We still have to do one more step before actually testing our app. If you look at the line “@implementation ViewController”, you will find a warning: “Method ‘processImage:’ in protocol ‘CvVideoCameraDelegate’ not implemented”. To learn more about CvVideoCameraDelegate, refer to the OpenCV iOS documentation. Coming back to our tutorial, we have to add the following lines of code to get rid of that warning.

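    A minimal sketch that satisfies the delegate protocol; the frames simply pass through to the live view untouched.

```objc
#pragma mark - CvVideoCameraDelegate

- (void)processImage:(cv::Mat &)image {
    // No processing yet; later we will modify the frame in place here.
}
```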

  10. And now we are ready to run the app! For this application, we have to use an actual iPad/iPhone to run and test, because we need access to the device’s camera. Now we can see the live view from our camera! 🙂

    [Screenshot: the live camera view running on the device]

  11. Let’s give our app some basic instaTouch! 😉 Add the following lines of code to the ‘processImage‘ method and run the application on your device.

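    The slideshow with the original effect did not survive, so as one illustrative live filter (not necessarily the exact one from the post), here is a grayscale rendering of the feed:

```objc
- (void)processImage:(cv::Mat &)image {
    // Convert the BGRA frame to grayscale and back, in place.
    cv::Mat gray;
    cv::cvtColor(image, gray, cv::COLOR_BGRA2GRAY);
    cv::cvtColor(gray, image, cv::COLOR_GRAY2BGRA);
}
```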

With this we are coming to the end of this tutorial! 🙂 We have learnt how to access the camera inside the app and apply some live operations to the video. Though this is a basic tutorial, it will act as a precursor to many Augmented Reality style applications! 😀 We will try to get to the next level of Computer Vision app development in the next tutorial! Till then, stay tuned… 🙂 Feel free to comment with your suggestions/doubts related to this tutorial.

The SOURCE CODE for this tutorial is available at the following GITHUB LINK.

 

OpenCV in iOS – An Introduction

Hello World! This is my first official blog post related to Computer Vision. I guess the title gives you a glimpse of what we are trying to achieve in this session.

What is OpenCV?

OpenCV is an open-source, BSD-licensed computer vision library which is available on all major platforms (Android, iOS, Linux, Mac OS X, Windows) and is primarily written in C++ (with bindings available for Python, Java and even MATLAB). You can check the documentation of OpenCV at http://docs.opencv.org.

In this session, we are going to design a basic iOS app that uses OpenCV for its image processing. I will be using Xcode v7.2.1 and OpenCV v2.4.13.

  1. Download opencv2.framework from the following link. Unzip the downloaded file and keep it in your workspace.
  2. Now open Xcode and start a new project by clicking on ‘Create a new Xcode project‘ in the left column, as shown in the following image.

    [Screenshot: the Xcode welcome window]

  3. This will take you to a new window where you can select the template for your new project. Make sure that you select the ‘Single View Application‘ under the iOS -> Application and click ‘Next‘.

    [Screenshot: choosing the Single View Application template]

  4. Choose the name you want to give the application and fill it in as the ‘Product Name‘. Choose the ‘Language‘ as Objective-C and ‘Devices‘ as Universal. Now click ‘Next‘, choose the location of your workspace, and click on ‘Create‘. From now on, I will refer to the project folder location as <project_folder>.
  5. Now move the unzipped version of opencv2.framework to “<project_folder>/” . Now go to settings of your Xcode project and select General -> Linked Frameworks and Libraries -> click on ‘+’ sign (to add a new framework to the project) -> Click on ‘Add Other…’ -> browse to “<workspace>/<project_folder>/opencv2.framework” -> Select it and click ‘Open’. Now your project navigator area will look like this.

    [Screenshot: project navigator with opencv2.framework added]

  6. In this tutorial, we will be using Objective-C and C++ to design the app. To achieve this, we have to rename the ‘ViewController.m‘ file to ‘ViewController.mm‘. This simple naming convention notifies Xcode that we will be mixing Objective-C and C++.

    [Screenshot: ViewController.m renamed to ViewController.mm]

  7. Now let’s start the coding part 🙂! Right now, your ‘ViewController.mm‘ file should look like this.

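    For reference, the stock Single View Application template generated by Xcode 7 looks roughly like this:

```objc
#import "ViewController.h"

@interface ViewController ()

@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
}

- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

@end
```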

  8. Now add the following code, so that we can mix C++ code in here.

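    A sketch of the guard, assuming the standard OpenCV umbrella header:

```objc
// Only the C++ compiler (enabled by the .mm extension) sees these headers.
#ifdef __cplusplus
#include <opencv2/opencv.hpp>
#endif

#import "ViewController.h"
```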

  9. Let us set up the view.

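    A sketch, assuming an imageView instance variable that the later steps reuse:

```objc
@interface ViewController ()
{
    UIImageView *imageView;   // the view that will display our image
}
@end
```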

  10. Now let us implement the ViewController part. We have to set up the imageView so that it takes up the entire app screen. Let us also add a small piece of code to preserve the aspect ratio of the image.

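    A sketch of the setup inside viewDidLoad:

```objc
- (void)viewDidLoad {
    [super viewDidLoad];

    // Span the whole screen and letterbox the image to keep its aspect ratio.
    imageView = [[UIImageView alloc] initWithFrame:self.view.bounds];
    imageView.contentMode = UIViewContentModeScaleAspectFit;
}
```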

  11. Now add our imageView as a subview.

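    Continuing inside viewDidLoad:

```objc
    // Attach the image view to the view hierarchy.
    [self.view addSubview:imageView];
```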

  12. Before moving to next part, copy an image of your choice to “<workspace>/<project_folder>/” location and go to project navigator area in Xcode and select ‘Supporting Files‘ and click on ‘Add Files to “<your_project_name>”… ‘. Navigate to <project_folder> location and select the image and click on ‘Add‘ button. You can find the image under the ‘Supporting Files‘ folder.

    [Screenshot: the image added under Supporting Files]

  13. Now, let us write code to read the image and display it on the screen. If the image file is not present, let us display an error message.

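    A sketch; “test_image.jpg” is a stand-in for whatever file you added in step 12.

```objc
// Still inside viewDidLoad, after the imageView setup.
UIImage *image = [UIImage imageNamed:@"test_image.jpg"];
if (image != nil) {
    imageView.image = image;
} else {
    NSLog(@"Error: could not read the image file");
}
```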

  14. Let us run the application and see the results. I am using the iPhone 6s Plus simulator to check my results. Voilà! It worked 😀 (I am attaching a screenshot of the simulator below). If nothing shows up on the screen, you can check the messages in the Debug area of Xcode.

    [Simulator screenshot: the image displayed in the app]

  15. But wait, we didn’t use any OpenCV functionality yet! Before applying any OpenCV functions, we have to convert the image from UIImage to OpenCV’s Mat datatype, and to display the result on the screen we have to convert the Mat back to a UIImage. With the conversions in place, we will apply three operations, sketched in the code after this list:
    1. Convert the image from RGB to grayscale.

    2. Apply a Gaussian blur to the grayscale image (you can observe a blurred version of the previous result).

    3. Apply Canny edge detection to the blurred result.
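
    The slideshows with the original results did not survive, so here is a sketch covering the UIImage↔Mat conversions and all three operations. In OpenCV 2.4 the conversion helpers live in opencv2/highgui/ios.h, and the blur and Canny parameters below are illustrative.

```objc
#import <opencv2/highgui/ios.h>   // UIImageToMat / MatToUIImage (OpenCV 2.4)

// Inside viewDidLoad, after loading the UIImage as in step 13:
cv::Mat cvImage;
UIImageToMat(image, cvImage);                        // UIImage -> cv::Mat (RGBA)

cv::Mat gray;
cv::cvtColor(cvImage, gray, cv::COLOR_RGBA2GRAY);    // 1. convert to grayscale

cv::GaussianBlur(gray, gray, cv::Size(5, 5), 1.2);   // 2. Gaussian blur

cv::Mat edges;
cv::Canny(gray, edges, 0, 50);                       // 3. Canny edge detection

imageView.image = MatToUIImage(edges);               // cv::Mat -> UIImage
```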

       

With this we are coming to the end of the very first tutorial of ‘OpenCV in iOS’. The SOURCE CODE for this tutorial is available at the following GITHUB LINK. I hope you enjoyed this tutorial. 🙂

Where to go next?
After this tutorial, you can check my next blog post about how to use the camera inside your app using OpenCV’s CvVideoCameraDelegate at OpenCV in iOS – The Camera.