Hi, after a quiet break I am back on my blog to post some new things related to optimized computer vision algorithms on mobile platforms. I have been experimenting with Android recently to come up with the easiest OpenCV setup to start developing (I will be posting about it in my next blog). In this post, I will explain how to do face detection in near real time using OpenCV’s Haar cascades. This is not an advanced tutorial on detection/object recognition, but it will help you start working on your own custom classification problems. Let us dive in!
A quick note before diving in: this blog expects that you have already read my previous blogs on OpenCV in iOS (An Introduction, The Camera) so that you can have the starter code up and running.
In this blog post, we are going to detect faces and eyes in the live video stream from your iOS device’s camera. Now start following the steps mentioned below!
Import the necessary frameworks into the project: opencv2, AVFoundation, Accelerate, CoreGraphics, CoreImage, QuartzCore, AssetsLibrary, CoreMedia, and UIKit.
Rename ViewController.m to ViewController.mm so that Xcode compiles it as Objective-C++.
Add the necessary Haar cascade files from the ‘<opencv-folder>/data/haarcascades/’ directory into the Supporting Files group of your project. You can do this by right-clicking on Supporting Files and selecting ‘Add Files to <your-project name>’.
Open ViewController.mm and add the following lines of code to enable Objective-C++, and let us also define some colors for drawing on the image to mark the detected faces and eyes.
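A sketch of what the top of the file could look like (the `FACE_COLOR`/`EYE_COLOR` names and color values are my own choices, not fixed by the tutorial):

```objectivec
// ViewController.mm -- top of file
// Guard C++ headers so they are only seen when compiling as Objective-C++
#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#import <opencv2/highgui/cap_ios.h>  // CvVideoCamera
#endif
#import "ViewController.h"

// Colors for marking detections on the 4-channel camera frames
const cv::Scalar FACE_COLOR(0, 255, 0, 255);  // green for face rectangles
const cv::Scalar EYE_COLOR(255, 0, 0, 255);   // contrasting color for eyes
```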
Now you need to edit the ViewController interface to initialise the parameters for live view, OpenCV wrappers to get camera access through AVFoundation and Cascade Classifiers.
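The class extension might look roughly like this (the member names `liveView`, `videoCamera`, `faceDetector`, and `eyeDetector` are illustrative):

```objectivec
// Class extension in ViewController.mm
@interface ViewController () <CvVideoCameraDelegate>
{
    UIImageView *liveView;               // live preview of the camera feed
    CvVideoCamera *videoCamera;          // OpenCV's AVFoundation wrapper
    cv::CascadeClassifier faceDetector;  // Haar cascade for faces
    cv::CascadeClassifier eyeDetector;   // Haar cascade for eyes
}
@end
```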
In the ViewController implementation’s viewDidLoad method write the following code to setup the OpenCV view.
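A minimal `viewDidLoad` sketch, assuming members named `liveView` and `videoCamera` were declared in the interface; the session preset and FPS are my own picks:

```objectivec
- (void)viewDidLoad {
    [super viewDidLoad];

    // The live view fills the whole screen
    liveView = [[UIImageView alloc] initWithFrame:self.view.bounds];
    [self.view addSubview:liveView];

    // Wire OpenCV's camera wrapper to the live view
    videoCamera = [[CvVideoCamera alloc] initWithParentView:liveView];
    videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionFront;
    videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPreset352x288;
    videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortrait;
    videoCamera.defaultFPS = 30;
    videoCamera.delegate = self;
}
```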
The tricky part is reading the cascade classifiers inside the project. Follow the steps suggested below to do the same and start the videoCamera!
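Since the cascades live inside the app bundle, their paths have to be resolved through NSBundle. A sketch, assuming you added `haarcascade_frontalface_alt.xml` and `haarcascade_eye.xml` (substitute whichever cascade files you actually added), with `faceDetector` and `eyeDetector` being the classifier members from the interface:

```objectivec
// Resolve the cascade file paths inside the app bundle
NSString *facePath = [[NSBundle mainBundle] pathForResource:@"haarcascade_frontalface_alt"
                                                     ofType:@"xml"];
NSString *eyePath  = [[NSBundle mainBundle] pathForResource:@"haarcascade_eye"
                                                     ofType:@"xml"];

// load() returns false on failure, so log a hint instead of crashing later
if (!faceDetector.load([facePath UTF8String]))
    NSLog(@"Could not load the face cascade!");
if (!eyeDetector.load([eyePath UTF8String]))
    NSLog(@"Could not load the eye cascade!");

[videoCamera start];  // frames now start arriving in processImage:
```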
Once the videoCamera is started, each frame has to be processed inside the processImage method!
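A sketch of the per-frame processing: detect faces on an equalized grayscale copy, then search for eyes only inside each face region. The `detectMultiScale` parameters (scale factor, minimum neighbors, minimum size) are typical defaults, not tuned values from the tutorial:

```objectivec
- (void)processImage:(cv::Mat &)image {
    // Haar cascades work on single-channel images
    cv::Mat gray;
    cv::cvtColor(image, gray, CV_BGRA2GRAY);
    cv::equalizeHist(gray, gray);  // helps under uneven lighting

    std::vector<cv::Rect> faces;
    faceDetector.detectMultiScale(gray, faces, 1.1, 2,
                                  CV_HAAR_SCALE_IMAGE, cv::Size(30, 30));
    for (size_t i = 0; i < faces.size(); i++) {
        cv::rectangle(image, faces[i], cv::Scalar(0, 255, 0, 255), 2);

        // Restrict the eye search to the face region
        cv::Mat faceROI = gray(faces[i]);
        std::vector<cv::Rect> eyes;
        eyeDetector.detectMultiScale(faceROI, eyes, 1.1, 2,
                                     CV_HAAR_SCALE_IMAGE, cv::Size(10, 10));
        for (size_t j = 0; j < eyes.size(); j++) {
            cv::Point center(faces[i].x + eyes[j].x + eyes[j].width / 2,
                             faces[i].y + eyes[j].y + eyes[j].height / 2);
            cv::circle(image, center, eyes[j].width / 2,
                       cv::Scalar(255, 0, 0, 255), 2);
        }
    }
}
```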
Now the code is complete! Please note that I am not covering the specific math behind Haar cascade detection, as I feel there are many blogs out there that explain it really well. For the code related to this blog, you can contact me via e-mail (Contact). A screenshot of my code in action is placed below!
Hello everyone, this is my second blog post in the ‘OpenCV in iOS’ series. Before starting this tutorial, it is recommended that you complete the ‘OpenCV in iOS – An Introduction‘ tutorial. In this blog post, I will be explaining how to use the camera inside your iOS app. For setting up the application in Xcode, please complete everything through step 6 of the ‘OpenCV in iOS – An Introduction‘ tutorial before you proceed to the steps mentioned below!
In this app, we need some additional frameworks to include in our project – AVFoundation, CoreGraphics, CoreImage, QuartzCore, AssetsLibrary, and CoreMedia.
We already know how to add ‘opencv2.framework‘ from the previous blog post. I will go through the process of adding one of the above mentioned frameworks (e.g. AVFoundation.framework), and likewise you can add the rest. To add ‘AVFoundation.framework‘, go to ‘Linked Frameworks and Libraries‘ and click on the ‘+’ sign. Choose the ‘AVFoundation.framework‘ and click on ‘Add‘.
Now your project navigator area should look like this.
It’s time to get our hands dirty! 🙂 Open ‘ViewController.h‘ and write the following lines of code.
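A minimal sketch of the header: import OpenCV’s iOS camera header and declare that the view controller conforms to the camera delegate protocol.

```objectivec
// ViewController.h
#import <UIKit/UIKit.h>
#import <opencv2/highgui/cap_ios.h>  // CvVideoCamera and CvVideoCameraDelegate

@interface ViewController : UIViewController <CvVideoCameraDelegate>
@end
```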
Now go to the ‘ViewController.mm‘ file and add a few lines so that C++ code can be included alongside the Objective-C code.
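The usual trick is to guard the C++ headers so they are only parsed when the file is compiled as Objective-C++:

```objectivec
// Top of ViewController.mm
#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#endif
#import "ViewController.h"
```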
Let us initialise some variables for getting camera access and for the live output from the camera.
Now set up the live view such that it fills the whole app screen.
Initialise the camera parameters and start capturing!
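The three steps above can be sketched in `viewDidLoad` as follows, assuming `imageView` and `videoCamera` members declared in the class extension; the preset, orientation, and FPS values are my own reasonable defaults:

```objectivec
- (void)viewDidLoad {
    [super viewDidLoad];

    // Live view covering the full app screen
    imageView = [[UIImageView alloc] initWithFrame:self.view.bounds];
    [self.view addSubview:imageView];

    // Configure the camera parameters and start capturing
    videoCamera = [[CvVideoCamera alloc] initWithParentView:imageView];
    videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionBack;
    videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPreset640x480;
    videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortrait;
    videoCamera.defaultFPS = 30;
    videoCamera.delegate = self;
    [videoCamera start];
}
```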
But wait! We still have to do one more step before actually testing our app. If you look at the line “@implementation ViewController”, you will find a warning: “Method ‘processImage:’ in protocol ‘CvVideoCameraDelegate’ not implemented”. To know more about CvVideoCameraDelegate, refer to this link. Coming back to our tutorial, we have to add the following lines of code to get rid of that warning.
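An empty delegate method is enough to satisfy the protocol for now; the frames pass through untouched:

```objectivec
#ifdef __cplusplus
// Called by CvVideoCamera for every captured frame
- (void)processImage:(cv::Mat &)image {
    // No processing yet -- the live view shows the raw camera feed
}
#endif
```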
And now we are ready to run the app! For this application, we have to use an actual iPad/iPhone to run and test, because we need access to the device’s camera. Now we can see the live view from our camera! 🙂
Let’s give some basic instaTouch! to our app 😉. Add the following lines of code in the ‘processImage‘ method and run the application on your device.
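The exact effect is up to you; as one illustrative live filter, a grayscale conversion works, keeping the output in the 4-channel format the live view expects:

```objectivec
- (void)processImage:(cv::Mat &)image {
    // Simple live effect: grayscale, converted back to 4 channels
    cv::Mat gray;
    cv::cvtColor(image, gray, CV_BGRA2GRAY);
    cv::cvtColor(gray, image, CV_GRAY2BGRA);
}
```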
With this we are coming to the end of this tutorial! 🙂 We have learnt how to access the camera inside the app and apply some live operations on the video. Though this is a basic tutorial, it will act as a precursor for many Augmented Reality style applications! 😀 We will try to get into the next level of Computer Vision app development in our next tutorial! Till then, stay tuned… 🙂 Feel free to comment your suggestions/doubts related to this tutorial.
The SOURCE CODE for the following tutorial is available at the following GITHUB LINK.
Hello World! This is my first official blog post related to Computer Vision. I guess the title gives you the glimpse of what we are trying to achieve in this session.
What is OpenCV?
OpenCV is an open-source, BSD-licensed computer vision library which is available on all major platforms (Android, iOS, Linux, Mac OS X, Windows) and is primarily written in C++ (with bindings available for Python, Java, and even MATLAB). You can check the documentation of OpenCV at http://docs.opencv.org.
In this session, we are trying to design a basic iOS app which uses OpenCV for the image processing part of the app. I will be using Xcode v7.2.1 and OpenCV v2.4.13.
Download opencv2.framework from the following link. Unzip the downloaded file and keep it in your workspace.
Now open Xcode and start the new project by clicking on ‘Create a new Xcode project‘ in the left column of the following image.
This will take you to a new window where you can select the template for your new project. Make sure that you select the ‘Single View Application‘ under the iOS -> Application and click ‘Next‘.
Choose the name you want to give the application and fill it in the ‘Product Name‘ field. Choose the ‘Language‘ as Objective-C and ‘Devices‘ as Universal. Now click ‘Next‘, choose the location of your workspace, and click on ‘Create‘. From now onwards, I will refer to the project folder location as <project_folder>.
Now move the unzipped version of opencv2.framework to “<project_folder>/” . Now go to settings of your Xcode project and select General -> Linked Frameworks and Libraries -> click on ‘+’ sign (to add a new framework to the project) -> Click on ‘Add Other…’ -> browse to “<workspace>/<project_folder>/opencv2.framework” -> Select it and click ‘Open’. Now your project navigator area will look like this.
In this tutorial, we will be using Objective-C and C++ to design the app. To achieve this, we have to rename the ‘ViewController.m‘ file to ‘ViewController.mm‘. This simple naming convention will notify Xcode that we will be mixing Objective-C and C++.
Now let’s start the coding part 🙂 ! Right now, your ‘ViewController.mm‘ file should look like this.
Now add the following code, so that we can mix C++ code in here.
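The standard pattern is an `#ifdef __cplusplus` guard around the OpenCV includes, so they only apply when the file is compiled as Objective-C++:

```objectivec
// ViewController.mm
#ifdef __cplusplus
#import <opencv2/opencv.hpp>
#import <opencv2/highgui/ios.h>  // UIImageToMat / MatToUIImage helpers (OpenCV 2.4)
#endif
#import "ViewController.h"
```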
Let us setup the view.
Now let us implement the ViewController part. We have to set up the imageView such that it takes up the entire app screen. Let us also add a small piece of code to preserve the aspect ratio of the image.
Now add our imageView as a subview.
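Putting the view setup together, a sketch assuming an `imageView` member was declared in the interface:

```objectivec
- (void)viewDidLoad {
    [super viewDidLoad];

    // Full-screen image view that preserves the image's aspect ratio
    imageView = [[UIImageView alloc] initWithFrame:self.view.bounds];
    imageView.contentMode = UIViewContentModeScaleAspectFit;

    // Add it as a subview so it actually appears on screen
    [self.view addSubview:imageView];
}
```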
Before moving to next part, copy an image of your choice to “<workspace>/<project_folder>/” location and go to project navigator area in Xcode and select ‘Supporting Files‘ and click on ‘Add Files to “<your_project_name>”… ‘. Navigate to <project_folder> location and select the image and click on ‘Add‘ button. You can find the image under the ‘Supporting Files‘ folder.
Now, let us write code to read the image and display it on the screen. If the image file is not present, let us display an error message.
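Something along these lines (the file name `test_image.jpg` is a placeholder for whatever image you added to Supporting Files):

```objectivec
// Inside viewDidLoad, after setting up imageView
UIImage *image = [UIImage imageNamed:@"test_image.jpg"];  // your file name here
if (image != nil) {
    imageView.image = image;          // show it on screen
} else {
    NSLog(@"Could not load the image file!");  // visible in the Debug area
}
```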
Let us run the application and see the results. I am using the iPhone 6s Plus simulator to check my results. Voila! It worked 😀 (I am attaching a screenshot of the simulator). If it didn’t show anything on the screen, you can check the messages in the Debug area of Xcode.
But wait, we didn’t use any OpenCV functionality till now! Before applying any OpenCV functions, we have to convert the image from UIImage to OpenCV’s Mat datatype. And for displaying the image on the screen, we have to convert from Mat back to UIImage.
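The round trip uses the `UIImageToMat`/`MatToUIImage` helpers from OpenCV’s iOS header; a sketch, assuming `image` is the UIImage loaded earlier:

```objectivec
// UIKit <-> OpenCV round trip (helpers from <opencv2/highgui/ios.h>)
cv::Mat cvImage;
UIImageToMat(image, cvImage);        // UIImage -> cv::Mat (RGBA)

// ... apply OpenCV operations on cvImage here ...

imageView.image = MatToUIImage(cvImage);  // cv::Mat -> UIImage for display
```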
Convert the image from RGB to Grayscale:
Apply Gaussian Blur to the above Grayscale image: (you can observe the blurred version of the above result here)
Apply Canny Edge detection on the above result:
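Put together, the three operations above might look like this, assuming `cvImage` is the Mat obtained from `UIImageToMat`; the kernel size and Canny thresholds are my own illustrative values:

```objectivec
cv::Mat gray, blurred, edges;

// 1. RGBA -> grayscale
cv::cvtColor(cvImage, gray, CV_RGBA2GRAY);

// 2. Gaussian blur smooths out noise before edge detection
cv::GaussianBlur(gray, blurred, cv::Size(5, 5), 1.2);

// 3. Canny edge detection (low/high thresholds are tunable)
cv::Canny(blurred, edges, 50, 150);

imageView.image = MatToUIImage(edges);  // display the result
```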
With this we are coming to an end of the very first tutorial of ‘OpenCV in iOS’. The SOURCE CODE for the following tutorial is available at the following GITHUB LINK. I hope you enjoyed this tutorial. 🙂
Where to go next?
After this tutorial, you can check my next blog post about how to use the camera inside your app using OpenCV’s CvVideoCameraDelegate at OpenCV in iOS – The Camera.