Emobot: Driver Emotion Detection

An iOS app with Dlib and OpenCV

Jan 06, 2019

Before starting this project, I didn't have much experience with Swift or iOS development. However, the really difficult part was not writing Swift code or designing an emotion detection algorithm. Instead, what gave me the most trouble was importing OpenCV and Dlib into a Swift program. While this is not a very complicated computer vision app, it is a good example of building an iOS app with OpenCV and Dlib.

As I said, I am new to iOS programming. I tried hard to follow good iOS design practices and conventions. Please let me know if there are any mistakes, and feel free to offer advice; I really appreciate it.

1 Project design

Emobot
|   README.md
+---dlib
|   \---(dlib source code)
+---Emobot
|   |   AppDelegate.swift
|   |   Emobot-Bridging-Header.h
|   |   Emobot.h
|   |   Emobot.mm
|   |   FirstViewController.swift
|   |   Info.plist
|   |   libdlib.a
|   |   SceneDelegate.swift
|   |   SecondViewController.swift
|   |   shape_predictor_68_face_landmarks.dat
|   +---Assets.xcassets
|   |   |   Contents.json
|   |   +---AppIcon.appiconset
|   |   |   \---(icons)
|   |   +---first.imageset
|   |   |   \---(icons for first view)
|   |   +---second.imageset
|   |       \---(icons for second view)
|   +---Base.lproj
|   |       LaunchScreen.storyboard
|   |       Main.storyboard
|   \---opencv2.framework
|       \---(opencv headers)
\---Emobot.xcodeproj
    \---(Xcode data)

Emobot aims to detect the driver's emotion (looking left or right, fatigue, distraction) through the front camera of an iOS device and send a status code to the BMW iDrive system for HUD notifications. In this app, I used the AVFoundation framework to access the front camera and feed images to the backend algorithms. Since Dlib and OpenCV are written in C++, I chose Objective-C++ for the emotion detection algorithm, which makes bridging between Swift and C++ straightforward.

More specifically, FirstViewController.swift captures the camera input and sends each frame to the Emobot class (Emobot.h and Emobot.mm) for detection, as sketched below. SecondViewController.swift serves as the "about" page.
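Below is a minimal sketch of the capture side. The Emobot detector class and its API are my assumptions here (the actual class is built in section 5); everything else is standard AVFoundation.

// FirstViewController.swift -- a minimal capture sketch; the Emobot class
// and its API are assumptions, fleshed out in section 5.
// Note: camera access requires NSCameraUsageDescription in Info.plist.
import UIKit
import AVFoundation

class FirstViewController: UIViewController {
    let session = AVCaptureSession()
    let detector = Emobot()  // Objective-C++ class from Emobot.h / Emobot.mm

    override func viewDidLoad() {
        super.viewDidLoad()
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video, position: .front),
              let input = try? AVCaptureDeviceInput(device: camera) else { return }
        session.addInput(input)

        let output = AVCaptureVideoDataOutput()
        output.videoSettings =
            [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera"))
        session.addOutput(output)
        session.startRunning()
    }
}
// Each frame arrives in the AVCaptureVideoDataOutputSampleBufferDelegate
// callback, which hands it to the detector (see section 5, step 3).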

2 Import OpenCV

Step 1: Download OpenCV 3.4.5 (the prebuilt iOS framework, opencv2.framework).

Step 2: Add OpenCV to Xcode. Drag and drop opencv2.framework into the project. Make sure to check the "Copy items if needed" and "Create groups" options.

Step 3: Link OpenCV with the project. Make sure the path of opencv2.framework is listed in "Build Settings - Library Search Paths" and the framework itself is listed in "Build Phases - Link Binary With Libraries".

References:
[1] Anurag Ajwani, "Building a simple lane detection iOS app using OpenCV", link

3 Import Dlib

Step 1: Download Dlib 19.19.

Step 2: Build Dlib for iOS. Run the following commands from the Dlib root directory; an Xcode project will be created in ./examples/build/dlib_build/.

cd examples
mkdir build
cd build
cmake -G Xcode ..
cmake --build . --config Release

Step 3: Pre-compile the Dlib library. Set the following compiler flags in Xcode and build libdlib.a.

-DDLIB_JPEG_SUPPORT
-DDLIB_NO_GUI_SUPPORT
-DNDEBUG
-DDLIB_USE_BLAS
-DDLIB_USE_LAPACK
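If you prefer the command line, the same defines can be passed while generating the Xcode project instead of setting them in the GUI; the following invocation is a sketch of that equivalent, not what I used in the original build:

cmake -G Xcode -DCMAKE_CXX_FLAGS="-DDLIB_JPEG_SUPPORT -DDLIB_NO_GUI_SUPPORT -DNDEBUG -DDLIB_USE_BLAS -DDLIB_USE_LAPACK" ..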

Step 4: Add Dlib to the iOS app project. Drag and drop libdlib.a into the project. Make sure to check the "Copy items if needed" and "Create groups" options. Add its path to "Build Settings - Library Search Paths" and make sure it is listed in "Build Phases - Link Binary With Libraries". Copy the Dlib source folder into the project directory (but do not add it to the Xcode project). Then add its parent directory (the path containing the dlib folder, not the folder itself) to "Build Settings - Header Search Paths" so that includes of the form dlib/... resolve.

Step 5: Add the Accelerate framework. Add Accelerate.framework, which provides the BLAS symbols Dlib needs, to "Build Phases - Link Binary With Libraries".

References:
[1] davisking@Github, "Instruction to build dlib", link
[2] Rob Sanders, "How to Build Dlib for iOS", link
[3] lbsweek@StackOverflow, "How to Build Dlib for iOS", link

4 Build iOS App View

References:
[1] zweigraf@Github, "Face Landmarking on iPhone", link
[2] Anurag Ajwani, "Building a simple lane detection iOS app using OpenCV", link

5 Build Backend Algorithms

Step 1: Create the Objective-C++ detector class. Choose "Create Bridging Header" when Xcode asks how to configure an Objective-C bridging header.
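The generated bridging header (Emobot-Bridging-Header.h in the project tree) then only needs to import the detector's header so Swift can see the class:

// Emobot-Bridging-Header.h
#import "Emobot.h"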

Step 2: Build the detector. Add the detection algorithms to the detector class. In my project, Dlib provides the frontal face detector and the 68-point shape predictor, and OpenCV takes care of the image manipulation for the output; a sketch of the Dlib side is shown below.
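Here is a minimal sketch of such a detector. The class name and the landmark model file come from the project tree; the method name statusCodeForSampleBuffer: and the status-code scheme are my assumptions, and OpenCV-based output drawing is deliberately kept out of this file (see issue 4 under "Important details" for why).

// Emobot.h -- a minimal sketch; the method name and status codes are assumptions.
#import <Foundation/Foundation.h>
#import <CoreMedia/CoreMedia.h>

@interface Emobot : NSObject
- (NSInteger)statusCodeForSampleBuffer:(CMSampleBufferRef)sampleBuffer;
@end

// Emobot.mm
#import "Emobot.h"
#include <dlib/image_processing/frontal_face_detector.h>
#include <dlib/image_processing.h>

@implementation Emobot {
    dlib::frontal_face_detector _detector;
    dlib::shape_predictor _predictor;
}

- (instancetype)init {
    if (self = [super init]) {
        _detector = dlib::get_frontal_face_detector();
        // Assumes the .dat model listed in the project tree is bundled with the app.
        NSString *path = [[NSBundle mainBundle]
            pathForResource:@"shape_predictor_68_face_landmarks" ofType:@"dat"];
        dlib::deserialize(path.UTF8String) >> _predictor;
    }
    return self;
}

- (NSInteger)statusCodeForSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    size_t width  = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    size_t stride = CVPixelBufferGetBytesPerRow(pixelBuffer);
    uint8_t *base = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);

    // Copy the frame into a dlib image (assumes kCVPixelFormatType_32BGRA).
    dlib::array2d<dlib::bgr_pixel> img;
    img.set_size((long)height, (long)width);
    for (size_t r = 0; r < height; ++r) {
        uint8_t *row = base + r * stride;
        for (size_t c = 0; c < width; ++c) {
            img[r][c] = dlib::bgr_pixel(row[c * 4], row[c * 4 + 1], row[c * 4 + 2]);
        }
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    std::vector<dlib::rectangle> faces = _detector(img);
    if (faces.empty()) return 0;  // hypothetical status code: no face detected
    dlib::full_object_detection shape = _predictor(img, faces[0]);
    // A real implementation would derive head pose / fatigue / distraction
    // from the 68 landmarks in shape; this sketch only confirms detection.
    return 1;  // hypothetical status code: face and landmarks found
}
@end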

Step 3: Invoke the detector class from the Swift view controller, as sketched below.
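Continuing the sketch from section 1, the capture delegate hands each frame to the detector; statusCode(forSampleBuffer:) is the Swift-imported name of the hypothetical Objective-C++ method sketched in step 2.

// FirstViewController.swift (continued from the sketch in section 1)
import AVFoundation

extension FirstViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        let status = detector.statusCode(forSampleBuffer: sampleBuffer)
        // Forward the status code to the iDrive/HUD notification layer here.
        print("driver status code: \(status)")
    }
}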

References:
[1] Anurag Ajwani, "Building a simple lane detection iOS app using OpenCV", link

Important details

1. If you receive this error when building the iOS app:

_USER_ERROR__missing_dlib_all_source_cpp_file__OR__inconsistent_use_of_DEBUG_or_ENABLE_ASSERTS_preprocessor_directives

Make sure you build the Dlib library and the iOS app with the same configuration, "Debug" or "Release".

2. If you receive errors such as

error: undefined reference to
`dlib::base64::base64()'
error: undefined reference to
`dlib::base64::~base64()'

A fix that worked for me was to add dlib/all/source.cpp to "Build Phases - Compile Sources" in the iOS app project.

3. If you receive errors such as

In matrix_trsm.h:15:18: error: conflicting types for 'cblas_strsm'
void cblas_strsm(const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side,
In matrix_trsm.h:21:18: error: conflicting types for 'cblas_dtrsm'
void cblas_dtrsm(const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side,

This issue is caused by name conflicts with the Accelerate framework. As mentioned in the SourceForge reference below, you can either rename the functions to cblas_strsmFIX and cblas_dtrsmFIX respectively, or surround the cblas_strsm and cblas_dtrsm declarations in Dlib's matrix_trsm.h with #ifndef __VECLIB__.
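For the second option, the guarded declarations in Dlib's matrix_trsm.h would look roughly as follows; this is a sketch using the standard CBLAS trsm signatures, which is what the header declares:

#ifndef __VECLIB__
void cblas_strsm(const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side,
                 const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA,
                 const enum CBLAS_DIAG Diag, const int M, const int N,
                 const float alpha, const float *A, const int lda,
                 float *B, const int ldb);
void cblas_dtrsm(const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side,
                 const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA,
                 const enum CBLAS_DIAG Diag, const int M, const int N,
                 const double alpha, const double *A, const int lda,
                 double *B, const int ldb);
#endif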

References:
[1] Kevin Wood, David, and Robert Nitsch, "iOS: cblas_conflict: in matrix_trsm.h", link

4. If you receive errors such as

Conflicting types for 'cblas_sgemv' in cblas.h
Conflicting types for 'cblas_dgemv' in cblas.h

This is because of name conflicts between vecLib (part of the Accelerate framework) and Dlib when BLAS is enabled. I did not find a good general fix for these conflicts (other than the renaming approach from the previous issue). However, I noticed that in many other projects on GitHub, people split the pipeline into multiple files and avoid mixing heavy OpenCV and Dlib code in a single file. More specifically, do not import opencv2/imgcodecs/ios.h in the files that use Dlib, as illustrated below. (To be honest, I never fully figured out why these name conflicts happen or how to fix them cleanly. Please leave me a message if you know.)
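To illustrate that separation, the UIImage/cv::Mat conversion can live in its own wrapper file, so the translation units that include Dlib never see the OpenCV iOS header. The file and function names below are hypothetical:

// ImageConverter.mm -- hypothetical helper; the only file that includes
// the OpenCV iOS header, keeping it away from the Dlib translation units.
#import <UIKit/UIKit.h>
#import <opencv2/imgcodecs/ios.h>

cv::Mat matFromUIImage(UIImage *image) {
    cv::Mat mat;
    UIImageToMat(image, mat);   // declared in opencv2/imgcodecs/ios.h
    return mat;
}

UIImage *uiImageFromMat(const cv::Mat &mat) {
    return MatToUIImage(mat);   // also declared in the iOS header
}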