Before starting this project, I didn't have much experience with Swift or iOS development. However, the hardest part was not writing Swift code or designing an emotion detection algorithm; what caused me the most trouble was importing OpenCV and Dlib into a Swift program. While this is not a very complicated computer vision app, it serves as a good example of how to build an iOS app with OpenCV and Dlib.
Like I said, I am really new to iOS programming. I tried hard to follow good iOS design practices and conventions. Please let me know if there are any mistakes, and feel free to offer any advice. I really appreciate it.
1 Project design
```
Emobot
|   README.md
+---dlib
|   \---(dlib source code)
+---Emobot
|   |   AppDelegate.swift
|   |   Emobot-Bridging-Header.h
|   |   Emobot.h
|   |   Emobot.mm
|   |   FirstViewController.swift
|   |   Info.plist
|   |   libdlib.a
|   |   SceneDelegate.swift
|   |   SecondViewController.swift
|   |   shape_predictor_68_face_landmarks.dat
|   +---Assets.xcassets
|   |   |   Contents.json
|   |   +---AppIcon.appiconset
|   |   |   \---(icons)
|   |   +---first.imageset
|   |   |   \---(icons for first view)
|   |   \---second.imageset
|   |       \---(icons for second view)
|   +---Base.lproj
|   |       LaunchScreen.storyboard
|   |       Main.storyboard
|   \---opencv2.framework
|       \---(opencv headers)
\---Emobot.xcodeproj
    \---(Xcode data)
```
Emobot aims to detect the driver's state (looking left or right, fatigue, distraction) through the front camera of an iOS device and send a status code to the BMW iDrive system for HUD notifications. In this app, I used the AVFoundation framework to access the front camera and send images to the backend algorithms. Since Dlib and OpenCV are written in C++, I chose Objective-C++ for the emotion detection algorithm, as it bridges easily between Swift and C++.
FirstViewController.swift captures the camera input and sends each image to the Emobot class (in Emobot.mm) for detection.
SecondViewController.swift serves as the "about page".
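To make this design concrete, the capture side of FirstViewController can be sketched roughly as follows. This is a minimal sketch, not the project's exact code: error handling and the sample-buffer-to-UIImage conversion are omitted.

```swift
import AVFoundation
import UIKit

final class FirstViewController: UIViewController,
                                 AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()

    override func viewDidLoad() {
        super.viewDidLoad()
        // Use the front camera, since the app watches the driver's face.
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video,
                                                   position: .front),
              let input = try? AVCaptureDeviceInput(device: camera)
        else { return }
        session.addInput(input)

        // Deliver raw frames to this controller on a background queue.
        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self,
                                       queue: DispatchQueue(label: "camera.frames"))
        session.addOutput(output)
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Each captured frame arrives here; convert it to a UIImage
        // and hand it to the backend detection algorithm (section 5).
    }
}
```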
2 Import OpenCV
Step 1: Download OpenCV 3.4.5.
Step 2: Add OpenCV to Xcode. Drag and drop opencv2.framework into the project. Make sure to check the "Copy items if needed" and "Create groups" options.
Step 3: Link OpenCV with the project. Make sure the path of opencv2.framework is listed in "Build Settings - Library Search Paths" and the framework itself is listed in "Build Phases - Link Binary With Libraries".
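As a quick sanity check that the framework actually links, you can import it from any Objective-C++ file and log the OpenCV version. This is just an illustrative snippet, not part of the project; note that OpenCV headers should be imported before any Apple headers to avoid macro clashes.

```objc
// Any .mm file in the project will do.
#import <opencv2/opencv.hpp>
#import <Foundation/Foundation.h>

void logOpenCVVersion(void) {
    // CV_VERSION is defined by OpenCV's version header.
    NSLog(@"OpenCV version: %s", CV_VERSION);
}
```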
 Anurag Ajwani, "Building a simple lane detection iOS app using OpenCV", link
3 Import Dlib
Step 1: Download Dlib 19.19.
Step 2: Build Dlib for iOS. Run the following commands from the Dlib root directory; an Xcode project will be created in examples/build.

```shell
cd examples
mkdir build
cd build
cmake -G Xcode ..
cmake --build . --config Release
```
Step 3: Pre-compile the Dlib library. Set the following compiler flags in Xcode and build:

```
-DDLIB_JPEG_SUPPORT -DDLIB_NO_GUI_SUPPORT -DNDEBUG -DDLIB_USE_BLAS -DDLIB_USE_LAPACK
```
Step 4: Add Dlib to the iOS app project. Drag and drop libdlib.a into the iOS app project. Make sure to check the "Copy items if needed" and "Create groups" options. Add its path to "Build Settings - Library Search Paths" and make sure it is listed in "Build Phases - Link Binary With Libraries". Copy the Dlib source folder into the project directory (do not add it to the Xcode project). Add the path to the Dlib source folder (the containing path, not the folder itself) to "Build Settings - Header Search Paths".
Step 5: Add the Accelerate framework. Add Accelerate.framework, which contains the necessary BLAS symbols, to "Build Phases - Link Binary With Libraries".
4 Build iOS app View
5 Build Backend Algorithms
Step 1: Create an Objective-C++ detector class. Choose "Create Bridging Header" when Xcode asks how to configure an Objective-C bridging header.
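For concreteness, here is a hedged sketch of what the bridged interface could look like; the method name is an assumption for illustration, not the project's exact API. The key constraint is that the header exposed to Swift must be pure Objective-C, while all C++ types stay hidden inside the .mm file.

```objc
// Emobot-Bridging-Header.h — everything imported here becomes visible to Swift.
#import "Emobot.h"

// Emobot.h — a pure Objective-C interface; no C++ types may appear here,
// because Swift cannot see them.
#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>

@interface Emobot : NSObject
// Hypothetical API: analyze one frame and return a driver-status code.
- (int)detectStatusFrom:(UIImage *)image;
@end
```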
Step 2: Build the detector. Add the detection algorithms to the detector class. In my project, Dlib provides the frontal face detector and the shape predictor, and OpenCV takes care of the image manipulation for output.
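A rough sketch of what the detector's .mm file can look like, assuming the Dlib frontal face detector and shape predictor mentioned above; the UIImage-to-Dlib conversion, the emotion logic, and the status codes are elided or hypothetical.

```objc
// Emobot.mm — Objective-C++, so Objective-C and C++ mix freely here.
#import "Emobot.h"
#include <dlib/array2d.h>
#include <dlib/image_processing/frontal_face_detector.h>
#include <dlib/image_processing.h>

@implementation Emobot {
    dlib::frontal_face_detector _detector;   // HOG-based face detector
    dlib::shape_predictor _predictor;        // 68-point landmark model
}

- (instancetype)init {
    if (self = [super init]) {
        _detector = dlib::get_frontal_face_detector();
        // Load the landmark model bundled with the app.
        NSString *path = [[NSBundle mainBundle]
            pathForResource:@"shape_predictor_68_face_landmarks" ofType:@"dat"];
        dlib::deserialize(path.UTF8String) >> _predictor;
    }
    return self;
}

- (int)detectStatusFrom:(UIImage *)image {
    dlib::array2d<dlib::bgr_pixel> img;
    // ... convert the UIImage into img here (omitted) ...
    std::vector<dlib::rectangle> faces = _detector(img);
    if (faces.empty()) return 0;  // hypothetical "no face" status
    dlib::full_object_detection shape = _predictor(img, faces[0]);
    // ... derive looking-direction / fatigue status from the 68 landmarks ...
    return 1;  // hypothetical "ok" status
}
@end
```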
Step 3: Invoke the detector class in Swift ViewController.
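Since the bridging header makes the class visible to Swift, the invocation is then direct (using the same hypothetical method name as above):

```swift
// Inside FirstViewController.swift.
let detector = Emobot()

func process(_ frame: UIImage) {
    let status = detector.detectStatusFrom(frame)  // hypothetical API
    // Send `status` on to the iDrive HUD notification pipeline.
}
```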
Anurag Ajwani, "Building a simple lane detection iOS app using OpenCV", link
1. If you receive this error when building the iOS app:
Make sure you use the same mode, "DEBUG" or "RELEASE", when building the Dlib library and the iOS app.
2. If you receive errors such as

```
error: undefined reference to `dlib::base64::base64()'
error: undefined reference to `dlib::base64::~base64()'
```

a fix that worked for me was to add dlib/all/source.cpp to "Build Phases - Compile Sources" in the iOS app project.
3. If you receive errors such as
```
In matrix_trsm.h:15:18: error: conflicting types for 'cblas_strsm'
void cblas_strsm(const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side,
In matrix_trsm.h:21:18: error: conflicting types for 'cblas_dtrsm'
void cblas_dtrsm(const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side,
```

This issue is due to name conflicts with the Accelerate framework. As mentioned in the SourceForge reference below, you can either rename the functions to cblas_strsmFIX and cblas_dtrsmFIX respectively, or surround the cblas_strsm and cblas_dtrsm declarations in Dlib's matrix_trsm.h with a preprocessor guard so they are not declared on Apple platforms.
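For the second option, the guard can look roughly like this. This is a sketch based on the SourceForge thread below; treat the choice of __APPLE__ as an assumption and check the thread for the exact fix.

```cpp
// dlib/matrix/matrix_trsm.h (patched): skip these declarations on Apple
// platforms, where Accelerate's cblas.h already declares them.
#ifndef __APPLE__
void cblas_strsm(const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side,
                 /* ... remaining parameters unchanged ... */);
void cblas_dtrsm(const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side,
                 /* ... remaining parameters unchanged ... */);
#endif
```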
Kevin Wood, David, and Robert Nitsch, "iOS: cblas_conflict: in matrix_trsm.h", link
4. If you receive errors such as
```
Conflicting types for 'cblas_sgemv' in cblas.h
Conflicting types for 'cblas_dgemv' in cblas.h
```

This is because there are name conflicts between vecLib in the Accelerate framework and Dlib when BLAS is enabled. I did not find a good workaround for these name conflicts (other than the fix in the previous issue). However, I noticed that in many other projects on GitHub, people split the pipeline into multiple parts and avoid using heavy OpenCV and Dlib methods in the same file. More specifically, do not import opencv2/imgcodecs/ios.h in the detection algorithms that use Dlib. (To be honest, I never fully figured out how these name conflicts happen or how to fix them properly. Please leave me a message if you know a fix.)