Core ML After Dark

So you’ve made this great social media app, and you are about to sit back and wait for the money to roll in. But there’s a problem: people keep trying to upload nude photos to it.

What if we could have a trained machine learning model that detects not-safe-for-work (NSFW) content right on an iOS device, before any image is uploaded to a server?

Developing a trained model like that is way out of scope for this blog post. Luckily, the good people at Yahoo have already done the work and open-sourced their trained Caffe model. You can check it all out here. The question now is: how can we use this on an iOS device?

Introducing Core ML

Core ML was introduced in iOS 11. It lets you run trained machine learning models directly on an iOS device, without sending data to a server.

One major benefit: if you don’t send data anywhere, you don’t need a data connection, which is great for an offline-first design. Keeping data on the device also adds an extra layer of user privacy.

Here is Apple’s documentation on Core ML. It covers the framework in more depth, along with the APIs that sit on top of Core ML, like Vision, Foundation, and GameplayKit. To solve our problem of detecting NSFW content in images, we are going to use the Vision API.

The first thing you need to work with Core ML in your app is an MLModel file. Apple has created a Python package called Core ML Tools that converts trained models from supported frameworks and libraries into MLModel files. You can find the list of supported models and tools on this page.

If you don’t have a machine learning model or don’t feel like converting one, you can download MLModel files from http://coreml.store. Here is the MLModel file that I used for this example app.

The completed code is available at https://github.com/dcandre/Core-ML-After-Dark.

Our Reference Application

To start, create a Single View App in Xcode. I am using Xcode 9.2 and Swift 4.0.3, and my iPhone is running iOS 11.2.1. Once you have created your application, create a new group called PhotoSelector.

Adding an MLModel file to an Xcode project is super simple. All you do is drag the file into your Project Navigator. Go ahead and drag the Nudity.mlmodel file into our PhotoSelector group. You should get a popup to “choose options for adding these files”. Make sure that “Copy items if needed” is checked. After that, Xcode will create an interface for you from the MLModel file. In our case, that class will be called Nudity.
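
The generated class also exposes typed prediction methods you could call directly, without Vision. Here is a minimal sketch, assuming the model’s image input is named data (the name comes from the model file, so check the generated interface in Xcode):

import CoreML

// Hypothetical direct use of the generated class, bypassing Vision.
// The pixel buffer must already match the model's expected size and
// pixel format; Vision handles that conversion for us, which is why
// the rest of this post uses the Vision API instead.
func classifyDirectly(pixelBuffer: CVPixelBuffer) {
    if let output = try? Nudity().prediction(data: pixelBuffer) {
        print(output)
    }
}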

Image Detector

Now we can create a class that will detect NSFW content in images. Create two Swift files called NudityDetectorDelegate.swift and NudityDetector.swift.

NudityDetectorDelegate Protocol

import Foundation

// Class-bound so NudityDetector can hold a weak reference to its delegate.
protocol NudityDetectorDelegate: class {
    func nudityDetectorResults(doesContainNudity: Bool, confidence: Float)
}

NudityDetector Class

import Foundation
import CoreML
import Vision

class NudityDetector {
    
    // Weak to avoid a retain cycle with the object that owns the detector.
    weak var delegate: NudityDetectorDelegate?
    
    func doesContainNudity(_ ciImage: CIImage) {
        
        // Wrap the Xcode-generated Core ML model for use with Vision.
        guard let vnCoreMLModel = try? VNCoreMLModel(for: Nudity().model) else {
            fatalError("The Core ML model could not be loaded.")
        }
        
        let vnCoreMLRequest = VNCoreMLRequest(model: vnCoreMLModel) { (vnRequest, error) in
            
            guard let vnClassificationObservations = vnRequest.results as? [VNClassificationObservation] else {
                fatalError("The image could not be processed by the model.")
            }
            
            // Vision orders the observations from highest confidence to lowest.
            if let firstObservation = vnClassificationObservations.first {
                
                print(firstObservation.identifier)
                
                let doesContainNudity = firstObservation.identifier == "NSFW"
                
                self.delegate?.nudityDetectorResults(doesContainNudity: doesContainNudity, confidence: firstObservation.confidence)
            }
        }
        
        // The request handler performs the image analysis request synchronously.
        let vnImageRequestHandler = VNImageRequestHandler(ciImage: ciImage)
        
        do {
            try vnImageRequestHandler.perform([vnCoreMLRequest])
        }
        catch {
            print(error)
        }
    }
}

The NudityDetector class exposes a single method, doesContainNudity. Pass it a CIImage, and it will classify the image using the Core ML model.

To use the model in code, we wrap the generated Nudity model in a VNCoreMLModel object. Next, we create a VNCoreMLRequest.

The interesting part of the VNCoreMLRequest is its completion handler; this is where we will call the nudityDetectorResults function on the NudityDetectorDelegate. Inside the completion handler, we cast vnRequest.results to an array of VNClassificationObservation objects.

Each observation has an identifier, which will be either “NSFW” or “SFW”, and a confidence value, which is a Float between 0 and 1. I use the first observation because the array is ordered from highest confidence to lowest.
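
If you are curious what the model returned for both labels, you can log every observation. A quick sketch that drops into the same completion handler:

// Log every classification the model returned. For a binary
// classifier like this one, the two confidences should sum to
// roughly 1.
for observation in vnClassificationObservations {
    print("\(observation.identifier): \(observation.confidence)")
}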

With the VNCoreMLRequest ready, I can create a VNImageRequestHandler, which actually performs the Vision image analysis request. We now have an interface that uses our Core ML model to detect nudity in our images.
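
One caveat: the handler’s perform(_:) method runs synchronously, so classification blocks whichever queue calls it. Here is a minimal sketch of pushing that work to a background queue instead; note that the completion handler, and therefore the delegate call, will then also arrive off the main queue, so hop back to the main queue before touching any UI:

// Sketch: running the Vision request off the main thread. This
// replaces the last few lines of doesContainNudity above.
DispatchQueue.global(qos: .userInitiated).async {
    let vnImageRequestHandler = VNImageRequestHandler(ciImage: ciImage)
    do {
        try vnImageRequestHandler.perform([vnCoreMLRequest])
    }
    catch {
        print(error)
    }
}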

Selecting An Image

We need to let users select an image. Create two Swift files called ImageSelectorDelegate.swift and ImageSelector.swift.

ImageSelectorDelegate Protocol

import Foundation
import UIKit

// Class-bound so ImageSelector can hold a weak reference to its delegate.
protocol ImageSelectorDelegate: class {
    func imageSelected(selectedImage: UIImage)
}

ImageSelector Class

import Foundation
import UIKit

class ImageSelector: NSObject, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
    
    private let uiImagePickerController = UIImagePickerController()
    
    // Weak to avoid a retain cycle with the presenting view controller.
    weak var delegate: ImageSelectorDelegate?
    
    func showCamera(parentUIViewController: UIViewController) {
        showImageSelector(uiImagePickerControllerSourceType: .camera, parentUIViewController: parentUIViewController)
    }
    
    func showPhotoLibrary(parentUIViewController: UIViewController) {
        showImageSelector(uiImagePickerControllerSourceType: .photoLibrary, parentUIViewController: parentUIViewController)
    }
    
    func hideImageSelector() {
        uiImagePickerController.dismiss(animated: true, completion: nil)
    }
    
    // UIImagePickerControllerDelegate: called when the user picks a photo.
    // The cropped version is also available under UIImagePickerControllerEditedImage.
    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
        
        if let userSelectedImage = info[UIImagePickerControllerOriginalImage] as? UIImage {
            delegate?.imageSelected(selectedImage: userSelectedImage)
        }
    }
    
    private func showImageSelector(uiImagePickerControllerSourceType: UIImagePickerControllerSourceType, parentUIViewController: UIViewController) {
        
        // Do nothing if the device lacks this source type
        // (e.g. the camera on the iOS Simulator).
        if UIImagePickerController.isSourceTypeAvailable(uiImagePickerControllerSourceType) {
            
            uiImagePickerController.sourceType = uiImagePickerControllerSourceType
            uiImagePickerController.delegate = self
            uiImagePickerController.allowsEditing = true
            
            parentUIViewController.present(uiImagePickerController, animated: true, completion: nil)
        }
    }
}

Basically, I am encapsulating UIImagePickerController in the ImageSelector class. The imageSelected function will be called on the ImageSelectorDelegate when a user has chosen a photo. Just a reminder: if you are going to use UIImagePickerController, your class needs to implement both the UIImagePickerControllerDelegate and UINavigationControllerDelegate protocols.
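
With that in place, presenting the picker from a view controller that conforms to ImageSelectorDelegate only takes a couple of lines. A quick sketch; keep the ImageSelector in a property so it stays alive while the picker is on screen:

// Sketch: presenting the picker from a conforming view controller.
imageSelector.delegate = self
imageSelector.showCamera(parentUIViewController: self)
// Or, to choose from the photo library instead:
// imageSelector.showPhotoLibrary(parentUIViewController: self)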

How They Work Together

We can create a UI that will use the ImageSelector class to choose an image and the NudityDetector class to tell if that image is too hot for the internet. I renamed the default ViewController class to PhotoSelectorViewController and moved it into the PhotoSelector group.

On my main storyboard, I embedded the PhotoSelectorViewController view in a Navigation Controller.

I added a camera Bar Button Item and a UIImageView to the PhotoSelectorViewController view. Then I created an outlet for the image view and an action for the camera button on the PhotoSelectorViewController.

The PhotoSelectorViewController class has to implement the ImageSelectorDelegate and NudityDetectorDelegate protocols to use the ImageSelector and NudityDetector classes. The most interesting parts of the PhotoSelectorViewController class are the implementations of the delegate functions.

PhotoSelectorViewController Class

    // ImageSelectorDelegate: called when the user has chosen a photo.
    func imageSelected(selectedImage: UIImage) {
        
        selectedImageView.image = selectedImage
        
        guard let ciImage = CIImage(image: selectedImage) else {
            fatalError("The selected UIImage could not be converted to a CIImage.")
        }
        
        nudityDetector?.doesContainNudity(ciImage)
        
        hideImageSelector()
    }
    
    // NudityDetectorDelegate: called with the model's verdict.
    func nudityDetectorResults(doesContainNudity: Bool, confidence: Float) {
        
        print(doesContainNudity)
        print(confidence)
        
        // Only flag the photo when the model is more than 50% confident.
        if doesContainNudity && confidence > 0.5 {
            
            self.navigationItem.title = ""
            
            showNudityDetectedAlert()
        }
        else {
            
            self.navigationItem.title = "SFW Photo, Probably"
        }
    }

The imageSelected method from the ImageSelectorDelegate protocol receives the image chosen by the user. The selected image is converted to a CIImage and then passed to the NudityDetector class.

The nudityDetectorResults function receives a Boolean whose value is determined by the VNClassificationObservation identifier from the Nudity Core ML model: true if the identifier is “NSFW,” false if it is “SFW.” The confidence number is a floating-point value between 0 and 1. I show an alert if the model finds that the image is “NSFW” and the confidence is greater than 0.5.
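
The rest of the class is mostly wiring. Here is a minimal sketch of how the pieces could fit together; the outlet, action, and alert details below are my assumptions, so match them to your own storyboard:

import UIKit

class PhotoSelectorViewController: UIViewController, ImageSelectorDelegate, NudityDetectorDelegate {
    
    // Assumed outlet name; connect it to the UIImageView in the storyboard.
    @IBOutlet weak var selectedImageView: UIImageView!
    
    let imageSelector = ImageSelector()
    var nudityDetector: NudityDetector? = NudityDetector()
    
    override func viewDidLoad() {
        super.viewDidLoad()
        imageSelector.delegate = self
        nudityDetector?.delegate = self
    }
    
    // Assumed action name; connect it to the camera Bar Button Item.
    @IBAction func cameraButtonPressed(_ sender: UIBarButtonItem) {
        imageSelector.showCamera(parentUIViewController: self)
    }
    
    func hideImageSelector() {
        imageSelector.hideImageSelector()
    }
    
    // Assumed alert contents.
    func showNudityDetectedAlert() {
        let alert = UIAlertController(title: "Whoa There",
                                      message: "This photo looks too hot for the internet.",
                                      preferredStyle: .alert)
        alert.addAction(UIAlertAction(title: "OK", style: .default, handler: nil))
        present(alert, animated: true, completion: nil)
    }
    
    // imageSelected(selectedImage:) and
    // nudityDetectorResults(doesContainNudity:confidence:) are
    // implemented exactly as shown above.
}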

Before we test this on a device, we need to add two Information Property List entries. Go to your Info.plist file and add Privacy - Camera Usage Description (NSCameraUsageDescription) and Privacy - Photo Library Usage Description (NSPhotoLibraryUsageDescription). Your app will crash without these keys: UIImagePickerController accesses the camera, and the photo library method in the ImageSelector class accesses the photo library.

Final Thoughts

You will need to test this app on an actual device, since the iOS Simulator does not have a camera. If you are at work and don’t want to search for an NSFW image to test with, you can use the awesome photoshopped one below.

One last thing: what constitutes an inappropriate image is very subjective. The results you get from Yahoo’s model might not match the requirements of your platform, so be cognizant of that.
