This repository contains a sample project demonstrating how to develop and integrate a local NSFW (Not Safe For Work) content detection model for iOS apps. The project uses Apple's CreateML, CoreML, and Vision frameworks.
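As a rough illustration of the CreateML side, a classifier like this can be trained from a labeled image folder in a macOS playground or command-line tool (CreateML itself is macOS-only). The dataset layout (one subfolder per label, e.g. "nsfw/" and "sfw/") and the file paths below are assumptions for the sketch, not the project's actual dataset or training code.

```swift
import CreateML
import Foundation

// Training sketch: assumes a dataset directory with one subfolder per label,
// e.g. "nsfw/" and "sfw/". Paths and label names are placeholders.
let datasetURL = URL(fileURLWithPath: "/path/to/dataset")
let trainingData = MLImageClassifier.DataSource.labeledDirectories(at: datasetURL)

let classifier = try MLImageClassifier(trainingData: trainingData)
print("Training accuracy:", 1 - classifier.trainingMetrics.classificationError)
print("Validation accuracy:", 1 - classifier.validationMetrics.classificationError)

// Export the .mlmodel so it can be dragged into the Xcode project,
// where Xcode generates the Swift model class used by the app.
try classifier.write(to: URL(fileURLWithPath: "/path/to/NSFWClassifier.mlmodel"))
```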
The primary goal of this project is to enhance app safety by filtering or blurring inappropriate content while preserving user privacy through on-device processing. This approach is particularly beneficial for end-to-end encrypted (E2EE) applications, where server-side content filtering is not possible.
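At runtime, the classification can be wired up through Vision on top of the Core ML model, so no image ever leaves the device. The sketch below assumes a CreateML-generated model class named `NSFWClassifier`, a positive label called `"nsfw"`, and a 0.8 confidence threshold; all three are placeholders rather than the project's exact identifiers.

```swift
import UIKit
import Vision
import CoreML

// On-device classification sketch. `NSFWClassifier` stands in for the
// CreateML-generated model class bundled with the app; the "nsfw" label
// and the 0.8 threshold are illustrative assumptions.
func classify(_ image: UIImage, completion: @escaping (Bool) -> Void) {
    guard let cgImage = image.cgImage,
          let coreMLModel = try? NSFWClassifier(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else {
        completion(false)
        return
    }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Treat the top classification as NSFW only above a confidence threshold.
        let top = (request.results as? [VNClassificationObservation])?.first
        let isNSFW = top?.identifier == "nsfw" && (top?.confidence ?? 0) > 0.8
        DispatchQueue.main.async { completion(isNSFW) }
    }

    // Run off the main thread; Vision scales and crops the image
    // to the model's expected input size.
    DispatchQueue.global(qos: .userInitiated).async {
        let handler = VNImageRequestHandler(cgImage: cgImage)
        do {
            try handler.perform([request])
        } catch {
            DispatchQueue.main.async { completion(false) }
        }
    }
}
```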
For a detailed explanation of the project, refer to the Medium article.
- Open the project in Xcode:
  `open NSFWClassifierApp.xcodeproj`
- Run the app on a real iOS device:
  CoreML models do not work in the simulator, so you need to run the app on a real device.
- Classify and blur images:
  The app displays a feed of posts with images. NSFW images are automatically blurred using the integrated model (see the sketch below).
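For the blurring step, a minimal SwiftUI sketch might look like the following. The `isNSFW` flag would come from the on-device classifier; the blur radius and the "Sensitive content" label are illustrative choices, not necessarily the project's exact UI.

```swift
import SwiftUI
import UIKit

// Minimal SwiftUI sketch of the blurring behaviour: the post image is shown
// as-is unless the classifier flagged it, in which case it is blurred and
// overlaid with a short notice.
struct PostImageView: View {
    let image: UIImage
    let isNSFW: Bool

    var body: some View {
        Image(uiImage: image)
            .resizable()
            .scaledToFit()
            .blur(radius: isNSFW ? 20 : 0)
            .clipped()
            .overlay {
                if isNSFW {
                    Text("Sensitive content")
                        .font(.caption)
                        .padding(8)
                        .background(.ultraThinMaterial, in: Capsule())
                }
            }
    }
}
```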
Feel free to submit issues and pull requests for any improvements or additional features.
This project is licensed under the MIT License.
For more detailed information, refer to the Medium article.