This iOS app classifies lightsaber and R2D2 sounds from Star Wars using the microphone input.
The purpose of this project is to learn the Create ML and Sound Analysis frameworks in practice.
Because the app uses the microphone input, it works only on a physical device. There might be a way to make the app work on the Simulator, but that is out of the scope of this project.
To run the app:
- Clone the repo
- Configure code signing
- Run the app on the device
After the app launches, start your favorite Star Wars movie, tap the Play button, and observe the results:
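In broad strokes, live classification with the Sound Analysis framework follows this pattern. This is a minimal sketch, not this project's actual code; the `ResultsObserver` class and the `StarWarsSoundClassifier` model name are assumptions standing in for the real ones:

```swift
import AVFoundation
import CoreML
import SoundAnalysis

// Hypothetical observer that receives classification results.
final class ResultsObserver: NSObject, SNResultsObserving {
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }
}

let engine = AVAudioEngine()
let inputFormat = engine.inputNode.outputFormat(forBus: 0)
let analyzer = SNAudioStreamAnalyzer(format: inputFormat)
let observer = ResultsObserver()

// "StarWarsSoundClassifier" is a placeholder for the class Xcode
// generates from the trained Core ML model.
let model = try StarWarsSoundClassifier(configuration: MLModelConfiguration()).model
let request = try SNClassifySoundRequest(mlModel: model)
try analyzer.add(request, withObserver: observer)

// Feed microphone buffers into the analyzer.
engine.inputNode.installTap(onBus: 0, bufferSize: 8192, format: inputFormat) { buffer, time in
    analyzer.analyze(buffer, atAudioFramePosition: time.sampleTime)
}
try engine.start()
```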
The Core ML model was trained in Create ML (Version 5.0 Beta (119)) using sounds from YouTube videos and from the website http://www.galaxyfaraway.com.
Sounds were grouped into 3 folders:
To properly train the model, we need to provide a variety of sounds other than the ones we want to classify (lightsaber and R2D2).
The training was performed with default settings:
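Training in the Create ML app is point-and-click, but the same workflow can be expressed programmatically with the CreateML framework. A hedged sketch under the assumption that the three labeled folders sit in one training directory; the paths here are placeholders, not the ones used for this project:

```swift
import CreateML
import Foundation

// The training directory is assumed to contain one subfolder per label
// (e.g. lightsaber, r2d2, background), each holding its audio files.
let trainingDir = URL(fileURLWithPath: "/path/to/TrainingData")

// Train a sound classifier with default parameters, mirroring the
// default settings used in the Create ML app.
let classifier = try MLSoundClassifier(
    trainingData: .labeledDirectories(at: trainingDir)
)

// Export the trained model for use with the Sound Analysis framework.
try classifier.write(to: URL(fileURLWithPath: "/path/to/StarWarsSoundClassifier.mlmodel"))
```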
- The app doesn't work on Simulator
- The app doesn't handle audio interruptions (for example, incoming phone calls during sound analysis)
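One way the interruption limitation could be addressed is by observing `AVAudioSession` interruption notifications. This is a sketch of the standard AVFoundation pattern, not code from this project:

```swift
import AVFoundation

// Pause and resume analysis around interruptions such as phone calls.
NotificationCenter.default.addObserver(
    forName: AVAudioSession.interruptionNotification,
    object: AVAudioSession.sharedInstance(),
    queue: .main
) { notification in
    guard let info = notification.userInfo,
          let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: typeValue) else { return }
    switch type {
    case .began:
        // An interruption started: stop the audio engine and update the UI.
        break
    case .ended:
        // The interruption ended: restart analysis if the session allows it.
        break
    @unknown default:
        break
    }
}
```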
- Apple's "Classifying Live Audio Input with a Built-in Sound Classifier" sample project (link). This project is a great example of how to detect different sounds using the built-in sound classifier. In addition, it contains code that properly handles audio interruptions and a neat SwiftUI meter view that was copied into this project.
- WWDC video "Training Sound Classification Models in Create ML" (link)
- Apple's article "Classifying Sounds in an Audio Stream" (link)
- Sounds for training ML model were downloaded from http://www.galaxyfaraway.com
- R2D2 and lightsaber icons were downloaded from https://icons8.com