# objML

This project contains basic proof-of-concept code built on the Google ML Kit Vision API, demonstrating how Object Detection/Tracking and Image Labeling can be implemented together.

Some files are intentionally left out; this repository contains only the app's code.

## Documentation

- This README - steps to create an environment for working with ML Kit
- `Code/` - in-depth code explanation, too long to fit inside a comment
- `Extras/` - extra notes on how to train/update Core ML models on device

## Preparation

### Hardware

- A device running macOS (iMac, MacBook)
- An iOS device with a camera; iPhone X or newer is preferred

### Software

- Xcode
- CocoaPods

## Online

1. Go to Firebase and create a new project
2. There is no need to follow the onboarding guidelines one by one; this README contains all the information you need
3. When creating a new iOS app in the project, Firebase may ask you to verify the connection; skip it for now
4. Download the GoogleService-Info.plist it generates; it will be used later
5. Decide whether or not to upgrade to the Blaze plan for the Cloud API. See the pricing comparison below; a short sketch of what the choice means in code follows the table

| Spark Plan (Free) | Blaze Plan (Pay as you use) |
| --- | --- |
| On-device Vision API (object detection and tracking, image labeling) | Cloud API (precise image labeling) |
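
To make the table concrete, here is a minimal sketch of how the plan choice maps to code once the project is set up (Firebase setup itself is covered in the Local section; the `useCloud` parameter and the 0.7 confidence threshold are illustrative choices, not part of this project):

```swift
import Firebase

/// Minimal sketch: pick an image labeler depending on the Firebase plan.
/// Assumes FirebaseApp.configure() has already run (see the Local section).
func makeLabeler(useCloud: Bool) -> VisionImageLabeler {
    let vision = Vision.vision()
    if useCloud {
        // Cloud labeler: more precise labels, requires the Blaze plan.
        return vision.cloudImageLabeler()
    } else {
        // On-device labeler: covered by the Spark (free) plan.
        let options = VisionOnDeviceImageLabelerOptions()
        options.confidenceThreshold = 0.7   // illustrative threshold
        return vision.onDeviceImageLabeler(options: options)
    }
}
```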

## Local

1. Get CocoaPods
   - Open Terminal and type `sudo gem install cocoapods`
   - Enter the superuser password; it is the same as your macOS account password
   - Wait for the installation to finish
2. Create a new Xcode project
   - Choose Single View App under the iOS tab
   - Configure it as follows

     | Field | Value |
     | --- | --- |
     | Product Name | Create your own |
     | Language | Swift |
     | User Interface | Storyboard |

   - Leave these options unchecked
     - Use Core Data
     - Include Unit Tests
     - Include UI Tests
   - When creating the project, make sure the Bundle ID displayed in Xcode matches the one registered in Firebase
   - After the project has been created, close Xcode

3. Prepare for CocoaPods
   - Open Finder and navigate to the Xcode project directory created earlier
   - In the Finder window, click the gear icon at the top and choose Copy ... as Pathname
   - Go back to Terminal, type `cd "..."`, then paste the copied path inside the quotes
     (e.g. `cd "/Users/scarlet/Documents/App/visionAPITest"`)
   - You will see the directory name appear on the left of the prompt
   - Type `pod init` to initialize CocoaPods
   - After it has finished, type `nano Podfile` to open the Podfile CocoaPods created during initialization
   - The Terminal window will switch to the nano editor; copy and paste the following under the line `# Pods for ...`

     ```
     pod 'Firebase/Analytics'
     pod 'Firebase/MLVision'
     pod 'Firebase/MLVisionLabelModel'
     pod 'Firebase/MLVisionObjectDetection'
     ```

   - These are the pods for the Google Firebase Vision API, covering ML Kit Vision Image Labeling (both on-device and cloud) and on-device ML Kit Vision Object Detection; new pods can be added the same way (a sketch of how these pods are used in code follows the AppDelegate listing below)
   - Once finished, press Control-O to write the file, Return to confirm, and Control-X to leave nano
   - Back in Terminal, type `pod install` to download and install the pods specified in the Podfile
   - After all pods have been downloaded, CocoaPods will create a `.xcworkspace` file; always open this workspace file when working with the project, otherwise Firebase will not load correctly
4. Add Firebase to the project
   - Open the workspace file and drag the GoogleService-Info.plist file into the root of the Project Navigator
   - When Xcode asks, make sure the file is added to your app target
   - Open AppDelegate.swift
   - Under `import UIKit`, add `import Firebase` (any other Swift file that uses the Vision API also needs this line)
   - Then, inside the function `func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool`, add `FirebaseApp.configure()` before `return true`
   - Your AppDelegate.swift will look like this:
```swift
//
//  AppDelegate.swift
//  visionAPITest
//
//  Created by Scarlet on A2020/J/7.
//  Copyright © 2020 Scarlet. All rights reserved.
//

import UIKit
import Firebase

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {

    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Override point for customization after application launch.
        FirebaseApp.configure()
        return true
    }

    // MARK: UISceneSession Lifecycle

    func application(_ application: UIApplication, configurationForConnecting connectingSceneSession: UISceneSession, options: UIScene.ConnectionOptions) -> UISceneConfiguration {
        // Called when a new scene session is being created.
        // Use this method to select a configuration to create the new scene with.
        return UISceneConfiguration(name: "Default Configuration", sessionRole: connectingSceneSession.role)
    }

    func application(_ application: UIApplication, didDiscardSceneSessions sceneSessions: Set<UISceneSession>) {
        // Called when the user discards a scene session.
        // If any sessions were discarded while the application was not running, this will be called shortly after application:didFinishLaunchingWithOptions.
        // Use this method to release any resources that were specific to the discarded scenes, as they will not return.
    }

}
```
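
For orientation, the pods listed in the Podfile map onto the Vision API roughly as follows. This is a minimal sketch, not code from this repository, and the option values are illustrative:

```swift
import Firebase

/// Minimal sketch of creating the handlers backed by the pods installed above.
/// Assumes FirebaseApp.configure() has already been called in AppDelegate.
func makeVisionHandlers() -> (VisionObjectDetector, VisionImageLabeler) {
    let vision = Vision.vision()

    // Firebase/MLVisionObjectDetection: on-device object detection and tracking.
    let detectorOptions = VisionObjectDetectorOptions()
    detectorOptions.detectorMode = .stream               // track objects across camera frames
    detectorOptions.shouldEnableMultipleObjects = false  // illustrative choice
    detectorOptions.shouldEnableClassification = true    // coarse category per detected object
    let objectDetector = vision.objectDetector(options: detectorOptions)

    // Firebase/MLVisionLabelModel: on-device image labeling.
    let labeler = vision.onDeviceImageLabeler()

    return (objectDetector, labeler)
}
```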

All preparation has been completed.
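
As a preview of what the code in `Code/` builds on, an image can be pushed through both APIs roughly like this. This is an illustrative sketch only; the `analyze` helper is hypothetical and not part of this repository:

```swift
import UIKit
import Firebase

/// Illustrative only: run object detection and image labeling on a single UIImage.
func analyze(_ uiImage: UIImage) {
    let vision = Vision.vision()
    let image = VisionImage(image: uiImage)

    // Object detection / tracking (on device), using default options.
    let detector = vision.objectDetector()
    detector.process(image) { objects, error in
        guard error == nil, let objects = objects else { return }
        for object in objects {
            print("Object at \(object.frame), trackingID: \(String(describing: object.trackingID))")
        }
    }

    // Image labeling (on device), using default options.
    let labeler = vision.onDeviceImageLabeler()
    labeler.process(image) { labels, error in
        guard error == nil, let labels = labels else { return }
        for label in labels {
            print("\(label.text): \(String(describing: label.confidence))")
        }
    }
}
```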