Mallow's Blog

Introduction to Core ML and ARKit in iOS 11

Last week, as part of WWDC 2017, Apple released two big frameworks to take iPhone and iPad apps to the next level: Core ML and ARKit. Developers can use the Core ML framework to build intelligent apps, and the ARKit framework to build augmented reality experiences in the real world. Let's look at a small introduction to each framework.

Core ML:

  • You can now build more intelligent apps with the power of Core ML, which Apple released at WWDC 2017. With this new foundational machine learning framework, which already powers Apple's own features such as Camera, Siri, and QuickType, we can build high-performance intelligent apps with just a few lines of code.
  • We can easily integrate trained machine learning models into our apps (see the sketch after this list). A trained model is the result of applying a machine learning algorithm to a set of training data; the model then makes predictions based on new input data. Core ML lets you integrate a wide variety of machine learning models into your app, including standard model types such as tree ensembles, SVMs, and generalized linear models.
  • It is built on top of low-level technologies such as the Metal and Accelerate frameworks.
  • Core ML uses the power of the CPU and GPU to provide high performance in data processing and analysis.
  • All analysis of your data happens on the device itself, so the data never leaves your device; this protects the privacy of your data.
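For example, once you add a trained model file to your Xcode project, Xcode generates a Swift class for it, and making a prediction takes only a few lines. Here is a minimal sketch, assuming a hypothetical Flowers.mlmodel image classifier whose input is named image and whose output is named flowerType:

import CoreML

// A minimal sketch, assuming a hypothetical "Flowers.mlmodel" classifier
// has been added to the Xcode project. Xcode auto-generates the `Flowers`
// class; the input `image` and output `flowerType` names are assumptions.
func classifyFlower(in pixelBuffer: CVPixelBuffer) {
    do {
        let model = Flowers()
        let output = try model.prediction(image: pixelBuffer)
        print("Predicted flower: \(output.flowerType)")
    } catch {
        print("Prediction failed: \(error)")
    }
}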

Where to use Core ML in your app?

You can easily integrate Core ML into your app for features such as the following (a short face-detection sketch follows the list):

  1. Face tracking
  2. Landmarks
  3. Rectangle detection
  4. Face detection
  5. Text detection
  6. Barcode detection
  7. Object tracking
  8. Image registration
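Many of these features come through the Vision framework, which can also run Core ML models under the hood. As a minimal sketch, detecting faces in a UIImage looks like this:

import UIKit
import Vision

// A minimal sketch: detecting face rectangles in a UIImage with Vision.
func detectFaces(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    let request = VNDetectFaceRectanglesRequest { request, error in
        let faces = request.results as? [VNFaceObservation] ?? []
        for face in faces {
            // Bounding boxes come back in normalized coordinates (0...1).
            print("Found face at \(face.boundingBox)")
        }
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}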


  • The natural language processing APIs in the Foundation framework use machine learning to deeply understand text, providing the following (a short tagging example follows this list):
  1. Language identification
  2. Tokenization
  3. Lemmatization
  4. Part-of-speech tagging
  5. Named entity recognition
  • Core ML also supports:
  1. Vision framework (debuted in iOS 11) – for image analysis
  2. Foundation framework – for natural language processing
  3. GameplayKit framework – for evaluating learned decision trees
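For instance, language identification and part-of-speech tagging take only a few lines with NSLinguisticTagger (a minimal sketch using the unit-based API that is new in iOS 11):

import Foundation

// A minimal sketch: language identification and part-of-speech tagging
// with NSLinguisticTagger.
let text = "Apple released Core ML at WWDC 2017"
let tagger = NSLinguisticTagger(tagSchemes: [.lexicalClass], options: 0)
tagger.string = text

print("Language: \(tagger.dominantLanguage ?? "unknown")")

let range = NSRange(location: 0, length: text.utf16.count)
let options: NSLinguisticTagger.Options = [.omitWhitespace, .omitPunctuation]
tagger.enumerateTags(in: range, unit: .word, scheme: .lexicalClass, options: options) { tag, tokenRange, _ in
    let token = (text as NSString).substring(with: tokenRange)
    print("\(token): \(tag?.rawValue ?? "unknown")")
}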


The images below illustrate the Core ML layer structure and how Core ML works in your app.

[Image: Core ML layer structure]

[Image: Core ML working]

That covers the basic introduction to the Core ML machine learning framework.


ARKit [Augmented Reality]:

  • ARKit is a framework that uses your device's camera and motion sensors to create augmented reality experiences in your app or game.
  • ARKit adds 2D or 3D virtual content to the real-world view captured by the device camera.
  • ARKit combines device motion tracking, camera scene capture, advanced scene processing, and display conveniences to simplify the task of building an AR experience.
  • ARKit runs only on devices with an A9 chip or later.
  • The following steps are involved in building an AR experience in the real world (a code sketch follows this list):
    • Tracking – matching the real world to virtual space using visual-inertial odometry
      • ARKit uses a method called visual-inertial odometry (VIO) to create the correspondence between the real world and virtual space. This method fuses data from the motion sensors with computer-vision analysis of the scene from the device camera. ARKit recognizes notable features in the scene across different video frames and combines that scene data with the motion-sensor data to provide high-precision information about the device's position and motion.
    • Scene understanding – plane detection, hit testing, and light estimation
      • Plane detection – We can detect planar surfaces in the scene by enabling the planeDetection setting on the world-tracking session configuration (ARWorldTrackingConfiguration in the final iOS 11 API). Plane detection gives us the position and size of each detected plane, which we can use to place virtual content in the real-world scene.
      • Hit testing – Using hit testing, we can find the real-world surface that corresponds to a point in the camera image.
      • Light estimation – The lighting of the scene being tracked also plays a major role in AR: a scene with low light or a featureless blank wall will reduce tracking quality.
    • Rendering – easy integration, ARSCNView/ARSKView, and custom rendering
      • After tracking and scene understanding are complete, we can place virtual elements into the real-world scene using ARKit.
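Putting these steps together, here is a minimal sketch of a view controller that checks device support, runs a world-tracking session with horizontal plane detection, reacts when a plane is detected, and hit-tests a screen touch against detected planes. The storyboard-wired sceneView outlet is an assumption; the API names follow the final iOS 11 SDK:

import UIKit
import SceneKit
import ARKit

// A minimal sketch: world tracking with horizontal plane detection.
class ARViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!   // assumed to be wired up in a storyboard

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        // World tracking needs an A9 chip or later.
        guard ARWorldTrackingConfiguration.isSupported else { return }

        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal   // scene understanding: plane detection
        sceneView.delegate = self
        sceneView.session.run(configuration)
    }

    // Called when ARKit detects a new plane in the scene.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let plane = anchor as? ARPlaneAnchor else { return }
        print("Detected plane at \(plane.center), size \(plane.extent)")
    }

    // Hit testing: map a screen touch to a point on a detected plane.
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: sceneView) else { return }
        if let hit = sceneView.hitTest(point, types: .existingPlaneUsingExtent).first {
            print("Real-world position: \(hit.worldTransform)")
        }
    }
}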

The image below shows how ARKit actually works.

[Image: How ARKit works]

That's a brief introduction to Apple's new ARKit. We will look at ARKit in detail with a hands-on demo in a future blog post.



Karthick Selvaraj,
Junior iOS Developer,
Mallow Technologies
