Mallow's Blog

How to use ARKit using SceneKit

This is a continuation of the previous blog post introducing ARKit. Today we will see how to implement ARKit by placing a default object in the real world using SceneKit. To implement ARKit we need basic knowledge of one of the following technologies. Each is different in its own way.

The technologies are,

  1. SceneKit
  2. SpriteKit
  3. Metal

SceneKit:  

  1. It is used to render 3D scenes in apps and games.
  2. We can easily add physics simulation, animations, and effects.
  3. SceneKit combines a high-performance rendering engine with a descriptive API to render a 3D scene.
  4. Unlike Metal and OpenGL, which require detailed rendering algorithms to draw a scene in a view, SceneKit only needs a description of the scene: lighting, the positions of objects, and so on.

The following are the important classes in SceneKit that may come into the picture when implementing ARKit:

  1. SCNScene: – The top-level container for the scene graph hierarchy: camera nodes, light nodes, geometry nodes, etc.
  2. SCNView: – A view for displaying 3D SceneKit content.
  3. SCNNode: – A structural element of the scene graph that represents a position and transform in 3D space.
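
To make the relationship between these classes concrete, here is a minimal sketch of a SceneKit setup on its own, outside of ARKit. All the node names here are illustrative, not part of any particular project:

```swift
import SceneKit

// An SCNView displays an SCNScene; the scene's content is a tree of SCNNodes.
let sceneView = SCNView(frame: CGRect(x: 0, y: 0, width: 300, height: 300))
let scene = SCNScene()

// A camera node, so the scene can be rendered from a viewpoint.
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.position = SCNVector3(0, 0, 5)
scene.rootNode.addChildNode(cameraNode)

// A light node; without a light the scene would render black.
let lightNode = SCNNode()
lightNode.light = SCNLight()
lightNode.light?.type = .omni
lightNode.position = SCNVector3(0, 5, 5)
scene.rootNode.addChildNode(lightNode)

sceneView.scene = scene
```

When we use ARKit later, ARSCNView takes the place of SCNView and manages the camera for us; the scene-graph ideas stay the same.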

NOTE: Today we are going to use SceneKit for our demo, since we want to place 3D content in the real world.

SpriteKit:

  1. It is used to create 2D scenes for games; it supports only two-dimensional views.
  2. You can learn more about SpriteKit in Apple docs.

Metal:

  1. It is used to create advanced 3D graphics using GPU.
  2. Metal encompasses the Metal framework, MetalKit, the Metal shading language and the Metal standard library.
  3. It is more advanced than SceneKit and SpriteKit, but it requires GPU programming knowledge.
  4. You can learn more about Metal on their documentation page.

A quick brush-up on ARKit:

  1. ARKit uses the device camera and motion sensors to analyse real-world scenes and to render virtual objects in them.
  2. ARKit is available on iPhones and iPads with an A9 processor or later. Analysing real-world scenes and doing all the maths to render virtual objects demands high computation and processing power, which is why Apple set the minimum required chip at A9.
  3. To make your app available in the App Store only for devices with an A9 or later, add the arkit capability to UIRequiredDeviceCapabilities in Info.plist.
  4. If AR is a secondary feature of your app, check AR capability at runtime using the isSupported property of the ARConfiguration class.
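
A sketch of the runtime check, for the case where AR is optional rather than required:

```swift
import ARKit

// Runtime capability check, for apps where AR is a secondary feature.
// (If AR is required instead, add "arkit" to UIRequiredDeviceCapabilities
// in Info.plist and skip this check.)
if ARWorldTrackingConfiguration.isSupported {
    // Safe to run an AR session on this device.
} else {
    // Hide or disable the AR feature gracefully.
}
```

isSupported is defined on ARConfiguration, so each concrete configuration class inherits it and reports whether that particular configuration can run on the current device.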

Important classes in ARKit:

  1. ARWorldTrackingConfiguration – A world-tracking configuration that uses the device's rear camera and motion data to track the device's orientation and position and to detect flat surfaces.
  2. AROrientationTrackingConfiguration – An orientation-tracking configuration that uses only the device's rear camera and tracks only the orientation of the device.
  3. ARSCNView – Gives a 3D AR experience using SceneKit.
  4. ARSKView – Gives a 2D AR experience using SpriteKit.
  5. ARHitTestResult – Contains information about a real-world surface found by examining a point in the device's camera view.

This info is more than enough to create a simple ARKit demo.

Assumptions:

  1. You have the latest Xcode installed on your iMac or MacBook.
  2. You have an iPhone or iPad with an A9 or later processor.
  3. You are familiar with basic project setup in Xcode.

Okay AR developer, let’s start,

  1. Open Xcode and create a new project.
  2. To implement AR, we can simply use the Augmented Reality App template. Any other template would also work, but the Augmented Reality App template saves time by setting up the basics for this demo.
  3. Name your project as you like.
  4. Choose your development team.
  5. Make sure SceneKit is selected as the content technology.
  6. Hit Next and create.
  7. As soon as the project is created, run the app on your iPhone or iPad and you will see Apple's default AR example.

Now open your ViewController.swift file.

We are going to add a ball (sphere geometry) to the real-world scene wherever the user taps. SceneKit provides many default geometries such as sphere, cone, cylinder, etc. We will use a sphere to get a ball-like structure.

We need the point where the user taps on the screen so we can place our ball there. For that, we override the touchesBegan(_:with:) method from UIResponder. Add the following method just below the didReceiveMemoryWarning() method.

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first else { return } // 1
    let results = sceneView.hitTest(touch.location(in: sceneView), types: .featurePoint) // 2
    guard let firstResult = results.first else { return } // 3
    let transform = firstResult.worldTransform // 4
    let matrix = SCNMatrix4(transform) // 5
    let position = SCNVector3(matrix.m41, matrix.m42, matrix.m43) // 6
    sceneView.scene.rootNode.addChildNode(createNode(at: position)) // 7
}

What’s happening in the above method,

  1. We get the first touch from the set of touches, guarding against an empty set.
  2. We ask ARKit for real-world details at the point where the user tapped. The result is an array of ARHitTestResult.
  3. We take the first hit-test result, which is the closest and most accurate.
  4. We read the real-world position as a numeric value of type matrix_float4x4.
  5. We convert the matrix_float4x4 into an SCNMatrix4, the 4 x 4 matrix type SceneKit works with.
  6. We build the tap position from the x, y and z coordinates in the matrix we formed in the previous step. The elements m41, m42 and m43 hold the real-world coordinate data, which is why we use them.
  7. We create a new node (our ball) and add it to the scene view's root node. Don't worry about the createNode(at:) method; we are going to create it now.
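
As an aside, the reason m41, m42 and m43 work is that worldTransform is a column-major 4 x 4 matrix whose last column holds the translation. An equivalent shortcut (assuming the same first hit-test result as above) is to read that column directly from the simd matrix:

```swift
// worldTransform is a matrix_float4x4; columns.3 is its translation column.
let translation = firstResult.worldTransform.columns.3
let position = SCNVector3(translation.x, translation.y, translation.z)
```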

Add a new method that creates a new ball to place at a given position. The method will look something like this.

func createNode(at position: SCNVector3) -> SCNNode {
    let geometry = SCNSphere(radius: 0.15) // 1
    let material = SCNMaterial() // 2
    material.diffuse.contents = UIImage(named: "art.scnassets/football.jpg") // 3
    geometry.firstMaterial = material // 4
    let node = SCNNode(geometry: geometry) // 5
    node.position = position // 6
    return node // 7
}

What’s happening in the above method,

  1. We create a new sphere with a radius of 0.15 metres.
  2. A plain sphere won't give a ball-like look and feel; we would simply get a white round shape. To create that look and feel, SceneKit provides the SCNMaterial class. Here we just initialise it.
  3. We give the material a ball image as its diffuse contents. The contents property is of type Any, so we can assign whatever we want: a UIColor, a UIImage, and so on. Download any ball image from the internet, place it inside art.scnassets and use its name when initialising the UIImage.
  4. We assign the created material to the sphere. A geometry can take an array of materials through its materials property; since we have only one, we assign it via the firstMaterial property.
  5. We create a new node from our sphere geometry. A node is something like a UIView.
  6. We set the node's position in the real world, which is the point where the user tapped.
  7. We return the node we created.
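
If you don't want to download an image, the material's diffuse contents can also be a plain colour. This variant of step 3 would give a solid red ball instead of a textured one:

```swift
material.diffuse.contents = UIColor.red // a colour works wherever an image would
```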

Now go to the viewDidLoad() method and comment out or delete the following lines. We are going to add our own object, so we don't need them anymore:

       // Create a new scene

       let scene = SCNScene(named: "art.scnassets/ship.scn")!

       // Set the scene to the view

       sceneView.scene = scene

What's in viewWillAppear(_:)?

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    // Create a session configuration
    let configuration = ARWorldTrackingConfiguration() // 1

    // Run the view's session
    sceneView.session.run(configuration) // 2
}

  1. We create a new session configuration to start analysing real-world scenes using the device camera.
  2. We start running the session; only after this do we get data about the real-world scene.
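
Optionally, before running the session you can ask the world-tracking configuration to detect flat surfaces as well. This is not required for the feature-point hit test used in this demo, but it is a common next step (a sketch):

```swift
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = .horizontal // detect horizontal surfaces such as floors and tables
sceneView.session.run(configuration)
```

With plane detection enabled, ARKit reports detected surfaces as ARPlaneAnchor objects through the session delegate.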

What's in viewWillDisappear(_:)?

override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)

    // Pause the view's session
    sceneView.session.pause() // 1
}

  1. There is no need to keep analysing real-world scenes while this view is not visible, so we pause the running session.

How to test?

You have done everything from your side; the rest is handled by ARKit. Just run your app, give the session a moment to start, then tap anywhere on the screen. You will see your ball in the real world where you tapped.

In the next blog post, we will see how to place a custom object in the real world using ARKit.


Karthick Selvaraj,
iOS Team,
Mallow Technologies.

 
