Entrance to the World of ARKit

ARKit is a fairly fresh API, released by Apple in June 2017. It has opened up a wide range of possibilities for developers to build iOS applications with a completely new approach.
For people who are not familiar with the technology, diving into the world of augmented reality and using all of the features that are available right now may seem a little overwhelming. Fortunately, things are not that bad at all, and in this short blog post I will try to show you why.
Requirements
Before going straight into coding, we need several things in order to kick off:
- Xcode 10 or later with the latest iOS SDK
- Apple device with iOS 12 or later
The project we will develop is based on ARKit 2, which introduced several new features and improvements to the API (for the more curious readers, I am leaving a helpful link here). That's why we need the latest development tools. Unfortunately, there is no way to debug the application and see augmented reality objects on the simulator, as it doesn't provide the camera and other sensors that are required to measure and configure the AR world properly.
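It is also worth guarding against unsupported hardware at runtime. Below is a minimal sketch using ARKit's isSupported check; the helper name canRunARSession is just something I made up for illustration:
import ARKit

/// Hypothetical helper: call before starting the AR session.
func canRunARSession() -> Bool {
    // isSupported is false on devices that lack the required
    // sensors, and also on the simulator.
    return ARWorldTrackingConfiguration.isSupported
}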
Initial Setup
The first thing we need to do is to create a new Xcode project from the Single View App template. For people who don't want to configure the AR scene by themselves, there is also the Augmented Reality App template, which has an ARSCNView already added to the ViewController via Storyboards.
I decided to go with the first option. The initial thing that needs to be done is to add the previously mentioned ARSCNView to our ViewController. It's the view that integrates an ARSession with SceneKit and is responsible for lighting, camera drawing, and other crucial things like managing nodes for anchors. It also provides several properties that can be helpful for debugging purposes.
let sceneView = ARSCNView()
// Show rendering statistics such as FPS at the bottom of the view.
sceneView.showsStatistics = true
// Render the raw feature points that ARKit detects in the environment.
sceneView.debugOptions = ARSCNDebugOptions.showFeaturePoints
// Let SceneKit update the scene lighting based on camera estimates.
sceneView.automaticallyUpdatesLighting = true
The above piece of code creates the ARSCNView and sets up basic things like showing rendering statistics or updating the lighting of the scene automatically. The debugOptions property is an interesting one, as it allows us to see feature points, which are detected 3D points ready to interact with. This option gives us information about how our device is scanning the environment and how conditions like lighting and contrast affect the quality of the scan.
After the scene view is added to our view hierarchy and constraints are applied, we need two more things to start the scanning process. First, we need to set up the delegate of the ARSCNView. The ARSCNViewDelegate provides several key methods that inform us about the status of the rendering, and we will use them extensively later on. The second and last part of the setup is to invoke the run method on the session of our ARSCNView. The run method takes two parameters: an ARConfiguration object and ARSession.RunOptions, which is just a set of options for running the session.
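Here is a minimal sketch of what this part of the setup could look like, assuming the ViewController conforms to ARSCNViewDelegate (the constraint code is just one possible arrangement):
override func viewDidLoad() {
    super.viewDidLoad()

    // Add the scene view and pin it to the edges of the screen.
    sceneView.translatesAutoresizingMaskIntoConstraints = false
    view.addSubview(sceneView)
    NSLayoutConstraint.activate([
        sceneView.topAnchor.constraint(equalTo: view.topAnchor),
        sceneView.bottomAnchor.constraint(equalTo: view.bottomAnchor),
        sceneView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
        sceneView.trailingAnchor.constraint(equalTo: view.trailingAnchor)
    ])

    // Receive callbacks about nodes added, updated and removed for anchors.
    sceneView.delegate = self
}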
The first thing we are going to implement is a surface detector. It will place a floor or wall texture on a detected surface, so it will act as a kind of automatic surface painter.
Let’s dive into coding
After setting up the ARSCNView, let's run its session with an ARConfiguration. For our case, we will use ARWorldTrackingConfiguration, which gives access to the planeDetection property. It's a variable that determines the types of planes that should be detected in the scene. Let's create a configuration and run the session somewhere in our ViewController:
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal, .vertical]
    sceneView.session.run(configuration)
}
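As a side note, the same run method also accepts the ARSession.RunOptions mentioned earlier. A small example of how that might be used to restart the experience from scratch:
// Re-run the session, discarding the current tracking state and anchors.
// Useful when the session was interrupted or you want a clean start.
sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])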
When you run the application on your device, you should see the output from the camera and some yellow points, which are the feature points mentioned a few lines above. Next, let's implement some methods of the ARSCNViewDelegate that will inform us about any changes related to our scanned AR world.
renderer(_:didAdd:for:) tells the delegate that an SCNNode corresponding to a new ARAnchor has been added to the scene. We can use this information to add some visual content for the anchor by attaching geometry to the freshly added node.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let arPlaneAnchor = anchor as? ARPlaneAnchor else { return }
    let newNode = makeNode(forAnchor: arPlaneAnchor)
    node.addChildNode(newNode)
}
It first checks whether the anchor is an ARPlaneAnchor (an anchor that represents a planar surface in the AR world) and creates a new node that will represent a floor or wall object. The last thing that needs to be done is to add the child node with the new geometry to the node that was added for the anchor.
To achieve this, we will write a helper method that creates the new node and applies a proper texture to its geometry based on the type of alignment.
private func makeNode(forAnchor anchor: ARPlaneAnchor) -> SCNNode {
    let node = SCNNode()
    // Name the node after the anchor's alignment so we can find it later.
    node.name = "\(anchor.alignment.rawValue)"
    // SCNPlane is vertical by default; rotate it to lie in the anchor's plane.
    node.eulerAngles = SCNVector3(-Float.pi / 2, 0, 0)
    node.geometry = SCNPlane(width: CGFloat(anchor.extent.x), height: CGFloat(anchor.extent.z))
    let textureName = anchor.alignment == .horizontal ? "floor-texture" : "wall-texture"
    node.geometry?.firstMaterial?.diffuse.contents = UIImage(named: textureName)
    node.geometry?.firstMaterial?.isDoubleSided = true
    node.position = SCNVector3(anchor.center.x, anchor.center.y, anchor.center.z)
    return node
}
It should look pretty good right now, but there is one last thing that needs to be handled. The renderer(_:didUpdate:for:) method informs us about any updates related to the nodes of specific anchors. This piece of information gives us an opportunity to remove an invalid node and make a new one.
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let arPlaneAnchor = anchor as? ARPlaneAnchor else { return }

    // Remove the old node with the given alignment and replace it with a fresh one.
    removeNode(alignment: arPlaneAnchor.alignment)
    let newNode = makeNode(forAnchor: arPlaneAnchor)
    node.addChildNode(newNode)
}
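The removeNode(alignment:) helper is not shown above; here is a minimal sketch of how it might look, assuming we look nodes up by the name assigned in makeNode(forAnchor:):
private func removeNode(alignment: ARPlaneAnchor.Alignment) {
    // Walk the scene graph and drop every node previously created
    // for this alignment (they were named after alignment.rawValue).
    sceneView.scene.rootNode.enumerateChildNodes { node, _ in
        if node.name == "\(alignment.rawValue)" {
            node.removeFromParentNode()
        }
    }
}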
Now we can finally run the application and see something similar to the GIF below.

However, keep in mind that there are some limitations to the behavior of the framework. Scanning white surfaces and presenting textures on them is hardly achievable. Also, the lighting in your room can play a crucial role.
Final thoughts
Working with ARKit is not hard. Proper configuration is needed at first, but most of the work is done by the framework, and we just need to determine how it should behave for us. For the purposes of this short blog post, I introduced only one of the many functionalities that can be achieved with ARKit. I prepared a simple demo application that uses the surface detection presented above plus two additional features: placing 3D objects in the AR world and sharing an ARWorldMap with other devices. You can check it out under this link. In real projects, things are going to be much more complex for sure, but it's encouraging that achieving something that looks solid and is fun to play with requires such a small amount of work from the programmer.