Recently I published an article introducing ARKit, which briefly presented the minimum knowledge needed to create something in augmented reality. We know what the ARSCNView is for, why we need to implement the ARSCNViewDelegate, and how to run an ARSession. The demo was pretty straightforward, as its main purpose was to receive information from the camera, detect the currently visible surface, and attach a proper graphic onto it. User interaction was limited to pointing the camera at the desired spaces and waiting for results calculated on the fly.
In today’s blog post we will focus mostly on user interaction and take advantage of the touchscreen. In short, the application will detect touches on the device and place random geometry in the AR world.
I will not focus on the initial setup here, as it is pretty much the same as in my first article. We’ll pick up where we left off.
One way to get informed about touches the user has performed on the screen is to override the touchesBegan(_:with:) method of the UIResponder class. The method is invoked with a Set of touches that we can use to achieve our goal. Let’s see what’s inside:
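Here is a minimal sketch of that override, assuming a view controller with the `sceneView` outlet set up in the first article (the property name is an assumption carried over from that setup):

```swift
import ARKit
import UIKit

class ViewController: UIViewController {

    // Assumed to be the ARSCNView configured in the first article.
    @IBOutlet var sceneView: ARSCNView!

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesBegan(touches, with: event)
        // Grab the first touch and convert it to a point in the scene view.
        guard let touch = touches.first else { return }
        let location = touch.location(in: sceneView)
        // Next step: hit-test this location against ARKit's feature points.
    }
}
```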
The first thing that needs to be done is to use the UITouch to perform a hitTest on the ARSCNView at the touch location. Its main purpose is to search for objects corresponding to a specific point in the view and return an array of ARHitTestResult values, sorted from the nearest to the farthest. In our case, hitTest should be called with a specific result type: .featurePoint.
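In code, that hit test can look like this (a sketch assuming the same `sceneView` property as before):

```swift
import ARKit

extension ViewController {
    /// Hit-tests a screen point against the feature points ARKit has detected.
    /// The returned results are sorted from nearest to farthest.
    func featurePointResults(at location: CGPoint) -> [ARHitTestResult] {
        return sceneView.hitTest(location, types: .featurePoint)
    }
}
```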
The last thing that needs to be done before creating the geometry and placing it in the AR world is to retrieve the proper element from the results array and create an ARAnchor.
Creating an ARAnchor is pretty straightforward, but the parameter used in the initializer requires a little explanation. The transform parameter, passed as a simd_float4x4, is a matrix with 4 rows and 4 columns that represents the ARAnchor’s rotation, translation, and scale relative to the world. The good news is that ARHitTestResult includes a worldTransform property that perfectly matches our needs; we just need to pass it in.
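A short sketch of that step, picking the nearest hit-test result (the helper name is mine, for illustration):

```swift
import ARKit

extension ViewController {
    /// Creates an anchor at the nearest feature-point hit, if there is one.
    func anchor(from results: [ARHitTestResult]) -> ARAnchor? {
        // Results are sorted nearest-first, so the first element is the one we want.
        // Its worldTransform already encodes rotation, translation, and scale.
        guard let nearest = results.first else { return nil }
        return ARAnchor(transform: nearest.worldTransform)
    }
}
```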
Last steps ✍️
Calling add(anchor:) on the session property of the ARSCNView will add the previously created anchor on the next frame update. As a result, the view asks its delegate for a node to attach and then notifies us via renderer(_:didAdd:for:) (the ARSCNViewDelegate method that was briefly discussed in the previous blog post).
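Putting the pieces together, the whole touch handler can be sketched like this (again assuming the `sceneView` property from the initial setup):

```swift
import ARKit
import UIKit

extension ViewController {
    func handleTouch(_ touches: Set<UITouch>) {
        guard let touch = touches.first else { return }
        let location = touch.location(in: sceneView)
        // Hit-test against feature points; results are sorted nearest-first.
        let results = sceneView.hitTest(location, types: .featurePoint)
        guard let nearest = results.first else { return }
        let anchor = ARAnchor(transform: nearest.worldTransform)
        // The session surfaces this anchor to the delegate on the next frame update.
        sceneView.session.add(anchor: anchor)
    }
}
```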
To attach our own geometry, we implement renderer(_:nodeFor:), the ARSCNViewDelegate method that asks us for the SCNNode to associate with the newly added anchor, and that’s exactly what we are going to do now. There are multiple types of geometry we can use; you can learn more about them here. For now, let’s say we want to add a box to the AR world, so we need to properly implement the delegate method.
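A minimal implementation could look like this (the box dimensions and red material are illustrative values, not from the original project):

```swift
import ARKit
import SceneKit

extension ViewController: ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        // A small 10 cm cube; SceneKit dimensions are in meters.
        let box = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0.0)
        // Adjust the visual parameters, e.g. a plain diffuse color.
        box.firstMaterial?.diffuse.contents = UIColor.red
        // Wrap the geometry in a node and return it as the anchor's node.
        return SCNNode(geometry: box)
    }
}
```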
We create a node initialized with the SCNBox geometry, adjust its visual parameters, and simply return it as the desired node.
Compiling and running the project on a real device should give users the ability to place a box in the AR world with a single tap on the screen. If you want to try out new geometries or adjust some of the materials’ parameters, this is a good place to start.
Keep in mind ARKit’s restrictions, such as good lighting and sufficient contrast on the surfaces being scanned with your device; they directly affect how well your application works. You can find the whole project here; feel free to use it as a reference in case of any problems. You’ll find more cool stuff there: how to add a rotation animation to the geometry, how to attach custom graphics to the sides of the geometries, and more. Enjoy!