Apple has once again provided us with the tools needed to produce more immersive experiences than ever before with ARKit 2. Using a framework that combines device motion tracking, camera scene capture and advanced scene processing, they’ve laid the foundations for fully-realised experiences with AR. In this post, I’ll walk you through a demo app using ARKit fundamentals to give you an introduction to augmented reality and its potential.
Mobile technology shifts so quickly that sometimes it’s easy for new devices, services and features to feel a little gimmicky.
Augmented reality, however, transforms the way users interact with you, your business and the wider industry. Here are just a handful of the advantages of AR for businesses:
So, which companies are already using it to their advantage?
Some of the most popular and successful uses of augmented reality in recent times include the worldwide phenomenon Pokémon Go and the IKEA Place app.
Pokémon Go is an AR mobile game that simulates the experience of being a real-life Pokémon catcher by giving users the opportunity to catch Pokémon seemingly in the ‘real world’. The AR component builds on the user’s geographical location to make Pokémon appear in front of them.
The IKEA Place app, however, uses the AR component to simulate the spatial distribution of products and furniture before consumers buy, allowing them to accurately measure scale and style before purchasing.
These two apps, whilst functional and promising, merely scratch the surface of what ARKit is capable of going forward.
Let’s create a mini demo app to demonstrate the basics of ARKit’s capabilities.
Please note: In order to develop the demo app and familiarize yourself with the fundamentals of ARKit, you’ll need the following: Xcode 9 or later, and an ARKit-compatible Apple device (A9 processor or later) running iOS 11 or later.
1. Start a new single view app in Xcode by selecting File > New > Project > Single View App. Click ‘Next’:
2. Name your project (e.g. ARKitDemo) and press ‘Next’ to finish creating your project:
3. Allow for camera usage by opening the Info.plist file and adding a new row with the key ‘Privacy - Camera Usage Description’ and a descriptive value like ‘Augmented Reality Purposes’:
4. The SceneKit view is what ARKit uses to render AR content, so you now need to go to Main.storyboard and search for ‘ARKit SceneKit View’ in the Object Library (shift + command + L).
We then need to drag it to the View Controller and add constraints that pin it to the limits of its container:
5. Connect the IBOutlet to the view controller and give it a name. For example, ‘arSceneView’:
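The resulting outlet might look like this (a minimal sketch; the outlet name ‘arSceneView’ is the one suggested above):

```swift
import UIKit
import ARKit

class ViewController: UIViewController {

    // Connected to the ARSCNView dragged into the storyboard
    @IBOutlet weak var arSceneView: ARSCNView!
}
```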
6. Using the viewWillAppear method, we need to initialize the ARWorldTrackingConfiguration required to start a world-tracking session over our scene view.
This class indicates to the ARSession that we want our user to be able to move around the object in 3D space, or perhaps rotate on the spot. If you don’t need to allow for this level of movement, however, you can use the lighter AROrientationTrackingConfiguration class, which tracks the device’s orientation only.
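A sketch of what this step could look like, assuming the outlet is named ‘arSceneView’ as above:

```swift
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    // World tracking follows the device's position and orientation,
    // so the user can walk around virtual content in 3D space.
    let configuration = ARWorldTrackingConfiguration()

    // Start (or resume) the AR session with this configuration.
    arSceneView.session.run(configuration)
}
```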
7. In order to stop processing the image and tracking motion, we simply pause the session on the viewWillDisappear method:
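Again assuming the ‘arSceneView’ outlet, this could look like:

```swift
override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)

    // Pausing stops camera capture and motion processing
    // while the view is off screen, saving battery and CPU.
    arSceneView.session.pause()
}
```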
NOTE: We need to make sure everything is set up correctly before moving on to the next steps. Build and run your project in Xcode, ensuring that the app prompts you to allow camera access and then starts showing what the camera captures in the scene view.
We now need to create the geometry, the skeleton if you will, to start displaying content in the ARKit world. Geometry represents the wireframe of the object which then corresponds to the shape. It could be anything from a simple sphere to a more complex shape like a tube or a plane.
For the purposes of this demo, we’ll be sticking with the former.
1. In our view controller class, we start by creating a function with the following code:
2. We then create a sphere shape with a radius of 0.1 metres (SceneKit units are metres).
3. Create a node that represents the position and coordinates of an object in a 3D space with an attached geometry or visible content (the sphere shape).
A ‘node’ represents an object that you can then add to the scene.
4. We give our node a position using a three-axis vector (x, y, z) of (0, 0, -0.2), which represents its position relative to the camera.
The negative Z component places the node 0.2 metres in front of the camera, since in SceneKit the camera looks down the negative Z axis.
5. We add our node to the rootNode of the scene, which is the key node defining the coordinate system of the real world rendered by SceneKit.
6. Finally, we make a call to our function on the viewDidLoad method:
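Putting steps 1–6 together, a minimal sketch could look like the following (the function name ‘addSphere’ is an assumption, and the ‘arSceneView’ outlet is the one named earlier):

```swift
func addSphere() {
    // 1. A sphere with a 0.1 metre radius (SceneKit units are metres).
    let sphere = SCNSphere(radius: 0.1)

    // Optional: give the sphere a simple material so it stands out.
    sphere.firstMaterial?.diffuse.contents = UIColor.red

    // 2. A node carries the geometry and its position in 3D space.
    let sphereNode = SCNNode(geometry: sphere)

    // 3. Place the node 0.2 metres in front of the camera
    //    (negative Z points away from the camera in SceneKit).
    sphereNode.position = SCNVector3(0, 0, -0.2)

    // 4. Attach the node to the scene's root node so it is rendered.
    arSceneView.scene.rootNode.addChildNode(sphereNode)
}

override func viewDidLoad() {
    super.viewDidLoad()
    addSphere()
}
```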
Build and run the project, and you should see a floating sphere in front of you! That was pretty painless, right?!
You can now go on to add animations, scaling effects, texture, and more!
I hope this quick tutorial encourages you to start exploring the ARKit framework. Dive into the official Apple ARKit documentation for further exploration.
In this series, I’ll be exploring other ARKit fundamentals with examples that you can follow and replicate. Have you used ARKit? Do you have any tips for getting started? Tweet us and we’ll be sure to retweet the responses!
We Are Mobile First is a digital product agency based in Barcelona helping to transform businesses in a mobile-first world. Follow us on Twitter, LinkedIn and Medium to be notified of our future posts and stay up-to-date with our company news.
(Hero image credit: Apple Newsroom)