My work with the REAP Waterloo Felt Lab gave me the opportunity to try out a variety of emerging technologies, including in the virtual reality (VR) and augmented reality (AR) spaces. I worked on a series of research sprints as part of a larger AR project.
Prior to starting this project, I had become familiar with mobile AR apps such as Layar and Aurasma. I noticed one hiccup in the user experience of these applications, however. Within these apps, the user must point their camera at a marker (such as a QR code or image) that triggers the augmented reality content (such as a 3D model or video).
The user must then continue to keep this marker in view of the camera, or else the AR content will disappear. This restricts the user’s freedom of movement, which hinders the experience of using these apps. After all, what’s the point of having 3D content if you can’t view it from multiple angles?
I began to think of ways that this problem could be resolved. One of the core ideas that REAP focuses on is platform entrepreneurship, which consists of building on top of existing platforms to create solutions for design problems, rather than starting from scratch. This inspired me to think about what technologies I could use to address this issue.
Based on my previous experiences in the Felt Lab, I realized we had two pieces of tech available that I might be able to combine: the Meta 1 headset and the Structure Sensor. The Meta 1 is one of the first AR-specific head-mounted displays available on the market. The Structure Sensor is a device that attaches to an iPad, allowing the user to “scan” objects or even entire rooms to make 3D models out of them.
This gave me an idea for how I might be able to create a location-specific AR system that wouldn’t rely on markers at all. I began by using the Structure Sensor to create 3D scans of the Felt Lab itself.
Then, I imported these 3D models into Unity and combined them into one complete, albeit rough, model of the lab.
I then imported the Meta SDK into the Unity project, and soon I had a basic application up and running on the Meta headset. To refine it, I made the model of the lab itself invisible and simply used it as a reference point when placing the AR content. At this point I was imagining a basic app to introduce people to the different pieces of tech we had in the lab, so I created a set of labels to point them out to the user.
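As a rough illustration of that step, here is a minimal Unity-style sketch of how an “invisible reference model” can work. The script and all of its names (LabReferenceModel, labelPrefab, the anchor transforms) are hypothetical, illustrative only, and not the actual code from the project:

```csharp
using UnityEngine;

// Hypothetical sketch: hide the scanned lab mesh so it acts purely as a
// spatial reference, then spawn simple text labels at hand-placed anchor
// points that line up with the real equipment in the room.
public class LabReferenceModel : MonoBehaviour
{
    public GameObject labelPrefab;   // e.g. an object with a TextMesh component
    public Transform[] labelAnchors; // empty GameObjects positioned on the scanned model
    public string[] labelTexts;      // one caption per anchor ("Structure Sensor", etc.)

    void Start()
    {
        // Make the scanned lab model invisible but keep its transforms,
        // so AR content can still be positioned relative to it.
        foreach (var meshRenderer in GetComponentsInChildren<MeshRenderer>())
        {
            meshRenderer.enabled = false;
        }

        // Place a label at each anchor point.
        for (int i = 0; i < labelAnchors.Length && i < labelTexts.Length; i++)
        {
            var label = Instantiate(labelPrefab, labelAnchors[i].position,
                                    labelAnchors[i].rotation, labelAnchors[i]);
            var text = label.GetComponentInChildren<TextMesh>();
            if (text != null)
            {
                text.text = labelTexts[i];
            }
        }
    }
}
```

The key idea is that the scanned geometry never needs to be rendered; its transforms simply give you real-world coordinates to hang content on.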
The first part of the project was a success! I was able to get an AR system up and running that was tied to a specific place. I knew from previous experience, however, that the Meta was a bit finicky, especially when it came to interaction (it being such a new piece of tech). I needed to know whether I should try to make my existing Meta app more interactive, or pivot in a different direction.
For the second part of the project, I decided to create a basic Meta app to test out various modes of interaction and see which would be best to use at this stage. Since my Meta app was meant to provide contextual information about the various pieces of technology in the lab, I decided to test three ways of providing information to the user: a text label activated by a virtual button, a video, and an interactive 3D model.
Then, I set up a simple usability test. I wrote out a script so I would know what to say and in what order to give the instructions. I prepared a short survey using Google Forms that the participants would complete at the end of the test. During the test, I also took observational notes (in doing this, I learned how difficult it is to conduct and record a user test at the same time!).
During the test, I ran into a variety of things I wasn’t expecting (which I’ve come to learn almost always happens in user testing). First off, many of the participants had difficulty activating or interacting with the content using hand gestures. For most of the participants, it was their first time using an augmented reality headset. As a result, they were unfamiliar with how exactly to interact with this product.
Even actions I assumed would be simple, like pushing a virtual button, turned out to be a challenge for the users. I realized that this was because of a discrepancy between their depth perception in reality and in the virtual space. While the Meta tries to project virtual objects as if they are actually in the world around you, the image is always layered on top of the user’s vision. This means that even if the user stretches out their arm, their hand will never appear to be on top of the button. Another issue was that, without the tactile feedback that physical objects provide, it was difficult for users to tell exactly where a virtual object was (how far away it was from them, etc.).
After completing six user tests, I analyzed my notes and the results from the survey. This is where I found even more surprises. Initially, I had anticipated that the visuals in the Meta would be too blurry, but most users responded that they could understand the visuals and text (either fully or to some extent).
Another finding that surprised me was which of the three options users found easiest to use and most informative. My assumption was that the text button would be easiest to use (but it was far from it, as explained above). I also assumed that the 3D virtual model would be best for learning, since the user would be able to interact with the item directly and view it from any angle. In both cases, however, over half of the participants preferred the video. Ultimately this makes sense: it was the simplest to interact with (a single gesture over a larger area than even the button) and it provided a lot of information in a dynamic way.
By completing these user tests, I was able to check the assumptions I had about the Meta and how users (especially first-time users) would interact with it. Based on the results of my testing, I decided to pivot and explore alternate platforms for my location-based AR system concept.
Overall, conducting this research and testing gave me an opportunity to delve deeper into the field of augmented reality and stay on top of new trends in emerging tech. I learned how to use Unity in conjunction with the Meta 1 SDK to create augmented reality apps. Having this level of control allowed me to design both the prototype and the test exactly how I wanted, and it also taught me about prototyping in new kinds of media.
In doing this project, I learned a lot about user testing new products, especially with users who have no prior experience with this kind of technology. When user testing websites and mobile apps, participants usually already know how to operate the devices and have a knowledge baseline that helps guide their actions. With a totally new device, participants were sometimes unsure of how to carry out tasks and needed more guidance. This taught me the value of onboarding, especially when it comes to tasks and tools that users may be unfamiliar with.
While the tests I conducted for this project were specifically for AR apps, the lessons I’ve learned can easily be applied to other products and have helped guide my usability testing practices in future projects.