IMVU Mobile 3D Camera System - Interaction Design

IMVU was released on mobile platforms in 2015. In its first release we had a very basic camera system in the 3D chat environment: the only controls were orbiting the camera around your avatar and setting the zoom level. I was tasked with designing how the user’s camera would navigate the 3D space and with finding a new set of gestures to control it. We also wanted to make some changes to our node-based movement system.

[Image: CameraDisplay.png]

[Image: CameraDisplay4.png]

Research

The first step I took was working with our User Research team to find the initial pain points users experienced with our extremely limited navigation control. I knew this step would have to be repeated later as we came up with something more complex, but I wanted a strong foundation to start the project on, solving problems I knew for certain existed.

The second step was checking out the competition. I looked at other apps that rely on a free-moving camera in a 3D environment, games like The Sims, SimCity, and Avakin Life. Navigation apps like Google Maps were also helpful references.

Challenges

The number of obvious gestures available to mobile users is limited, and that becomes glaringly apparent when trying to create a three-dimensional navigation system like this. For example, I decided we would avoid any three-finger gestures, even though they were available to us, due to their lack of discoverability.

The node system we use also presented its own challenges, since the player’s avatar ‘warps’ from place to place instead of walking; this meant the camera had to come along for the ride. There are also cases where the avatar is moving around, perhaps walking or riding in a vehicle, and the user needs control over whether the camera follows the moving avatar or roams freely.
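To make that follow-versus-free-roam split concrete, here is a minimal sketch of a camera controller that either tracks the avatar (snapping along with node warps) or ignores it entirely. All of the names here (ChatCamera, CameraMode, and so on) are hypothetical illustrations, not IMVU’s actual code:

    from dataclasses import dataclass
    from enum import Enum, auto

    class CameraMode(Enum):
        FOLLOW = auto()  # camera tracks the avatar as it moves or warps
        FREE = auto()    # camera roams independently of the avatar

    @dataclass
    class Vec3:
        x: float = 0.0
        y: float = 0.0
        z: float = 0.0

        def lerp(self, other: "Vec3", t: float) -> "Vec3":
            # Linear interpolation between two points.
            return Vec3(self.x + (other.x - self.x) * t,
                        self.y + (other.y - self.y) * t,
                        self.z + (other.z - self.z) * t)

    class ChatCamera:
        def __init__(self) -> None:
            self.mode = CameraMode.FOLLOW
            self.focus = Vec3()  # the point the camera orbits around

        def on_avatar_warp(self, avatar_pos: Vec3) -> None:
            # Node movement teleports the avatar, so in FOLLOW mode the
            # camera's focus point snaps along with it.
            if self.mode == CameraMode.FOLLOW:
                self.focus = avatar_pos

        def update(self, avatar_pos: Vec3, dt: float) -> None:
            # While the avatar walks or rides a vehicle, FOLLOW mode eases
            # the focus toward it each frame; FREE mode leaves it alone.
            if self.mode == CameraMode.FOLLOW:
                self.focus = self.focus.lerp(avatar_pos, min(1.0, 5.0 * dt))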

The node graphic itself also needed treatment, because large chat rooms present a virtual sea of nodes to choose from when moving the avatar around. Too many nodes obstruct the view of the chat room and make it difficult to select the one you actually want.

Finding Solutions

After looking at similar apps, I put together a set of gestures and some ideas for how they would control the camera, and held regular meetings with engineers to test them out. Our goal at that time was to get a system in place that would be appropriate for user testing; we could then use that data to iterate on our control scheme until we had something our users enjoyed and found intuitive.
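The final gesture set isn’t documented in this write-up; purely for illustration, here is one plausible mapping in the spirit of the apps surveyed above (one-finger drag to orbit, two-finger drag to pan, pinch to zoom), with three-finger gestures deliberately unmapped:

    from enum import Enum, auto

    class CameraAction(Enum):
        ORBIT = auto()  # rotate around the focus point
        PAN = auto()    # translate the focus point
        ZOOM = auto()   # change distance to the focus point
        NONE = auto()

    def classify_gesture(touch_count: int, is_pinch: bool) -> CameraAction:
        # Hypothetical mapping; three-finger gestures are left unmapped
        # because of their poor discoverability.
        if touch_count == 1:
            return CameraAction.ORBIT
        if touch_count == 2:
            return CameraAction.ZOOM if is_pinch else CameraAction.PAN
        return CameraAction.NONE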

We also began brainstorming ways to make our node system a little less intrusive. We decided to show only nodes within a certain distance of the camera, and to fade out nodes that were further away. We also showed nodes only while the camera was moving, and for a few seconds after it stopped. This way, in a large environment, users could first navigate to an area of interest with the camera alone, find the appropriate node, and interact with it to place their avatar there. Nodes would also show a preview of where the avatar would end up and, where applicable, what it would be doing once there (some nodes begin actions like dancing, DJing, or sitting).
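Those visibility rules reduce to a simple per-node opacity function. In the sketch below the distance cutoffs and linger time are made-up values; only the shape of the logic reflects the behavior described:

    def node_alpha(node_dist: float,
                   seconds_since_camera_moved: float,
                   near: float = 8.0,    # fully opaque inside this range (assumed)
                   far: float = 20.0,    # fully hidden beyond this range (assumed)
                   linger: float = 3.0) -> float:  # post-movement grace period (assumed)
        """Return a 0..1 opacity for a movement node."""
        # Hide every node once the camera has been still long enough.
        if seconds_since_camera_moved > linger:
            return 0.0
        # Nodes close to the camera are fully visible.
        if node_dist <= near:
            return 1.0
        # Nodes past the far cutoff are hidden entirely.
        if node_dist >= far:
            return 0.0
        # In between, fade out linearly with distance.
        return 1.0 - (node_dist - near) / (far - near)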

Another issue was that IMVU spaces don’t make use of colliders, leaving the camera no reference for where the floors or ceilings in an environment are. This made zooming in and out mostly arbitrary, rather than based on the camera’s distance to the ground.
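One common fallback in this situation, sketched here only as a general idea, is to treat zoom as a clamped distance from the camera’s focus point rather than a height above a floor. The radius bounds below are invented for illustration:

    def apply_pinch_zoom(current_radius: float,
                         pinch_scale: float,
                         min_radius: float = 1.5,    # assumed closest allowed zoom
                         max_radius: float = 40.0) -> float:  # assumed farthest zoom
        """Scale the camera's distance to its focus point from a pinch gesture.

        pinch_scale > 1.0 means the fingers spread apart (zoom in);
        pinch_scale < 1.0 means they pinched together (zoom out).
        The bounds are arbitrary constants because the scene offers no
        floor or ceiling geometry to clamp against.
        """
        new_radius = current_radius / pinch_scale
        return max(min_radius, min(max_radius, new_radius))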

More Research

With a simple set of gestures assembled in a separate testing-only build of the app, we began user testing. I worked with the user testing team to put together a set of tasks and follow-up questions that would help gauge the usefulness of the gestures and camera system. We recorded videos of participants using the build while thinking aloud, and had them answer questions at the end of each session. This gave us the data we needed to build a grading system for each iteration of gestures and navigation methods.

Current Implementation

The resulting camera system is something users are able to operate without frustration. Completing this project taught me a lot about how camera systems work in games, and it helped solidify my understanding of writing user tasks and extracting useful data from user tests.