Role: UI/UX, Branding and Video production
Brief: Design a product, service or solution that demonstrates the value and differentiation of Mixed Reality
12-week group project with Irene Alvarado, Mackenzie Cherban, Julia Petrich, and Meric Dagli
Sponsored by Microsoft Research
IxD Studio, Carnegie Mellon University, Spring 2017
As the population ages, cognitive decline is and will continue to be a pressing issue. Existing tech solutions for older adults mostly focus on physiological and safety needs.
The way we preserve and revisit memory has shifted from analog to digital. Moment supports conversation and memory recall by surfacing relevant multimedia, from 2D to 3D, in real time. It works like a voice assistant, but replaces the voice response with visual results.
In Moment, it’s not about the user conversing with the system. Instead, it relies on two users having a natural conversation. As the system learns about the users, it facilitates the conversation by pulling memory triggers (media) into the feed.
To designate any flat surface as a workspace, users can either perform the "bloom" gesture or long-press the mic button.
To scroll through the feed, users can gaze at the arrow or swipe a finger left to right on the remote.
To place media into the shared space, users can gaze at an item, then tap, hold, and drag it into the space with a gesture or the remote.
To scale or rotate an item, users can hold the item and move their hands vertically or horizontally.
To delete an item, users can gaze at it and swipe it in the direction away from the other person.
At the end of a conversation, users can save the items along with the conversation as a memory packet by pressing the save button on the controller.
To foster user autonomy and control over the system, we built separate private and shared spaces into Moment. They put users in the driver’s seat, deciding what others can see, while allowing for collaboration in the shared space.
To introduce Moment early, we considered how users would onboard to the system using today’s technology, so that Moment can begin with what is familiar. If users start using it now, it can help them recall their past later.
To be accessible to older adults, we designed a physical remote because we believe "there is security in the tangible". The grooves on the remote let users feel the direction they are tracking on the pad, while the microphone button starts a conversation and the box button saves it. To see how a physical remote would feel, we connected a tactile controller to the HoloLens via Unity.
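As an illustration, the remote-to-Unity bridge could be sketched as follows. The serial port, baud rate, and message names are assumptions for illustration, not our exact wiring:

```csharp
// Hypothetical sketch: mapping the tactile remote's inputs inside Unity.
// Port name, baud rate, and message strings are assumed, not the real protocol.
using System.IO.Ports;
using UnityEngine;

public class RemoteInput : MonoBehaviour
{
    SerialPort port = new SerialPort("COM3", 9600); // assumed port/baud

    void Start()
    {
        port.ReadTimeout = 10; // avoid blocking the frame on a partial line
        port.Open();
    }

    void Update()
    {
        if (port.BytesToRead == 0) return;
        string msg;
        try { msg = port.ReadLine().Trim(); }
        catch (System.TimeoutException) { return; }

        switch (msg)
        {
            case "MIC":     StartConversation(); break; // mic button = start
            case "BOX":     SaveMemoryPacket();  break; // box button = save
            case "SWIPE_L": ScrollFeed(-1);      break; // grooved pad: left
            case "SWIPE_R": ScrollFeed(+1);      break; // grooved pad: right
        }
    }

    void StartConversation() { /* begin listening and populate the feed */ }
    void SaveMemoryPacket()  { /* bundle placed items + conversation */ }
    void ScrollFeed(int dir) { /* advance the media feed by dir */ }
}
```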
We also explored how to maintain visual accessibility in Mixed Reality through tests on the HoloLens, considering color, typography, form, and shading to best represent our visual assets.
We believe aging is a rich and important application area for Mixed Reality. As technology continues to evolve, we should design the world to be inclusive and accessible to this growing segment of the population.
How might mixed reality be leveraged to help people age well?
With this question in mind, we collected our ideas of what a human needs to age well into five areas of intervention. Then we mapped out various platforms, stakeholders, and opportunities for mixed reality in this space.
We kicked off our research with qualitative methods. We sent postcards asking Baby Boomers about their relationship with technology, and interviewed aging experts, caregivers, and adults ranging from Boomers to seniors to understand how we should design for aging.
As a result of our exploratory research, we identified five gaps as opportunities for mixed reality (MR) to assist people as they age:
We recognized three challenges surrounding our problem space in relation to mixed reality.
Can Mixed Reality be more than an HMD?
Current head-mounted display (HMD) technology is heavy and, in some ways, can even temporarily disable the wearer.
How can we reframe aging to be positive?
There are also many negative biases surrounding aging. These biases are difficult to overcome as we try to reframe aging as something one might look forward to.
How can we design technology that fails gracefully?
Boomers view technology as a tool. When something goes wrong, frustration ensues and trust in the technology diminishes.
We synthesized our findings and identified design implications to guide our concept development.
We structured our brainstorm around the following opportunity areas we uncovered from exploratory research:
From our initial brainstorm, we further developed the strongest concepts into a series of storyboards that explored different combinations of our design implications. We speed-dated the storyboards with boomers for feedback.
After speed dating, we prototyped and bodystormed to explore how two of the most prominent concepts might play out. Our parallel concepts were:
Based on what we learned, we decided on Recall + Relate as our final direction. The concept focuses on helping users connect with others through conversation and memory triggers, which can have a positive impact on cognitive decline over time. To show how the concept can be introduced early and grow with its users, we outlined three fuzzy stages of an evolving system: one that provides clear value today and transforms over time as the system becomes more critical and may need to take control more often.
From our initial bodystorming on Recall + Relate, we decided to refine the bodystorming and test it with our potential users, Baby Boomers. We conducted a workshop with Boomers, guiding them through activities around memories of places they were familiar with.
In the conversation table activity, we paired people up to talk about their memories of places, using a table set up with dividers and a ceiling-mounted projector. One of our team members acted as the computer, listening to the conversation and pulling up relevant media from Google. A more detailed account of the process can be found here.
An interesting takeaway was that some participants actively used the media on the table while others didn’t, yet all of them said they found the media useful in the end. These activities let us see how varying levels of immersive media affected our participants’ ability to remember more details about their chosen places.
With the basic framework we tested in the workshop in mind, we began developing the user flow and interactions for Moment. We first developed a wireframe showing how users interact with the screen interface from onboarding to the end of their conversation.
Once we had a basic user flow for the screen interface, we prototyped the key interactions to see how users would handle media on the table after pulling it out of the screen interface. We used Cinema 4D to design the pinch, rotate, and scale interactions.
Because we wanted to create a close-to-native experience, we used Unity and HoloToolkit to simulate our key interactions. Using Vuforia image targets, we could test how an object appears and disappears depending on which image target we direct our gaze toward. We also implemented a voice command to switch modes, and scaling and rotating an object using just hand movements.
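A minimal sketch of that setup, assuming Unity 5.6-era HoloLens APIs (KeywordRecognizer for the voice command, GestureRecognizer for hand manipulation); the specific phrase and the scale/rotate mapping are illustrative, not our exact tuning:

```csharp
// Sketch of voice-driven mode switching plus hand-driven scale/rotate,
// attached to a holographic media item. Vuforia target handling omitted.
using UnityEngine;
using UnityEngine.Windows.Speech;   // KeywordRecognizer
using UnityEngine.VR.WSA.Input;     // GestureRecognizer (Unity 5.x namespace)

public class MomentInteractions : MonoBehaviour
{
    KeywordRecognizer keywords;
    GestureRecognizer gestures;
    bool sharedMode;

    void Start()
    {
        // Voice command to switch between private and shared modes
        // ("switch mode" is an assumed phrase).
        keywords = new KeywordRecognizer(new[] { "switch mode" });
        keywords.OnPhraseRecognized += args => sharedMode = !sharedMode;
        keywords.Start();

        // Hand manipulation: cumulative hand displacement drives the item.
        gestures = new GestureRecognizer();
        gestures.SetRecognizableGestures(GestureSettings.ManipulationTranslate);
        gestures.ManipulationUpdatedEvent += (source, delta, headRay) =>
        {
            // Assumed mapping: vertical movement scales, horizontal rotates.
            transform.localScale *= 1f + delta.y;
            transform.Rotate(Vector3.up, delta.x * 180f);
        };
        gestures.StartCapturingGestures();
    }

    void OnDestroy()
    {
        keywords.Dispose();
        gestures.Dispose();
    }
}
```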