Discussion of heads up displays
- 1 Panel on Heads Up Displays
- 2 Overview
- 3 The Field of AR Itself
- 4 Determining Use-Value in Wearable Displays
- 5 Types of AR
- 6 Types of Notifications
- 7 User Research
- 8 Input other than gestures is not discussed
- 9 Fashionable, Feasible Prosthetics and Social Status
- 10 Trust and Privacy
- 11 Knowing what the user needs
- 12 Minimum Viable Product Features
- 13 A Gradual Experience
- 14 Concept Models
- 15 Conclusions
- 16 References
- 17 Further Reading
Panel on Heads Up Displays
- Sony Ericsson
- August 2011
- Lund, Sweden
The title of the seminar was "Interactions and Interfaces in Augmented Reality". The audience comprised people with product management, technical, and user interaction backgrounds.
There were three speakers and a panel on wearables afterwards.
- Alex Olwal, MIT
- Charlotte Magnusson, IKDC, "Augmented Reality 'on the go' - the importance of the non-visual modalities"
- Amber Case, GeoLoqi
The Field of AR Itself
One of the design engineers in the audience spoke up. "This field suffers from doing cool things - maybe this gets in the way of something being usable."
To which I thought, "the most obscure thing ends up being the most basic thing - revolution comes from the unexpected". The Homebrew Computer Club, for instance: the ugliest, most unsuspecting thing ends up being the most disruptive.
How do we do the appropriate user studies when we don't have these glasses at the moment?
Determining Use-Value in Wearable Displays
To which I said - in order to determine how to make something useful instead of just experimental, we need to identify the need of the user for these glasses, or the main problems that are solved by wearing the glasses. What is the need behind this?
What information actually needs to be attached to one's eyes? When do glasses actually need to be used?
The host, Pär-Anders Aronsson, gave the example of sunglasses. "We all know why we wear sunglasses," he said. The slightly better vision you get makes it all worth it. You wear sunglasses to shade your eyes from the sun. It solves a problem.
Will people wear these glasses? What will be the killer app for them? This platform needs a killer app as well as a design that increases social status rather than detracts from it. Either that, or it provides some feature so useful (the killer app case) that the idea of ugliness or status degradation flies out the window, because the user becomes inherently superhuman while using it. Similar to the early cell phone and its users: though the device was ugly, it provided an efficient and practical wormhole for communicating with someone far away, anywhere in the world.
Types of AR
Using audio and moving the phone around to filter what is in one's area.
There were three buttons - near, middle, and far. This helped to filter the information.
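A minimal sketch of that three-button filter, in Python. The distance bands and the point-of-interest data here are my own assumptions for illustration; the talk did not specify them.

```python
# Sketch of the three-button distance filter described above.
# The band boundaries (in meters) are assumptions, not from the talk.
BANDS = {"near": (0, 50), "middle": (50, 500), "far": (500, float("inf"))}

def filter_points(points, band):
    """Keep only points of interest whose distance falls in the chosen band.

    points: list of (name, distance_in_meters) tuples.
    band:   "near", "middle", or "far" -- one per hardware button.
    """
    lo, hi = BANDS[band]
    return [name for name, dist in points if lo <= dist < hi]

pois = [("cafe", 30), ("museum", 200), ("castle", 2000)]
print(filter_points(pois, "near"))    # ['cafe']
print(filter_points(pois, "middle"))  # ['museum']
```

Pressing a physical button simply swaps the active band, so the user never has to look at a settings screen.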
Haptic compass belt to orient you to a place
Hitoshi Kirokawa's see-through display - Alex Olwal from the MIT Media Lab gave this example.
Voice canceling; machine vision and visual canceling with a video camera input, a processing loop, and ocular input.
Applying a filter for visual search in a city. In Korea, for instance, being able to search for a word or title and have your heads-up display highlight those and dim the others, drawing your attention to them with filtered or diminished reality, much like Steve Mann did in the grocery store.
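The dimming step of that diminished-reality search can be sketched very simply. The detected sign labels would come from an OCR or machine-vision stage not shown here; the data and opacity values below are made-up assumptions.

```python
# Toy sketch of "diminished reality" text search: keep query matches at
# full opacity and dim everything else, drawing attention to the hits.
# Real detections would come from an OCR/vision stage, not a list.
def diminish(detections, query):
    """Return (label, opacity) pairs: matches stay fully visible,
    non-matches are rendered dimmed."""
    q = query.lower()
    return [(label, 1.0 if q in label.lower() else 0.2)
            for label in detections]

signs = ["Coffee House", "Book Store", "Coffee To Go"]
print(diminish(signs, "coffee"))
```

The renderer would then draw each sign's overlay at the returned opacity, so the world itself becomes the search-results page.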
Instant replay - for instance if I am out and about and I see a gorgeous lady I would like to replay it, said an audience member.
But there would be privacy considerations around that, said another in the room.
To which I thought - great business opportunity! Sell clothing that displaces the light so one's face or body can't be recorded! Many posts have been written about this lately. "How to hide from the machines" and so on.
Types of Notifications
Why overdo AR imaging? Humans have a fantastic image processor in their heads that converts minimal images into much higher perceived resolution. It is the reason we can enjoy playing 2D games or chatting online. Though the interaction and visuals may seem minimal and low-resolution, our brains make up the rest.
Another pointed out that AR might be more useful in terms of something that is very simple rather than overkill. "Instead of my HUD giving me lots and lots of data just give me vectors - the vectors of direction. That's all I need for navigation".
To which I stated, "The goal of information design, especially functional, everyday information design, should be to render as little as possible to get the information across that fulfills a need." The majority of current AR is obnoxious.
A small arrow or a thin line one can follow that shows direction.
In Paintwork, science fiction author Tim Maughan wrote about a HUD integrated with Google Maps leading the protagonist to a Starbucks. Instead of a thin line, the trail took the form of virtual, floating coffee beans leading to the shop.
Maybe it is a discreet light that sits on the rim of the glasses that blinks a light for direction.
In combination with tangibles - things you can play with on the go without having to look at them. The body can store these interactions as muscle memory.
Example: a bicycle that gives directions through haptic buzzers embedded in or attached to the handlebars.
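The "just give me vectors" idea above boils down to collapsing a whole map into one signal. A minimal sketch, assuming the device already knows its compass heading and the bearing to the target (the tolerance value is my own illustrative choice):

```python
# Minimal navigation cue: reduce navigation to a single left/right/ahead
# signal, suitable for a blinking rim light or handlebar buzzers.
def direction_cue(heading_deg, bearing_to_target_deg, tolerance=15):
    """Compare the user's heading with the bearing to the target and
    return which side to signal: 'left', 'right', or 'ahead'."""
    # Normalize the angular difference into the range (-180, 180].
    diff = (bearing_to_target_deg - heading_deg + 180) % 360 - 180
    if abs(diff) <= tolerance:
        return "ahead"
    return "right" if diff > 0 else "left"

print(direction_cue(0, 90))   # facing north, target due east -> 'right'
print(direction_cue(0, 300))  # target 60 degrees to the left -> 'left'
```

On the bicycle, 'left' and 'right' would simply fire the corresponding handlebar buzzer; on glasses, they would pick which rim light blinks.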
I then pointed out that there is probably a very simple way to know which entry-level real-life situations would be helped by HUDs. Watch people using their phones and pay close attention to the situations in which they are completely stuck in the phone, or can't do a task because they are trying to look at the phone and the real world at the same time while moving. Those situations present exactly the problems that can be solved by moving those use cases onto the heads-up display.
Input other than gestures is not discussed
"As a species we are built to want to manipulate things with our hands", says Charlotte M.
Steve Mann used a Twiddler one-handed key-chording device along with his very advanced HUD. Thad Starner used the same Twiddler with his own custom HUD. The Twiddler is a standard among many wearable users, but every such device has a different learning curve. There needs to be a standardized way of entering information and data while using a wearable computer; even a simple one-handed pointer or selector would make sense. Charlotte M. talked about how she gave kids a three-button handheld device allowing them to focus on the very near, the near, and the far. For devices like these, tangible, solid buttons make sense, as they allow users to operate the device without looking at it.
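Chording, the Twiddler's core idea, maps a set of simultaneously pressed keys to one character. A minimal sketch of the concept; the chord table below is a made-up illustration, not the real Twiddler layout.

```python
# Sketch of one-handed chord entry in the spirit of the Twiddler.
# The chord-to-character table is a hypothetical example layout.
CHORDS = {
    frozenset(["index"]): "a",
    frozenset(["middle"]): "e",
    frozenset(["index", "middle"]): "t",
    frozenset(["index", "ring"]): " ",
}

def decode(chord_sequence):
    """Translate a sequence of simultaneous key presses into text.

    chord_sequence: list of key-name lists, one list per chord.
    Unknown chords decode to '?'.
    """
    return "".join(CHORDS.get(frozenset(keys), "?") for keys in chord_sequence)

print(decode([["index"], ["index", "middle"]]))  # "at"
```

The learning-curve problem is visible even in this toy: the mapping is arbitrary, so fluency comes only from muscle memory, which is exactly why a shared standard would matter.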
There are many persistent architectures that are historical mistakes. If we can learn from the problems of the past, then we will begin to have interfaces that are easier to use, instead of 40-year-old traditions that persist in the face of innovation, like the mouse. The mouse and keyboard were temporary fixes that became standards, just as the web design industry adopted Flash and Dreamweaver in 2001 and hasn't changed since. What I think we've learned here is that nothing is more permanent than a temporary fix.
Fashionable, Feasible Prosthetics and Social Status
For wide adoption, a device needs to increase one's social viability rather than detract from it: not interfering with social norms, not detracting from one's sociability.
Like a Mercedes-Benz or a BMW adds to your social status, while an old Geo Metro may detract from it, although the Metro is a far more affordable, gas-efficient, and maneuverable vehicle.
Trust and Privacy
Would you trust someone who was wearing these glasses?
Maybe they shouldn't do too much.
Would it influence relationships?
"Right now I am in high tech mode, but when I am with you -- I take them off."
Or would it harm them, having the capabilities of one's smartphone even closer to the eyes?
Steve Mann wrote about EyeTap glasses that would turn dark when you were computing with them and turn transparent again when you were ready to socially engage. This spares your friend the disorienting and bizarre experience of watching your eyes dart about an invisible screen.
Knowing what the user needs
If we know exactly what is important to the user, we will know which problems to solve. We will not be able to solve all of the problems, but solving just one or two is enough.
Minimum Viable Product Features
The first iPhone was very simple. While it didn't have GPS or 3G, it made it easy to do some things better than anything before it. It was an incremental progression over previous methods of interacting with data.
A Gradual Experience
Every user needs to have an experience that grows over time. They can't just start out with all of the complexity that a system provides.
A user is very trainable: over time, they know exactly what they are going to do with a device the moment they pick it up or put it down. When you watch someone with a smartphone, they have an idea of what they want to do with it before they even touch it.
If we aim low, we may have more chances of success.
At the very least we should design a HUD that doesn't cause nausea or deliver too much information. Deciding what to focus on, the one thing to focus on, can be a key.
What is the feature you would like to have in glasses that would motivate using glasses instead of a mobile phone?
If my car breaks down, could I become my own mechanic? That would be disruptive, taking mechanics out of the loop. Order the parts you need from Amazon from the device itself. Expert systems overlaid on the eyes could highlight the areas of work on the vehicle and teach the user how to fix minor problems.
Concept Models
The best way to get a product point across is a design model where someone really puts thought into it. For some odd reason, designers aren't expected to be able to build or wire up objects, although the best of them can. MIT's Media Lab teaches both design and development, inseparable from each other.
And if one cannot animate in 3D, carving an object or building it from paper and Photoshopping it can get the point across too. As long as the essence of the idea is communicated visually, what it takes to get there doesn't matter one bit.
Conclusions
There are three main issues to address with wearables.
Look and Feel
The look and feel of a device is extremely important, as poorly designed yet workable HUDs will decrease a user's social status, thus preventing wide adoption.
Transparency and Redundancy
Steve Mann's successful HUD was a transparent display with input into one eye by laser. Currently there are wearables that obscure the user's vision in both eyes. Not only is this dangerous in terms of not having a back-up real-world sensor available to the user at all times (the user's own calibrated eye), but it increases the chances of nausea, and the entire contraption suffers from lag if the graphics are not rendered in real time or if there is a network error.
Almost all AR is designed to "pop" or impress. Most of it is a one-trick pony that unnecessarily overstimulates the user's brain. The example I always give is the early web and the giant rush of companies and startups to make an index or navigable way to "surf" the web. Many tried visual views of the different "sections" of the web, and some even tried to render a 3D view that users could explore. However, users didn't want to "explore", especially over a 14.4K connection on a 233 MHz machine. E-mail was sufficient for receiving hyperlinks to interesting things on the web. What people needed was an architecture optimized for speed. Google's no-frills, speedy interface provided that solution.
AR currently suffers from a bout of coolness and has not yet reached the trough of disillusionment. It is my hope that the future of AR will see the design of minimalistic interfaces that actually solve real-world problems. There is a long way to go to clear away the junk that has piled up around the industry. Perhaps when the field matures it will no longer be called AR.