An ongoing effort to educate and flesh out common ways of describing complex things, in order to better communicate these complexities.
- 1 Fitts's Law
- 2 Minimally Invasive Education
- 3 Ambient Intelligence
- 4 Gulf of Execution
- 5 Human Action Cycle
- 6 OODA Loop
- 7 Think Aloud Protocol
- 8 Persuasive Technology
- 9 Captology
- 10 Information Foraging
- 11 Implicit Data Collection
- 12 Interaction Design (IxD)
- 13 Chunking
- 14 Flow State
- 15 Information Architecture
- 16 Tree Testing
- 17 First Time User Experience
- 18 Out-Of-Box Experience
- 19 Feature Integration Theory
- 20 Attention
- 21 Exploratory Search
- 22 Googlearchy
- 23 Models of collaborative tagging
- 24 Interactive Density
- 25 Baby Duck Syndrome
- 26 Bodystorming
- 27 Consumability
- 28 Experience Design
- 29 Human-Computer Interaction
- 30 Natural Mapping
- 31 Context Sensitive User Interface
- 32 10-foot user interface
- 33 Agile Usability Engineering
- 34 Task-focused interface
- 35 Ontology (information science)
- 36 Organic User Interface
- 37 Persona (marketing)
- 38 Context Awareness
- 39 The Rat Factor
- 40 Berrypicking
- 41 Affordance
Fitts's law (often cited as Fitts' law) is a model of human movement in human-computer interaction and ergonomics which predicts that the time required to rapidly move to a target area is a function of the distance to and the size of the target. Fitts's law is used to model the act of pointing, either by physically touching an object with a hand or finger, or virtually, by pointing to an object on a computer display using a pointing device. It was proposed by Paul Fitts in 1954.
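The relationship above can be sketched numerically. Below is a minimal Python sketch using the Shannon formulation of Fitts's law, T = a + b * log2(D/W + 1); the intercept a and slope b are device-dependent constants normally fit from experimental data, and the default values here are purely illustrative assumptions:

```python
import math

def fitts_time(distance, width, a=0.2, b=0.1):
    """Predicted movement time in seconds using the Shannon
    formulation of Fitts's law: T = a + b * log2(D/W + 1).
    a (intercept) and b (slope) are device-dependent constants
    normally fit from experiment; these defaults are illustrative."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# A small, distant target takes longer to acquire than a large, near one.
assert fitts_time(distance=800, width=20) > fitts_time(distance=100, width=100)
```

This is why large targets close to the pointer (and screen edges, which behave like targets of effectively infinite depth) are fast to hit.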
Minimally Invasive Education (or MIE) is a term used to describe how children learn in unsupervised environments. It was derived from an experiment done by Sugata Mitra while at NIIT in 1999 often called The Hole in the Wall.
Amber's Note: Minimally Invasive Education could also be used to describe an interface which educates the user as they move through it
In computing, ambient intelligence (AmI) refers to electronic environments that are sensitive and responsive to the presence of people. Ambient intelligence is a vision of the future of consumer electronics, telecommunications and computing that was originally developed in the late 1990s for the time frame 2010–2020. In an ambient intelligence world, devices work in concert to support people in carrying out their everyday life activities, tasks and rituals in an easy, natural way, using information and intelligence that is hidden in the network connecting these devices (see Internet of Things). As these devices grow smaller, more connected and more integrated into our environment, the technology disappears into our surroundings until only the user interface remains perceivable by users.
The ambient intelligence paradigm builds upon pervasive computing, ubiquitous computing, profiling practices, and human-centric computer interaction design and is characterized by systems and technologies that are (Zelkha & Epstein 1998; Aarts, Harwig & Schuurmans 2001):
- embedded: many networked devices are integrated into the environment
- context aware: these devices can recognize you and your situational context
- personalized: they can be tailored to your needs
- adaptive: they can change in response to you
- anticipatory: they can anticipate your desires without conscious mediation.
Amber's Note: A smart user interface could also have ambient intelligence, meaning that it would be sensitive and responsive to the presence of one or many people. Microsoft Surface may be such an interface, but also Facebook and Amazon.com, Facebook more so.
Gulf of execution is a term usually used in human-computer interaction to describe the gap between a user's goal for action and the means to execute that goal. One of the primary goals of usability is to reduce this gap by removing roadblocks and steps that cause extra thinking and actions, which distract the user's attention from the intended task, interrupt the flow of his or her work, and decrease the chance of successful completion of the task. Similarly, there is a gulf of evaluation that applies to the gap between an external stimulus and the time a person understands what it means. Both phrases are mentioned in Donald Norman's 1986 book User Centered System Design: New Perspectives on Human-Computer Interaction.
This can be illustrated through the discussion of a VCR problem. Let us imagine that a user would like to record a television show. They see the solution to this problem as simply pressing the Record button. However, in reality, to record a show on a VCR, several actions must be taken:
- Press the record button.
- Specify time of recording, usually involving several steps to change the hour and minute settings.
- Select channel to record on - either by entering the channel's number or selecting it with up/down buttons.
- Save the recording settings, perhaps by pressing an "OK" or "menu" or "enter" button.
The difference between the user's perceived execution actions and the required actions is the gulf of execution.
The human action cycle is a psychological model which describes the steps humans take when they interact with computer systems. The model was proposed by Donald A. Norman, a scholar in the discipline of human-computer interaction. The model can be used to help evaluate the efficiency of a user interface (UI). Understanding the cycle requires an understanding of the user interface design principles of affordance, feedback, visibility and tolerance. The human action cycle describes how humans may form goals and then develop a series of steps required to achieve that goal, using the computer system. The user then executes the steps, thus the model includes both cognitive activities and physical activities.
The three stages of the human action cycle
The model is divided into three stages of seven steps in total, and is (approximately) as follows:
Goal formation stage:
1. Goal formation.
Execution stage:
2. Translation of goals into a set of unordered tasks required to achieve the goal.
3. Sequencing the tasks to create the action sequence.
4. Executing the action sequence.
Evaluation stage:
5. Perceiving the results after having executed the action sequence.
6. Interpreting the actual outcomes based on the expected outcomes.
7. Comparing what happened with what the user wished to happen.
Typically, an evaluator of the user interface will pose a series of questions for each of the cycle's steps, an evaluation of the answer provides useful information about where the user interface may be inadequate or unsuitable. These questions might be:
Step 1, Forming a goal:
- Do the users have sufficient domain and task knowledge and sufficient understanding of their work to form goals?
- Does the UI help the users form these goals?
Step 2, Translating the goal into a task or a set of tasks:
- Do the users have sufficient domain and task knowledge and sufficient understanding of their work to formulate the tasks?
- Does the UI help the users formulate these tasks?
Step 3, Planning an action sequence:
- Do the users have sufficient domain and task knowledge and sufficient understanding of their work to formulate the action sequence?
- Does the UI help the users formulate the action sequence?
Step 4, Executing the action sequence:
- Can typical users easily learn and use the UI?
- Do the actions provided by the system match those required by the users?
- Are the affordance and visibility of the actions good?
- Do the users have an accurate mental model of the system?
- Does the system support the development of an accurate mental model?
Step 5, Perceiving what happened:
- Can the users perceive the system’s state?
- Does the UI provide the users with sufficient feedback about the effects of their actions?
Step 6, Interpreting the outcome according to the users’ expectations:
- Are the users able to make sense of the feedback?
- Does the UI provide enough feedback for this interpretation?
Step 7, Evaluating what happened against what was intended:
- Can the users compare what happened with what they were hoping to achieve?
The OODA loop (for observe, orient, decide, and act) is a concept originally applied to the combat operations process, often at the strategic level in military operations. It is now also often applied to understand commercial operations and learning processes. The concept was developed by military strategist and USAF Colonel John Boyd.
The OODA loop has become an important concept in both business and military strategy. According to Boyd, decision-making occurs in a recurring cycle of observe-orient-decide-act. An entity (whether an individual or an organization) that can process this cycle quickly, observing and reacting to unfolding events more rapidly than an opponent, can thereby "get inside" the opponent's decision cycle and gain the advantage.
Think-aloud protocol (or think-aloud protocols, or TAP) is a method used to gather data in usability testing in product design and development, in psychology and a range of social sciences (e.g., reading, writing and translation process research). The think-aloud method was introduced in the usability field by Clayton Lewis while he was at IBM, and is explained in Task-Centered User Interface Design: A Practical Introduction by C. Lewis and J. Rieman. The method was further refined by Ericsson and Simon.
Think-aloud protocols involve participants thinking aloud as they are performing a set of specified tasks. Users are asked to say whatever they are looking at, thinking, doing, and feeling as they go about their task. This enables observers to see first-hand the process of task completion (rather than only its final product). Observers at such a test are asked to objectively take notes of everything that users say, without attempting to interpret their actions and words. Test sessions are often audio- and video-recorded so that developers can go back and refer to what participants did, and how they reacted. The purpose of this method is to make explicit what is implicitly present in subjects who are able to perform a specific task. A related but slightly different data-gathering method is the talk-aloud protocol. This involves participants only describing their actions but not giving explanations. This method is thought to be more objective in that participants merely report how they go about completing a task rather than interpreting or justifying their actions (see the standard works by Ericsson & Simon).
As Hannu and Pallab state, the think-aloud protocol can be divided into two different experimental procedures: the first is the concurrent think-aloud protocol, collected during the decision task; the second is the retrospective think-aloud protocol, gathered after the decision task.
Persuasive technology is broadly defined as technology that is designed to change attitudes or behaviors of the users through persuasion and social influence, but not through coercion (Fogg 2002). Such technologies are regularly used in sales, diplomacy, politics, religion, military training, public health, and management, and may potentially be used in any area of human-human or human-computer interaction. Most self-identified persuasive technology research focuses on interactive, computational technologies, including desktop computers, Internet services, video games, and mobile devices (Oinas-Kukkonen et al. 2008), but this incorporates and builds on the results, theories, and methods of experimental psychology, rhetoric (Bogost 2007), and human-computer interaction. The design of persuasive technologies can be seen as a particular case of design with intent (Lockton et al. 2010).
Amber's Note: Facebook is a prime example of persuasive technology. The interface elements are persuasive at macro and micro scales.
Captology is the study of computers as persuasive technologies. This area of inquiry explores the overlapping space between persuasion in general (influence, motivation, behavior change, etc.) and computing technology. This includes the design, research, and program analysis of interactive computing products (such as the Web, desktop software, specialized devices, etc.) created for the purpose of changing people's attitudes or behaviors. B.J. Fogg in 1996 derived the term captology from an acronym: Computers As Persuasive Technologies. In 2003 he published the first book on captology, entitled Persuasive Technology: Using Computers to Change What We Think and Do.
Amber's Note: Perhaps Captology is a better descriptor for what I do as a Cyborg Anthropologist, since a lot of my research relates to persuasive architectures and interfaces.
Information foraging is a theory that applies the ideas from optimal foraging theory to understand how human users search for information. The theory is based on the assumption that, when searching for information, humans use "built-in" foraging mechanisms that evolved to help our animal ancestors find food. Importantly, better understanding of human search behaviour can improve the usability of websites or any other user interface.
In the 1970s optimal foraging theory was developed by anthropologists and ecologists to explain how animals hunt for food. It suggested that the eating habits of animals revolve around maximizing energy intake over a given amount of time. For every predator, certain prey are worth pursuing, while others would result in a net loss of energy. In the early 1990s, Peter Pirolli and Stuart Card from PARC noticed the similarities between users' information searching patterns and animal food foraging strategies. Working together with psychologists to analyse users' actions and the information landscape that they navigated (links, descriptions, and other data), they showed that information seekers use the same strategies as food foragers.
In the late 1990s, Ed H. Chi worked with Pirolli, Card and others at PARC to further develop information scent ideas and algorithms, and to apply these concepts in real interactive systems, including the modeling of web user browsing behavior, the inference of information needs from web visit log files, and the use of information scent concepts in reading and browsing interfaces.
In the early 2000s, Wai-Tat Fu worked with Pirolli to develop the SNIF-ACT model, which further extends the theory to provide a mechanistic account of information seeking. The model provides good fits to link selection on Web pages, the decision to leave a page (stickiness), and how both link text and its position may affect the overall success of information search. The SNIF-ACT model was also shown to exhibit statistical properties that resemble the law of surfing found in large-scale Web log data.
"Informavores" constantly make decisions about what kind of information to look for, whether to stay at the current site to try to find additional information or move on to another site, which path or link to follow to the next information site, and when to finally stop the search. Although human cognition is not a result of evolutionary pressure to improve Web use, survival-related traits to respond quickly to partial information and reduce energy expenditure lead informavores to optimize their searching behavior and, simultaneously, to minimize the thinking required.
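The foraging decisions described above, following the strongest-scent link or leaving the "patch" entirely, can be sketched with a toy heuristic. Real models such as SNIF-ACT use spreading activation over word co-occurrence statistics; the word-overlap score and leave threshold below are simplifying assumptions for illustration only:

```python
def information_scent(goal_keywords, link_text):
    """Crude proximal scent score: the fraction of goal keywords
    that appear in the link's text. A toy stand-in for the
    spreading-activation measures used in real models."""
    words = set(link_text.lower().split())
    hits = sum(1 for kw in goal_keywords if kw in words)
    return hits / len(goal_keywords)

def choose_link(goal_keywords, links, leave_threshold=0.25):
    """Follow the strongest-scent link, or 'leave the patch'
    (return None) when no link smells strong enough."""
    best = max(links, key=lambda l: information_scent(goal_keywords, l))
    if information_scent(goal_keywords, best) < leave_threshold:
        return None  # give up on this site and forage elsewhere
    return best

goal = ["used", "car", "prices"]
links = ["Contact us", "Used car listings and prices", "About"]
assert choose_link(goal, links) == "Used car listings and prices"
```

When every link scores below the threshold, the simulated forager abandons the page, mirroring the stay-or-leave decision the theory describes.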
Implicit data collection is used in human-computer interaction to gather data about the user in an implicit, non-invasive way.
The collection of user related data in human-computer interaction is used to adapt the computer interface to the end user. The data collected are used to build a user model. The user model is then used to help the application to filter the information for the end user. Such systems are useful in recommender applications, military applications (implicit stress detection) and others.
Channels for collecting data
The system can record the user's explicit interaction and thus build an MPEG-7 usage history log. Furthermore, the system can use other channels to gather information about the user's emotional state. The following implicit channels have been used so far to assess the affective state of the end user:
- facial activity
- posture activity
- hand tension and activity
- gestural activity
- vocal expression
- language and choice of words
- electrodermal activity
Interaction design (IxD) is the study of devices with which a user can interact, in particular computer users. The practice typically centers on "embedding information technology into the ambient social complexities of the physical world." It can also apply to other types of non-electronic products and services, and even organizations. Interaction design defines the behavior (the "interaction") of an artifact or system in response to its users. Malcolm McCullough has written, "As a consequence of pervasive computing, interaction design is poised to become one of the main liberal arts of the twenty-first century"(McCullough, Malcolm (2004). Digital Ground. MIT Press. ISBN 0-262-13435-7).
The term chunking was introduced in a 1956 paper by George A. Miller, The Magical Number Seven, Plus or Minus Two: Some Limits on our Capacity for Processing Information. Chunking breaks up long strings of information into units or chunks. The resulting chunks are easier to commit to working memory than a longer and uninterrupted string of information. Chunking appears to work across all media, including but not limited to text, sounds, pictures, and videos.
Source: Harrod, Martin (2008). Chunking. Retrieved 21 June 2010 from Interaction-Design.org: http://www.interaction-design.org/encyclopedia/chunking.html
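Chunking can be made concrete with a few lines of code. The sketch below splits a digit string into memorable groups; the phone-number example and the chunk sizes are arbitrary illustrations:

```python
def chunk(s, sizes):
    """Split a string into consecutive chunks of the given sizes,
    e.g. a 10-digit phone number recalled as three chunks rather
    than ten separate digits."""
    out, i = [], 0
    for size in sizes:
        out.append(s[i:i + size])
        i += size
    return out

# Three chunks are well within working-memory limits; ten digits are not.
assert chunk("4155551234", [3, 3, 4]) == ["415", "555", "1234"]
```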
Mihaly Csikszentmihalyi has been studying the flow state around the world for many years. Here are some facts about the flow state, the conditions that make it occur and what it feels like:
- You have very focused attention on your task – The ability to control and focus your attention is critical. If you get distracted by anything that is outside of the activity you are engaging in, the flow state will dissipate.
- You are working with a specific, clear, and achievable goal in mind – Whether you are singing, fixing a bike, or running a marathon, the flow state comes about when you have a specific goal. You then keep that focused attention and only let in information that fits with the goal. The research shows that you need to feel that you have a good chance of completing the goal in order to get into, and hold onto, the flow state. If you think you have a good chance of failing at the goal, then the flow state will not be induced. And, conversely, if the activity is not challenging enough, then you won’t hold attention on it and the flow state will end.
- You receive constant feedback – In order to stay in the flow state you need a constant stream of information coming in that gives you feedback as to the achievement of the goal.
- You have control over your actions – Control is an important condition of the flow state. You don’t necessarily have to be in control, or even feel like you are in control, but you do have to feel that you are exercising significant control in a challenging situation.
- Time changes – Some people report that time speeds up — that they look up and hours have gone by. Others report that time slows down.
- The self does not feel threatened – In order to enter a flow state your sense of self and survival cannot feel threatened. You have to be relaxed enough that you can engage all of your attention on the task at hand. In fact, most people report that they lose their sense of self when they are absorbed with the task.
- The flow state is personal – everyone has different activities that put them in a flow state. What triggers a flow state for you is different from others.
- The flow state crosses cultures – So far it seems to be a common human experience across all cultures, with the exception of people with some mental illnesses. People who have schizophrenia, for example, have a hard time inducing or staying in a flow state, probably because they have a hard time with some of the other items above, such as focused attention, control, or the self not feeling threatened.
- The flow state is pleasurable – People like being in the flow state.
- The pre-frontal cortex is involved – I’ve been trying to find research on the brain correlates of the flow state. So far the research seems slim (if you know of any, please pass it on to me). From what I have read it seems that the pre-frontal cortex is very involved. That would not be a surprise, since the pre-frontal cortex is all about focused attention. Some researchers suggest that dopamine may be involved as well, but there isn’t exact research on that.
More: Flow: The Psychology of Optimal Experience by Mihaly Csikszentmihalyi, 1990.
Amber's Note: Facebook has some of the best flow experiences out there. Wikipedia is similar. Constant information and feedback. Facebook has no goal. Twitter has no goal. Information keeps on appearing. Micronarratives are easy to digest, like chips. It is often difficult to stop. The information is so regular and bite-sized that it is easy to consume an entire bag before looking up and realizing that it is gone. An interface that flows and then makes someone stop could be considered an interface with a punctuation mark.
Information architecture (IA) is the art of expressing a model or concept of information used in activities that require explicit details of complex systems. Among these activities are library systems, Content Management Systems, web development, user interactions, database development, programming, technical writing, enterprise architecture, and critical system software design. Information architecture has somewhat different meanings in these different branches of IS or IT architecture. Most definitions have common qualities: a structural design of shared environments, methods of organizing and labeling websites, intranets, and online communities, and ways of bringing the principles of design and architecture to the digital landscape.
The term 'information architecture' describes a specialized skill set which relates to the interpretation of information and the expression of distinctions between signs and systems of signs. It has some degree of origin in the library sciences. Many schools with library and information science departments teach information architecture.
Information architecture is the categorization of information into a coherent structure, preferably one that most people can understand quickly, if not inherently. It is usually hierarchical, but can have other structures, such as concentric or even chaotic ones. It has nothing to do with philosophy or semiotics.
Critiques: The term Information Architecture has been criticized because the term "architecture" is primarily used for habitable physical structures, implying that information systems are static like buildings. Information systems are "living systems" which are frequently updated, altered, and morphed, both by authors and users. In some cases, information systems dynamically adapt to the specific actions and context of users. Since the discipline of architecture ("habitable physical structures") increasingly uses materials and solutions that are less static, this criticism may be unjustified.
Tree testing is a usability technique for evaluating the findability of topics in a website. It is also known as reverse card sorting or card-based classification. A large website is typically organized into a hierarchy (a "tree") of topics and subtopics. Tree testing provides a way to measure how well users can find items in this hierarchy. Unlike traditional usability testing, tree testing is not done on the website itself; instead, a simplified text version of the site structure is used. This ensures that the structure is evaluated in isolation, nullifying the effects of navigational aids, visual design, and other factors.
In a typical tree test:
- The participant is given a "find it" task (e.g., "Look for brown belts under $25").
- They are shown a text list of the top-level topics of the website.
- They choose a heading, and are then shown a list of subtopics.
- They continue choosing, moving down through the tree and backtracking if necessary, until they find a topic that satisfies the task (or until they give up).
- The participant does several tasks in this manner, starting each task back at the top of the tree.
- Once several participants have completed the test, the results are analyzed for each task.
Analyzing the results
The analysis typically tries to answer these questions:
- Could users successfully find particular items in the tree?
- Could they find those items directly, without having to backtrack?
- If they couldn't find items, where did they go astray?
- Could they choose between topics quickly, without having to think too much?
- Overall, which parts of the tree worked well, and which fell down?
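The success and directness questions above can be computed from recorded click paths. Below is a sketch with hypothetical data; the "no heading visited twice" definition of a direct run is a simplifying assumption, not a standard metric:

```python
def analyze_task(results, correct_node):
    """results: list of (path, destination) pairs, one per participant,
    where path is the sequence of headings clicked and destination is
    the topic where they ended. Returns (success rate, directness rate);
    a run counts as direct only if no heading was visited twice."""
    successes = direct = 0
    for path, destination in results:
        if destination == correct_node:
            successes += 1
            if len(path) == len(set(path)):  # no backtracking
                direct += 1
    n = len(results)
    return successes / n, direct / n

# Hypothetical "find brown belts" task, three participants:
results = [
    (["Products", "Accessories", "Belts"], "Belts"),  # direct hit
    (["Sale", "Products", "Sale", "Products", "Accessories", "Belts"],
     "Belts"),                                        # found, but backtracked
    (["Sale", "Clearance"], "Clearance"),             # wrong destination
]
assert analyze_task(results, "Belts") == (2 / 3, 1 / 3)
```

Aggregating these two rates per task points directly at the parts of the tree that worked well and the parts that fell down.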
FTUE (First Time User Experience) is a term used to describe the configuration steps (like signing up for a Yahoo Mail/Hotmail account, configuring your DVR for your cable station or DirecTV, or putting in defaults for Microsoft Office) required by any software package that needs some user settings prior to working correctly.
FTUE (pronounced FuTooEE) describes the steps and process for getting your software or software/hardware system to work, once the OOBE (Out-Of-Box Experience) steps have been taken.
The "out-of-box experience" (OOBE) is typically the first impression a product creates, such as the ease with which a buyer can begin using the product. For hardware products, a positive OOBE can be created with logical, easy-to-follow instructions and parts that have a low likelihood of failure.
For software, this often means easy installation and "Welcome" or "Initial Configuration" wizard screens that simplify elaborate set-up. The OOBE can also be the complete lack of such wizards.
A frequently encountered "out-of-box experience" is the process of installing Microsoft Windows. While the installation is largely automatic, the user must proceed through multiple screens to acknowledge software license terms, specify partition settings for the hard disk, enter the "product key", select international settings, a time zone, and also configure network settings. After the installation is complete, Microsoft Windows launches an "Out-of-box Experience" application that presents a full-screen wizard to assist the user with critical first steps of using Windows, such as creating a user account, registering the software with Microsoft (optional), configuring Internet connectivity, and activating the software. While this Microsoft application is named after OOBE, the real OOBE began when the user first turned on a new computer, or began to peel the shrinkwrap off the product packaging.
The feature integration theory, developed by Anne Treisman and Garry Gelade in the early 1980s, posits that different kinds of attention are responsible for binding different features into consciously experienced wholes. The theory has been one of the most influential psychological models of human visual attention.
According to Treisman, in the first step of visual processing, several primary visual features are processed and represented in separate feature maps that are later integrated into a saliency map, which can be accessed in order to direct attention to the most conspicuous areas.
Treisman distinguishes two kinds of visual search tasks, feature search and conjunction search. Feature search can be performed fast and pre-attentively for targets defined by primitive features. Conjunction search is the serial search for targets defined by a conjunction of primitive features. It is much slower and requires conscious attention. She concluded from many experiments that color, orientation, and intensity are primitive features, for which feature search can be performed.
It was widely speculated that the saliency map could be located in early visual cortical areas, e.g. the Primary Visual Cortex (V1), though this is controversial. Wolfe's popular Guided Search Model offers a more up to date theory of visual search but is also problematic.
Evidence for this theory comes from the phenomenon of illusory conjunctions, popout of primitives in visual search (making them easily identifiable regardless of number of distracters) and the fact that participants can often remember the presence of an object, but not its location, during a fast visual search.
Attention is the cognitive process of selectively concentrating on one aspect of the environment while ignoring other things. Attention has also been referred to as the allocation of processing resources. Examples include listening carefully to what someone is saying while ignoring other conversations in a room (the cocktail party effect) or listening to a cell phone conversation while driving a car. Attention is one of the most intensely studied topics within psychology and cognitive neuroscience.
William James, in his textbook Principles of Psychology, remarked:
“Everyone knows what attention is. It is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others, and is a condition which has a real opposite in the confused, dazed, scatterbrained state which in French is called distraction, and Zerstreutheit in German.”
Attention remains a major area of investigation within education, psychology and neuroscience. Areas of active investigation involve determining the source of the signals that generate attention, the effects of these signals on the tuning properties of sensory neurons, and the relationship between attention and other cognitive processes like working memory and vigilance.
Amber's Note: "Source of signals that generate attention..." - study these signals on Amazon.com, Flickr, form fields, Facebook, calls to action, etc. See if you can place someone in front of your site and see what's attracting their attention. Do eye tracking studies if you really want to get into it.
Exploratory search is a specialization of information exploration which represents the activities carried out by searchers who are either:
- unfamiliar with the domain of their goal (i.e., they need to learn about the topic in order to understand how to achieve their goal)
- unsure about the ways to achieve their goals (either the technology or the process)
- or even unsure about their goals in the first place.
Consequently, exploratory search covers a broader class of activities than typical information retrieval, such as investigating, evaluating, comparing, and synthesizing, where new information is sought in a defined conceptual area; exploratory data analysis is another example of an information exploration activity. Typically, therefore, such users generally combine querying and browsing strategies to foster learning and investigation.
Key figures in the field include experts from both information seeking and human–computer interaction.
Googlearchy, or googlocracy, is a term associated with the ways in which a search engine, like Google, can influence politics and social reality in general. More broadly, the term is intended to reflect on how certain technological developments can influence or dominate the organization of society. The term has a variety of uses, such as in issues of freedom of the press and how modern politics is conducted through the internet, with technologies such as Google. One issue raised when considering the googlearchy phenomenon in web politics is whether the changes brought by technologies such as Google have reinforced the media that were already more powerful, or the other way around.
Many have argued that social tagging or collaborative tagging systems can provide navigational cues or "way-finders" for other users to explore information. The notion is that, since social tags are labels that users create to represent topics extracted from web documents, interpretation of these tags should allow other users to predict the contents of different documents efficiently. Social tags are arguably more important in exploratory search, in which users may engage in iterative cycles of goal refinement and exploration of new information (as opposed to simple fact retrieval), and interpretation of information contents by others provides useful cues for people to discover relevant topics.
One significant challenge in social tagging systems is the rapid increase in the number and diversity of tags. As opposed to structured annotation systems, tags give users an unstructured, open-ended mechanism to annotate and organize web content. Because users are free to create any tag to describe any resource, this leads to what is referred to as the vocabulary problem. Users may use different words to describe the same document, or extract different topics from the same document based on their own background knowledge, so the lack of top-down mediation may lead to increasingly incoherent tags representing the information resources in the system. In other words, the inherent "unstructuredness" of social tags may hinder their potential as navigational cues, because the diversity of users and motivations may lead to diminishing tag-topic relations as the system grows.
Amber's Note: Stumbleupon is an excellent system of collaborative tagging.
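The vocabulary problem described above can be sketched in a few lines: when different users label the same resource with near-synonymous tags, raw counts fragment the topic signal, and a synonym map stands in for the missing top-down mediation. The tags and mapping below are invented for illustration:

```python
# A minimal sketch of the "vocabulary problem" in social tagging.
# The tag lists and the synonym map are illustrative assumptions.
from collections import Counter

# Hypothetical tags applied to one article by different users
raw_tags = ["usability", "ux", "user-experience", "UX", "hci", "usability"]

# Without mediation, counts are fragmented across near-synonyms
fragmented = Counter(t.lower() for t in raw_tags)

# A hand-built synonym map stands in for top-down mediation
canonical = {"ux": "user-experience", "user-experience": "user-experience",
             "usability": "usability", "hci": "hci"}

merged = Counter(canonical[t.lower()] for t in raw_tags)

print(fragmented)  # counts split across 'ux' and 'user-experience'
print(merged)      # 'user-experience' now aggregates three variants
```

The merged counter makes the tag a far stronger navigational cue for the document than any one of the fragmented variants.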
Interactive Density refers to the amount of visual engagement viewable on a user interface, such as a web page or software application screen. On a graphical user interface with a large amount of interactive density, it may take the user longer to decipher what to do.
Baby Duck Syndrome denotes the tendency for computer users to "imprint" on the first system they learn, then judge other systems by their similarity to that first system. The result is that "users generally prefer systems similar to those they learned on and dislike unfamiliar systems." The term may have been inspired by popular understanding of the work, experiences, and observations of Konrad Lorenz.
Amber's Note: This might explain the continued use of Dreamweaver, against all attempts at HTML/CSS education.
Bodystorming is a technique sometimes used in interaction design or as a creativity technique. The idea is to imagine what it would be like if the product existed, and act as though it exists, ideally in the place it would be used.
A concept recently championed by International Business Machines (IBM), consumability is a description of customers' end-to-end experience with technology solutions (although the concept could easily apply to almost anything). The tasks associated with consumability start before the consumer purchases a product and continue until the customer stops using the product. By improving the consumability of the product, the value of that product to the client can be increased. Understanding product consumability requires an in-depth understanding of how clients are actually trying to use the product, which is why consumability is so closely aligned with the user experience and Outside-in software development. While usability addresses a client's ability to use a product, consumability is a higher-level concept that incorporates all the other aspects of the customer's experience with the product.
Key consumability aspects of the user experience include:
- Identifying the right product
- Acquiring the product
- Installing and configuring the product
- Using and administering the product
- Troubleshooting problems with the product
- Updating the product (e.g. installing fix packs)
How efficiently and effectively clients can complete these tasks affects the value they get from the product. Missteps anywhere along this path can directly impact the customer's ability to complete the task they set out to do. By focusing on consumability, developers can smooth the path, allowing technology solution consumers to focus on the needs of their business and improving their perception of, and satisfaction with, the product or solution.
Experience design (XD) is the practice of designing products, processes, services, events, and environments with a focus placed on the quality of the user experience and culturally relevant solutions, with less emphasis placed on increasing and improving functionality of the design. An emerging discipline, experience design attempts to draw from many sources including cognitive psychology and perceptual psychology, linguistics, cognitive science, architecture and environmental design, haptics, hazard analysis, product design, information design, information architecture, ethnography, brand management, interaction design, service design, storytelling, heuristics, and design thinking.
In its commercial context, experience design is driven by consideration of the moments of engagement, or touchpoints, between people and brands, and the ideas, emotions, and memories that these moments create. Commercial experience design is also known as experiential marketing, customer experience design, and brand experience. Experience designers are often employed to identify existing touchpoints and create new ones, and then to score the arrangement of these touchpoints so that they produce the desired outcome.
What makes an effective human-computer interface? Ben Shneiderman, an expert in the field, writes (Ben Shneiderman. Designing the User Interface: Strategies for Effective Human-Computer Interaction. Addison-Wesley, Reading, MA, 1997, p. 10):
Well designed, effective computer systems generate positive feelings of success, competence, mastery, and clarity in the user community. When an interactive system is well-designed, the interface almost disappears, enabling users to concentrate on their work, exploration, or pleasure.
As steps towards achieving these goals, Shneiderman lists principles for the design of user interfaces. Those which are particularly important for information access include (slightly restated): provide informative feedback, permit easy reversal of actions, support an internal locus of control, reduce working memory load, and provide alternative interfaces for novice and expert users. Each of these principles should be instantiated differently depending on the particular interface application. Below we discuss those principles that are of special interest to information access systems.
Offer informative feedback. This principle is especially important for information access interfaces. Current ideas include providing users with feedback about the relationship between their query specification and the documents retrieved, about relationships among retrieved documents, and about relationships between retrieved documents and metadata describing collections. If the user has control of how and when feedback is provided, then the system provides an internal locus of control.
Reduce working memory load. Information access is an iterative process, the goals of which shift and change as information is encountered. One key way information access interfaces can help with memory load is to provide mechanisms for keeping track of choices made during the search process, allowing users to return to temporarily abandoned strategies, jump from one strategy to the next, and retain information and context across search sessions. Another memory-aiding device is to provide browsable information that is relevant to the current stage of the information access process. This includes suggestions of related terms or metadata, and search starting points including lists of sources and topic lists.
Provide alternative interfaces for novice and expert users. An important tradeoff in all user interface design is that of simplicity versus power. Simple interfaces are easier to learn, at the expense of less flexibility and sometimes less efficient use. Powerful interfaces allow a knowledgeable user to do more and have more control over the operation of the interface, but can be time-consuming to learn and impose a memory burden on people who use the system only intermittently. A common solution is to use a "scaffolding" technique [rosson90]. The novice user is presented with a simple interface that can be learned quickly and provides the basic functionality of the application, but is restricted in power and flexibility. Alternative interfaces are offered for more experienced users, giving them more control, more options, and more features, or potentially even entirely different interaction models. Good user interface design provides intuitive bridges between the simple and the advanced interfaces.
Information access interfaces must contend with special kinds of simplicity/power tradeoffs. One such tradeoff is the amount of information shown about the workings of the search system itself. Users who are new to a system or to a particular collection may not know enough about the system, or about the domain associated with the collection, to make choices among complex features. They may not know how best to weight terms or, in the case of relevance feedback, what the effects of reweighting terms would be. On the other hand, users who have worked with a system and gotten a feel for a topic are likely to be able to choose among suggested terms in an informed manner. Determining how much information to show the user is a major design choice in information access interfaces.
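The scaffolding technique can be sketched as two entry points over one retrieval routine: a novice-facing function that accepts only a query string, and an expert-facing function that exposes term weights. The function names, the toy two-document corpus, and the scoring rule are all invented for illustration:

```python
# A sketch of 'scaffolding' for an information access interface:
# simple and advanced entry points share one underlying engine.
# Corpus contents and scoring are illustrative assumptions.

def search_advanced(query_terms, weights=None):
    """Expert interface: full control over per-term weights."""
    weights = weights or {t: 1.0 for t in query_terms}
    # Stand-in for a real retrieval engine: score = sum of matched weights.
    corpus = {"doc1": {"fitts", "law"}, "doc2": {"ambient", "intelligence"}}
    scores = {d: sum(weights.get(t, 0) for t in terms & set(query_terms))
              for d, terms in corpus.items()}
    return sorted(scores, key=scores.get, reverse=True)

def search_simple(query):
    """Novice interface: one string in, defaults everywhere else."""
    return search_advanced(query.split())
```

The bridge between the two is that `search_simple` is a thin wrapper, so a user who outgrows it loses nothing by moving to the advanced form.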
The term natural mapping refers to proper and natural arrangements of the relations between controls and their movements, and the outcome of those actions in the world. The real function of natural mappings is to reduce the need for information from a user's memory to perform a task. The term is widely used in human-computer interaction (HCI) and interaction design.
Mapping and natural mapping are similar in that both concern the relationship between controls, their movements, and the results in the world. The difference is that natural mapping provides properly organized controls, so that users immediately understand which control performs which action. A simple design principle: “If a design depends upon labels, it may be faulty. Labels are important and often necessary, but the appropriate use of natural mappings can minimize the need for them. Wherever labels seem necessary, consider another design.”
Good design, such as well-arranged controls for the burners on a stove, gives users immediate feedback about which control activates which burner. The power of natural mappings leaves users with nothing but ease of use and no frustration (Norman, Donald A., "Knowledge in the Head and in the World". The Design of Everyday Things. New York: Basic Books, 1988. p. 75).
A context sensitive user interface is one which can automatically choose from a multiplicity of options based on the current or previous state(s) of the program operation. Context sensitivity is almost ubiquitous in current graphical user interfaces, and should, when operating correctly, be practically transparent to the user.
- Clicking on a text document automatically opens the document in a word processing environment. The user does not have to specify what type of program opens the file under standard conditions.
The same methodology applies to other file types, e.g.:
- Video files (.mpg, .mov, .avi, etc.) open in a video player without the user having to select a specific program.
- Photographic and other image files (.jpg, .png, etc.) open in a photo viewer automatically.
- Program files and their shortcuts (i.e., .exe files) are automatically run by the operating system.
The user interface may also provide context sensitive feedback, such as changing the appearance of the mouse pointer or cursor, menu colour changes, or, where applicable, auditory or tactile feedback.
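The file-type examples above amount to a lookup from extension to handler, with a fallback when no context applies. The handler names in this sketch are invented placeholders, not real applications:

```python
# A minimal sketch of context-sensitive file opening: the interface picks
# a handler from the file extension so the user never specifies a program.
import os

HANDLERS = {
    ".txt": "word_processor", ".doc": "word_processor",
    ".mpg": "video_player", ".mov": "video_player", ".avi": "video_player",
    ".jpg": "photo_viewer", ".png": "photo_viewer",
    ".exe": "operating_system",
}

def open_file(path, default="ask_user"):
    """Choose a handler from context (the extension), case-insensitively."""
    ext = os.path.splitext(path)[1].lower()
    # Fall back to asking the user only when no context applies.
    return HANDLERS.get(ext, default)

print(open_file("report.TXT"))  # word_processor
```

The `default` argument illustrates the transparency goal: the user is interrupted only when the context genuinely cannot decide.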
Reasoning and advantages of context sensitivity
The primary reason for introducing context sensitivity is to simplify the user interface. Advantages include:
- Reduced number of commands required to be known to the user for a given level of productivity.
- Reduced number of clicks or keystrokes required to carry out a given operation.
- Allows consistent behaviour to be pre-programmed or altered by the user.
- Reduces the number of options to be on screen at one time (i.e. "clutter").
Disadvantages
Context sensitive actions may be perceived as a dumbing down of the user interface, leaving the operator at a loss as to what to do when the computer decides to perform an unwanted action. Additionally, non-automatic procedures may be hidden or obscured by the context sensitive interface, causing an increase in user workload for operations the designers did not foresee.
- A poor implementation can be more annoying than helpful; a classic example of this is the Office Assistant.
A 10-foot user interface (also sometimes referred to as "10 foot UI" or "10-foot experience") is a software GUI (graphical user interface) designed for display on a large television (or similar sized screen) with interaction using a regular television-style remote control.
"10 foot" refers to the fact that the GUI's elements (menus, buttons, text fonts, and so on) are theoretically large enough to read easily at a distance of 10 feet (3 m) from the display, which in this context is normally a large-screen television. To avoid distractions and remain clear, 10-foot UIs also tend to be very simple and usually have only the minimum core buttons.
Typical examples of 10-foot user interfaces are media center software applications such as Front Row, Windows Media Center, Boxee, MythTV, and XBMC Media Center interfaces.
The term "10 foot" differentiates this GUI style from those used on desktop computer screens, which typically assume the user's eyes are less than two feet from the display. The 10-foot GUI is almost always designed to be operated by a hand-held remote control, and has extra-large buttons with menu fonts that are easily read and navigated.
This difference in distance has a huge impact on the interface design compared to typical desktop computer interaction when the user is sitting at a desk with a computer monitor, and using a mouse and keyboard (or perhaps a joystick device for computer games) which is sometimes referred to as a "2-foot user interface".
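The distance difference can be made quantitative: for text to subtend the same visual angle, its physical height must grow linearly with viewing distance, so a 10-foot UI needs elements roughly five times the size of a 2-foot one. A small sketch, where the 0.4-degree reading angle is an assumed illustrative value:

```python
# Why 10-foot UIs need large elements: physical text height scales
# linearly with viewing distance at a constant visual angle.
import math

def required_height(distance, visual_angle_deg=0.4):
    """Physical height (same units as distance) subtending the angle.
    0.4 degrees is a rough comfortable-reading angle, assumed here."""
    return 2 * distance * math.tan(math.radians(visual_angle_deg) / 2)

desk = required_height(2.0)    # 2-foot desktop viewing
couch = required_height(10.0)  # 10-foot living-room viewing
print(round(couch / desk, 1))  # 5.0 - text must be ~5x taller
```

Because the tangent term cancels, the ratio is exactly the ratio of distances, which is why the rule of thumb is a simple linear scale-up.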
Agile Usability Engineering is a concept describing a combination of methods and practices from agile development and usability engineering. This entry therefore commences with a brief note on agile methods.
In recent years, agile methods for software and web engineering have reached widespread acceptance in the community. In contrast to classic, heavy-weight software engineering processes like the V-model, agile methods (Ambler 2002) begin coding very early, with a shorter up-front requirements engineering phase and less documentation. Following the paradigm of Extreme Programming (Beck 1999), implementation of code takes place in small increments and iterations, and small releases are delivered to the customer after each development cycle. During a short claims analysis, called the exploration phase, the development team writes user stories trying to describe user needs and roles; the interviewed people need not necessarily be the real users of the later software product. Seen from a human-computer engineering perspective, Extreme Programming (XP) thus often fails to collect real user data and starts coding with mere assumptions about user needs. The development in small increments may work properly as long as the software has no focus on the user interface (UI). Changes to software architecture most often have no impact on what the user sees and interacts with.
With the UI, it's a different story. When designing UIs like websites, continuous changes of the user interface due to fast iterative design may conflict with user expectations and learnability, provoke inconsistency and possibly lead to user dissatisfaction. Evaluation of small releases with stakeholder participation does not ensure that the whole system provides a consistent conceptual, navigational or content model.
Nevertheless, the numerous discussions about agile approaches to user interface design (UID) have led to a movement in the human-computer interaction community, which has begun to reconsider its user-centered, heavy-weight usability lifecycles (see table 1, compare Mayhew 1999).
Source: Memmel, Thomas. "Agile Usability Engineering". Interaction-Design.org 28 April 2006. 21 June 2010 http://www.interaction-design.org/encyclopedia/agile_usability_engineering.html
The task-focused interface is a type of user interface which extends the desktop metaphor of the graphical user interface to make tasks, not files and folders, the primary unit of interaction. Instead of showing entire hierarchies of information, such as a tree of documents, a task-focused interface shows the subset of the tree that is relevant to the task-at-hand. This addresses the problem of information overload when dealing with large hierarchies, such as those in software systems or large sets of documents. The task-focused interface is composed of a mechanism which allows the user to specify the task being worked on and to switch between active tasks, a model of the task context such as a degree-of-interest (DOI) ranking, and a focusing mechanism to filter or highlight the relevant documents. The task-focused interface has been validated with statistically significant increases to knowledge worker productivity. It has been broadly adopted by programmers and is a key part of the Eclipse Integrated development environment. The technology is also referred to as the "task context" model and the "task-focused programming" paradigm.
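The focusing mechanism can be sketched as a threshold over degree-of-interest scores: the interface shows only documents whose DOI for the current task clears the bar. The file names, scores, and threshold below are invented for illustration:

```python
# A sketch of a task-focused filter: each document gets a degree-of-interest
# (DOI) score for the current task, and only the subset above a threshold
# is shown, rather than the whole hierarchy.

def task_focused_view(doi_scores, threshold=0.3):
    """Return documents relevant to the task-at-hand, highest DOI first."""
    relevant = {doc: s for doc, s in doi_scores.items() if s >= threshold}
    return sorted(relevant, key=relevant.get, reverse=True)

# Hypothetical DOI model: recently touched files score high, others decay.
doi = {"parser.py": 0.9, "lexer.py": 0.6, "README": 0.2, "LICENSE": 0.0}
print(task_focused_view(doi))  # ['parser.py', 'lexer.py']
```

In a real system the DOI scores would decay over time and rise with each interaction; the fixed dictionary here only stands in for that model.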
In computer science and information science, an ontology is a formal representation of knowledge as a set of concepts within a domain and the relationships between those concepts. It is used to reason about the properties of that domain, and may be used to describe the domain.
In theory, an ontology is a "formal, explicit specification of a shared conceptualisation". An ontology provides a shared vocabulary, which can be used to model a domain — that is, the type of objects and/or concepts that exist, and their properties and relations.
Ontologies are used in artificial intelligence, the Semantic Web, systems engineering, software engineering, biomedical informatics, library science, enterprise bookmarking, and information architecture as a form of knowledge representation about the world or some part of it. The creation of domain ontologies is also fundamental to the definition and use of an enterprise architecture framework.
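The "concepts plus relationships" definition can be illustrated with a toy triple store; the domain, concept names, and relations here are all made up for the sketch:

```python
# A toy illustration of an ontology as concepts plus typed relationships,
# stored as (subject, relation, object) triples.

class Ontology:
    def __init__(self):
        self.triples = set()  # (subject, relation, object)

    def add(self, subj, rel, obj):
        self.triples.add((subj, rel, obj))

    def subclasses_of(self, concept):
        """Direct subclasses via the 'is_a' relation."""
        return {s for s, r, o in self.triples
                if r == "is_a" and o == concept}

onto = Ontology()
onto.add("PointingDevice", "is_a", "InputDevice")
onto.add("Mouse", "is_a", "PointingDevice")
onto.add("Mouse", "has_part", "Button")

print(onto.subclasses_of("PointingDevice"))  # {'Mouse'}
```

Real ontology languages such as OWL add formal semantics and inference on top of this basic shape; the sketch shows only the shared-vocabulary idea.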
An organic user interface (OUI) is a user interface "with non-planar displays that actively or passively change shape via analog physical inputs." OUIs are characterized by displays that can change or take on any shape, and by their ability to use the display as an input device. The folding camera is an early example of an OUI. Holman and Vertegaal present three design principles for OUIs:
- Input Equals Output: In the GUI there is a clear division between input and output: the mouse and keyboard take input actions from the user, and based on those actions output is generated graphically on the screen. A key feature of an OUI is that a piece of OLED paper, or potentially any non-planar object, is meant both to take input actions from the user and to display the output on the same object.
- Function Equals Form: The form of an object clearly determines its ability to be used as an input. The statement Function Equals Form emphasizes this dependency on one another. Holman and Vertegaal argue that these two are in fact inseparable and that it is a mistake to try to deny this in any way.
- Form Follows Flow: This principle states that it is of utmost necessity for OUIs to negotiate user actions based on context, e.g. the ubiquitous 'clamshell' phone, where opening the phone during an incoming call alters the phone's function.
Personas are fictional characters created to represent the different user types within a targeted demographic, attitude and/or behaviour set that might use a site, brand or product in a similar way. Personas are a tool or method of market segmentation. The term persona is used widely in online and technology applications as well as in advertising, where other terms such as pen portraits may also be used.
Personas are useful in considering the goals, desires, and limitations of brand buyers and users in order to help to guide decisions about a service, product or interaction space such as features, interactions, and visual design of a website. Personas are most often used as part of a user-centered design process for designing software, are also considered a part of interaction design (IxD), and have been used in industrial design and more recently for online marketing purposes. A user persona is a representation of the goals and behavior of a real group of users. In most cases, personas are synthesized from data collected from interviews with users. They are captured in 1–2 page descriptions that include behavior patterns, goals, skills, attitudes, and environment, with a few fictional personal details to make the persona a realistic character. For each product, more than one persona is usually created, but one persona should always be the primary focus for the design.
Context awareness originated as a term in ubiquitous computing, also called pervasive computing, which sought to deal with linking changes in the environment to computer systems, which are otherwise static. Although it originated as a computer science term, it has also been applied to business theory in relation to business process management issues. [Rosemann, M., & Recker, J. (2006). "Context-aware process design: Exploring the extrinsic drivers for process flexibility". In T. Latour & M. Petit, 18th international conference on advanced information systems engineering: proceedings of workshops and doctoral consortium. Luxembourg: Namur University Press. pp. 149–158.]
Context defines rules for the inter-relationship of features when processing entities. A common understanding segregates context into four categories.
Some classical understanding in business processes derives from the definition of AAA applications, with the following three categories:
- Authentication, i.e. confirmation of a stated identity
- Authorisation, i.e. allowance of access to a location, function, or data
- Accounting, i.e. the relation to order context and to accounts for applied labour, granted licenses, and delivered goods; these three terms additionally include location and time, as stated before.
Human factors related context is structured into three categories:
- information on the user (knowledge of habits, emotional state, biophysiological conditions, ...)
- the user’s social environment (co-location of others, social interaction, group dynamics, ...)
- the user’s tasks (spontaneous activity, engaged tasks, general goals,...).
Likewise, context related to physical environment is structured into three categories:
- location (absolute position, relative position, co-location,...)
- infrastructure (surrounding resources for computation, communication, task performance...)
- physical conditions (noise, light, pressure,...).
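The human-factors and physical-environment categories above suggest a simple data structure that a context-aware application could adapt against. The field names mirror the taxonomy in the text, and the adaptation rule is an invented example:

```python
# A sketch of representing context categories for a context-aware
# application; the threshold and rule below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Context:
    # human factors
    user_state: str   # e.g. emotional state, habits
    co_located: int   # social environment: number of nearby people
    task: str         # engaged task
    # physical environment
    location: str
    noise_db: float   # physical conditions

def choose_ring_mode(ctx: Context) -> str:
    """Toy adaptation rule: silence the phone in a quiet meeting."""
    if ctx.task == "meeting" and ctx.noise_db < 40:
        return "vibrate"
    return "ring"
```

A real system would populate such a structure from sensors and calendars rather than by hand; the point is only that each taxonomy category maps naturally to a field the adaptation logic can consult.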
While not the most pleasantly named concept, "The Rat Factor" refers to the principle that computer users expect consistency in interface/interaction design. In a lab, rats can be trained to push a button in order to receive a food pellet or reward. The Rat Factor describes the optimal state of training a user to return to the same place in order to execute a command in the interface. The principle keeps order in the UI and provides consistency in the end user's experience.
Source: Sally Applin, early 1990s.
Berrypicking: goal-oriented search combined with an unfocused, more relaxed mode of browsing. Rephrased in the context of this article, berrypicking unites explicit and implicit search modes. The latter is the space which allows users to explore, experiment, and actively change direction. Berrypicking is therefore an example of an emerging user journey that views browsing as the space for discovery, even serendipity, and, as we see later, for the implicit learning which supports users' active creativity.
Source: Kaltenbacher, Brigitte. "From Prediction to Emergence". Journal of Information Architecture, Vol. 1, Issue 2, Fall 2009.
The word "affordance" was originally invented by the perceptual psychologist J. J. Gibson (1977, 1979) to refer to the actionable properties between the world and an actor (a person or animal). To Gibson, affordances are a relationship. They are a part of nature: they do not have to be visible, known, or desirable. Some affordances are yet to be discovered. Some are dangerous. I suspect that none of us know all the affordances of even everyday objects.
In product design, where one deals with real, physical objects, there can be both real and perceived affordances, and the two need not be the same. In graphical, screen-based interfaces, all that the designer has available is control over perceived affordances.
The computer system, with its keyboard, display screen, pointing device (e.g., mouse) and selection buttons (e.g., mouse buttons), affords pointing, touching, looking, and clicking on every pixel of the display screen. Most of this affordance is of no value. Thus, if the display does not have a touch-sensitive screen, the screen still affords touching, but touching has no effect on the computer system. Mind you, the affordance still has impact: it is useful in multiple-person communication, and it helps aid the sale of screen-cleaning tissues and fluids.
All screens afford touching: only some detect the touch and are capable of responding. But the affordance of touchability is the same in all cases. Touch sensitive screens often make their affordance visibly perceivable by displaying a cursor under the pointing spot.
The cursor is not an affordance; it is visual feedback.
Source: Donald Norman