UX Glossary


An ongoing effort to educate and flesh out common ways of describing complex things, in order to better communicate these complexities.


Fitts's Law

Fitts's law (often cited as Fitts' law) is a model of human movement in human-computer interaction and ergonomics which predicts that the time required to rapidly move to a target area is a function of the distance to and the size of the target. Fitts's law is used to model the act of pointing, either by physically touching an object with a hand or finger, or virtually, by pointing to an object on a computer display using a pointing device. It was proposed by Paul Fitts in 1954.

Source: [1]
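
The law is usually written with the Shannon formulation, MT = a + b * log2(D/W + 1), where D is the distance to the target, W is its width, and a and b are empirically fitted constants. A minimal sketch of the calculation follows; the constant values are illustrative placeholders, not fitted data.

  import math

  def fitts_movement_time(distance, width, a=0.1, b=0.15):
      """Predicted time in seconds to acquire a target, using the Shannon
      formulation of Fitts's law: MT = a + b * log2(D/W + 1).
      a and b are device- and user-specific constants normally found by
      regression; the defaults here are illustrative, not fitted values."""
      index_of_difficulty = math.log2(distance / width + 1)  # in bits
      return a + b * index_of_difficulty

  # A small, distant target takes longer to acquire than a large, nearby one.
  print(fitts_movement_time(distance=800, width=20))   # ~0.90 s
  print(fitts_movement_time(distance=200, width=100))  # ~0.34 s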

Minimally Invasive Education

Minimally Invasive Education (or MIE) is a term used to describe how children learn in unsupervised environments. It was derived from an experiment, often called The Hole in the Wall, conducted by Sugata Mitra while at NIIT in 1999.

Source: [2]

Amber's Note: Minimally Invasive Education could also be used to describe an interface that educates the user as they move through it.

Ambient Intelligence

In computing, ambient intelligence (AmI) refers to electronic environments that are sensitive and responsive to the presence of people. Ambient intelligence is a vision of the future of consumer electronics, telecommunications and computing that was originally developed in the late 1990s for the time frame 2010–2020. In an ambient intelligence world, devices work in concert to support people in carrying out their everyday life activities, tasks and rituals in an easy, natural way, using information and intelligence that is hidden in the network connecting these devices (see Internet of Things). As these devices grow smaller, more connected and more integrated into our environment, the technology disappears into our surroundings until only the user interface remains perceivable by users.

The ambient intelligence paradigm builds upon pervasive computing, ubiquitous computing, profiling practices, and human-centric computer interaction design and is characterized by systems and technologies that are (Zelkha & Epstein 1998; Aarts, Harwig & Schuurmans 2001):

  • embedded: many networked devices are integrated into the environment
  • context aware: these devices can recognize you and your situational context
  • personalized: they can be tailored to your needs
  • adaptive: they can change in response to you
  • anticipatory: they can anticipate your desires without conscious mediation.

Source: [3]
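
One way to read the list above is as an informal interface contract for a device in such an environment. The sketch below is purely illustrative; the class and method names are hypothetical and do not come from the AmI literature.

  from dataclasses import dataclass, field

  @dataclass
  class Context:
      """Situational information an embedded, networked device might sense."""
      user_id: str
      location: str
      time_of_day: str

  @dataclass
  class AmbientDevice:
      preferences: dict                        # personalized: tailored to your needs
      history: list = field(default_factory=list)

      def sense(self, context: Context):       # context aware: recognizes you and your situation
          self.history.append(context)

      def adapt(self):                         # adaptive: changes in response to past behaviour
          return len(self.history)             # stand-in for adjusting internal settings

      def anticipate(self, context: Context):  # anticipatory: acts without an explicit request
          return self.preferences.get(context.location)

  device = AmbientDevice(preferences={"living room": "dim the lights"})
  device.sense(Context("amber", "living room", "evening"))
  print(device.anticipate(Context("amber", "living room", "evening")))  # 'dim the lights'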

Amber's Note: A smart user interface could also have ambient intelligence, meaning that it would be sensitive and responsive to the presence of one or many people. Microsoft Surface may be such an interface, as may Facebook and Amazon.com (Facebook more so).

Gulf of Execution

Gulf of execution is a term used in human-computer interaction to describe the gap between a user's goal for action and the means to execute that goal. One of the primary goals of usability is to reduce this gap by removing roadblocks and steps that cause extra thinking and actions, which distract the user's attention from the intended task, disrupt the flow of his or her work, and decrease the chance of successfully completing the task. Similarly, there is a gulf of evaluation that applies to the gap between an external stimulus and the time a person understands what it means. Both phrases are discussed in Donald Norman's 1986 book User Centered System Design: New Perspectives on Human-Computer Interaction.

This can be illustrated through the discussion of a VCR problem. Let us imagine that a user would like to record a television show. They see the solution to this problem as simply pressing the Record button. However, in reality, to record a show on a VCR, several actions must be taken:

  • Press the record button.
  • Specify time of recording, usually involving several steps to change the hour and minute settings.
  • Select channel to record on - either by entering the channel's number or selecting it with up/down buttons.
  • Save the recording settings, perhaps by pressing an "OK" or "menu" or "enter" button.

The difference between the user's perceived execution actions and the required actions is the gulf of execution.

Source: [4]
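
One rough way to picture the gulf is to compare the action sequence the user expects with the sequence the device actually requires; the extra required steps are the gulf. A minimal sketch based on the VCR example above (the action names are hypothetical):

  # What the user believes "record a show" involves.
  perceived_actions = ["press_record"]

  # What the VCR actually requires (from the list above).
  required_actions = [
      "press_record",
      "specify_recording_time",
      "select_channel",
      "save_settings",
  ]

  # The gulf of execution: required steps the user did not anticipate.
  gulf_of_execution = [step for step in required_actions if step not in perceived_actions]
  print(gulf_of_execution)  # ['specify_recording_time', 'select_channel', 'save_settings']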

Human Action Cycle

The human action cycle is a psychological model which describes the steps humans take when they interact with computer systems. The model was proposed by Donald A. Norman, a scholar in the discipline of human-computer interaction. The model can be used to help evaluate the efficiency of a user interface (UI). Understanding the cycle requires an understanding of the user interface design principles of affordance, feedback, visibility and tolerance. The human action cycle describes how humans may form goals and then develop a series of steps required to achieve that goal, using the computer system. The user then executes the steps, thus the model includes both cognitive activities and physical activities.

The three stages of the human action cycle

The model is divided into three stages of seven steps in total, and is (approximately) as follows:

Goal formation stage:

  1. Goal formation.

Execution stage:

  2. Translation of goals into a set of unordered tasks required to achieve goals.
  3. Sequencing the tasks to create the action sequence.
  4. Executing the action sequence.

Evaluation stage:

  5. Perceiving the results after having executed the action sequence.
  6. Interpreting the actual outcomes based on the expected outcomes.
  7. Comparing what happened with what the user wished to happen.

Typically, an evaluator of the user interface will pose a series of questions for each of the cycle's steps; evaluating the answers provides useful information about where the user interface may be inadequate or unsuitable. These questions might be:

Step 1, Forming a goal:

  • Do the users have sufficient domain and task knowledge and sufficient understanding of their work to form goals?
  • Does the UI help the users form these goals?

Step 2, Translating the goal into a task or a set of tasks:

  • Do the users have sufficient domain and task knowledge and sufficient understanding of their work to formulate the tasks?
  • Does the UI help the users formulate these tasks?

Step 3, Planning an action sequence:

  • Do the users have sufficient domain and task knowledge and sufficient understanding of their work to formulate the action sequence?
  • Does the UI help the users formulate the action sequence?

Step 4, Executing the action sequence:

  • Can typical users easily learn and use the UI?
  • Do the actions provided by the system match those required by the users?
  • Are the affordance and visibility of the actions good?
  • Do the users have an accurate mental model of the system?
  • Does the system support the development of an accurate mental model?

Step 5, Perceiving what happened:

  • Can the users perceive the system’s state?
  • Does the UI provide the users with sufficient feedback about the effects of their actions?

Step 6, Interpreting the outcome according to the users’ expectations:

  • Are the users able to make sense of the feedback?
  • Does the UI provide enough feedback for this interpretation?

Step 7, Evaluating what happened against what was intended:

  • Can the users compare what happened with what they were hoping to achieve?

See the following book by Donald A. Norman for deeper discussion: Norman, D. A. (1988). The Design of Everyday Things. New York, Doubleday/Currency Ed. ISBN 0-465-06709-3

Source: [5]
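
When using the cycle to evaluate a UI, it can help to keep the seven steps and the evaluator's findings in one structure and walk through them per task. A minimal sketch follows; the step wording mirrors the list above, while the function and data names are illustrative assumptions.

  HUMAN_ACTION_CYCLE = {
      "goal formation": ["forming the goal"],
      "execution": [
          "translating the goal into tasks",
          "planning the action sequence",
          "executing the action sequence",
      ],
      "evaluation": [
          "perceiving what happened",
          "interpreting the outcome",
          "comparing the outcome with the intention",
      ],
  }

  def walkthrough(task, findings):
      """Print a per-task checklist; `findings` maps a step to the evaluator's note."""
      print(f"Task: {task}")
      for stage, steps in HUMAN_ACTION_CYCLE.items():
          for step in steps:
              print(f"  [{stage}] {step}: {findings.get(step, 'not yet evaluated')}")

  walkthrough("record a show", {
      "executing the action sequence": "too many hidden steps (poor affordance)",
      "perceiving what happened": "no feedback that the recording was saved",
  })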

OODA Loop

The OODA loop (for observe, orient, decide, and act) is a concept originally applied to the combat operations process, often at the strategic level in military operations. It is now also often applied to understand commercial operations and learning processes. The concept was developed by military strategist and USAF Colonel John Boyd.

The OODA loop has become an important concept in both business and military strategy. According to Boyd, decision-making occurs in a recurring cycle of observe-orient-decide-act. An entity (whether an individual or an organization) that can process this cycle quickly, observing and reacting to unfolding events more rapidly than an opponent, can thereby "get inside" the opponent's decision cycle and gain the advantage.

Source: [6]
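
Read as a control loop, the cycle can be sketched as four steps executed repeatedly, with each pass updating the entity's picture of the situation; the faster the loop runs relative to an opponent's, the greater the advantage. A toy sketch follows; none of the names come from Boyd's own formulation.

  class Environment:
      """Toy stand-in for the unfolding situation being observed and acted on."""
      def __init__(self, events):
          self.events = list(events)
          self.log = []
      def latest_events(self):               # what can currently be observed
          return [self.events.pop(0)] if self.events else []
      def apply(self, decision):             # the action fed back into the situation
          self.log.append(decision)

  def ooda_cycle(env, model):
      observations = env.latest_events()              # observe
      model = {**model, "events": observations}       # orient: fold into current picture
      decision = "act" if observations else "wait"    # decide
      env.apply(decision)                             # act
      return model

  env, model = Environment(["contact", "contact"]), {}
  for _ in range(4):   # cycling faster than an opponent "gets inside" their loop
      model = ooda_cycle(env, model)
  print(env.log)  # ['act', 'act', 'wait', 'wait']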

Think Aloud Protocol

Think-aloud protocol (or think-aloud protocols, or TAP) is a method used to gather data in usability testing in product design and development, in psychology and a range of social sciences (e.g., reading, writing and translation process research). The think-aloud method was introduced in the usability field by Clayton Lewis [1] while he was at IBM, and is explained in Task-Centered User Interface Design: A Practical Introduction by C. Lewis and J. Rieman [2]. The method was further refined by Ericsson and Simon.

Think aloud protocols involve participants thinking aloud as they are performing a set of specified tasks. Users are asked to say whatever they are looking at, thinking, doing, and feeling, as they go about their task. This enables observers to see first-hand the process of task completion (rather than only its final product). Observers at such a test are asked to objectively take notes of everything that users say, without attempting to interpret their actions and words. Test sessions are often audio and video taped so that developers can go back and refer to what participants did, and how they reacted. The purpose of this method is to make explicit what is implicitly present in subjects who are able to perform a specific task.

A related but slightly different data-gathering method is the talk-aloud protocol. This involves participants only describing their action but not giving explanations. This method is thought to be more objective in that participants merely report how they go about completing a task rather than interpreting or justifying their actions (see the standard works by Ericsson & Simon).

As Hannu and Pallab [6] state, the think-aloud protocol can be divided into two different experimental procedures: the first is the concurrent think-aloud protocol, collected during the decision task; the second is the retrospective think-aloud protocol, gathered after the decision task.

Source: [7]

Persuasive Technology

Persuasive technology is broadly defined as technology that is designed to change attitudes or behaviors of the users through persuasion and social influence, but not through coercion (Fogg 2002). Such technologies are regularly used in sales, diplomacy, politics, religion, military training, public health, and management, and may potentially be used in any area of human-human or human-computer interaction. Most self-identified persuasive technology research focuses on interactive, computational technologies, including desktop computers, Internet services, video games, and mobile devices (Oinas-Kukkonen et al. 2008), but this incorporates and builds on the results, theories, and methods of experimental psychology, rhetoric (Bogost 2007), and human-computer interaction. The design of persuasive technologies can be seen as a particular case of design with intent (Lockton et al. 2010).

Source: [8]

Amber's Note: Facebook is a prime example of persuasive technology. The interface elements are persuasive at macro and micro scales.

Captology

Captology is the study of computers as persuasive technologies. This area of inquiry explores the overlapping space between persuasion in general (influence, motivation, behavior change, etc.) and computing technology. This includes the design, research, and program analysis of interactive computing products (such as the Web, desktop software, specialized devices, etc.) created for the purpose of changing people's attitudes or behaviors. B.J. Fogg in 1996 derived the term captology from an acronym: Computers As Persuasive Technologies. In 2003 he published the first book on captology, entitled Persuasive Technology: Using Computers to Change What We Think and Do.

Source: [9]

Amber's Note: Perhaps Captology is a better descriptor for what I do as a Cyborg Anthropologist, since a lot of my research relates to persuasive architectures and interfaces.

Information Foraging

Information foraging is a theory that applies the ideas from optimal foraging theory to understand how human users search for information. The theory is based on the assumption that, when searching for information, humans use "built-in" foraging mechanisms that evolved to help our animal ancestors find food. Importantly, better understanding of human search behaviour can improve the usability of websites or any other user interface.

In the 1970s optimal foraging theory was developed by anthropologists and ecologists to explain how animals hunt for food. It suggested that the eating habits of animals revolve around maximizing energy intake over a given amount of time. For every predator, certain prey are worth pursuing, while others would result in a net loss of energy. In the early 1990s, Peter Pirolli and Stuart Card from PARC noticed the similarities between users' information searching patterns and animal food foraging strategies. Working together with psychologists to analyse users' actions and the information landscape that they navigated (links, descriptions, and other data), they showed that information seekers use the same strategies as food foragers.

In the late 1990s, Ed H. Chi worked with Pirolli, Card and others at PARC to further develop information scent ideas and algorithms and to apply these concepts in real interactive systems, including the modeling of web user browsing behavior, the inference of information needs from web visit log files, and the use of information scent concepts in reading and browsing interfaces.

In the early 2000s, Wai-Tat Fu worked with Pirolli to develop the SNIF-ACT model, which further extends the theory to provide a mechanistic account of information seeking. The model provides good fits to link selection on Web pages, the decision to leave a page (stickiness), and how both link text and its position may affect the overall success of information search. The SNIF-ACT model was also shown to exhibit statistical properties that resemble the law of surfing found in large-scale Web log data.

"Informavores" constantly make decisions on what kind of information to look for, whether to stay at the current site to try to find additional information or whether they should move on to another site, which path or link to follow to the next information site, and when to finally stop the search. Although human cognition is not a result of evolutionary pressure to improve Web use, survival-related traits to respond quickly on partial information and reduce energy expenditures force them to optimise their searching behaviour and, simultaneously, to minimize the thinking required.

Source: [10]
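
The "information scent" idea above can be sketched as a simple rule: follow the link whose cues best match the information need, and leave the page (the "patch") when even the best scent drops below the rate of gain expected elsewhere. The names and numbers below are illustrative and not taken from the SNIF-ACT model.

  def choose_link(links, scent, leave_threshold=0.3):
      """Pick the link with the strongest information scent, or leave the page.

      `scent` maps link text to a 0..1 estimate of how strongly its cues
      suggest the sought information; `leave_threshold` stands in for the
      expected rate of gain at other sites."""
      best = max(links, key=lambda link: scent.get(link, 0.0))
      if scent.get(best, 0.0) < leave_threshold:
          return None  # leave this "patch" and forage elsewhere
      return best

  links = ["Pricing", "About us", "API reference"]
  scent = {"Pricing": 0.2, "About us": 0.1, "API reference": 0.9}
  print(choose_link(links, scent))  # 'API reference'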
