[[File:Kelly-Dobson-Blender.jpg|200px|thumb|right|Kelly Dobson controlling her blender with voice.]]
[[Image:affective-computing-Maggie-Nichols.jpg|center|600px]]
  
'''Learning the Language of Machines'''
===Definition===
Affective Computing is a term used to describe the process of developing computing architectures that account for human concerns such as usability, touch, access, persona, emotion, and history. Those who build systems on these principles think of computing as a solution to, or a helper with, the problems and essential experiences of human living, especially in an industrial world.
  
Many examples of affective computing exist. One of the most notable was built by [[Kelly Dobson]] while she was at the MIT Media Lab. Instead of teaching machines to understand humans, Dobson programmed a blender to respond to voice activation, but not the typical voice one uses.<ref>Dobson, Kelly. Blendie. MIT Media Lab, 2003–2004. http://web.media.mit.edu/~monster/blendie/ Accessed 2 July 2011.</ref> Dobson's work called into question the notion that machines should always be built to understand human commands, rather than commands closer to their own native machine language.
  
Instead of saying "Blender, ON!", Dobson made an auditory model of a machine voice. If she wanted the blender to begin, she simply made blending noises at it. The low-pitched "Rrrrrrrrr" she made turned the blender on low. If she wanted to increase the speed of the machine, she increased her voice to "RRRRRRRRRRR!", and the machine increased in intensity. This way, the machine could understand volume and velocity, instead of a human voice.
  
The principles of affective computing represent a next step in making computers that are responsive and helpful to humans. At Carnegie Mellon University, a pillow was developed that could store a person's hug for future playback.<ref>Foo, Juniper. Hug Pillow. CNET News Asia, 23 November 2004. http://asia.cnet.com/crave/hug-pillow-62100099.htm Accessed 2 June 2011.</ref> The pillow allowed one to feel touch across distances as well as leave haptic recordings. If one's grandmother were to use the device to store her own hug and then die two months later, her family members could replay the hug after she was gone.
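
At this level of description, the pillow is a haptic recorder: pressure patterns are sampled, stored, and later replayed through actuators. The sketch below illustrates that record-and-replay loop; the sensor and actuator functions are hypothetical, and the CMU pillow's actual design is not documented in this article.

<pre>
import time

# Hypothetical sensor/actuator hooks; illustrative only, not the
# CMU pillow's real design.
def read_pressure_sensors():
    """Return a tuple of readings from pressure sensors in the pillow."""
    raise NotImplementedError

def drive_actuators(pattern):
    """Reproduce one recorded pressure pattern through actuators."""
    raise NotImplementedError

def record_hug(duration_s=5.0, rate_hz=50):
    """Sample the sensors for duration_s seconds; return the frames."""
    frames = []
    for _ in range(int(duration_s * rate_hz)):
        frames.append(read_pressure_sensors())
        time.sleep(1.0 / rate_hz)
    return frames

def replay_hug(frames, rate_hz=50):
    """Play a stored hug back at the rate it was recorded."""
    for pattern in frames:
        drive_actuators(pattern)
        time.sleep(1.0 / rate_hz)
</pre>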
  
In [[The Automatic Production of Space]], Plutowski (2000) identifies three broad categories of research within the area of affective or emotional computing.

See: [[Media Lab at MIT]]

==References==
<references />

[[Category:Book Pages]]
[[Category:Finished]]
[[Category:Illustrated]]

__NOTOC__