Document Type
Presentation
Publication Date
2-2006
Subjects
Human-computer interaction, Androids -- Design and construction, Robotics -- Philosophy
Abstract
As the field of robotics matures, robots will need some method of displaying and modeling emotions. One way of doing this is to use a human-like face on which the robot can make facial expressions corresponding to its emotional state. Yet the connection between a robot's emotional state and its physical facial expression is not an obvious one: while a smile can gradually increase or decrease in size, there is no principled method of using Boolean logic to map changes in facial expressions to changes in emotional states. We give a philosophical analysis of the problem and show that it is rooted in the vagueness of robot emotions. We then outline several methods that have been used in the philosophical literature to model vagueness and propose an experiment that uses our humanoid robot head to determine which philosophical theory is best suited to the task.
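To make the contrast in the abstract concrete, the following is a minimal illustrative sketch, not taken from the paper: it compares a Boolean mapping, which flips abruptly between "happy" and "not happy," with a graded mapping in the spirit of degree-theoretic treatments of vagueness. The function names, smile-width parameter, and thresholds are hypothetical choices made only for illustration.

    # Illustrative sketch only; names and thresholds are invented, not from the paper.

    def boolean_happiness(smile_width_mm: float, threshold: float = 20.0) -> bool:
        """Boolean mapping: the robot is either 'happy' or not, with no in-between."""
        return smile_width_mm >= threshold

    def graded_happiness(smile_width_mm: float, low: float = 5.0, high: float = 35.0) -> float:
        """Graded mapping: happiness is a degree in [0, 1] that rises smoothly
        as the smile widens, echoing degree-theoretic accounts of vagueness."""
        if smile_width_mm <= low:
            return 0.0
        if smile_width_mm >= high:
            return 1.0
        return (smile_width_mm - low) / (high - low)

    if __name__ == "__main__":
        for width in (0, 10, 20, 30, 40):
            print(width, boolean_happiness(width), round(graded_happiness(width), 2))

Under the Boolean mapping a one-millimetre change near the threshold flips the emotional state outright, whereas the graded mapping changes the modeled emotion only slightly, which is the kind of behavior the vagueness-based approaches surveyed in the paper are meant to capture.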
Persistent Identifier
http://archives.pdx.edu/ds/psu/12859
Citation Details
Serchuk, Phil, Ehud Sharlin, Martin Lukac, and Marek Perkowski. "Emotion-mapped Robotic Facial Expressions based on Philosophical Theories of Vagueness," 2006.
Description
Originally presented at the ACM Conference on Human Factors in Computing Systems (CHI 2006) workshop on HCI and the Face, Montréal, Canada, April 22-27, 2006.