Systems Science Friday Noon Seminar Series

Files

Download (2.7 MB)

Date

6-3-2011

Abstract

There is a general consensus among robotics researchers that the world of the future will be filled with autonomous and semi-autonomous machines. There is less of a consensus, though, on the best approach to instilling a sense of 'machine morality' in these systems so that they can interact effectively with humans in an increasingly complex world. In this talk, we take a brief look at some existing approaches to computational ethics, and then describe work we've undertaken creating multiagent simulations involving moral decision-making during strategic interactions. In these simulations, agents make choices about whether to cooperate with each other based on each agent's weighting of five moral attributes (reciprocity, harm avoidance, loyalty, authority, purity). Our hope is that watching how these populations evolve over time can provide insights into how large numbers of distributed, autonomous systems might be programmed with respect to moral decision-making and behavior.
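
To make the simulation setup concrete, the sketch below shows one way such a population might be coded. It is a minimal Python illustration, not the speaker's actual model: the five attribute names come from the abstract, but the random interaction cues, the prisoner's-dilemma payoffs, the cooperation threshold, and the fitness-proportional selection with Gaussian mutation are assumptions made here for the sake of a runnable example.

import random

# The five moral attributes named in the abstract; everything else below
# (decision rule, payoffs, selection scheme) is an illustrative assumption.
ATTRIBUTES = ["reciprocity", "harm_avoidance", "loyalty", "authority", "purity"]

# Assumed prisoner's-dilemma-style payoffs: (my move, partner's move) -> my payoff.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}


class Agent:
    def __init__(self, weights=None):
        # Each agent weights the five attributes; weights are normalized to sum to 1.
        w = weights or [random.random() for _ in ATTRIBUTES]
        total = sum(w)
        self.weights = [x / total for x in w]
        self.score = 0.0

    def cooperate_with(self, other):
        # Illustrative decision rule: cooperate when the weighted "moral pull"
        # of the encounter exceeds a threshold. The cues here are random
        # stand-ins; a fuller model would derive them from the partner's
        # history and the structure of the interaction.
        cues = [random.random() for _ in ATTRIBUTES]
        pull = sum(w * c for w, c in zip(self.weights, cues))
        return "C" if pull > 0.5 else "D"


def generation(pop, rounds=50):
    # Random pairwise encounters accumulate payoff ("fitness") for each agent.
    for _ in range(rounds):
        a, b = random.sample(pop, 2)
        ma, mb = a.cooperate_with(b), b.cooperate_with(a)
        a.score += PAYOFF[(ma, mb)]
        b.score += PAYOFF[(mb, ma)]
    # Fitness-proportional reproduction with small Gaussian mutation of the weights.
    new_pop = []
    for _ in pop:
        parent = random.choices(pop, weights=[ag.score + 1e-9 for ag in pop])[0]
        child_w = [max(1e-6, w + random.gauss(0, 0.05)) for w in parent.weights]
        new_pop.append(Agent(child_w))
    return new_pop


if __name__ == "__main__":
    population = [Agent() for _ in range(100)]
    for _ in range(200):
        population = generation(population)
    # Report the population-average weight on each attribute after evolution.
    for i, name in enumerate(ATTRIBUTES):
        avg = sum(ag.weights[i] for ag in population) / len(population)
        print(f"{name}: {avg:.3f}")

Tracking the population-average weights across generations, as in the final loop, is one simple way to watch which moral attributes come to dominate under a given payoff structure.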

Biographical Information

David Burke leads the Active Defense Program Area at Galois, Inc. The goal of this program is to translate computer science research into effective real-world solutions to the challenges of host- and network-based cybersecurity. The threat landscape is constantly evolving, and Galois solutions developed under this program are designed to adapt to, mitigate, and defeat these evolving threats. Mr. Burke received an M.S. in Computer Science from the Oregon Graduate Institute in 1998, and a B.S.M.E. from Lehigh University in 1983. He has over 15 years of experience in the application of mathematical modeling, machine learning, and data visualization to problems in the social sciences, with a specialization in Bayesian techniques for reasoning under uncertainty. His M.S. thesis was on the subject of the automatic generation of compilers from high-level specifications. At Galois, he conducts research into logics for reasoning about trust in the design of secure systems, and techniques for ensuring robust decision-making in multi-agent systems.

Subjects

Autonomous robots -- Moral and ethical aspects, Autonomous robots -- Technological innovations, Robotics -- Social aspects, Artificial intelligence

Disciplines

Applied Ethics | Robotics

Persistent Identifier

https://archives.pdx.edu/ds/psu/31204

Rights

© Copyright the author(s)

IN COPYRIGHT:
http://rightsstatements.org/vocab/InC/1.0/
This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).

DISCLAIMER:
The purpose of this statement is to help the public understand how this Item may be used. When there is a (non-standard) License or contract that governs re-use of the associated Item, this statement only summarizes the effects of some of its terms. It is not a License, and should not be used to license your Work. To license your own Work, use a License offered at https://creativecommons.org/

Evolving Machine Morality Strategies through Multiagent Simulations
