Evolving Machine Morality Strategies through Multiagent Simulations

Systems Science Friday Noon Seminar Series

Files

Download Full Text (2.7 MB)

Date

June 3, 2011

Description

There is a general consensus among robotics researchers that the world of the future will be filled with autonomous and semi-autonomous machines. There is less consensus, though, on the best approach to instilling a sense of 'machine morality' in these systems so that they can interact effectively with humans in an increasingly complex world. In this talk, we take a brief look at some existing approaches to computational ethics, and then describe our work creating multiagent simulations of moral decision-making during strategic interactions. In these simulations, agents choose whether to cooperate with one another based on each agent's weighting of five moral attributes: reciprocity, harm avoidance, loyalty, authority, and purity. Our hope is that watching how these populations evolve over time can provide insight into how large numbers of distributed, autonomous systems might be programmed with respect to moral decision-making and behavior.
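The abstract gives no implementation details, so the following Python fragment is only a speculative sketch of the kind of simulation described: a population of agents, each carrying a weighting over the five moral attributes, repeatedly playing an assumed Prisoner's Dilemma and reproducing in proportion to accumulated payoffs. The decision rule, payoff matrix, and selection scheme are all illustrative assumptions, not the speaker's actual model.

# Hypothetical sketch only: every payoff, decision rule, and update step
# below is an assumption chosen to illustrate the general shape of such
# a simulation, not the model presented in the talk.

import random

ATTRIBUTES = ["reciprocity", "harm_avoidance", "loyalty", "authority", "purity"]

class Agent:
    """An agent whose cooperation choices are driven by moral-attribute weights."""

    def __init__(self, weights=None):
        self.weights = weights or {a: random.random() for a in ATTRIBUTES}
        self.score = 0.0
        self.memory = {}  # partner id -> partner's last move (for reciprocity)

    def decide(self, partner):
        """Return True to cooperate, False to defect."""
        w = self.weights
        # Reciprocity responds to the partner's previous move; the other four
        # attributes act as static biases toward cooperation in this sketch.
        reciprocity = 1.0 if self.memory.get(id(partner), True) else 0.0
        total = sum(w.values()) or 1.0
        propensity = (w["reciprocity"] * reciprocity + w["harm_avoidance"]
                      + w["loyalty"] + w["authority"] + w["purity"]) / total
        return random.random() < propensity

# Standard Prisoner's Dilemma payoffs (an assumed strategic interaction).
PAYOFFS = {(True, True): (3, 3), (True, False): (0, 5),
           (False, True): (5, 0), (False, False): (1, 1)}

def play_round(a, b):
    move_a, move_b = a.decide(b), b.decide(a)
    pay_a, pay_b = PAYOFFS[(move_a, move_b)]
    a.score, b.score = a.score + pay_a, b.score + pay_b
    a.memory[id(b)], b.memory[id(a)] = move_b, move_a

def next_generation(population, mutation=0.05):
    """Keep the higher-scoring half; refill with mutated copies of survivors."""
    ranked = sorted(population, key=lambda ag: ag.score, reverse=True)
    survivors = ranked[: len(ranked) // 2]
    children = [Agent({a: max(0.0, v + random.gauss(0.0, mutation))
                       for a, v in parent.weights.items()})
                for parent in survivors]
    return [Agent(dict(parent.weights)) for parent in survivors] + children

population = [Agent() for _ in range(100)]
for generation in range(50):
    for _ in range(500):
        a, b = random.sample(population, 2)
        play_round(a, b)
    population = next_generation(population)

Tracking how the distribution of attribute weights shifts across generations is the kind of population-level observation the abstract describes.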

Biographical Information

David Burke leads the Active Defense Program Area at Galois, Inc. The goal of this program is to translate computer science research into effective real-world solutions to the challenges of host- and network-based cybersecurity. The threat landscape is constantly evolving, and Galois solutions developed under this program are designed to adapt to, mitigate, and defeat these evolving threats. Mr. Burke received an M.S. in Computer Science from the Oregon Graduate Institute in 1998 and a B.S.M.E. from Lehigh University in 1983; his M.S. thesis was on the automatic generation of compilers from high-level specifications. He has over 15 years of experience applying mathematical modeling, machine learning, and data visualization to problems in the social sciences, with a specialization in Bayesian techniques for reasoning under uncertainty. At Galois, he conducts research into logics for reasoning about trust in the design of secure systems, and techniques for ensuring robust decision-making in multi-agent systems.

Subjects

Autonomous robots -- Moral and ethical aspects, Autonomous robots -- Technological innovations, Robotics -- Social aspects, Artificial intelligence

Disciplines

Applied Ethics | Robotics

Persistent Identifier

https://archives.pdx.edu/ds/psu/31204
