Systems Science Friday Noon Seminar Series

Files

Download (142 KB)

Date

11-5-2010

Abstract

When we want to solve a problem, we talk about how we might manage or regulate—control—it. Control is a central concept in systems science, along with system, environment, utility, and information. With his information-theoretic Law of Requisite Variety, Ashby proved that to control a system we need as much variability in our regulator as we have in our system (“only variety can destroy variety”)—in effect, a method of control for everything we want to control. For engineered systems, this appears to be the case (at least sometimes). But what about for social systems? Does a group of humans behave with the same level of variability as a machine? Not usually. And when control is applied to a human system, in the form of a new law or regulation, individuals within it may deliberately change their behavior. A machine's behavior may also change when a control is applied to it—think of how emissions equipment affects the performance of an automobile (less pollution, but less power too)—but the machine doesn't (typically) adapt. People do. Does this pose a difficulty if we want to employ Ashby's law to solve a control problem in a human system? Or could our ability to adapt provide an advantage?
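For readers who want the law in symbols, one common information-theoretic statement (a paraphrase, not taken from the talk) bounds the entropy of the regulated outcomes O below by the entropies of the disturbances D and the regulator R:

    H(O) \ge H(D) - H(R)

Each bit of variety in the regulator can absorb at most one bit of variety in the disturbances. The same counting argument can be played as a toy game; the sketch below is a hypothetical Python illustration, not the speaker's example. A disturbance takes one of N values, the regulator sees it and answers with one of M moves, and no move can merge two disturbances into the same outcome, so the fewest distinct outcomes the regulator can force is N/M (rounded up):

    import math

    # Toy regulation game (hypothetical illustration of Ashby's law).
    # Disturbance d is one of N values; the regulator sees d and picks one
    # of M moves r; the fixed outcome table is outcome(d, r) = (d - r) mod N,
    # so a move shifts a disturbance but never merges two of them.
    N, M = 12, 4  # variety of the disturbances vs. variety of the regulator

    def regulate(d):
        # Best policy: reuse each of the M moves across disturbances,
        # collapsing them into groups of size M.
        return d % M

    outcomes = {(d - regulate(d)) % N for d in range(N)}
    print(len(outcomes))     # 3 distinct outcomes survive
    print(math.ceil(N / M))  # 3: outcome variety is at least N/M

Only by raising M to N, matching the system's variety, can the regulator force a single outcome.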

Ashby acknowledged that regulation is more difficult for very large systems, and many social systems are very large. With limited resources we may not be able to control for all the variety and possible disturbances in a very large system, and therefore we must make choices. We can leave the system unregulated; we can reduce the portion of the system we want to control; we can increase control over certain forms of variety and disturbance; or we can find constraint or structure in the system's variety and disturbances—in other words, create better, more accurate models of our system and its environment.

Creating better models has always been a driving force in the development of systems science. Conant and Ashby proved that “every good regulator of a system must be a model of that system” in a paper of the same name. Intuitively this makes sense: if we have a better understanding of the system—a better model—we should be better able to control it. But how well are we able to model human systems? For example, how well do we model intersections? Think about your experience in a car or on a bike at a downtown intersection during rush hour. Now think about that same intersection from the perspective of a pedestrian late in the evening. Did the traffic signals control the intersection efficiently under both conditions? What if we consider all the downtown intersections, or the entire Portland-area traffic system? What about even larger systems? How well can we model the U.S. health care system? What is the chance that, among a few thousand pages of new controls, a few will cause some unforeseen consequence? How well do we understand the economy? Enough to create a law limiting CEO compensation? Might just one seemingly straightforward control lead to something unforeseen?
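Conant and Ashby's result can be stated compactly (again a paraphrase, with notation that is not from the talk): if a regulator is optimal and as simple as possible, its action R must be a deterministic function of the state S of the system it regulates,

    R = h(S)

so the regulator's behavior is a homomorphic image, i.e., a model, of the system. Better regulation presupposes a better mapping h, which is what gives the modeling questions above their force.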

So what level of understanding must we have of a system, i.e., how well must we be able to model it, before we regulate it? We must still react to and manage, as best we can, a man-made or natural disaster, even when we may know very little about it at the start. Our ability to adapt is critical in these situations. But with that same ability to adapt we can also (given the proper resources) circumvent the intent of regulations, or use regulations to protect or increase our influence: consider “loopholes” in the tax code, or legislation with which large corporations can easily comply but which causes great difficulties for smaller businesses.

No matter what problem we have, it's important to understand what limits our ability to control and how controls may cause new and different problems; this will be the general focus of this seminar. A brief overview of Ashby's Law of Requisite Variety, along with a conceptual example, will be presented.

Biographical Information

Joshua Hughes is a third-year, core-option Ph.D. student and graduate assistant in the PSU Systems Science Graduate Program. He is working on research with George Lendaris on contextual reinforcement learning and experience-based identification and control, and he has recently collaborated with Martin Zwick on a paper showing how the panarchy adaptive cycle can be formalized using the cusp catastrophe. He is interested in information theory, cybernetics, reconstructability analysis, neural networks, fuzzy logic, catastrophe theory, game theory, and many other things.

Subjects

Neurons -- Physiology, Neural circuitry -- Mathematical models, Rate distortion theory, Information theory, System analysis

Disciplines

Theory, Knowledge and Science

Persistent Identifier

https://archives.pdx.edu/ds/psu/31007

Rights

© Copyright the author(s)

IN COPYRIGHT:
http://rightsstatements.org/vocab/InC/1.0/
This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).

DISCLAIMER:
The purpose of this statement is to help the public understand how this Item may be used. When there is a (non-standard) License or contract that governs re-use of the associated Item, this statement only summarizes the effects of some of its terms. It is not a License, and should not be used to license your Work. To license your own Work, use a License offered at https://creativecommons.org/

The Limits of Control, or How I Learned to Stop Worrying and Love Regulation (Discussion)
