2011 Webinar Profile
Myths About Autonomous Machines and Robots
Presented by David Woods, Ohio State University
Monday, May 23
ABOUT THE WEBINAR
When accidents occur in complex systems, the common belief is that the solution lies in advancing what devices can do autonomously. However, empirical results on the effects of delegating more autonomy and authority to devices show that those effects differ from what developers expect: automation surprises.
In this webinar, Woods will first cover how increases in automation change what constitutes expertise, how interdependent activities can be synchronized, and how complex systems become brittle. Even more puzzling is the persistence of the false belief that increases in isolated autonomy alone provide solutions. As he observed 30 years ago, when technologists are confronted by an automation surprise, they promise, "Just a little more autonomy will be enough, next time."
Second, Woods will review basic regularities regarding systems of people and automation. Autonomous capabilities always exhibit brittleness as situations inevitably develop beyond their boundary conditions. Computerized devices and systems are fundamentally literal-minded; as one father of computer science warned 50 years ago, a computer-based agent cannot tell whether its model of the world is the world it actually exists in. As a result of regularities like these, there is a fundamental asymmetry between people and machines as team players and responsible agents.
Finally, Woods will explore the implications for design responsibility. The basic finding is that developers evade responsibility for the negative consequences of new automation by blaming human operators. When automated systems fail, they ask, "Why didn't the operators stop the automation from pursuing the wrong course of action?" Responsibility for the effects of robot-mediated activities lies with the people who design and deploy robotic systems. This responsibility can be discharged by designing new automation that (a) is responsive to others as part of a larger system of coordinated activity, (b) supports the smooth transfer of control as situations change, and (c) contributes to systems that are resilient in the face of surprise.
Attendees may find it helpful to review chapters 10, 11, and 12 in Woods, D. D., & Hollnagel, E. (2006). Joint cognitive systems: Patterns in cognitive systems engineering. Boca Raton, FL: CRC Press.
ABOUT THE PRESENTER
David Woods, professor of cognitive systems engineering and human-systems integration at Ohio State University, was one of the pioneers of cognitive systems engineering in the aftermath of the Three Mile Island nuclear power accident. For 30 years, his program of research has studied how people cope with complexity in time-pressured situations such as critical-care medicine, aviation, space missions, intelligence analysis, and crisis management, including multiple accident investigations. Based on these results, he designs visualizations, perceptual interfaces, decision support, and collaborative systems to help people find meaning in large data fields when they are under pressure to diagnose anomalies, coordinate activities, and replan to overcome obstacles. In recent years Woods has helped to pioneer resilience engineering as a new approach to safety and complexity.
Woods is a past president and Fellow of the Human Factors and Ergonomics Society and is a recipient of the Laurel Award from Aviation Week and Space Technology, as well as other awards. He was an adviser to the Columbia Accident Investigation Board, was a founding board member of the National Patient Safety Foundation, has been a member of several National Academy of Sciences committees, and is the author or editor of eight books, including Behind Human Error, Resilience Engineering, and Resilience Engineering in Practice. He leads Ohio State University's initiative on Complexity in Natural, Social, and Engineered Systems.