
The information and materials contained in this paper have been developed from sources believed to be reliable. However, the American Society of Safety Engineers (ASSE) accepts no legal responsibility for the correctness or completeness of this material or its application to specific factual situations.

This paper is being considered for adoption as a formal position of ASSE. Presentation of this paper does not ensure that adherence to these recommendations will protect the safety or health of any persons or preserve property. Questions should be sent to tfisher@asse.org.

This Draft Statement was prepared by:

Automation vs Human Intervention
What is the Best Fit for the Best Performance?

Joel M. Haight, Ph.D., P.E., CIH, CSP
Dept. of Energy and Geo-Environmental Engineering
Penn State University
224 Hosler Building
University Park, PA 16802
Telephone: (814) 863-4491
Fax: (814) 865-3248
Email: jmh44@psu.edu

INTRODUCTION

Does automation of control systems in today's industry help to reduce human error? Intuitively, one would expect that if we engineered humans out of the system, errors would decrease. This would seem to be a worthwhile goal. In fact, the well-known management consultant Warren Bennis says that the factory of the future will have only two employees, a human and a dog: the human is there only to feed the dog, and the dog is there to bite the human if he or she touches anything (Paradies and Unger, 2000).

Human error is inevitable, and this may prompt us to think like Mr. Bennis (Hammond, year unknown). While it may seem appealing to automate humans out of the human-machine system, we do provide judgment, logic and opinions. As a component in the control system, humans are variable, interactive and adaptable, but error is a natural and inevitable result of this variability. Therefore, our input can be both a blessing and a curse to those responsible for system design or performance. Humans can fill many roles because we can adapt and specialize, but our natural variability also makes it possible for us to take actions that the system cannot tolerate (Lorenzo, 1990).

Human error is often implicated as a cause of or contributor to industrial, military, airline, agricultural and mining incidents that result in injuries, fires, spills or even unplanned equipment downtime. Even with the emphasis on automation in recent years, various sources still report that between 50% and 90% of industrial incidents are caused by human error. It is difficult to say exactly what the real figure is because it depends on one's perspective. Whatever your perspective, however, it can be reasonably well argued that a large percentage of industrial incidents are contributed to or caused by human error (Haight, 2003).

As technology improves and efficient, highly productive output becomes an absolute necessity for financial survival, we may be more inclined to use automated control systems. However, there is a cost to the growth of this phenomenon. Twenty-two years ago, this author, while learning the oil production industry, spent much time with experienced first-line supervisors. It was impressive to watch these people in action while driving around the oil fields. A supervisor would stop the truck upon hearing something out of the ordinary (a hiss, an unfamiliar vibration). He would walk to the equipment and listen more closely, place his hand on a pump or a piece of piping and then radio the maintenance supervisor to request the repair of a leak, a failing bearing or some other problem. These supervisors relied on experience and "sentient" knowledge (knowledge obtained from the senses) to operate the process. They had a "feel" for the system. With today's reliance on computer-controlled automation, can human operators develop this same level of experience or sentient knowledge?

While automation provides predictable, consistent performance, it lacks judgment, adaptability and logic. While humans provide judgment, adaptability and logic, we are unpredictable, inconsistent and subject to emotions and motivation. To maximize system performance, do we follow Mr. Bennis' suggestion and possibly lose the sentient knowledge or "feel" for the process, or do we maximize human input and lose efficient, consistent, error-free system performance? The answer likely lies somewhere between these two extremes and is different for each system and situation. This paper provides a review of the existing literature covering the many types of control schemes and the parameters that determine system performance. It seeks to answer the questions, "How can we minimize human error while still maximizing system performance?" and "What is the right human-machine mix?"

POSITION STATEMENT

In my opinion, the American Society of Safety Engineers should take the following positions:

  • Human error is inevitable
  • Automation of industrial system performance is an accepted means to minimize human error and its impact
  • System designers must recognize that humans provide valuable characteristics such as judgment, flexibility, adaptability, experience and sentient knowledge (feel for the process) and are therefore essential components in industrial systems
  • System designers must recognize that to achieve maximum system performance while minimizing human error and risk to human operators, they must fully understand human physical, mental and emotional capacities and limitations and account for these in their designs
  • Human errors and system risk can be minimized if system designers incorporate known principles of human-machine interaction and ensure humans remain mentally engaged in system operation
  • System performance can be maximized if designers incorporate into their designs optimized levels of the strengths of both the human and the machine components of the system

HOW DOES AN ENGINEER CONSIDER HUMAN AND AUTOMATED COMPONENTS?

While we would all like to minimize human error and maximize performance in any system, the decision of whether and how much to automate should be treated like any other design decision. There are a number of variables to consider when designing a system that requires a combination of human input and automated control. In addition to discussing these variables, it is important to establish some foundational definitions and human engineering considerations so that each situation can be evaluated against consistent principles.

Petersen (1996), quoting Peters (1966), tells us that "human error consists of any significant deviation from a previously established, required or expected standard of human performance." There are other definitions, but from the point of view of designing, operating and maintaining an industrial system, this one provides a working foundation. When humans make errors and cause a system to fail, the system does not fail for any one particular reason. It fails because of the kinds of people operating the system, the amount of training they have received and the degree to which they are physically and mentally able to cope with the way the system was designed. The failure can also be a function of the operating procedures provided for the person and the environment in which the people are working (Chapanis, 1972). Petersen (1996) explains that, with this view, we should recognize that most human errors are not made because the person lacks intelligence or is simply wrong. People commit errors because, in the heat of the moment, they make decisions and take actions that seem logical given the situations and systems in which they are operating. He tells us the errors are "caused".

While human error may be caused, history has shown that it is inevitable. It then stands to reason that anyone responsible for designing a system may want to engineer the "inevitable" human error out of the system, often by automating it. Although the desire to minimize human error is not the only reason for automating a system, it is one of the driving forces. Before delving into why and to what level an engineer/designer should or will automate a system, it is necessary to establish what "system automation" means in the context of this article. Parasuraman and Riley (1997) define automation as the execution by a machine agent (usually a computer) of a function that was previously carried out by a human. They explain that, by this definition, what is considered "automation" will change over time: if complete automation of a particular function is installed and becomes permanent, that function comes to be considered simply machine operation rather than automated operation.

One might automate a system to relieve humans of time-consuming and laborious tasks (Parasuraman and Riley, 1997), to speed the operation, to increase production rates, to extend an operation to a longer shift or even to continuous production, to reduce system inefficiencies or to ensure that physical specifications are maintained consistently. Automated operations are thought to be more efficient, reliable and accurate than humans, and it is often thought that a machine can perform a particular function at a lower cost than a human can. While many of these claims are true in some cases, humans still fill the valuable roles of decision-making, planning and creative thinking. Although these higher cognitive functions are being further explored through artificial intelligence and neural networks and can increasingly be assigned to higher-order machines, humans still perform them better and more completely than a machine (Parasuraman and Riley, 1997). As discussed in the introduction, humans are an interactive component in the overall system. We adapt, we specialize and we fill many roles in a system's operation, but this variability can be both positive and negative. While humans bring judgment, flexibility, experience and logic to the system, these same attributes and skills sometimes allow humans to make moves (accidentally or intentionally) that the system cannot tolerate (Haight, 2003 and Lorenzo, 1990). Some of these "moves" are discussed in greater detail later in this article.

A prevalent fear is that machines and automated systems are meant to replace humans, that is, to take away jobs. This fear may have been encouraged by the computer system HAL in the movie 2001: A Space Odyssey. According to Parasuraman and Riley (1997), automation never really removes the human from the system; it merely changes the nature of the role the human plays, from doer to overseer. A problem occurs, however, when the designer of an automated system does not fully consider the ways in which this human role is changed by the automation. Unintended or unanticipated human responses result when the designer has not adequately thought through all of the potential human responses.

Many industry people are familiar with the fact that the automated component of a system is often circumvented because the operators do not trust it, because it makes their work more difficult or because its use is more time consuming. Automation hardware malfunction also creates problems in the human-machine interface. If I am an operator charged with filling a storage vessel with liquid product and that vessel is equipped with a level control system and high-level alarm, I may be tempted to open the fill valve and leave the site to do other work while the vessel fills. If the control system and alarm fail and the tank overfills, chances are that human error will be cited as the cause rather than the malfunctioning automation. What is happening in this combined automated-human system? If the system were completely manual, the operator would have to open the fill valve and stand at the vessel the entire time it was filling in order to shut off the valve when the vessel was full. This is time consuming and tedious, and it idles an operator who could be doing other productive work (although it does keep him or her at least partially engaged). If the system were completely automated, the filling operation would take place without human intervention and the operator would be free to work on other productive activities (but the operator is no longer physically or mentally engaged and would likely not follow the operation very closely). However, if the automated fill system malfunctioned and the fill valve remained open, or the high-level alarm failed, it would only be by luck that an operator would see the level rising above the desired height and know how to intervene.
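As a rough illustration of the trade-off just described, the sketch below simulates a hypothetical vessel-filling operation under three modes: fully manual, fully automated with an occasionally failing high-level shutoff, and automated with an operator who checks periodically. Every name, rate and failure probability in it is an invented assumption for illustration only; it is not drawn from any real control system.

import random

# Illustrative only: all rates, probabilities and thresholds are invented.
FILL_RATE = 1.0             # level units added per minute while the valve is open
TARGET_LEVEL = 100.0        # desired fill level
OVERFLOW_LEVEL = 110.0      # level at which the vessel overflows
ALARM_FAILURE_PROB = 0.05   # chance the high-level shutoff/alarm fails on a given fill

def fill_vessel(mode, check_interval=15, rng=random):
    """Simulate one fill and return ('ok' | 'overflow', minutes of operator time used)."""
    level = 0.0
    minutes = 0
    operator_minutes = 0
    automation_failed = rng.random() < ALARM_FAILURE_PROB

    while level < OVERFLOW_LEVEL:
        level += FILL_RATE
        minutes += 1

        if mode == "manual":
            # Operator stands at the vessel the whole time and closes the valve on target.
            operator_minutes += 1
            if level >= TARGET_LEVEL:
                return "ok", operator_minutes

        elif mode == "automated":
            # Operator leaves; the shutoff works only if the automation has not failed.
            if level >= TARGET_LEVEL and not automation_failed:
                return "ok", operator_minutes

        elif mode == "supervised":
            # Automation does the work, but the operator walks by every check_interval minutes.
            if level >= TARGET_LEVEL and not automation_failed:
                return "ok", operator_minutes
            if minutes % check_interval == 0:
                operator_minutes += 1          # a brief check keeps the operator engaged
                if level >= TARGET_LEVEL:      # operator catches the failed shutoff
                    return "ok", operator_minutes

    return "overflow", operator_minutes

if __name__ == "__main__":
    random.seed(1)
    for mode in ("manual", "automated", "supervised"):
        results = [fill_vessel(mode) for _ in range(10_000)]
        overflows = sum(1 for outcome, _ in results if outcome == "overflow")
        avg_operator_time = sum(t for _, t in results) / len(results)
        print(f"{mode:10s}  overflows: {overflows:5d}/10000   "
              f"avg operator minutes per fill: {avg_operator_time:5.1f}")

Under these made-up assumptions, the manual mode never overflows but consumes all of the operator's time, the fully automated mode frees the operator but overflows whenever the shutoff fails, and the supervised mode catches the failure at the cost of a few minutes of periodic checking.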

Given this dilemma, what is the appropriate level of automation for an operating system? The correct answer is probably "it depends": somewhere between completely manual and fully automated, and dependent on the system and the application. It may also depend on who is being asked the question. Is it age dependent? Is it dependent upon the technological savvy of the person being asked? How would one determine this? This article will explore the possibilities.

Why automate a system?

An automated system is thought to perform a function more efficiently, reliably and accurately than a human operator. There is also the expectation that the automated machine can perform the function at a lower cost than the human operator, and so as technology has advanced and become more prevalent, it has been and is being applied at a rapid rate (Parasuraman and Riley, 1997). Little argument can be waged against the efficiency, reliability and accuracy of an automated system. With higher reliability, it could be argued that the system is a safer system as well. System failures and upsets often lead to incidents in which people are injured, containment of toxic or flammable materials is lost or catastrophic rupture of equipment results in significant damage to the surroundings. It could also be argued that keeping the human operator out of the system protects, in a sense, the human from him or herself and thus improves the safety of the system. Therefore, even though economics has been a primary driving force in the development and growth of automation in recent years, it can be argued, on one level, that an automated system is also a safer system.

Since designers are not able to "program in" all possibilities, they tend to integrate humans into the system, at the very least as a supervisor or monitor. This, however, puts the operator in the position of having to respond to an action that has already been taken by the automated system, and the operator has to respond on the system's terms. Under these terms there is not as much room, or at least not as much time, for human judgment and creativity in deciding the most appropriate response in time to avert an upset. The operator is in a "catch up" mode and at a disadvantage in terms of heading off an incident (Parasuraman and Riley, 1997). Unfortunately, it has been the case on occasion that, in an effort to involve humans in the operation of the system, designers give the human operator override capabilities and some room for discretion without providing adequate system feedback. Human operators are often not given appropriate feedback about the system's intentions or actions soon enough to allow them to take actions that are correct and complete enough to avoid a system upset or incident. The full potential of the automated system (efficient, reliable, safe, economic operation) cannot be realized if the human operator makes errors that bring the system down. The question becomes, "why incorporate the human into the system at all?" This issue is developed next.


Why Actively Engage Humans in System Performance?

Humans provide judgment to the system. It is a common perception that humans are more flexible, adaptable and creative than a purely automatic system, and that they are therefore better able to respond to changes or unforeseen conditions (Parasuraman and Riley, 1997). When is this adaptive response necessary? First, since automated systems are designed by humans, it is not possible for the designer to foresee, plan for and design an automated response into a system for every possible situation in the complex world in which we live. Therefore, the best the designer can hope for is to understand the environment in which the system will operate well enough to foresee and plan for as many events and conditions as possible. Then the system designer must integrate the human operator into the system to an adequate level to provide the judgment and adaptability needed to implement alternate responses as unforeseen events present themselves.

This is where the challenge lies. Humans are a difficult species. We are driven by ambition and emotion, we are subject to inconsistencies and forgetfulness and we allow our cognitive functions to disengage without realizing it. We switch from "habits of mind" to "active thinking" several times throughout the workday, and it is difficult to tell what triggers and motivates the switch (Louis and Sutton, 1991). It is known, however, that, in general, the more humans remain engaged in the process, the more likely they are to maintain the active thinking mode. It is this active thinking that a designer would want to maximize as he or she decides what to automate, what not to automate and to what level. Unfortunately, increasing automation tends to promote the likelihood that humans will switch to the habits of mind mode, as it takes over some of the functions we would otherwise be expected to carry out (Louis and Sutton, 1991).

Once a designer has created a system in which the human operator remains engaged, he or she must be sure that the hardware remains reliable and that malfunction is minimized. An important component in the human-machine system is trust. The operator has to be able to trust the automation to be accurate, functional, reliable and consistent. Once the operator sees that the hardware is inaccurate or prone to malfunction, the trust goes away and the operator begins to under-rely on the automation. This means he or she will shut down, circumvent or disable the automation and rely too heavily on his or her own manual input. The safety, risk-reduction and error-reduction benefits of the automation are then lost.

The opposite phenomenon is also a problem. If the system is designed to minimize human input and is known to be accurate and reliable, the operator may be more likely to switch to the "habits of mind" mode and will tend to over-rely on the automation (Parasuraman and Riley, 1997; Louis and Sutton, 1991). The operator then allocates no attention to the system, counting on the automation to take care of everything. A common joke around an oil refinery (which involves many automated systems) is to ponder the question, "If everyone just walked away from the refinery and left it to function on its own, how long do you think it would continue to operate before it experienced either a complete shutdown or an explosion?" Thankfully, the answer to this question has not been determined experimentally, but it would probably be "not long." An example of over-reliance on an automated system is Eastern Airlines Flight 401, which crashed in the Florida Everglades in 1972. The crew became preoccupied with a landing gear indicator light that would not activate. They placed the aircraft on autopilot and set the altitude for 2,000 feet while they addressed the landing gear. When the autopilot was accidentally disengaged, no one recognized it, and the plane continued under manual control, gradually losing altitude until it contacted the ground, killing many of those on board.

If the automated system is known to be completely accurate and reliable over time, the operator may come to over-trust the system, switch back to the habits of mind mode and no longer remain cognitively engaged (Parasuraman and Riley, 1997). One way to help keep an operator cognitively engaged is to provide accurate and understandable feedback about system status and mode. If the operator understands the system's status, he or she is in a much better position to respond to upset conditions (Mumaw et al. 2000). This feedback has to be provided in a timely manner and delivered such that the human operator is not overwhelmed by too much feedback at one time. In one case at an oil refinery, a furnace explosion occurred after the high-temperature detection and alarm system malfunctioned, allowing the furnace tubes to overheat and rupture. A large volume of flammable liquid spilled into the furnace firebox and ignited. The rapid combustion catastrophically ruptured the firebox, and while this was happening, the operator received over 300 alarms in less than five minutes. Under these conditions, human error is inevitable.
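The 300-alarms-in-five-minutes situation above points to one concrete design consideration: feedback should be prioritized and rate-limited before it reaches the operator. The sketch below shows one simple, hypothetical way this could be done (a priority threshold plus per-tag rate limiting); the tags, priorities and limits are invented assumptions, not details from the incident described.

from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative sketch only: tags, priorities and limits are invented for this example.

@dataclass
class Alarm:
    time_s: float   # time the alarm was raised, in seconds
    tag: str        # instrument tag, e.g. "TI-101" (hypothetical)
    priority: int   # 1 = critical ... 4 = informational
    message: str

@dataclass
class AlarmFilter:
    """Annunciate important alarms; rate-limit repeats of the same tag."""
    max_priority: int = 2           # suppress anything less important than this level
    repeat_window_s: float = 60.0   # at most one annunciation per tag per window
    _last_seen: Dict[str, float] = field(default_factory=dict)

    def present_to_operator(self, alarms: List[Alarm]) -> List[Alarm]:
        shown = []
        for alarm in sorted(alarms, key=lambda a: a.time_s):
            if alarm.priority > self.max_priority:
                continue  # low-priority chatter is logged, not annunciated
            last = self._last_seen.get(alarm.tag)
            if last is not None and alarm.time_s - last < self.repeat_window_s:
                continue  # this tag alarmed recently; avoid flooding the operator
            self._last_seen[alarm.tag] = alarm.time_s
            shown.append(alarm)
        return shown

if __name__ == "__main__":
    # A hypothetical flood: 150 alarms from seven instruments in five minutes.
    flood = [Alarm(t, f"TI-{100 + (t % 7)}", 1 + (t % 4), "high temperature")
             for t in range(0, 300, 2)]
    visible = AlarmFilter().present_to_operator(flood)
    print(f"{len(flood)} alarms raised, {len(visible)} presented to the operator")

The point is not this particular policy but that the designer, rather than the operator in the heat of the moment, should decide in advance how a flood of feedback is condensed into something a human can act on.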

It appears that an optimized system, one that maximizes performance and minimizes human error, will operate somewhere between full automation and complete manual control. Where on this continuum a system should operate depends, of course, on the application. Whatever the application, to maximize system performance a designer should maximize the use of what machines do best: accurate, consistent, fast, continuous, economic operation. To minimize human error, a designer must integrate human input into the system in such a way that the operator stays mentally and physically engaged. He or she should have a monitoring role in the automated system with override capabilities as needed, should receive adequate feedback on system status with enough time to respond and should be able to trust the accuracy and reliability of the system. This is a tall order for a designer or an engineer, and it will be different for each system. An engineer must understand not only how the automation hardware and software operate but also the inner workings of the human operator: physical, mental, motivational and emotional capacities, training level, experience level and so on.

Too often, an engineer will design the system to an automation level that maximizes the economic benefit and then, almost as an afterthought, leave the human operator to manage the system as best he or she can (Parasuraman and Riley, 1997). This leads to increased errors, upsets, injuries, fires, spills and the like. Let us evaluate what can go wrong in this case.

What can go wrong in an automated system?

As mentioned above, designers tend to automate a system with economic benefit as the driving force for determining which aspects of the system will be automated and how. They then integrate the human into the system in an "after the fact" manner, leaving him or her to manage the results of the automation (Parasuraman and Riley, 1997). This approach can work provided the designer considers issues such as performance feedback to the operator, the operator's level of training and experience, the degree to which the automation allows the operator to remain mentally engaged in the system operation and the speed at which the operator must respond to system feedback. As of 1997, designers of automated systems had not systematically and fully integrated these human variables, in part because of the difficulty of accounting for these performance and training issues. This is evident in the accounts of several accidents in which airplanes operating on autopilot were flown into terrain and rail cars operating under speed constraints derailed at high speed (Parasuraman and Riley, 1997). Why does there appear to be a disconnect between automated system designers and human operators?

Is there a disconnect? The literature points out that incidents involving automated systems appear to involve human operators abusing, disusing or misusing the automated components of the system they are operating (Parasuraman and Riley, 1997). Misuse can be defined as under-reliance or over-reliance on the automated components. Under-reliance refers to an operator not relying on the automation when it is required or prudent; it is characterized by an operator circumventing a storage vessel's level control system to silence a bothersome alarm or disconnecting a turbine's overspeed trip mechanism to avoid an alarm. Over-reliance refers to an operator completely turning the operation over to the automated control system and withdrawing his or her own valuable input to the system's performance; it could be characterized by an operator leaving the job site and allowing the flooded boot of a reflux drum to drain on its own via the control system. The well-known cartoonist Gary Larson captured this in one of his Far Side cartoons: two airplane pilots, seemingly trusting their automated control system completely, peer through an opening in the clouds at a mountain goat and ask, "What is that mountain goat doing way up here in this cloud bank?"

As the level of automation increases, operators may be losing the sentient knowledge of the process that they once had. There seem to be fewer and fewer people who can tell, simply by smell, sound and feel, what is going wrong in a process. If our human operators are not able to tell that an upset is impending, they will have to rely on the automation to catch and quell the problem. Yet the automation cannot be designed and built to account for every possibility, and we need our human operators to make judgments and interpretations and to take actions that prevent incidents. Without the sentient knowledge one develops from experience and from living with the system and all of its faults, the risk of an incident is higher. The more automated a system becomes, the more over-reliant the operator becomes, the more he or she stays in the "habits of mind" mode and the less likely he or she is to switch cognitive gears when needed (Louis and Sutton, 1991). How does an engineer design the optimum system?

Mathematical Modeling of the "right mix"

The first step in any design process is to develop and quantify the system performance variables. For most engineers, this is an everyday part of the job. In this case, however, most mechanical, electrical or instrumentation engineers are not trained to understand, much less quantify, the human attributes of motivation, training, emotion, judgment, flexibility, adaptability, fatigue or boredom. Engineers must also account for the operator's trust in the system by designing in maximum reliability. Because little information is available in this area, the designer has to quantify the automated system variables in terms of human needs. Once this is complete, an optimization function must be built from an understanding of the mathematical relationship among all of these variables.

In its most general form, this relationship might be written as

Y = f(A, H, E)

where Y is overall system performance, A represents the automation variables, H the human variables and E the errors.

This is undeveloped territory and, for the most part, it is approached with subjective judgment in the cases where engineers are thinking about the automation-human interface at all. In many cases, engineers treat the human operator as an afterthought who must align with the system or be left to struggle with it. More research is needed so that system designers and human factors engineers can jointly study the interface between the two parts of the system.
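As a purely hypothetical sketch of what such a quantification might look like, the fragment below treats overall performance Y as a weighted combination of assumed throughput, error-rate and engagement curves, each a function of the automation level A, and then searches for the level that maximizes Y. The functional forms and weights are invented assumptions, not validated human factors models.

# Hypothetical sketch: the functional forms and weights below are assumptions,
# not validated human factors models.

def error_rate(a: float) -> float:
    """Assumed human error rate versus automation level a in [0, 1].

    Errors drop as automation removes manual tasks, but climb again near full
    automation as the operator disengages ("habits of mind", over-reliance).
    """
    manual_errors = 1.0 - a            # fewer manual tasks, fewer slips
    disengagement = a ** 3             # over-reliance grows sharply near a = 1
    return 0.6 * manual_errors + 0.8 * disengagement

def throughput(a: float) -> float:
    """Assumed production benefit of automation (diminishing returns)."""
    return a ** 0.5

def engagement(a: float) -> float:
    """Assumed operator engagement, which erodes as automation takes over."""
    return 1.0 - a ** 2

def performance(a: float, w_throughput=1.0, w_error=1.5, w_engagement=0.5) -> float:
    """Y = f(A, H, E): overall performance for automation level a."""
    return (w_throughput * throughput(a)
            - w_error * error_rate(a)
            + w_engagement * engagement(a))

if __name__ == "__main__":
    # Simple grid search over the automation level.
    best_a = max((i / 100 for i in range(101)), key=performance)
    print(f"best automation level under these assumptions: {best_a:.2f} "
          f"(Y = {performance(best_a):.3f})")

Under these made-up assumptions the optimum falls in the interior of the range, which is consistent with the argument that the best operating point lies between fully manual and fully automated.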

Conclusions

Mr. Bennis' exaggerated portrayal of the factory of the future may never come to pass. There are many who hope it does not, and rightly so. Humans provide valuable input to any system, and it should be the goal of every system designer to maximize that input in order to take advantage of our judgment, flexibility, experience, adaptability and motivation. At the same time, the engineer must maximize system output by relying on automation to help cover for our inattentiveness, our inconsistencies, our lack of endurance, our lack of vigilance and all of our other physical and cognitive limitations. More research is needed to understand the relationship between the automation and human variables so that appropriate quantifications can be made to facilitate the design process. More education of our engineering students in the area of human factors is needed to achieve this level of understanding in the future designers of industrial systems.

REFERENCES

  1. Haight, J.M., To Err is Human and That is Hard for Engineers, Engineering Times, 24, 4, p. 5, 2003
  2. Hammond, K.R., quote from John Stuart Mill, Human Judgment and Social Policy: Irreducible Uncertainty, Inevitable Error, Unavoidable Injustice, year unknown
  3. Lorenzo, P.E., A Manager's Guide to Reducing Human Errors: Improving Performance in the Chemical Industry, Chemical Manufacturers Association, 1990
  4. Louis, M.R. and Sutton, R., Switching Cognitive Gears: From Habits of Mind to Active Thinking, Human Relations, 44, 1, 55-76, 1991
  5. Mumaw, R.J., Roth, E.M., Vicente, K.J. and Burns, C.M., There is More to Monitoring a Nuclear Power Plant than Meets the Eye, Human Factors: The Journal of the Human Factors and Ergonomics Society, 42, 36-55, 2000
  6. Paradies, M. and Unger, L., TapRoot®: The System for Root Cause Analysis, Problem Investigation, and Proactive Improvement, System Improvements, Inc., Knoxville, TN, 2000
  7. Parasuraman, R. and Riley, V., Humans and Automation: Use, Misuse, Disuse and Abuse, Human Factors: The Journal of the Human Factors and Ergonomics Society, 39, 2, 230-253, 1997
  8. Peters, G. (secondary source), Human Error: Analysis and Control, Journal of the ASSE, 11, 1, 1966
  9. Petersen, D., Human Error Reduction and Safety Management, Van Nostrand Reinhold, New York, 1996
  10. Petroski, H., Design Paradigms: Case Histories of Error and Judgment in Engineering, Cambridge University Press, 1994
  11. Salthouse, T.A., A Theory of Cognitive Aging, Elsevier, Amsterdam, 1985
  12. Senders, J.W. and Moray, N.P., Human Error: Cause, Prediction and Reduction/Analysis and Synthesis, Erlbaum Associates, Hillsdale, NJ, 1991