
Why Is Safety So Hard?

Marc Green


We shall understand accidents when we understand human nature. (Kay, 1971)

Safety is usually a continuous fight with human nature. (Geller, 2001)

Why is it so hard to convince people to avoid "unsafe acts," which some claim cause about 90% of all accidents1? The answer is likely to upset some people. It lies in the two quotes above. Humans are designed to make judgments and to behave in certain ways, presumably because evolution has found them advantageous. In many cases, however, human judgments and behavior are labeled "unsafe" when they appear risky or simply when they are followed by an accident. Kay's observation about accidents is critical for two reasons. First, the operation of human nature explains previous accidents and allows prediction of future ones. Second, any safety measure that runs contrary to human nature is doomed to fail in the long run. This makes safety, in Geller's words, a continuous fight with human nature.

Unfortunately, many authorities focus on issues like procedures, training, etc. without any attempt to understand people or what governs their behavior. They ignore the real issue - the psychology of human nature. Acting "safely" frequently goes against human nature and often requires "irrational" behavior. Yes, you read that right. The safe thing is often the irrational thing. Of course, it all depends on what is meant by "safe" and "irrational", two rather vague terms. Perhaps most importantly, it depends on your point of view: who is more likely to benefit from the "safety" and who pays the cost.

Much of the material written about safety reads like a combination of the Ten Commandments and Pollyanna. The Commandments part is the "Thou shalt not..." safety rules. One behavior is safe while another is "unsafe." Calling these rules commandments is appropriate because they carry an almost religious aura. The Pollyanna part is the naive expectation that people will base their behavior only on the commandments and ignore all other sensory, contextual, and social information as well as their own previous experiences.

Expecting "hypercompliance" with rules is generally naïve (e.g., Rebbitt & Erickson, 2016). Rules do not capture the complexity of the real-world. No one has ever devised a set of rules that can cover all situations. Rules also typically reflect the vision of people who have a limited understanding of the real behavior, who are examples of the Dunning-Kruger effect (less knowledge leads to more certainty), and who are often guided by White Hat bias. Even the best rules are only guidelines that people will and often should ignore for practical reasons. Moreover, humans are not by nature rote rule followers because we are analog, not digital (Norman, 2020). We respond in a graded fashion proportional to graded information. Digital rules such as "cross when the light is green but not when it is red" seem artificial and arbitrary to our human nature. Instead, we just cross when the traffic gap looks long enough (offers an affordance) even if we may have to hurray a bit.

In short, the Commandment approach to safety is pure wishful thinking and is bound to fail for three reasons. First, no one has ever devised a set of rules that can cover all situations. Even the best rules are only guidelines that people will, and often should, ignore for practical reasons. Second, it assumes that individuals will ignore all the other information provided by their past experiences and future goals. It pays no heed to the realities of our evolved human nature. Third, it often ignores the incentive structure of the situation. "Unsafe" behavior often has higher utility than "safe" behavior. Anyone who is ignorant of human nature or who expects people to always follow the commandments despite human nature is setting himself up for disappointment, not to mention disaster.

The root issues are goals, efficiency, and utility. To start, safety is never the main goal, despite all the nonsense rhetoric about safety being the number one priority. No one does anything for the main purpose of being safe. Instead, people have goals which they hope to achieve. They may trade safety off against efficiency in achieving a goal, but the goal, and not the safety, is the primary focus. Anyone who has crossed the street knows this. When you cross the street, the goal is to get to the other side. Certainly, you don't want to be struck by a car, but the goal is to cross the street. In fact, you probably don't think about risk and safety at all. Instead, you think of crossing as a task to be performed and then try to achieve the goal without being struck. This is Summala's "zero-risk theory," which says that people don't take risks. Instead, they adjust their behavior so that they perceive risk as essentially zero - they are always acting safely as far as they are concerned. Certainly, if you ask a pedestrian whether there is risk in crossing the road in the middle of the block, the answer will be "yes," but this is just a conscious, socially acceptable response. In the real world, people often don't think; they just act as if there were no, or at least negligible, risk. One study of household accidents found that 74% of injured persons did not believe that they were running any risk at all.

Further, safety is inefficient and never free. Someone has to pay in terms of efficiency, i.e., effort, money, time, etc. When telling people to act safely, you are exacting a tax from them. Efficiency and safety are generally in conflict. This leaves open the question of who is going to pay the tax and who benefits from it. In many cases, the people who must pay the tax are not the authorities who are dictating the commandments, so preaching safety is easy for them. In some cases, there is no obvious way to promote safety other than a set of commandments. In others, the authorities could redesign the situation to remove the hazard or to create strong guards, but this is inefficient for them. It is in their best interests to require that individuals follow the commandments, as it saves them much trouble and expense. Their basic view is that the system is inherently safe until someone screws up.

Moreover, following safety rules is often irrational. To act safely, an individual must believe that some benefit offsets the increased effort and inefficiency. Often, it would be irrational to expect the payoff to justify the cost.

In understanding why safety is so hard, familiarity with two basic psychological sciences is critical. One is psychophysics and the other is operant learning. The first, in part, studies how people make decisions in the face of uncertain evidence. The second studies how the consequences of behavior (the "contingencies") influence and shape behavior.

Putting On the SDT Glasses

One simple way to understand why safety is so hard is through a psychophysical analysis called "Signal Detection Theory" (SDT). I love SDT. I admit it. I'm an SDT junkie. From the time of my master's research 47 years ago until today, I have been constantly amazed at how viewing the world through SDT glasses brings scientific research and everyday decisions and actions into clear focus2.

I have discussed SDT elsewhere (Green, 2017), but a brief review goes like this. Humans are often required to decide whether or not to respond (say "yes" that a "signal" is present) under uncertainty. In the present case, the signal is a hazard that might be encountered. The individual must decide whether to act safely or not. This gives rise to a "payoff matrix". Figure 1 shows a simple, straightforward one. There are four possible outcomes in the example. Two are correct decisions (H and CR) and two are incorrect (M and FA).

>Yes: Hit (H): the decision is to act safely and the hazard is encountered;
>Yes: False Alarm (FA): the decision is to act safely and the hazard is not encountered;
>No: Correct Rejection (CR): the decision is to act "unsafely" and the hazard is not encountered;
>No: Miss (M): the decision is to act "unsafely" and the hazard is encountered.


Figure 1. Simple SDT decision matrix.
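
To make the matrix concrete, here is a minimal sketch in Python. The outcome labels follow Figure 1; the lookup-table structure itself is just my illustration, not part of SDT proper.

```python
# A minimal sketch of Figure 1's payoff matrix as a lookup table.
# The (decision, hazard) pairs and outcome labels follow the text.

outcomes = {
    # (decision, hazard encountered?): outcome
    ("act safely",   True):  "Hit (H)",
    ("act safely",   False): "False Alarm (FA)",
    ("act unsafely", False): "Correct Rejection (CR)",
    ("act unsafely", True):  "Miss (M)",
}

for (decision, hazard), outcome in outcomes.items():
    print(f"{decision:13s} | hazard: {str(hazard):5s} -> {outcome}")
```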

In sum, humans have a choice of saying Yes or No, which here corresponds to acting "safely" or "unsafely". Humans make the decision using two sources of information. The first is sensory information obtained directly through the eyes, ears, etc. For example, if you see a fire, don't put your hand in it. If you see the edge of a cliff, don't step over it. In such cases, safety is easy.

Safety is hard in other cases, where encountering the hazard is not perceptually obvious and/or lies in the uncertain future. You might have a car crash if you travel 10 mph over the speed limit. The crash is neither visible nor certain in advance. In many cases, the consequences are far in the future, e.g., wearing a breathing apparatus will prevent silicosis in 30 years. Every action is a bet about what will happen next. The question is how the odds are figured. Individuals want to make the right decision, of course, but the uncertainty requires them to also answer the question, "If I am going to err (FA or M), what kind of error do I prefer and what is the probability of that error?" That is where the second information source comes in.

This second source is nonsensory information based on stored knowledge and beliefs. Humans use the nonsensory information to set a criterion for when to say "yes" and act safely and when to say "no" and act "unsafely"3. Apart from simple sensory cases like seeing the fire, everyday decisions require the weighing of probabilities and payoffs. Humans want to maximize "utility", the overall benefit/loss that accrues from the various responses in the long run. In SDT, the goal is not to make the maximum number of correct decisions but rather to optimize the overall utility of decisions.

In SDT, nonsensory information divides into two general categories, payoffs and probabilities. Probabilities are the likelihood that a "signal" will appear, i.e., that there will be a crash. Probabilities create expectations. If I judge the crash probability to be higher, then I am more likely to wear a seatbelt. The payoffs are the benefits and losses that result from each of the four matrix possibilities.

For example, suppose a worker has to decide whether to wear a hardhat on a construction site:

Correct responses would be:

>H: Suppose that he says "Yes" to wearing a hardhat and an object falls on his head. The hardhat saves his life, creating a very big +.
>CR: Conversely, not wearing the hardhat creates a small comfort + if nothing falls on his head. On a very hot day, the + might grow, however.

Incorrect responses would be:

>FA: He says "Yes" to wearing a hardhat and nothing falls on his head. There is a very small - due to hassle and discomfort.
>M: He decides not to wear a hardhat, but an object falls on his head. That is a fairly large -!

The decider presumably looks at the matrix and judges whether saying Yes or No has the greater utility. The same analysis applies to almost any decision to act safely or "unsafely", e.g., the wearing of seatbelts, breathing equipment in mines, high visibility clothing when bicycling, etc. More generally, it could apply to the decision to follow almost any safety rule or warning. Even more generally, it applies to almost any decision made with uncertain or incomplete evidence - which is almost every one.

The highly asymmetrical payoff matrix would seem to dictate behavior. In the hardhat situation, for example, it might seem that wearing the hardhat could ironically be termed a no-brainer. But wait! There's another factor. A meteor might someday fall from the sky and kill me. Should I go out and buy meteor insurance? Looking only at the payoffs - the cost of an M - I most certainly should. However, it is a very unlikely event. In deciding whether to act safely (buy insurance, wear a hardhat, buckle a seat belt, etc.), I must somehow combine the probabilities with the payoffs. After all, life is full of hazards, and I would be completely immobilized if I didn't allow for the low probability of most. In fact, it would be irrational.
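
The combination is just a probability-weighted sum of the payoffs. Below is a sketch for the hardhat decision; every number is invented for illustration and says nothing about real construction-site risk:

```python
# Illustrative only: combining payoffs with probabilities for the hardhat
# decision. All numbers are invented; none come from real accident data.

p_object_falls = 1e-4      # assumed daily chance of a falling object

hit            = +1000.0   # wear hardhat, object falls: life saved
false_alarm    = -1.0      # wear hardhat, nothing falls: hassle, discomfort
miss           = -1000.0   # no hardhat, object falls: serious injury
correct_reject = +1.0      # no hardhat, nothing falls: small comfort gain

eu_wear = p_object_falls * hit + (1 - p_object_falls) * false_alarm
eu_skip = p_object_falls * miss + (1 - p_object_falls) * correct_reject

print(f"EU(wear) = {eu_wear:+.4f}   EU(skip) = {eu_skip:+.4f}")
# With these numbers the many small CR gains outweigh the rare huge loss,
# so "unsafe" wins the utility calculation; the break-even is near p = 1e-3.
```

Note that even a 1000-to-1 payoff asymmetry loses to a 10,000-to-1 probability asymmetry, which is exactly the meteor-insurance logic.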

In the equation of safety, an individual may judge that a large number of small payoffs offers more utility than the very rare large loss. Humans are willing to gamble when the odds are heavily in their favor, but they do not see it as gambling. Humans are poor at estimating the probabilities of rare events. Prospect Theory (Kahneman & Tversky, 1979) suggests that humans exhibit a "pseudocertainty effect", treating low-probability but uncertain events as if they would not happen. Humans then prefer a response which produces a high-probability small gain rather than one which avoids a low-probability great loss, which they judge as having essentially no chance of occurring. Moreover, zero-risk theory says that when humans believe they can control risk, they act to bring its probability down to virtually nil. In short, "Accidents do not occur because people gamble and lose, they occur because people do not believe that the accident about to occur is at all possible" (Wagenaar & Groeneweg, 1987).
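
A minimal sketch of the pseudocertainty idea, assuming the effect can be caricatured as a cutoff below which a probability is subjectively rounded to zero. The cutoff and payoffs are my inventions, not Kahneman and Tversky's model:

```python
# Caricature of pseudocertainty: probabilities below a subjective cutoff
# are treated as zero. Cutoff and payoffs are invented for illustration.

NEGLIGIBLE = 1e-3   # assumed "that can't happen to me" threshold

def subjective_p(p: float) -> float:
    return 0.0 if p < NEGLIGIBLE else p

def expected_utility(p_loss: float, loss: float, gain: float,
                     perceived: bool = False) -> float:
    p = subjective_p(p_loss) if perceived else p_loss
    return p * loss + (1 - p) * gain

# A 1-in-10,000 chance of a -1000 loss roughly cancels a +0.1 sure gain...
print(expected_utility(1e-4, -1000.0, +0.1))                  # ~0.0, a wash
# ...but subjectively the risk is erased and only the sure gain remains.
print(expected_utility(1e-4, -1000.0, +0.1, perceived=True))  # +0.1
```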

Operant Learning, Practical Drift, and Wicked Environments

This is the core problem of safety. How does a decider combine payoffs with probabilities to reach a rational decision, one that maximizes utility? The exact probabilities are unknowable, and the payoffs are highly subjective and difficult to quantify. They have several dimensions. Magnitude is obviously one, so avoiding the loss of a leg is a greater + than avoiding the loss of a toe. However, in many cases, the payoffs differ in kind and cannot be directly compared. A common conundrum, for example, is comparing the cost in money to physical harm, loss of time, or increased effort. Immediacy is another important factor in determining payoffs. As many operant studies have shown, immediate payoffs are far more potent than delayed ones. It is easy to see why a miner might not wear breathing equipment: the payoff of wearing one and avoiding silicosis has uncertain probability in the distant future, while the benefit of not wearing one is immediate and certain. Similarly, smokers gain a small, immediate, and certain pleasure at the risk of an uncertain long-term large negative outcome. The miner and the smoker then choose the smaller comfort benefit, which in hindsight looks irrational.
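
One standard way to model the potency of immediacy is hyperbolic discounting from the operant literature, V = A / (1 + kD). The sketch below applies it to the miner; the discount rate, probability, and payoff values are all assumptions of mine:

```python
# Hyperbolic discounting sketch of the miner's choice. The form
# V = A / (1 + k*D) is standard in the operant literature; the specific
# k, probability, and payoff numbers are invented for illustration.

def discounted(amount: float, delay_days: float, k: float = 0.05) -> float:
    return amount / (1 + k * delay_days)

comfort_now = discounted(+1.0, delay_days=0)        # skipping the respirator today
p_silicosis = 0.05                                  # assumed lifetime probability
silicosis   = p_silicosis * discounted(-5000.0, delay_days=30 * 365)

print(f"immediate comfort: {comfort_now:+.2f}   discounted, uncertain harm: {silicosis:+.2f}")
# The huge loss, once delayed 30 years and made uncertain, shrinks below
# the small but immediate and certain comfort gain.
```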

The world looks very different in hindsight and in foresight. In hindsight, it is easy to focus only on the payoffs and to ignore the probabilities and to then blame the individual for not acting "safely". However, after the fact, the probability of the negative outcome is one. In foresight the probabilities must be taken into account. Decisions are then far less obvious and acting safely is not necessarily the high utility behavior.

In sum, when deciding whether or not to respond, there might be greater perceived utility in a response that produces a very low gain with a very high probability over one that avoids a very great loss with a very low probability. In other words, many small but highly probable payoffs can win out over one big but improbable disaster that will likely never occur. Of course, human ability to accurately assess payoff magnitude, and especially probability, is limited and often highly inaccurate. This is a whole other topic of its own.

Figure 2 shows a personal example. Several times a week I come to the point shown. I have to decide whether to act "unsafely" (to cross so that I can take the shortest path home and risk being struck by a car) or whether to act safely, in this case walking to the nearest intersection with the light and crossing with presumably negligible risk. The distance is 266 feet, which I could walk in about one minute. Still, I cross the road at this point almost every time. After all, I've crossed there many hundreds of times without incident. Would it be rational to pay the tax of walking to the intersection if I see no car coming? I don't think so. My direct sensory information easily overrides the commandment, which is very broad and does not consider the specific context. Besides, even if there are cars, I have control over the situation. Probability is not purely a statistical concept. It is also based on my perceived ability to control the risk. In fact, people tend to think causally, not probabilistically, so I can make the signal probability essentially zero by crossing the street with a sufficient gap.


Figure 2. Pedestrians cross illegally and ignore signs because probabilities trump payoffs in the utility calculation.

If traffic is heavy, however, my control is reduced and the payoffs shift. The wait for a "safe" gap is so long that the cost of walking down to the intersection is less, so I cross at the light. This points out the fatal flaw in many commandments - they are general rules that do not necessarily correlate with the actual risk level at any given moment. Incidentally, the sign shown in the figure was erected only last year in an attempt to stop the many pedestrians who cross there. Predictably, it has had absolutely no effect. Apparently, the city is now planning to put in a traffic signal at that location although, as far as I am aware, there has never been an accident there. It will be safer in theory, but an efficiency tax will then fall on the drivers, not on pedestrians like me who had been crossing illegally. This is a common way of increasing safety - find some scapegoat to pay the safety tax to compensate for the "unsafe" behavior of others. Drivers will usually be forced to stop unnecessarily at the red light despite a complete absence of pedestrians. This creates a "cry wolf" effect that leads drivers to pay less attention to other pedestrian warnings in general. Of course, pedestrians will probably just cross against the light if they perceive a sufficient gap, anyway.

The pedestrian example also highlights the issue of "risk" and "safety" always being contextual. Is drinking three beers "unsafe" behavior? If I'm sitting on my couch at home, then probably not. If I'm walking along the edge of a cliff, probably yes. Commandments are context independent. They often command people to act in ways that their direct perceptions show to be nonsensical because nothing bad is going to happen. Of course, human perception is not absolutely perfect. About 4500 to 6000 pedestrians die on US roads every year. This sounds like a large number, so following crossing commandments is only rational, right? Maybe not. The number of road crossings a year is unknown, but it must be very high. Even in a statistical sense, then, crossing at that sign is not obviously "unsafe" behavior. If there are 5,000 pedestrian fatalities and half a trillion4 road crossings a year, then the odds of being killed crossing the street are one in 100,000,000. What's to worry?
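
The arithmetic, using the assumptions in footnote 4:

```python
# Back-of-envelope check of the crossing odds, using footnote 4's assumptions.

people, crossings_per_day, days = 275_000_000, 5, 365
crossings_per_year = people * crossings_per_day * days   # ~5.0e11, half a trillion
fatalities_per_year = 5_000

risk = fatalities_per_year / crossings_per_year
print(f"risk per crossing: about 1 in {1 / risk:,.0f}")  # ~1 in 100,000,000
```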

One solution is to make the warning information contextually valid. Figure 3 shows a better solution, a "pedestrian crossover", that contextualizes the warning. Rather than a traffic light, the pedestrian pushes a button to light the overhead yellow warning and stop traffic. If there are no cars approaching, then there is no need to illuminate the warning and the pedestrian just crosses. If there is no pedestrian, then there are no illuminated signs and no need for drivers to stop unnecessarily. It is probably not as effective at stopping cars as a red light, but it is a reasonable compromise between constantly crying wolf and pedestrian safety at a midblock location5.


Figure 3. A pedestrian pushes the button indicated by the red arrow in order to illuminate the overhead signals.

Likewise, people can almost always speed, tailgate, drive intoxicated, cross against the light, ignore protective clothing, refuse to comply with product warnings, and so on without suffering ill consequences. People learn that mishaps are rare, even when performing behavior that authorities might deem "unsafe". Attempts to make people act cautiously with a lower criterion (more H's and FA's) through warnings, etc. run head on into the problem that experience tells people that they can set their criterion high (more M's and CR's). For example, drivers believe that tailgating is not dangerous because they have done it many times yet have had no collisions. I am speaking loosely here because they do not necessarily "believe" in any conscious sense. On the contrary, they are making an unconscious inference based on experience with the contingencies. Operant shaping usually occurs outside of awareness.

In many cases, individuals learn to act "unsafely". They start out being careful and following rules. They then learn the contingencies, which shape their behavior. They depart from "safe" behavior slightly and nothing bad happens while efficiency increases. Then they depart a bit more, and again efficiency increases and again nothing bad happens. This is called "practical drift" and is caused by the gradual shaping of behavior by its contingencies, as described by Thorndike's Law of Effect. As Skinner noted, it is much like evolution, where random genetic variations are naturally selected to shape species. In practical drift, behavioral variations are selected by the Law of Effect.

One good example of practical drift occurs in car-following. A driver must choose an acceptable following distance behind the vehicle ahead. The novice starts following at a relatively long distance where he feels comfortable. Behavior is variable, and every so often he finds himself closer. He might initially feel some discomfort, but nothing bad happens. He is reinforced for the greater efficiency of the shorter following distance. This process repeats, and he is gradually shaped to a shorter and shorter distance, just as the rat in Chapter 2 was shaped to press the key. A similar explanation can account for excessive speeding, failure to search, and other "unsafe" road user behaviors - pedestrians accept shorter gaps, bicyclists slow less at intersections, etc. However, shaping has limits. Drivers eventually reach a following distance that they will not shorten further because they would no longer be comfortable (Summala, 2007). Unfortunately, the routine and automatic nature of most driving disguises the risk and suppresses the discomfort.

The drift also leads to "consequence traps" (Fuller, 1991), where the roadway teaches drivers to take risks, as in close car-following. Practical drift's power is demonstrated in data (Yue, Yang, Pei, Chen, Song, & Yao, 2020) showing that drivers increase their following distances after the punishment of a rear-end collision, but then gradually drift back to their pre-crash short following distances.
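
As a sketch of the mechanism, here is a toy simulation of the shaping process. The variability, punishment rate, comfort floor, and recovery jump are all invented parameters; the point is only the qualitative drift:

```python
import random

# Toy simulation of practical drift in car-following. Each trial the
# following distance varies a little; shorter distances that go unpunished
# are kept (reinforced), down to a comfort floor. All parameters invented.

random.seed(1)
distance = 40.0        # meters; cautious novice baseline
comfort_floor = 12.0   # distance the driver will not drop below (cf. Summala, 2007)
p_crash = 0.0005       # punishment is rare in a forgiving environment

for trial in range(5000):
    attempt = distance + random.gauss(0, 1.0)    # behavioral variability
    if random.random() < p_crash:
        distance = min(distance + 15.0, 40.0)    # punished: back off sharply...
        continue                                 # ...but the drift resumes (cf. Yue et al., 2020)
    if comfort_floor < attempt < distance:
        distance = attempt                       # shorter and unpunished: reinforced

print(f"following distance after shaping: {distance:.1f} m")
```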

This car-following example also highlights a more general roadway problem - the "wicked environment." Drivers must learn "safe" behavior by associating behavior with outcomes. In an ideal world, there would be a perfect correlation between behavior and outcome. In this "kind environment," learning to distinguish between safe and unsafe behavior would be fast and accurate. However, the contingencies between roadway behavior and outcome are very weak. The roadway generally has a large safety margin and much variability, so drivers are not always reinforced for behaving safely and are seldom punished for behaving "unsafely" (Rumar, 1982). The variability of the road environment makes it difficult for drivers to connect behaviors with outcomes. The environment is then "wicked" because there is not a close correlation between behavior and outcome. Drivers frequently speed, run red lights, drive intoxicated, text, and perform other risky behaviors because they are seldom punished. Conversely, safe behavior often results in punishment. One common example occurs when a driver leaves a large gap in car-following and another vehicle cuts him off. In other cases, the punishment is slower travel and delay.

Once a person believes that he has learned the contingencies, he often develops a sense of control. While the existence of a potential hazard may be consciously acknowledged ("I could get hit by a car crossing midblock"), the perceived risk is negligible ("There is no risk if I choose a safe gap"). In some cases, the individual need not go through the shaping process himself, but rather models the behavior of cohorts through "VT&E" (vicarious trial and error). Individuals determine appropriate behavior largely by watching the actions of others, especially when they are viewed as more experienced and knowledgeable. Parents who walk and bicycle illegally are teaching risky behavior to their children. The modeling of "unsafe behavior" is frequent. Everyone has seen the case where a group of pedestrians stands at an intersection with a red light. When one person starts to walk across, many soon follow. Observed social behavior readily overrides rules and training. "Unsafe" behavior is a contagion that spreads rapidly.

The message here is that people often act "unsafely" because the perceived probabilities and payoffs say that the "unsafe" act has higher utility. "Unsafe" behavior is often perfectly rational. The gain may be small, but it is sure. The loss may be large, but virtually impossible. Or so it seems. Road users learn to loosen attempted safety restraints through practical drift, the shaping of behavior by its contingencies, sometimes reinforced by social modeling. This is another example of learning the system's fault tolerance, as in Chapter 8. The lesson is that "[r]oad accidents do not just happen: We learn to have them" (Fuller, 1991).

Affordances

Commandments often fail because humans are likely to use, or even rely upon, other sources of information. So far, I have described three: learned contingencies, the observed behavior of others, and the observed level of hazard/control. A fourth information source is "affordances", one of the most important but seldom discussed influences on human behavior.

When looking at a scene, a viewer automatically sees sensory attributes such as color, size and shape. Less obviously, the viewer also automatically sees the actions that can be performed on the objects and space in the scene. In a real sense, objects and environments communicate directly with us by their "affordances" (Gibson, 1977), cues from their appearance that tell us how to use them.

Affordances tell us what to do in very direct fashion. Even without written instructions or warnings, a user automatically sees that a handle affords grasping, a flat horizontal surface at waist height affords the placing of objects, a flat horizontal surface one foot above the ground affords stepping, a doorway affords passage, etc. Commandments that contradict strong object affordances are fighting an uphill battle. For example, ladders usually have a commandment stating that stepping on the top rung is "unsafe" behavior. It is highly foreseeable that this warning will frequently fail because the ladder rungs are stepping affordances. Moreover, the top rung looks exactly like all the other rungs, and objects which look the same will be treated similarly. The ladder is telling us how it should be used.

Affordances provide information about what to do at an automatic and primitive level of human mental processing. Never forget that reading and understanding text is not highly natural to humans. In contrast, the use of affordances is deeply wired into us. Our instinct is to unconsciously and automatically act on affordances. Moreover, humans have a strong "recency bias", the tendency to act on the most recent information available. The characteristics of the present situation, including the affordances, may override commandments encountered in the distant past during training, etc.

Who Benefits From "Safe" Behavior?

The question of probability is even more complex because there are two ways of looking at it. There is the individual's perspective on the probability that the hazard will be encountered, and there is the authority/employer's perspective. Thinking in terms of the group rather than the individual changes risk perception. Airlines now tell passengers to keep their seatbelts buckled throughout the flight in order to prevent being thrown from their seats by sudden air turbulence. Should I wear my seatbelt throughout an airline flight and suffer the cost of a false alarm (being uncomfortable in my seat, having to buckle and unbuckle when someone in my row wants to go to the lavatory, experiencing a feeling of being controlled, passively complying with the nanny state, etc.) when the likelihood of my encountering that turbulence is minuscule? Sure, the cost is low, but so is the probability. From the airline's perspective, passengers' buckling of seatbelts has high utility by preventing injury and lawsuits. The probability multiplied by the number of passengers makes it highly likely that turbulence will occur on some of their many flights and that wearing the seatbelt will be a "hit" for a small number of passengers.

Moreover, the airlines aren't the ones who pay the costs of the false alarm. For them, it's a pure gain. For individual passengers, the correct decision is not so obvious. The cost-benefit of a safety rule depends on who receives the benefit and who pays the cost. Like all warnings, the seat belt warning is an attempt to offload the cost onto the user while the likely benefit is enjoyed by the warning issuer. The commandment to act safely by wearing the seatbelt is just great for the airline, but it is not so clear that it is rational behavior for the passengers. Having individuals wear a seatbelt at all times may be rational for maximizing airline utility, but not necessarily for maximizing passenger utility. The case of the pedestrian crossing above is even more obvious - why should a pedestrian not cross when he judges that there is a gap so large that there is no risk? Individuals are aware of this, so they are skeptical about the credibility of such commandments.
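
A back-of-envelope sketch of the two perspectives. Both numbers are invented to show the scale effect, not real aviation statistics:

```python
# Individual vs. group probability, with invented numbers. The point is
# only that summing tiny probabilities over a fleet yields near-certainty.

p_turbulence_injury = 1e-5           # assumed per-passenger, per-flight chance
passengers_per_year = 100_000_000    # assumed annual passengers for one airline

expected_events = p_turbulence_injury * passengers_per_year

print(f"one passenger, one flight: {p_turbulence_injury:.0e}")   # ~never
print(f"expected events across the airline: {expected_events:,.0f} per year")
# For the passenger, buckling buys almost nothing; for the airline, the
# same commandment prevents a near-certain stream of injuries and lawsuits.
```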

Further, if the commandments are not enforced, the obvious conclusion is that even the authority who created them believes them unimportant. Of course, enforcement would defeat the real reason why the authority created the commandments - reducing "unsafe behavior" for free. Enforcement costs money. Moreover, it is often simply impractical.

In many cases, society attempts to promote safety by persuading people to adopt a low decision criterion (many false alarms but very few misses) despite their personal experiences suggesting very low signal probability. This, of course, is why both government and "public interest" groups create inflated and ultimately misleading statistics about some potential hazard - they are trying to increase perceived signal probability. The meaningless "alcohol-related" crash statistic is a good example. This, along with other educational approaches, attempts to cognitively change the risk perception that develops through direct experience. After all, almost all people who speed, drive intoxicated, etc. do not have a collision. Drivers must be convinced by abstract information, such as accident rates, to believe what they have not directly experienced.

The evidence shows that this strategy has, at best, mixed success against people's personal experiences of probability (and their doubts about the credibility of the sources). As explained elsewhere (Green, 2017), individuals often learn directly through experience in a specific task that following the commandment and paying the tax is unnecessary. Even worse, individuals learn that the probabilities favor "No" and that the commandment source has no credibility, which reduces the effectiveness of future commandments from the same and similar sources. Perhaps the classic example is work zones that have signs to slow down followed by miles of orange barrels with nary a sign of workers or construction in sight. It is hardly surprising that drivers slow little when encountering construction areas. Although many authoritative sources tell construction companies to remove signs and barrels when no work is being performed, the guideline is often ignored. This "cry wolf" effect kills the effectiveness of road signs in general.

A discussion of the commandment approach to safety reveals both why safety is so hard and what to do about it. Most obviously, it explains why the safety hierarchy is so important: hazards should be designed out if possible, and reliance on warnings or other forms of persuasion is unreliable or worse. It also explains why the proliferation of commandments and overwarning is bad - it increases the a priori probability that prohibitions are just noise (see California Prop 65, for example). Sometimes it seems as if every device and product on the market carries the phrase "may cause serious injury or death," probably on lawyers' advice. It also explains why enforcement and high penalties are often the best strategies; they change the contingencies by adding a cost to what had been a positive CR. The strategy of improving safety solely by changing behavior through education, etc. simply does not work very well (e.g., Novoa, Pérez, & Borrell, 2009).

Lastly, the term "safety" is relative. The world is probably a safer place than it once was. Improved technology, adoption of the safety hierarchy, etc. have lowered the probability of death and injury in most fields of activity. This ironically makes further safety gains even more difficult. The signal probability has decreased, so deciders are biased even more toward saying No and acting "unsafely." Individuals may then engage in more "unsafe" behavior by "risk compensating", the tendency of individuals to take more risks when they perceive the situation as safer. The research literature is full of case studies where increased safety measures have been partly or completely negated by increases in "unsafe" behavior, e.g., cars with ABS and seatbelts/airbags.

Conclusion

So why is safety so hard? The answers are:

1. Individuals act "unsafely" in order to optimize perceived utility, choosing a very probable small gain over avoiding a very large but highly improbable (or subjectively impossible) loss. This is often a rational choice from the individual's perspective;

2. Individuals often feel that they can control the situation, making perceived probability of loss essentially nil;

3. Individuals find that the commandments exert an efficiency tax. Experience shows that departing from these rules produces no harm, so there is a "practical drift" away from them. Authorities often tacitly encourage the drift because it improves efficiency and cuts costs;

4. The first three points are reinforced by the pervasiveness of warnings and commandments for extremely unlikely and minor negative outcomes. Individuals see them as "crying wolf" and believe that they represent nothing more than attempts at CYA ("Covering Your Ass"). The result is a loss of source credibility;

5. Safety rules are usually context independent, so they exact a needless tax in many situations. This is apparent to individuals either by observation or by experience;

6. Humans must often learn to distinguish "safe" from "unsafe" behavior in wicked environments where the association between action and outcome is weak;

7. Many individuals exhibit a strong optimism bias: if something bad has never happened before, then they assume it won't happen in the future;

8. Objects and environments directly communicate how they can be used. Unintended affordances may direct individuals toward "unsafe" behavior;

9. Authorities who look at group behavior sum the low probabilities, so their payoff matrix is very different. They count numbers, not probabilities. They are also often the beneficiaries of safe behavior without having to pay any of the tax. Moreover, commandments act as a CYA mechanism. Since new commandments cost authorities nothing and are beneficial, they readily create new ones that ignore human nature and that are unreasonable and unlikely to be followed on a regular basis;

10. Failure to enforce the commandments suggests that even the authorities don't believe them to be important. From the authorities' viewpoint, making commandments and hoping for the best may have more utility than enforcing them or performing a redesign that makes the commandments unnecessary. However, leaving safety up to individual decision-making is far less certain;

11. Accident causation is often attributed to "unsafe" behavior because hindsight focuses only on payoffs and ignores probabilities. Such attributions are often counterproductive because they forestall the search for real causes such as system design;

12. Safe behavior tends to asymptote. Safety measures reach diminishing returns because fewer accidents decrease signal probability and bias deciders even more toward saying "No" - acting "unsafely." If individuals feel safer and more in control, then risk compensation can increase "unsafe" behavior. There may be a practical limit to how much safety is possible solely by relying on human behavior.

Footnotes

1The first comprehensive study was Heinrich (1931), Industrial Accident Prevention: A Scientific Approach. He attributed 88% of all accidents to "unsafe" behavior. More modern studies tend to find similar results, although these are likely gross overestimations resulting from a poor understanding of human factors.

2SDT is actually a complex mathematical model of optimally efficient behavior under uncertainty that hypothesizes the sensory distributions of signal and noise. It is not for the scientifically faint of heart. The aspects discussed here are only the tiniest tip of the iceberg, without the mathematics. However, they are enough to appreciate SDT's benefits in the analysis of decision making.

3Technically, this means setting the Yes/No criterion point, or "beta," on the likelihood ratio - the relative probability that the current observation arose from signal rather than from noise. The ratio matters because the noise level varies from observation to observation. I ignore this issue for simplicity's sake.

4Assumes 275,000,000 people cross the road an average of five times per day, 365 days a year.

5City rules say that the crossovers should not be used under some conditions that apply at this location. This is counterproductive because it ignores the damage done by putting in a traffic signal.

6My experience in research with children aged 5-10 and the elderly seems to bear this out. I have run both groups on visibility tests. On each trial, they were to say "Yes" or "No" to indicate whether they saw the target on a screen. It was very hard to run the tests on children - they had a very strong tendency to say "Yes," even on trials when the target was absent. Conversely, the elderly had a pronounced "No" bias.