
Driver Error: The 94% Myth

Marc Green


I am posting this page as a rebuttal to the growing number of citations claiming that driver error causes as much as 94 percent of road accidents. This number has been used both to justify measures that restrict drivers and to fuel a dangerous headlong rush to put self-driving cars on public roads while the technology is far from perfect. However, a closer look shows that although drivers do cause many, perhaps most, accidents, the percentage is nowhere near the 90+ percent that often appears in print. The discussion is an outgrowth of my review of the Uber self-driving car-pedestrian death, which is cross-linked in several places below.

However, there is a greater lesson to be learned by examining the claim that drivers cause 90+ percent of accidents. No one should blindly accept assertions that "X causes 94 percent of Y," "X increases Y by 94 percent," or worse, simply that "X causes Y." Don't believe what you read, especially about statistics, until you critically evaluate the source and, most importantly, identify the "operational definition" being used to distinguish cases.

Upon hearing such statements, the listener/reader should start asking questions. The first and most basic is:

1. "Where did that number come from?"

In the case of road accident causation, there are two likely sources. One is the Tri-Level Study of the Causes of Traffic Accidents (Treat, Tumbas, McDonald, Shinar, Hume, Mayer, Stansifer, & Castellan, 1979). According to this study, driver error contributes to 92.6 percent of all accidents. However, this source is an old academic publication. Those less knowledgeable about the road safety literature but more web-savvy are more likely to cite a 2015 National Highway Traffic Safety Administration (NHTSA) report, Critical Reasons for Crashes Investigated in the National Motor Vehicle Crash Causation Survey. According to this report, human error "was involved" in 94 percent of all crashes. This all sounds very damning until you ask the next important question:

2. "How did the source decide whether a crash was due to human error?"

The Tri-Level study isn't entirely clear, but one of the authors (Shinar) explained the criterion in a Q&A session following a conference paper (Rumar, 1985):

I was involved in that study, and it gets cited a lot for the wrong reason… That study showed behavior that, had it been different, the accident would have been prevented. That study never purported to say that these are the cause of the accidents.

In sum, even the author noted that the study has been widely misunderstood. The study only said that the crash could have been avoided if driver behavior had been different. This is quite different from saying that the driver caused the crash.

The NHTSA report makes a similar statement, but it is even more explicit about the criterion for classifying driver involvement in a crash. In a section seldom noted by those citing the 94 percent figure, NHTSA says that the number indicates only cases where the driver had the last chance in the event sequence to avoid the crash. This seems similar to the criterion used by the Tri-Level study. Moreover, in line with Shinar's comments, NHTSA then states that the number "is not intended to be interpreted as the cause of the crash nor as the assignment of fault to the driver, vehicle or environment."

In sum, neither the Tri-Level study nor the NHTSA report actually says that drivers caused 92.6/94 percent of crashes. But their disclaimers read like meaningless fine print, and it is easy to see how a casual reading suggests that these numbers refer to causation. Doesn't the descriptor "error" automatically imply that the driver did something wrong? The 94 percent "was involved" is quickly transmuted into 94 percent "caused."

However, the probing should not stop here; it should continue with yet another question:

3. "What operational definition was used to determine when the driver had the "last chance" to avoid?

The concept of "operational definition" is one of the most important yet seldom discussed aspects of science. It is the specific criterion used to classify cases. For example, the term "abuse" originally referred to physical abuse but now often extends to include verbal abuse. The result is a great increase in the number of "abuse" cases. Those in abuse treatment, etc., can then demand more attention, money, and power. Changing operational definitions to widen scope and increase numbers runs rampant in the victimhood-industrial complex, but its members are far from the only culprits in this regard. The lesson is that it is easy to make data tell almost any story simply by changing the operational definition of the objects of study.

With that in mind, the next step in examining the 94 percent claim is to determine the criterion used to classify accidents as involving human error. Several obvious criteria suggest themselves:

1. The human was simply the element that directly interacted with the crash. This criterion seems unlikely; otherwise, human error would have been 100 percent.

2. A successful avoidance response was in the realm of possible human behavior. This means that a human could theoretically have anticipated the collision and responded, e.g., it wouldn't require a 0.25-second perception-response time, since that is beyond human capability in roadway situations.

3. A successful avoidance response was in the realm of likely human behavior.

The reports' use of the phrase "last chance" strongly suggests that they were using the "possible" and not the "likely" definition. This would drastically inflate the apparent role of "driver error" in accidents. It implies that drivers should always avoid any collision that is within human capacity to avoid. However, this is simpleminded thinking, as many interacting factors can influence whether a driver responds in time to avoid a collision (e.g., Green, 2024).
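To make the point concrete, here is a minimal sketch in Python. The crash times and perception-response thresholds are hypothetical numbers invented purely for illustration; the point is only that the same data yield very different "driver error" percentages depending on which operational definition is applied:

    # Hypothetical time (seconds) each driver had available to perceive
    # and respond before impact -- invented data, for illustration only.
    available_time = [0.3, 0.6, 0.9, 1.1, 1.3, 1.6, 1.8, 2.0, 2.4, 3.0]

    # "Possible" definition: count a crash as driver error if avoidance
    # was within the outer envelope of human capability (assumed 0.75 s
    # bare-minimum perception-response time).
    POSSIBLE_PRT = 0.75

    # "Likely" definition: count a crash as driver error only if a
    # typical driver would probably have responded in time (assumed
    # 1.5 s typical real-world perception-response time).
    LIKELY_PRT = 1.5

    def error_rate(times, threshold):
        """Percentage of crashes classified as 'driver error'."""
        avoidable = [t for t in times if t >= threshold]
        return 100 * len(avoidable) / len(times)

    print(f"'possible' criterion: {error_rate(available_time, POSSIBLE_PRT):.0f}% driver error")
    print(f"'likely' criterion:   {error_rate(available_time, LIKELY_PRT):.0f}% driver error")

With these made-up numbers, the "possible" criterion labels 80 percent of the crashes as driver error while the "likely" criterion labels only 50 percent. The crashes never changed; only the operational definition did.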

Moreover, there is evidence that transportation safety organizations drastically overestimate the contribution of human behavior to accidents. I have discussed this elsewhere, but for now I'll just note one study (Holden, 2009), which examined 27 National Transportation Safety Board (NTSB) investigations and found that they attributed causation to a human in 26 of them.

The next obvious question is:

"Why did the sources adopt the 'last chance' criterion to inflate their numbers?"

The question has several likely answers. One set lies in social/ideological and incentive factors:

  • "White Hat" bias (e.g., Cope & Allison, 2010). This is the distortion of science for righteous ends. Authorities turn scientific findings into propaganda to promote a goal that they see as socially desirable. In this case, the goal is increased road safety1. The more that organizations such as NHTSA blame drivers, the more pressure there will be to restrain drivers in some way, usually reduced speed limits, etc.

  • Incentives. When weighing new observations or information, it is advisable to keep in mind the question posed by the Roman orator Cicero: Cui bono? By blaming drivers more, NHTSA creates a bigger crisis, which leads to a bigger budget, greater control, and more power. As I've explained, this magnification of a problem is a common tactic that plays out over and over. "Drunk driving" is another good example (I'll be writing about that soon), but the tactic appears in the use of statistics on any social issue, especially where harm or injury is concerned. It is Salesmanship 101: convince the buyer that he has a problem and then offer to supply a solution, for a price, of course. The bigger the problem, the higher the price. As I explain in depth elsewhere, to understand scientific research you must also understand the goals of those conducting it. There is often a large gap between what Matt Ridley terms "science as a philosophy" and "science as an institution," science as actually practiced. Scientists are just people, so scientific research has the biases and foibles of people. How could it be otherwise?

  • Adherence to the old view of human error. The traditional approach to accidents, now often called the "old view" of human error, contrasts with the more modern "new view" (e.g., Dekker, 2002). These represent different philosophies of accident analysis. The old view holds that humans are the major cause of problems, so they are almost inevitably to blame: the system is safe until humans screw it up. The new view says that human error is a consequence, not a cause, of accidents; the causes lie in the circumstances and the system design. While some road accidents are likely better analyzed from the old view, many are best understood from the new view. However, transportation safety organizations, especially the NTSB, apparently cling to the old view even when it is clear that circumstances produced the outcome. The Uber self-driving car fatality is a perfect example.

The second set of factors comprises innate (or strongly cultural) cognitive biases in the way humans reason about causation. I'll stop here and direct your attention to this page, which discusses the cognitive biases in more detail.

Conclusion

I did not write this article because I believe that drivers seldom cause road accidents. No, I wrote it for both more specific and more general reasons. The specific reason is to debunk the myth that drivers cause 94 percent of road accidents. Even taken at face value, the reports make no such claim. Moreover, their methodology likely exaggerates the role of driver "error" in collisions. The general reason is to remind readers not to uncritically accept research, and especially statistics, because they are so easily manipulated, both mathematically and through the selective use of operational definitions.

Lastly, be especially on guard when you find yourself automatically nodding in agreement at a headline such as "X causes 94 percent of Y" or worse, simply "X causes Y." Heed the words of another great Roman thinker, Decius Caecilius Metellus:
The will to believe is mankind's greatest source of error.

Endnotes

¹ In other cases, the goal is more specific and has White Hat elements, i.e., the active promotion of bicycles, electric vehicles, self-driving cars, etc., which are seen as socially beneficial.