How Reliable is Your Reliability Data?


BY CHERYL TULKOFF

“Science is an ongoing race between our inventing ways to fool ourselves, and our inventing ways to avoid fooling ourselves.”1

The quote above came from a fascinating article Nature published in 2015 about how scientists deceive themselves and what they can do to prevent it. Nature was responding to a rash of articles decrying bias, poor reproducibility, and inaccuracy in published journal studies. Although the original articles focused on the fields of psychology and medicine, the topic is directly applicable to the electronics field and especially relevant to professionals performing failure analysis and reliability research. Reliability data has always been extremely sensitive, both within and between companies. You’ll rarely see reliability data unless the results are overwhelmingly positive or they resulted from a catastrophic event. Furthermore, the industry focuses more on how to organize and analyze data and less on the best way to select or generate that data in the first place. Can you truly rely on the reliability data you see and generate?

■ Figure 1. Biases & De-biasing Techniques

There are definitely some relevant bias recognition and prevention lessons we should all learn and share. For example, how many times have you been asked to analyze data only to be told the expected conclusion or desired outcome before you start? Figure 1 illustrates a few common forms of scientific bias as well as some de-biasing techniques.2 I will discuss these techniques in further detail.

First, the term “bias” has many definitions both inside and outside of scientific research. The definition3 I prefer is that bias is any deviation of results or inferences from the truth (reality), or the processes that lead to that deviation. An InformationWeek article sums up the impact of data bias on industry well, stating: “Flawed data analysis leads to faulty conclusions and bad business outcomes.”4 That’s something we all want to avoid. The article expands on the biases and cognitive fallacies illustrated in the Nature graphic:

  • Confirmation Bias: A wish to prove a certain hypothesis, assumption, or opinion; intentional or unintentional
  • Selection Bias: Selecting non-random or non-objective data that doesn’t represent the population
  • Outlier Bias: Ignoring or discarding extreme data values
  • Overfitting and Underfitting Bias: Creating either overly complex or overly simplistic models for the data
  • Confounding Variable Bias: Failure to consider other variables that may affect cause-and-effect relationships
  • Non-Normality Bias: Using statistics that assume a normal distribution for non-normal data (see the sketch after this list)
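
To make the last two biases on the list concrete, here is a minimal Python sketch of my own (not from the article or its references) using synthetic Weibull-distributed failure times; the sample size, scale, and shape values are purely illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical time-to-failure data in hours: skewed, not normal.
    ttf = 5000 * rng.weibull(1.5, size=200)

    # Non-normality bias: a "mean minus two sigma" lower bound assumes a
    # normal distribution and can badly misstate early-failure behavior
    # (here it can even go negative, which is meaningless for a lifetime).
    normal_lower = ttf.mean() - 2 * ttf.std()
    empirical_b10 = np.percentile(ttf, 10)  # time by which ~10% of units actually fail
    print(f"normal-assumption lower bound: {normal_lower:8.1f} h")
    print(f"empirical 10th percentile:     {empirical_b10:8.1f} h")

    # Outlier bias: quietly dropping extreme values makes the population
    # look more uniform than it really is.
    trimmed = np.sort(ttf)[5:-5]  # discard the 5 shortest and 5 longest lives
    print(f"std dev with outliers:    {ttf.std():8.1f} h")
    print(f"std dev without outliers: {trimmed.std():8.1f} h")

For skewed life data, the normal-assumption bound understates (or outright contradicts) the real early-failure behavior, and trimming the extremes shrinks the apparent spread; both distortions flatter the data.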

Another definition I found particularly useful comes from the US government’s Generally Accepted Government Auditing Standards. They use the concept of “data reliability,” defined as “a state that exists when data is sufficiently complete and error free to be convincing for its purpose and context.”5 Data reliability refers to the accuracy and completeness of data for a specific intended use, but it doesn’t mean that the data is error free. Errors may be present, but they fall within a tolerable range, have been assessed for risk, and the data is still accurate enough to support the conclusions reached. In this context, reliable data is:

  • Complete: Includes all of the data elements and records needed
  • Accurate: Free from measurement error
  • Consistent: Obtained and used in a manner that is clear and can be replicated
  • Correct: Reflects the data entered or calculated at the source
  • Unaltered: Reflects source and has not been tampered with

So, don’t simply ask “Is the data accurate?” Instead, ask “Are we reasonably confident that the data presents a picture that is not significantly different from reality?”
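
As a thought experiment, the criteria above can be turned into simple automated checks before any analysis begins. The sketch below is my own hypothetical illustration, not part of the auditing standard; the field names, records, and the 2% tolerance are all assumed.

    import math

    REQUIRED_FIELDS = {"unit_id", "test_condition", "hours_to_failure"}
    TOLERANCE = 0.02  # accept up to 2% disagreement with the source (assumed)

    def is_complete(record):
        """Complete: every required field is present and non-empty."""
        return all(record.get(f) not in (None, "") for f in REQUIRED_FIELDS)

    def is_accurate(record, reference):
        """Accurate/correct: the value agrees with the source record within tolerance."""
        return math.isclose(record["hours_to_failure"],
                            reference["hours_to_failure"],
                            rel_tol=TOLERANCE)

    records = [
        {"unit_id": "A1", "test_condition": "85C/85RH", "hours_to_failure": 1012.0},
        {"unit_id": "A2", "test_condition": "85C/85RH", "hours_to_failure": None},
    ]
    source_of_truth = {"A1": {"hours_to_failure": 1000.0}}

    for rec in records:
        complete = is_complete(rec)
        accurate = (complete and rec["unit_id"] in source_of_truth
                    and is_accurate(rec, source_of_truth[rec["unit_id"]]))
        print(rec["unit_id"], "complete:", complete, "accurate:", accurate)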

Shedding further light on the topic of bias in scientific data and research are some foundations that have made it their mission to improve data integrity and study repeatability. Two such organizations are the Laura and John Arnold Foundation (LJAF) and the Center for Open Science.6 The LJAF’s Research Integrity Initiative seeks to improve the reliability and validity of scientific research across fields ranging from government policy to philanthropy to individual decision making. The challenge is that people believe that if work is published in a journal, it is scientifically sound. That’s not always true, since scientific journals have a bias towards new, novel, and successful research. How often do you read great articles about failed studies?


LJAF promotes research that is rigorous, transparent and reproducible. These three tenets apply equally well to reliability studies. Studies should be:

  • Rigorous: Randomized and well-controlled, with sufficient sample sizes and durations (see the sketch after this list)
  • Transparent: Researchers explain what they intend to study, make the elements of the experiment easily accessible, and publish the findings regardless of whether they confirm the hypothesis
  • Reproducible: The work can be repeated independently and the outcome validated as consistent
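
On the “rigorous” point, one way to judge whether a planned sample size is sufficient is a quick simulation before testing starts. The sketch below is an illustrative assumption of mine, not LJAF guidance: it estimates how often an experiment of a given size would detect a hypothetical 15% reduction in mean life, using a rough t-statistic threshold of 2 in place of a formal power calculation.

    import numpy as np

    rng = np.random.default_rng(0)

    def detection_rate(n_per_group, trials=2000):
        """Fraction of simulated experiments in which a rough t statistic
        exceeds 2 (about a 5% two-sided test) for a true 15% life reduction."""
        hits = 0
        for _ in range(trials):
            baseline = rng.normal(1000.0, 200.0, n_per_group)  # hypothetical mean life, hours
            degraded = rng.normal(850.0, 200.0, n_per_group)   # 15% shorter mean life
            se = np.sqrt(baseline.var(ddof=1) / n_per_group +
                         degraded.var(ddof=1) / n_per_group)
            t = (baseline.mean() - degraded.mean()) / se
            hits += abs(t) > 2.0
        return hits / trials

    for n in (5, 10, 20, 40):
        print(f"n = {n:3d} per group -> detection rate ~ {detection_rate(n):.2f}")

If the detection rate at the planned sample size is low, the study is not rigorous in the sense above, no matter how carefully it is run.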

The Center for Open Science (COS) also has a mission to increase the openness, integrity, and reproducibility of research.7 COS makes a great analogy to how a second-grade student works in science class: observe, test, show your work, and share. These are also shared, universal values in the electronics industry, but things get in the way of living up to them. COS advocates spending more time on experiment design: communicating the hypothesis and design, having an appropriate sample size, and using statistics correctly. Taking time to do things right the first time prevents others from being led down the wrong path. COS also emphasizes that just because a study doesn’t give the desired outcome or answer doesn’t make the study worthless. It doesn’t even mean that the study is wrong. It may simply mean that the problem being studied is more complicated than can be summed up in a single experiment or two.

Ultimately, ignoring data and analysis biases can lead to catastrophe. The Harvard Business Review published a paper8 with case studies illustrating the harmful impacts of bias. The Toyota case study shows the consequences of outlier bias: ignoring a sharp increase in sudden-acceleration complaints, the “near misses,” led to tragedy. The Apple iPhone 4 antenna example illustrates asymmetric attention bias: the problem with signal strength was well known and ignored because it was an old problem the public had tolerated. Until it wasn’t. So, now that we have discussed some of the many biases and de-biasing techniques out there, is your reliability data reliable? How confident are you that it truly reflects reality?

REFERENCES

  1. http://www.nature.com/news/how-scientists-fool-themselves-and-how-they-can-stop-1.18517
  2. http://www.nature.com/polopoly_fs/7.30171.1444233846!/image/Reproducibility_graphic2.jpeg_gen/derivatives/landscape_630/Reproducibility_graphic2.jpeg
  3. Porta, Miguel. A Dictionary of Epidemiology. New York: Oxford University Press, 2008.
  4. http://www.informationweek.com/big-data/big-data-analytics/7-common-biases-that-skew-big-data-results/d/d-id/1321211
  5. www.auditorroles.org/files/…/Tool2aAustinCityAud_GuidanceTestingReliability.pdf
  6. http://www.arnoldfoundation.org/initiative/research-integrity/
  7. https://cos.io/
  8. https://hbr.org/2011/04/how-to-avoid-catastrophe