Why Don’t We Six Sigma Forensic Science? It’s all about method validation, traceability, and quality assurance

The goal of any form of identification or quantitation is to produce a result that is specific, true, and valid. In the forensic world, how we scientifically arrive at a reported result should not be an act of mysterious busywork, but rather the product of planned, purposeful, and meaningful action that is validated and truly scientific.

We live in an interesting time in forensic science. Frequently we find ourselves with very good technology and equipment that is capable of producing valid results. What is missing from this equation, quite frankly, is the most important part: the validation. Frequently, we miss out on proof of how the actions were performed and/or how useful the actions are: traceability. Finally, we miss out on having an important double check of the process: quality assurance. It is only if there is a meaningful nexus between these three important concepts (validation, traceability, and quality assurance) that we can begin to have confidence in our reported results.

The lack of transparency in the validation of the techniques that are applied has built up over the years to the point of utter failure, where we wake up routinely to headlines that dozens, hundreds, and even thousands of results are cast into jeopardy. This is why I predict that we are quickly approaching a tipping point toward the ever-bigger, ever-more-alarming headlines we are just starting to see, headlines that cast doubt over all forms of forensic science. When there is a lack of regulation and oversight (as is now the case in forensic science), activity moves into that vacuum and overwhelmingly expands toward the absurd. Where forensic science activities might have had some legitimate use originally, in this day and age of modern testing we see some truly bizarre suggestions trying to slip their way into the mainstream of our courts of justice.

We need to remain ever vigilant and skeptical, perhaps more so now than ever. The veneer of science and scientific process is at an all-time high, but when truly examined it is, in some cases, merely superficial. The biggest threat we face is not being wrong, but allowing error to propagate through mere repetition, anchoring forensic science techniques in our courts by their repeated presence rather than by their fundamentals.

When court rulings suggest that a forensic technique must be valid simply because it has existed (perhaps never having been questioned in the first place) and has been accepted into court for decades, and by inference hold that science is static, then justice is a casualty. Who in their right mind believes that science is static? Our understanding of science is subject to change as we learn more. Science is evolutionary and incremental. Yet our court system, by mere edict, holds otherwise.

In essence, it comes down to our society’s willingness to accept risk. What level of risk of being wrong are we willing to accept, and to what end? In every sort of industry (environmental, pharmaceutical, manufacturing) there is a triad for reducing that risk: validation, traceability, and quality assurance. It is enshrined in the very basics of the Six Sigma movement.

According to Wikipedia:

Six Sigma is a business management strategy, originally developed by Motorola in 1986. Six Sigma became well known after Jack Welch made it a central focus of his business strategy at General Electric in 1995, and today it is widely used in many sectors of industry.  Six Sigma seeks to improve the quality of process outputs by identifying and removing the causes of defects (errors) and minimizing variability in manufacturing and business processes. It uses a set of quality management methods, including statistical methods, and creates a special infrastructure of people within the organization (“Black Belts”, “Green Belts”, etc.) who are experts in these methods… The term Six Sigma originated from terminology associated with manufacturing, specifically terms associated with statistical modeling of manufacturing processes. The maturity of a manufacturing process can be described by a sigma rating indicating its yield, or the percentage of defect-free products it creates. A six sigma process is one in which 99.99966% of the products manufactured are statistically expected to be free of defects (3.4 defects per million). Motorola set a goal of “six sigma” for all of its manufacturing operations, and this goal became a byword for the management and engineering practices used to achieve it.
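For the curious, the 3.4-defects-per-million figure quoted above is simply the one-sided tail of a normal distribution beyond the stated sigma level, less the conventional 1.5-sigma long-term shift that Six Sigma practitioners build in. Here is a minimal sketch of that arithmetic (in Python, assuming SciPy is available; the function name is mine, purely for illustration):

```python
# Map a Six Sigma "sigma level" to expected defects per million opportunities,
# using the conventional 1.5-sigma long-term shift and a one-sided normal tail.
from scipy.stats import norm

def defects_per_million(sigma_level: float, shift: float = 1.5) -> float:
    """Expected defects per million opportunities for a given sigma level."""
    return norm.sf(sigma_level - shift) * 1_000_000  # sf = upper-tail probability

for sigma in (3, 4, 5, 6):
    print(f"{sigma} sigma: {defects_per_million(sigma):,.1f} defects per million")

# At 6 sigma this works out to roughly 3.4 defects per million opportunities,
# i.e. about 99.99966% defect-free -- the figure quoted above.
```

That is the yardstick industry holds itself to, and it is the point of comparison I have in mind throughout what follows.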

If industry can do this, why on Earth can’t forensic science?

In the courtroom, shouldn’t we demand that we have a process in place that results in fewer defects than the making of a cell phone?

Of course.

When asked why we see these forensic science scandals, I suggest the answer is simple. We have failed to demand processes (simple processes) that minimize the risk of being wrong.

Like the tip of an iceberg, when these scandals are discovered, it is quite simply the equivalent of a Fukushima-like disaster. There is a cascading effect of failure: a global failure of quality assurance, traceability, or underlying validation. The entire point of having a double check (what is called either technical review or quality assurance review) with every result is to have someone more qualified than the bench analyst approach the data skeptically, seek to falsify it, and approve it only if it leaves no question. When there is a noticeable failure that reaches the headlines, there was undoubtedly a massive failure of the quality assurance (QA) program that may or may not have been in place. Was the QA officer not well trained in the technique? Was there so much throughput that the QA review degenerated into nothing but a rubber stamp? Was the QA officer incompetent or fraudulent?

As is often the case when aberrant, invalid results are produced, the knee-jerk reaction of those in political control of a testing laboratory is to blame one analyst and claim that the sole analyst alone is the source of all that is wrong. In the process, this analyst is classified as either a rogue lone wolf or an incompetent oaf. A press conference is held with no meaningful information given to the public as to the scientific source of the error, with non-scientists assuring the public that the issue has been identified, quarantined, and corrected with no damage. This oddly predictable pattern is repeated in all of the major national scandals: Houston Police Department, Washington DC, Colorado Springs, Philadelphia, San Francisco, Michigan, and on and on.

“My policing background allows me to claim the science is OK now.”

The truth of the matter is that these types of events are never a lone-wolf or poorly-trained-analyst situation, but rather a systemic failure, and indeed symptomatic of some large-scale issue that includes the training of the analyst, the supervision of the analyst, the lack of meaningful periodic proficiency training of the analyst, the analyst’s own initial review of his or her data, the quality assurance officer, the lab supervisor, and ultimately the lawyers and judges of the criminal justice system.

An error that is “discovered” to include many samples is not an accident or something that happened overnight. It is a repeated error that should have been caught by someone in that laboratory system or in the criminal justice system well before it reached the headlines. The laboratory’s after-the-fact claims, which frequently identify no source of the error and instead issue bald conclusions in keeping with its vested political interest in minimizing the error and isolating the analyst, are just babble and garbage.

When these things happen, it is incumbent upon the laboratory to release the data of its investigation so that those of us in the scientific community can see whether it is correct in what it is telling us. Many of us would offer to do this verification for free.

If these types of errors occurred in an EPA- or FDA-regulated laboratory in the private sector, it would be shut down immediately, fully investigated by a wholly independent laboratory auditor, and heavily fined; key members of the QA chain would possibly be indicted; and the documents and information would be made available to others to review the conclusions in a wholly transparent and scientific way.

From 60 Minutes, describing one such incident and the government and scientific reaction:

Of all the things that you trust every day, you want to believe your prescription medicine is safe and effective. The pharmaceutical industry says that it follows the highest standards for quality. But in November, we found out just how much could go wrong at one of the world’s largest drug makers. A subsidiary of GlaxoSmithKline pleaded guilty to distributing adulterated drugs.

There was reason to believe that some of the medications were contaminated with bacteria, others were mislabeled, and some were too strong or not strong enough. It’s likely Glaxo would have gotten away with it had it not been for a company insider: a tip from Cheryl Eckard set off a major federal investigation.

She’s never told the public what she saw inside Glaxo, but now she has. Her story opens a rare window on how one company traded its good name for bad medicine.

When Fukushima happened, it was shut down immediately, fully investigated by wholly independent auditors (many of them), and the documents and information were made available to others to review the conclusions in a wholly transparent and scientific way. Then, after a while, it was checked again in a meaningful way by independent agencies. Most importantly, the lessons learned from that failure were examined in the light of current conditions to ensure that the same failure could not happen elsewhere. Several of those auditing agencies, which released their data in detail, are named below.

Meaningful quality assurance, proper training, method validation, adherence to a validated series of instructions, achieving traceability, and the like are not that hard to implement and monitor. We can do better.

The performance of tasks in forensic science should not be a leap of faith; it should be firmly grounded in science. But is it today?

We know how to do better. We must do better. A laboratory’s own assurances that it is doing truly validated science, without complete transparency and documented proof, must be suspect. We have independent third-party review for a reason. Fukushima was thoroughly investigated by the company itself, the Japanese government, the International Atomic Energy Agency, the US Nuclear Regulatory Commission, and others. We must do likewise if there is to be confidence in the reporting of results in the forensic arena.

Trust but verify.
