Method Validation for Lawyers Part 6: What is quality assurance and quality control?

In this series of posts, we are talking about method validation.

  1. Part 1: Introduction - Is it valid, invalid, or non-validated?
  2. Part 2: What is method validation?
  3. Part 3: Can we use someone else’s validated method?
  4. Part 4: What triggers verification, re-validation or outright new validation of a method?
  5. Part 5: What are the essential terms in method validation?
  6. Part 6: What is quality assurance and quality control?

As we discussed in our last post,

Quality Control: Procedures that give insight into the precision and accuracy of analysis results. The maintenance and statement of the quality of a product (data set, etc.), specifically that it meets or exceeds some minimum standard based on known, testable criteria.

Quality Assurance: The guarantee that the quality of a product (analytical data set, etc.) is actually what is claimed on the basis of the quality control applied in creating that product. Quality assurance is not synonymous with quality control. Quality assurance is meant to protect against failures of quality control.

Wonderful. We can read, but what does this mean?

Whenever I think of quality control, I think of two things: (1) the resolution standard/separation matrix, and (2) the calibration curve. These two things are the heart of any analytical chemistry process: (1) the ability to be specific, and (2) the ability to weigh. Quality Control is the designed system that has been tested and found sufficient to provide useful and “good” data, data that supports the contention that the machine is indeed capable of producing the results the analyst and the method require. It is not a system of simply testing things in random order, but a uniform, repeated process that has been proven to produce data supporting the goals of the testing.

This is very important. We need to test all unknown samples as part of a designed and deliberate process that, as a system, is built to produce valid results. It is not enough to have instructions (e.g., place four calibrators at concentrations of 0.02, 0.08, 0.16, and 0.40 g/100 mL, prepared from certified reference materials, in vial positions 2, 3, 4, and 5 within the run/batch); we must also have data showing that performing the run in this exact order and in this exact way produces and ensures valid results and protects against invalid results.
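To make the calibration-curve idea concrete, here is a minimal sketch in code, assuming a linear detector response. The peak areas and the unknown sample are invented for illustration; only the calibrator concentrations mirror the example above.

```python
# Minimal sketch of the "ability to weigh": fit a calibration curve from
# certified reference calibrators, then quantify an unknown against it.
# Peak areas here are hypothetical values, not real instrument data.
import numpy as np

# Calibrator concentrations (g/100 mL) and the measured peak areas.
conc = np.array([0.02, 0.08, 0.16, 0.40])
area = np.array([1520.0, 6110.0, 12190.0, 30480.0])

# Fit a straight line: area = slope * conc + intercept.
slope, intercept = np.polyfit(conc, area, deg=1)

# A QC acceptance check: the curve must be nearly linear (R^2 close to 1)
# before any unknown may be reported against it.
predicted = slope * conc + intercept
ss_res = np.sum((area - predicted) ** 2)
ss_tot = np.sum((area - area.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
assert r_squared > 0.999, "Calibration failed; do not report unknowns"

# Quantify an unknown sample by inverting the curve.
unknown_area = 7640.0
unknown_conc = (unknown_area - intercept) / slope
print(f"R^2 = {r_squared:.5f}, unknown = {unknown_conc:.3f} g/100 mL")
```

The point of the assertion is the point of Quality Control generally: the system refuses to report an unknown unless the designed checks pass first.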

This is where many forensic science crime laboratories fail. They have not conducted experiments into the true ruggedness of their process. In a ruggedness study (sometimes referred to as a robustness study), we change variables and inputs to see whether the results change. Another way of looking at it is that we look to optimize (streamline) the method and see when, or if, the method reaches a “breaking point.”

Is our method delicate or is it more robust/rugged?
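As a rough illustration of a one-variable-at-a-time ruggedness check, consider the sketch below. Everything in it is hypothetical: measure_control() stands in for an actual instrument run, and the drift model, temperatures, and acceptance window are invented to show the idea of finding a breaking point.

```python
# Sketch of a ruggedness check: deliberately perturb one input (here, a
# hypothetical oven temperature) and see whether the reported result for
# the same certified control stays within an acceptance window.

NOMINAL_TEMP_C = 40.0
ACCEPTANCE = 0.005          # allowed deviation from the control's true value
CONTROL_TRUE_VALUE = 0.080  # certified concentration of the control

def measure_control(temp_c: float) -> float:
    """Hypothetical instrument run: returns the reported concentration
    of the certified control when the oven is held at temp_c."""
    # Pretend the method drifts as temperature moves off nominal.
    drift = 0.002 * (temp_c - NOMINAL_TEMP_C)
    return CONTROL_TRUE_VALUE + drift

# Vary the temperature around nominal and record where the method "breaks".
for temp in (36.0, 38.0, 40.0, 42.0, 44.0):
    result = measure_control(temp)
    ok = abs(result - CONTROL_TRUE_VALUE) <= ACCEPTANCE
    print(f"{temp:5.1f} C -> {result:.4f}  {'PASS' if ok else 'FAIL'}")
```

In this toy version, the method passes within a couple of degrees of nominal and fails beyond that: that failure boundary is the “breaking point” a ruggedness study is designed to find and document.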

Every machine needs to be “taught” what to do and how to do it because, out of the box, each machine is a dumb device. In the case of blood ethanol testing, for example, the analyst has to teach it what is ethanol and, most importantly, what is not ethanol. This is not as easy as it seems, as there are over 65 million registered chemical compounds according to CAS. If you don’t teach it right, meaning how to pick out ethanol and ethanol alone to the exclusion of every other compound in the universe, then you will get the wrong result. The machine can tell us how much, but only of whatever it has selected out. So, if you did not teach the machine correctly how to pick out ethanol uniquely, then when it moves on to weighing, it will report the wrong result. The machine is a fancy bean counter, but you have to show it what to count and what not to count for the result to be right.
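Here is a deliberately oversimplified sketch of what that “teaching” can look like. In it, identification rests on nothing more than a retention-time window, which is exactly why selectivity must be validated: another compound that falls inside the window defeats the rule. The retention times and peak areas are invented, and real validated methods rely on more than a single retention time.

```python
# Simplified identification rule: a peak "is ethanol" if its retention
# time falls within a tolerance window. All values are illustrative.

ETHANOL_RT = 2.10     # expected retention time, minutes (hypothetical)
RT_TOLERANCE = 0.05   # identification window, minutes (hypothetical)

# Peaks seen in a run: (retention time, peak area).
peaks = [
    (1.45, 3100.0),   # e.g., methanol (hypothetical)
    (2.08, 7640.0),   # ethanol candidate
    (2.13, 950.0),    # an interferent inside the window ALSO matches!
    (3.60, 410.0),    # e.g., isopropanol (hypothetical)
]

matches = [(rt, area) for rt, area in peaks
           if abs(rt - ETHANOL_RT) <= RT_TOLERANCE]

# If more than one peak falls in the window, the machine cannot tell
# which one is ethanol; counting the wrong peak gives the wrong result.
if len(matches) != 1:
    print(f"Selectivity failure: {len(matches)} peaks match ethanol window")
else:
    print(f"Ethanol peak area: {matches[0][1]}")
```

Run as written, this sketch reports a selectivity failure, because two peaks sit inside the window: the bean counter cannot know which beans to count.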

Quality Assurance, on the other hand, has two basic levels. It is a “double check” mechanism and, if set up correctly, also an error-reporting mechanism. We should expect analysts to make mistakes; after all, they are human. This is why we need a formal process of double checking: to make sure that the data is free of error and to verify that all of the correct, proper, and validated steps were followed as articulated in the instructions of the assay. The old carpenter’s adage comes to mind:

Measure twice, cut once

But there is another level of important double checking in the process, referred to as error reporting. I find it highly suspect (as you should) if a laboratory has a formalized method of error reporting, yet reports that there are no errors. Failures of Quality Control measures, or of the actual testing of unknowns, are bound to happen. To err is human, or so they say. To document the error is the scientifically and ethically correct decision; ignoring it or covering it up is not. Too often, an error is treated as a laboratory failure only if it becomes known outside of the laboratory. In forensic testing as performed in the United States today, the Quality Assurance process is too often little more than a rubber stamp that equates any result generated with a “good and valid” result. A good QA officer should be harsher than any educated criminal defense attorney in examining the data. Sadly, this is not the case.
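For illustration only, here is a hedged sketch of what a QA review step might look like if reduced to code: an independent re-check of a batch’s QC criteria that logs every failure rather than rubber-stamping the result. The field names and thresholds are hypothetical and are not drawn from any laboratory’s actual SOP.

```python
# Sketch of a QA review: re-check the batch's QC criteria independently
# and append every failure to an error log. All thresholds hypothetical.
from dataclasses import dataclass, field

@dataclass
class BatchReview:
    calibration_r2: float
    control_result: float
    control_target: float
    control_tolerance: float
    errors: list = field(default_factory=list)

    def review(self) -> bool:
        if self.calibration_r2 < 0.999:
            self.errors.append(f"Calibration R^2 too low: {self.calibration_r2}")
        if abs(self.control_result - self.control_target) > self.control_tolerance:
            self.errors.append(f"Control out of range: {self.control_result}")
        # Documenting the error is the correct outcome, not suppressing it.
        for e in self.errors:
            print(f"ERROR LOGGED: {e}")
        return not self.errors

batch = BatchReview(calibration_r2=0.9992, control_result=0.088,
                    control_target=0.080, control_tolerance=0.005)
print("Release batch" if batch.review() else "Hold batch; investigate")
```

In this toy run the control is out of range, so the batch is held and the error is written down, which is the opposite of the rubber stamp: a QA process that never generates an entry in its error log is not evidence of a perfect laboratory.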
