Although Corporate America has embraced metrics and dashboards for close to 20 years now, the US Food and Drug Administration (FDA) has only recently concluded that incorporating a review of company-provided product quality metrics into its inspection and enforcement prioritization model will give the Agency a better understanding of product quality performance at pharmaceutical manufacturers.
Significant activity has been occurring in this space of late, and given the heightened interest, the Parenteral Drug Association (PDA) is hosting a conference dedicated to pharmaceutical quality metrics on December 9-10. Notably, Dr. Janet Woodcock, Director of the Center for Drug Evaluation and Research (CDER), will be present at this conference. The lineup of industry experts shows that this topic is ready for prime time, and it will most certainly get a high level of attention from the quality and compliance community over the next couple of years.
Things to Consider
Given all this interest, and the long-standing use of metrics by Corporate America, it would seem that it should be full steam ahead for both manufacturers and the Agency in collecting, aggregating, reporting, and, most importantly, relying on such data. However, based on our experience from a recent, substantial consulting engagement with a leading multi-national pharmaceutical company, where we drove the development and implementation of a SharePoint-based metrics reporting portal, there are a number of things to consider before “jumping into the pool” and faithfully relying on corporate quality metrics.
Metrics are the Illegitimate Siblings of Statistics
Most everyone has heard the saying “Lies, Damn Lies and Statistics!” What is interesting about this phrase, and about the use of statistics generally, is that despite the completely scientific and mathematical basis for probability and statistics, they can easily be misused not only to obfuscate, but often to completely misrepresent reality. In my course on Writing for Compliance™ I cover multiple ways of characterizing information that can lead you from the legitimate “representation” of information or data across the line into the murky realm of “misrepresentation.”
Crossing that line into misrepresentation is easy enough with statistics; however, it’s even easier with metrics, because metrics, per se, have no scientific or mathematical basis. By definition, they are made up to meet the needs of their creator. Metrics can be defined to represent information in whatever manner their creator wants it to be seen.
If you want metrics to look good, it’s easy to do so. Just measure the things that are consistently positive, and characterize them as more significant than they are. And it has been my distinct experience that very few people have the insight, capability, and/or desire to challenge a corporate metric, once published.
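To make this concrete, here is a minimal, purely hypothetical sketch (the records, metric names, and targets are invented for illustration) of how the same underlying data can be reported as two very different “on-time closure” figures simply by choosing what gets counted:

```python
# Hypothetical CAPA records: (capa_id, days_open, due_date_was_extended)
capas = [
    ("CAPA-001", 25, False),
    ("CAPA-002", 95, True),   # "on time" only because the due date was pushed out
    ("CAPA-003", 30, False),
    ("CAPA-004", 120, True),
    ("CAPA-005", 28, False),
]

TARGET_DAYS = 45

# Metric A: count every CAPA against the original 45-day target.
on_time_strict = sum(1 for _, days, _ in capas if days <= TARGET_DAYS)
print(f"On-time closure (original target): {on_time_strict / len(capas):.0%}")   # 60%

# Metric B: treat any CAPA whose due date was extended as on time by definition.
on_time_lenient = sum(1 for _, days, extended in capas if extended or days <= TARGET_DAYS)
print(f"On-time closure (extensions honored): {on_time_lenient / len(capas):.0%}")  # 100%
```

Same data, two defensible-sounding definitions, and a 40-point difference in the published number. That is all it takes.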
Metrics, and Specifically Quality Metrics, are Often Based on Inaccurate, Misleading or Poorly-Defined Source Data
Even if an organization or company is well-intentioned (and as the saying goes, you know where that leads you), metrics are, by definition, measurements. However, in the context of quality metrics, the items (or better said, information objectives) being measured are often not easy to obtain; and even if they are, they may not be based on consistent definitions.
Due in substantial part to industry’s fear of 21 CFR Part 11, and to the cost and risk of implementing computer-based systems for quality management under the requirements for Computer Systems Validation (CSV), far too many companies are still using either paper-based information systems or computer-facilitated, paper-based systems. This stands in contrast to data-driven approaches to information management, which have been embraced mostly by the very large FDA-regulated organizations, and even there the result has been expensive systems that companies are reticent to change or modify.
The downside of the foregoing is that it is a rare company that is lucky enough to have its operational and quality information readily available in the right formats for easy analysis, aggregation, and reporting, since each of these steps adds “functionality” beyond basic record-keeping (the outdated, 40-year-old objective of most of FDA’s quality system regulations).
Securing the source information for the calculation of quality metrics is often an entirely manual exercise, done mostly with spreadsheets built from proprietary system exports, if it is done at all. It is also often a laborious, paper-intensive exercise, relying on manual review of paper-based records to transcribe previously “recorded” information into spreadsheets for basic aggregation, analysis, and reporting.
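Even when exports do exist, the “aggregation” step often boils down to something like the following sketch (the file names and column layout are hypothetical), and everything upstream of it is still manual transcription:

```python
import csv
import glob
from collections import Counter

# Hypothetical per-site exports, e.g. deviations_site_a.csv, deviations_site_b.csv,
# each transcribed by hand from paper records into a spreadsheet and saved as CSV.
totals = Counter()

for path in glob.glob("deviations_site_*.csv"):
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            # Assumes every export has a "classification" column with values like
            # "critical", "major", "minor" (a big assumption in itself).
            totals[row["classification"].strip().lower()] += 1

print(dict(totals))
```

The script is trivial; the work that precedes it, pulling the numbers off paper and into those files, is where the time and the errors accumulate.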
Finally, regardless of whether the information (data) is readily available and easy to aggregate, there is a real danger of “definitional disconnect.” In other words, unless a significant effort is made during source-data recording to ensure that people are capturing information on an “apples to apples” basis, or unless the data collection and aggregation phase normalizes the information for field names, content scope, date ranges, procedural differences, requirements differences (e.g., from division to division), and so on, the resulting metrics may very well mean different things, regardless of the intent to provide coherent, meaningful measures.
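One way to guard against that disconnect is to normalize every source into a single canonical schema before computing anything. The sketch below is illustrative only; the division schemas, field names, date formats, and severity mappings are invented to show the idea, not drawn from any real system:

```python
from datetime import date

# Each division exports the "same" deviation data with different field names,
# date formats, and severity vocabularies.
division_a = [{"DevID": "A-101", "OpenDate": "2014-03-02", "Sev": "Critical"}]
division_b = [{"record": "B-778", "opened": "02/03/2014", "class": "1"}]

CANONICAL_SEVERITY = {"critical": "critical", "1": "critical", "major": "major", "2": "major"}

def normalize_a(rec):
    return {
        "id": rec["DevID"],
        "opened": date.fromisoformat(rec["OpenDate"]),
        "severity": CANONICAL_SEVERITY[rec["Sev"].lower()],
    }

def normalize_b(rec):
    day, month, year = rec["opened"].split("/")   # division B records dates as DD/MM/YYYY
    return {
        "id": rec["record"],
        "opened": date(int(year), int(month), int(day)),
        "severity": CANONICAL_SEVERITY[rec["class"]],
    }

combined = [normalize_a(r) for r in division_a] + [normalize_b(r) for r in division_b]

# Only now is a metric like "critical deviations opened in Q1" comparable across divisions.
q1_criticals = [r for r in combined if r["severity"] == "critical" and r["opened"].month <= 3]
print(len(q1_criticals))
```

The mapping tables are tedious to build and maintain, which is exactly why this step is so often skipped, and why the resulting roll-ups quietly compare apples to oranges.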
Think of how hard it is to keep information consistent within a single organization, then apply those concerns to the FDA’s intended approach to using metrics for quality assessment, and it becomes challenging to see how the FDA will be able to meaningfully rely on metrics to assess differences in quality performance from company to company. It’s a great intention; but remember what I said above: good intentions often lead us into places we don’t really want to be.
Too Much Focus on Metrics Leads to “Metrics Mania”
Having spent a large part of my professional career inside large corporations, and now often delivering consulting services to these same large companies, I have seen firsthand how the well-intended use of metrics for performance management, performance improvement, and legitimate measurement quickly turns into what I not-so-affectionately term “metrics mania.” This affliction is based on the dangerous assumption that once you eliminate judgment, and its corresponding subjectivity, from decision-making, the “pure” nature of mathematics (numbers) will lead you to the best outcomes.
Hence, incredible amounts of time are spent on capturing, analyzing and reporting metrics, which then turns into the inevitable need for “correction” when the metrics show poor performance.
Most often, though, in situations where the metrics are red, the objective is not to find the underlying structural contributors to poor performance: that the organization is improperly resourced, that the physical plant is inadequate, that the process or product is poorly designed, or that the specifications are not aligned to product performance. Instead, the corrective measures are usually focused on simply figuring out what to do to change the metric from red to green. In this approach to characterizing and representing corporate achievement, once the metrics turn green, everyone is happy.
Outcome Metrics are Often Not Linked to Operational “Levers”
The original intent of most companies, in their zeal to embrace quality and operational metrics, is to provide objective information that will both improve performance on the outcomes that drive business success and identify activities and outcomes that require remediation.
As noted above, though, the exercise often turns into a “do whatever it takes” effort to turn the metric from “red” to “green.” This happens because companies often don’t have the first idea which granular activities and efforts influence the outcome metrics, how much they influence them, or in what proportion relative to other related activities and efforts.
Good outcome metrics need to be well-characterized and tied to pre-defined operational “levers.” To use an analogy: if we have a light that’s attached to a wall switch and the light is off (the outcome), there is, in almost all cases, only one of two reasons (levers) for it: 1) the switch is off, or 2) the bulb is not functioning.
Pull the right lever (turn the switch back on or fix the bulb) and the light goes back on. Similarly, metrics (although they’re generally a bit more complicated than a light bulb) should be sufficiently characterized that we understand what the pre-defined levers are. You could also call these contributing factors, or any one of a number of terms that tie effect and cause together. Unfortunately, whatever you call them, defining these levers well is rarely done in the metrics programs I’ve seen.
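A minimal sketch of what tying a metric to its levers might look like in practice follows; the metric, levers, thresholds, and snapshot values are all hypothetical, but the point is that each outcome metric carries an explicit, pre-defined list of contributing factors that get checked whenever the metric goes red, instead of leaving the investigation to improvisation:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Lever:
    name: str
    is_healthy: Callable[[dict], bool]   # check against current operational data

@dataclass
class OutcomeMetric:
    name: str
    target: float
    levers: list[Lever] = field(default_factory=list)

    def diagnose(self, value: float, data: dict) -> list[str]:
        """If the metric misses its target, return the pre-defined levers that look unhealthy."""
        if value >= self.target:
            return []
        return [lv.name for lv in self.levers if not lv.is_healthy(data)]

# Hypothetical example: batch right-first-time rate and the levers defined for it.
rft = OutcomeMetric(
    name="Batch right-first-time rate",
    target=0.95,
    levers=[
        Lever("Operator training current", lambda d: d["training_overdue"] == 0),
        Lever("Line changeover within SOP time", lambda d: d["avg_changeover_min"] <= 45),
        Lever("Raw material CoA deviations", lambda d: d["coa_deviations"] == 0),
    ],
)

snapshot = {"training_overdue": 3, "avg_changeover_min": 40, "coa_deviations": 0}
print(rft.diagnose(value=0.91, data=snapshot))   # ['Operator training current']
```

The code itself is simple; the hard, rarely-done work is agreeing up front on which levers actually drive the outcome and how to measure their health.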
So, in light of the foregoing, is the FDA’s plan to use submitted quality metrics in a corporation-to-corporation risk comparison something we should be concerned about? And, given the challenges individual companies have in establishing reliable data and metrics, should companies be comfortable “submitting” such information to the FDA’s “metrics reviewers” without a substantial focus on data integrity, reliability, and validity? If nothing else, the FDA’s plans should be highly scrutinized, both by the FDA’s policy wonks and by industry critics.
Of course, despite all of the foregoing, the establishment of meaningful, well-characterized metrics aligned to an organization’s quality and operational objectives can be accomplished. Such metrics can be extremely beneficial to consistently achieving high-quality operational outcomes.
Done well (and it can be done well), metrics serve as an important tool, though not the only one, in effective quality and operational management. Understanding where a metrics program can go wrong is the first step toward getting the most value from it, and toward ensuring that you don’t build your metrics program on the wrong information.
Contact Us
For help in creating a quality metrics program, adapting an existing program to the upcoming submission requirements of the FDA, and/or moving toward a fully data-centric, web-based metrics solution, please fill out the contact form below.