Truth in Reporting Clinical Trial Efficacy: A Call for Action
Trial Site News
Clinical trial reporting of treatment efficacy is the foundation of public trust in medicine. Yet pharmaceutical companies often report efficacy measured as a relative risk reduction (RRR) while omitting the more relevant absolute risk reduction (ARR). This statistical sleight of hand can make clinically insignificant benefits look spectacular!
Consider a Simple Example
A risk is the percentage of people in a group who experience an event. For example, if 2% of untreated patients contract influenza compared to 1% of treated patients, the ARR is simply the difference: a 1% risk reduction. But companies trumpet a 50% RRR (the 1% ARR divided by the 2% untreated, or baseline, risk). To the public, a high risk reduction, like 95%, sounds miraculous. The startling reality is that a relative risk reduction is not even a risk. It is derived from a ratio: the treated risk divided by the untreated baseline risk, known as the relative risk, is subtracted from 1 to indicate the size of the effect. Sound confusing? Exactly, and that is the point: it obscures the absolute risk difference.
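For readers who want to check the arithmetic, here is a minimal sketch in Python (purely illustrative; the function name risk_measures is ours, and the figures are the hypothetical influenza example above, not data from any trial):

```python
# Minimal sketch: absolute vs. relative risk reduction for the
# hypothetical influenza example above (2% untreated risk, 1% treated risk).

def risk_measures(risk_untreated: float, risk_treated: float) -> dict:
    """Basic risk-reduction measures for two group risks given as proportions."""
    arr = risk_untreated - risk_treated   # absolute risk reduction (a risk difference)
    rr = risk_treated / risk_untreated    # relative risk (a ratio, not a risk)
    rrr = 1 - rr                          # relative risk reduction = ARR / baseline risk
    return {"ARR": arr, "RR": rr, "RRR": rrr}

measures = risk_measures(risk_untreated=0.02, risk_treated=0.01)
print(f"ARR = {measures['ARR']:.1%}")  # 1.0%: the absolute benefit patients actually experience
print(f"RR  = {measures['RR']:.2f}")   # 0.50: the ratio of treated to untreated risk
print(f"RRR = {measures['RRR']:.0%}")  # 50%: the headline figure companies prefer to report
```

Note that the same 50% headline would appear whether the baseline risk were 2% or 40%; only the ARR distinguishes the two situations.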
Whatever you may think of the impressive-sounding RRR, it is imperative that the ARR be reported to set the record straight. No less an authority than the U.S. Food and Drug Administration has acknowledged this, stating: “Both absolute and relative risk reductions should be presented. Presenting only relative risk reduction can mislead, while absolute risk reduction provides the necessary context for understanding the true magnitude of benefit” (Communicating Risks and Benefits: An Evidence-Based User’s Guide, FDA).
Appropriate Use of Risk Measures
Although the FDA recommends reporting both types of risk measures, there are important differences in the appropriate use of each measure. Indeed, RRR has legitimate applications in epidemiology and observational research. When studying associations — for example, the link between smoking and lung cancer — risk ratios relative to the baseline or reference risk help identify correlations and dose-dependent responses. In such contexts, RRR is the best evidence available, especially when clinical trials would be unethical (you can’t force study participants to smoke). Biostatistician Jerome Cornfield himself underscored that relative measures are appropriate in studies of association, while absolute risks must be reported in experimental controlled studies, including clinical trials.
Cornfield’s work in the 1950s linking smoking to lung cancer relied on relative measures to demonstrate strong associations in observational data. On the other hand, randomized controlled trials are designed to minimize confounding factors through randomization and strict experimental design. In this context, absolute risk reduction provides the direct difference in outcomes between treated and untreated groups — the information about treatment efficacy under controlled conditions that patients, clinicians, and policymakers need to make informed decisions.
By contrast, relative risk measures are properly used to determine effectiveness in real-world observational studies, where confounding cannot be fully controlled and associations are measured relative to a baseline or reference risk. RRR helps identify correlations and dose-dependent responses, but it does not quantify causation. As I argued in my peer-reviewed article, “Relative risk reduction: Misinformative measure in clinical trials and COVID-19 vaccine efficacy” (PubMed), “Randomized controlled clinical trials require absolute measures of risk reduction to prove causation of vaccine efficacy, not relative risk reductions that only observe associations of effectiveness.” The misuse of RRR in experimental contexts distorts public understanding and undermines policy integrity.
Why ARR Matters in Trials
- ARR is transparent: It shows the clinical difference in outcomes between treated and untreated groups.
- RRR exaggerates: Ratios are not risks; a large relative reduction can describe a vanishingly small absolute benefit, misleading policymakers and patients into believing a drug is far more beneficial than it is.
- Public clarity: ARR is intuitive—everyone understands the difference between a 1% and 2% risk through simple subtraction.
The Enforcement Gap
Federal law already requires companies to report adverse drug reactions under FDA rules (Postmarketing Adverse Event Reporting – Required Information, FDA). But penalties for non-compliance appear weak, often consisting of fines, recalls, or settlements that rarely exceed a fraction of profits. Fraud charges usually end in payouts absorbed as routine business expenses.
Without stronger enforcement, legislation becomes empty and meaningless. There is little deterrence to prevent unethical corporate behavior while the public and public health policy continue to be misled.
Policy Outline: Toward Truth in Clinical Trial Reporting
The following outline is not a full legislative draft, but rather a framework to guide discussion and action. Its purpose is to highlight the essential elements of reform that ensure clinical trial efficacy is communicated truthfully and transparently. It applies to all stakeholders involved in trial reporting and communication — including pharmaceutical sponsors, contract research organizations, regulatory agencies, journals, and marketing teams — so that patients, clinicians, policymakers, and the public receive accurate information about treatment benefits.
- Mandatory ARR disclosure: Require absolute risk reduction (ARR), baseline risks, and event counts in all trial publications, FDA submissions, and promotional materials.
- RRR as secondary: Permit relative risk reduction (RRR) in trial reporting only as a complementary measure, never without ARR and baseline risks.
- Standardized metrics: Include ARR, baseline risk, and the number needed to treat (NNT, the reciprocal of the ARR, i.e., 1/ARR), together with the analogous number needed to harm (NNH) for adverse outcomes, as primary endpoints (see the sketch after this list).
- Transparency parity: Hold efficacy reporting to the same standard as adverse event reporting — mandatory, consistent, and enforceable.
- Enforcement mechanisms: Strengthen penalties for misrepresentation, scaling fines to revenue and considering criminal liability for knowing omissions. Penalties should have a measurable deterrent effect.
- Public accountability: Create a transparency dashboard displaying ARR and baseline risks for authorized products and registered trials.
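To illustrate the standardized-metrics point above, the short sketch below (again Python, with hypothetical numbers) shows how the number needed to treat follows directly from the ARR, and why the same relative reduction can correspond to very different absolute benefits:

```python
# Illustrative sketch for the "Standardized metrics" bullet: NNT is the reciprocal
# of the ARR, so a fixed 50% RRR implies very different NNTs at different baseline risks.
# (All numbers are hypothetical; NNT is shown rounded to the nearest whole patient.)

def number_needed_to_treat(arr: float) -> float:
    """Number needed to treat: reciprocal of the absolute risk reduction."""
    return 1 / arr

for baseline_risk in (0.02, 0.20):        # 2% vs. 20% untreated (baseline) risk
    treated_risk = baseline_risk * 0.5    # the same 50% relative risk reduction in both cases
    arr = baseline_risk - treated_risk
    print(f"baseline {baseline_risk:.0%}: ARR = {arr:.1%}, NNT ≈ {number_needed_to_treat(arr):.0f}")

# baseline 2%:  ARR = 1.0%,  NNT ≈ 100
# baseline 20%: ARR = 10.0%, NNT ≈ 10
```

Reporting ARR, baseline risk, and NNT together in this way leaves no room for a headline ratio to stand in for the clinical benefit patients can actually expect.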
This outline is intended as a starting point for experts, policymakers, and advocates to refine into actionable legislation and regulatory standards. Its purpose is to ensure that efficacy is communicated with honesty and clarity, restoring trust in clinical trial reporting.
Conclusion
Truth in clinical trial reporting is not optional—it is the foundation of valid science. RRR belongs to observational studies of effectiveness; ARR belongs to clinical trials of efficacy. Legally mandating ARR disclosure and strengthening enforcement and penalties for misrepresentation will increase deterrence and restore integrity to clinical trial communication. The status quo is intolerable, and lasting change is the only way forward.