The growing demand for real-world evidence
Real-world evidence (RWE) has moved from a “supplementary” evidence category to a central pillar of pharmaceutical strategy. HTA bodies across Europe now routinely request RWE to complement clinical trial data. Payers use it to assess long-term outcomes, treatment patterns, and cost-effectiveness in routine clinical practice.
For medical affairs teams, this means RWE is no longer optional. It is a required component of the evidence package — for launch, for lifecycle management, and for market access negotiations across EMEA markets.
Key trend: Between 2020 and 2025, the number of HTA submissions in Europe that included RWE components increased by approximately 70%. Agencies like NICE, G-BA, and HAS now expect structured real-world data alongside RCT evidence.
Why RWE projects fail: common challenges
Despite the growing investment, a significant proportion of RWE projects fail to deliver usable results. Based on our experience across over a dozen projects in multiple therapeutic areas, the root causes fall into three categories:
- Unclear research question — starting with “let’s see what the data shows” rather than a specific, answerable question aligned to a decision point.
- Data source mismatch — selecting a database because it’s available, not because it contains the variables needed to answer the research question.
- Insufficient clinical input — designing the study protocol without clinician involvement, resulting in endpoints that don’t reflect real practice.
Each of these problems is avoidable with proper planning — but they require upfront investment in study design that many teams skip in the rush to generate “quick” evidence.
Data quality and access barriers
The single biggest challenge in RWE is data quality. Electronic health records, claims databases, and patient registries all have inherent limitations that must be understood and accounted for:
| Data source | Strengths | Key limitations |
|---|---|---|
| EHR systems | Rich clinical detail, longitudinal | Unstructured data, variable coding quality, missing outcomes |
| Claims databases | Large populations, complete treatment history | No clinical context, diagnosis accuracy depends on coding |
| Patient registries | Disease-specific, curated data | Selection bias, often small sample sizes, variable follow-up |
| Linked datasets | Combines clinical + administrative data | Complex governance, matching errors, time-consuming setup |
The GDPR factor
In EMEA markets, data access is further complicated by GDPR requirements and varying national interpretations. What is permissible for secondary use of health data in Finland may require explicit consent in France. These regulatory differences must be mapped before committing to a data source — not discovered mid-project.
Study design mistakes that compromise results
Even with good data, poor study design can render results unusable. The most common design mistakes we encounter:
- Ignoring confounding. Without randomization, treatment comparisons in observational data require rigorous adjustment. Propensity score methods, instrumental variables, or target trial emulation frameworks are not optional — they are essential.
- Endpoint selection mismatch. Choosing endpoints that matter for RCTs but are poorly captured in routine data. Overall survival is well-captured; patient-reported outcomes often are not.
- Immortal time bias. One of the most common and most damaging biases in RWE — systematically excluding patients who had events before treatment initiation, which inflates treatment benefit estimates.
- Inadequate sample size planning. Running a study without power calculations, then discovering that the sample is too small to detect clinically meaningful differences.
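To make the confounding point above concrete, here is a minimal sketch of inverse-probability-of-treatment weighting (one of the propensity score methods mentioned) on fully synthetic data. Everything here — the `severity` confounder, the effect sizes, the sample size — is invented for illustration, not drawn from any real study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic confounder (think baseline severity) that drives both
# treatment choice and outcome -- so a naive comparison is biased.
severity = rng.normal(size=n)
treated = rng.binomial(1, 1 / (1 + np.exp(-severity)))
# True treatment effect is 1.0 by construction.
outcome = 1.0 * treated - 2.0 * severity + rng.normal(size=n)

# Naive difference in means mixes the treatment effect with severity.
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Step 1: model the probability of receiving treatment (propensity score).
ps_model = LogisticRegression().fit(severity.reshape(-1, 1), treated)
p = ps_model.predict_proba(severity.reshape(-1, 1))[:, 1]

# Step 2: weight each patient by the inverse probability of the
# treatment actually received, then recompute the contrast.
w = np.where(treated == 1, 1 / p, 1 / (1 - p))
iptw = (np.average(outcome[treated == 1], weights=w[treated == 1])
        - np.average(outcome[treated == 0], weights=w[treated == 0]))

print(f"naive estimate: {naive:.2f}, IPTW estimate: {iptw:.2f}")
```

In this toy setup the naive estimate is badly biased (severe patients are both more likely to be treated and more likely to do poorly), while the weighted estimate recovers something close to the true effect of 1.0. Real studies need much more: overlap diagnostics, covariate balance checks, and sensitivity analyses for unmeasured confounding.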
The best RWE study is one where you spend 60% of the project time on design and 40% on analysis — not the other way around.
— Yakov Pakhomov, Medical Director, MAG
Regulatory and payer expectations
HTA bodies and payers are becoming more sophisticated in their evaluation of RWE. They no longer accept observational data at face value. Key expectations include:
- Pre-registration of protocols — increasingly expected for post-authorization studies. The EU PAS Register and ClinicalTrials.gov both support observational study registration.
- Transparency in methodology — full disclosure of statistical methods, sensitivity analyses, and handling of missing data.
- Relevance to decision context — data from the specific healthcare system where market access is being sought, not extrapolated from another country.
- Complementarity with RCT data — RWE should address specific evidence gaps identified in the clinical trial program, not simply replicate trial results in a less controlled setting.
Practical recommendations for pharma teams
Based on our project experience, here are five principles that separate successful RWE projects from failed ones:
- Start with the decision. Define what business or regulatory decision the evidence will inform. Work backward from there to the research question, study design, and data source.
- Invest in feasibility. Before committing to a full study, run a data feasibility assessment. Can the chosen database actually answer your question with sufficient precision?
- Engage clinicians early. Clinical experts should validate the study protocol, endpoint definitions, and clinical relevance of results before analysis begins.
- Plan for transparency. Write the statistical analysis plan before seeing the results. Pre-specify primary and sensitivity analyses. This protects both scientific integrity and regulatory credibility.
- Build for reuse. Design your data infrastructure so that the cleaned, validated dataset can support multiple analyses over time — not just one study.
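The feasibility step above can be sketched in a few lines: check how complete the required variables are, count the analyzable cohort, and translate that cohort size into a minimum detectable effect. This is an illustrative sketch on synthetic data — the column names, missingness rates, and thresholds are all hypothetical stand-ins for a real database extract:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(1)
n = 2000

# Toy extract standing in for a real database pull; column names
# and missingness rates are invented for illustration.
df = pd.DataFrame({
    "treatment": rng.choice(["A", "B"], size=n),
    "baseline_score": np.where(rng.random(n) < 0.25, np.nan,
                               rng.normal(50, 10, n)),
    "outcome_12m": np.where(rng.random(n) < 0.40, np.nan,
                            rng.normal(0, 1, n)),
})

required = ["treatment", "baseline_score", "outcome_12m"]

# Step 1: per-variable completeness -- what fraction of records
# actually carry each variable the protocol needs?
completeness = df[required].notna().mean()

# Step 2: analyzable cohort -- records complete on all required fields.
cohort = df.dropna(subset=required)

# Step 3: minimum detectable effect (Cohen's d) for a two-arm
# comparison at this cohort size, alpha = 0.05, power = 0.80.
mde = TTestIndPower().solve_power(nobs1=len(cohort) / 2,
                                  alpha=0.05, power=0.8)

print(completeness.round(2))
print(f"analyzable: {len(cohort)}/{n}, minimum detectable d: {mde:.2f}")
```

If the minimum detectable effect that comes out of step 3 is larger than the clinically meaningful difference, the study is underpowered before it starts — better to learn that in a feasibility assessment than after a year of data collection.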
Result: Teams that follow a structured feasibility-first approach report 3× higher completion rates for RWE projects and significantly faster HTA submission timelines.
