Evidence Strategy · SolveLetter #2

Why Do RWE and RWD Projects Face Challenges, and How Can We Avoid Them?

Yakov Pakhomov, MD, PhD
April 2025

The growing demand for real-world evidence

Real-world evidence has moved from a “supplementary” evidence category to a central pillar of pharmaceutical strategy. HTA bodies across Europe now routinely request RWE to complement clinical trial data. Payers use it to assess long-term outcomes, treatment patterns, and cost-effectiveness in routine clinical practice.

For medical affairs teams, this means RWE is no longer optional. It is a required component of the evidence package — for launch, for lifecycle management, and for market access negotiations across EMEA markets.

Key trend: Between 2020 and 2025, the number of HTA submissions in Europe that included RWE components increased by approximately 70%. Agencies like NICE, G-BA, and HAS now expect structured real-world data alongside RCT evidence.

Why RWE projects fail: common challenges

Despite the growing investment, a significant proportion of RWE projects fail to deliver usable results. Based on our experience across more than a dozen projects in multiple therapeutic areas, the root causes fall into three categories:

  • Unclear research question — starting with “let’s see what the data shows” rather than a specific, answerable question aligned to a decision point.
  • Data source mismatch — selecting a database because it’s available, not because it contains the variables needed to answer the research question.
  • Insufficient clinical input — designing the study protocol without clinician involvement, resulting in endpoints that don’t reflect real practice.

Each of these problems is avoidable with proper planning — but they require upfront investment in study design that many teams skip in the rush to generate “quick” evidence.

Data quality and access barriers

The single biggest challenge in RWE is data quality. Electronic health records, claims databases, and patient registries all have inherent limitations that must be understood and accounted for:

Data source | Strengths | Key limitations
EHR systems | Rich clinical detail, longitudinal | Unstructured data, variable coding quality, missing outcomes
Claims databases | Large populations, complete treatment history | No clinical context, diagnosis accuracy depends on coding
Patient registries | Disease-specific, curated data | Selection bias, often small sample sizes, variable follow-up
Linked datasets | Combines clinical + administrative data | Complex governance, matching errors, time-consuming setup

The GDPR factor

In EMEA markets, data access is further complicated by GDPR requirements and varying national interpretations. What is permissible for secondary use of health data in Finland may require explicit consent in France. These regulatory differences must be mapped before committing to a data source — not discovered mid-project.

Study design mistakes that compromise results

Even with good data, poor study design can render results unusable. The most common design mistakes we encounter:

  1. Ignoring confounding. Without randomization, treatment comparisons in observational data require rigorous adjustment. Propensity score methods, instrumental variables, or target trial emulation frameworks are not optional — they are essential.
  2. Endpoint selection mismatch. Choosing endpoints that matter for RCTs but are poorly captured in routine data. Overall survival is well-captured; patient-reported outcomes often are not.
  3. Immortal time bias. One of the most common and most damaging biases in RWE. It arises when the follow-up time between cohort entry and treatment initiation is misclassified as treated time, or when patients must survive long enough to start treatment in order to enter the treated group. Either way, the treated arm is credited with "immortal" person-time, which inflates treatment benefit estimates.
  4. Inadequate sample size planning. Running a study without power calculations, then discovering that the sample is too small to detect clinically meaningful differences.
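Point 4 is also the cheapest to fix before data collection begins. As an illustration, here is a minimal sample-size sketch for comparing two proportions using the standard normal approximation; the 30% vs 40% event rates, alpha, and power are arbitrary example values, and a real protocol would use a method matched to its actual design and endpoint.

```python
from math import ceil, sqrt
from statistics import NormalDist


def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Patients needed per arm to detect p1 vs p2 (two-sided test,
    normal approximation for two independent proportions)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance quantile
    z_b = NormalDist().inv_cdf(power)          # power quantile
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)


# Example: detecting a 30% vs 40% event rate at 80% power
print(n_per_arm(0.30, 0.40))
```

Running the calculation both ways (planned effect size vs the smallest clinically meaningful one) before touching the database is exactly the kind of upfront design work the quote below argues for.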

The best RWE study is one where you spend 60% of the project time on design and 40% on analysis — not the other way around.

— Yakov Pakhomov, Medical Director, MAG

Regulatory and payer expectations

HTA bodies and payers are becoming more sophisticated in their evaluation of RWE. They no longer accept observational data at face value. Key expectations include:

  • Pre-registration of protocols — increasingly expected for post-authorization studies. The EU PAS Register and ClinicalTrials.gov both support observational study registration.
  • Transparency in methodology — full disclosure of statistical methods, sensitivity analyses, and handling of missing data.
  • Relevance to decision context — data from the specific healthcare system where market access is being sought, not extrapolated from another country.
  • Complementarity with RCT data — RWE should address specific evidence gaps identified in the clinical trial program, not simply replicate trial results in a less controlled setting.

Practical recommendations for pharma teams

Based on our project experience, here are five principles that separate successful RWE projects from failed ones:

  1. Start with the decision. Define what business or regulatory decision the evidence will inform. Work backward from there to the research question, study design, and data source.
  2. Invest in feasibility. Before committing to a full study, run a data feasibility assessment. Can the chosen database actually answer your question with sufficient precision?
  3. Engage clinicians early. Clinical experts should validate the study protocol, endpoint definitions, and clinical relevance of results before analysis begins.
  4. Plan for transparency. Write the statistical analysis plan before seeing the results. Pre-specify primary and sensitivity analyses. This protects both scientific integrity and regulatory credibility.
  5. Build for reuse. Design your data infrastructure so that the cleaned, validated dataset can support multiple analyses over time — not just one study.
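The feasibility step in point 2 can be made concrete with a simple completeness check before any analysis is commissioned. A minimal sketch, assuming a record-per-patient extract; the variable names and the 80% completeness threshold are illustrative assumptions, not properties of any specific database:

```python
# Minimal data feasibility check: does the candidate dataset capture the
# variables the research question needs, and how complete are they?
# Field names and the 80% threshold are illustrative assumptions.

REQUIRED_VARS = ["diagnosis_code", "index_date", "treatment", "outcome_date"]


def feasibility_report(records, required=REQUIRED_VARS, min_completeness=0.80):
    """Return per-variable completeness (0..1) and an overall go/no-go flag."""
    n = len(records)
    report = {}
    for var in required:
        non_missing = sum(1 for r in records if r.get(var) is not None)
        report[var] = non_missing / n if n else 0.0
    report["feasible"] = all(report[v] >= min_completeness for v in required)
    return report


sample = [
    {"diagnosis_code": "E11", "index_date": "2023-01-04",
     "treatment": "GLP-1 RA", "outcome_date": None},
    {"diagnosis_code": "E11", "index_date": "2023-02-11",
     "treatment": "GLP-1 RA", "outcome_date": "2024-03-01"},
]
print(feasibility_report(sample))
```

In this toy example the outcome variable is only 50% complete, so the check correctly flags the source as unfit for the question. A real assessment would also look at follow-up duration, coding validity, and linkage quality, but even this level of screening catches the data source mismatch described earlier before budget is committed.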

Result: Teams that follow a structured feasibility-first approach report 3× higher completion rates for RWE projects and significantly faster HTA submission timelines.
