Published on 15/11/2025
Turning Epidemiology and Trial Landscape Signals into Realistic Enrollment Plans
Purpose, Principles, and the Global Frame
Epidemiology and competing-trials analysis is where feasibility moves from optimism to evidence. Done well, it translates the protocol’s inclusion/exclusion criteria and time windows into a credible eligibility funnel by country and by site, while quantifying how other studies will compete for the same participants, staff, scanners, pharmacists, and couriers. Done poorly, programs overestimate enrollment capacity, underprice start-up, and discover too late that the care pathway or trial landscape makes the plan infeasible. This article provides a practical playbook: the evidence to gather, the funnel math to run, and the thresholds that turn epidemiology and landscape signals into defensible enrollment decisions.
Anchor in harmonized expectations. A quality-by-design posture—prioritizing controls that protect participant rights and endpoint integrity—is consistent with high-level principles shared by the International Council for Harmonisation. In the United States, teams often align feasibility documentation and selection rationales to public-facing materials found within FDA clinical trial oversight resources. European operations benefit from orientation notes and authorization cadence awareness provided by the European Medicines Agency. Ethical touchstones—respect, fairness, confidentiality—are emphasized in WHO research ethics guidance. For multinational programs reaching Japan or Australia, keep language and planning artifacts coherent with orientation material from PMDA and the Therapeutic Goods Administration so feasibility logic remains consistent across regions.
What feasibility must decide—before budgets harden. The combined epidemiology and landscape assessment should yield: (1) the addressable population by country/site after care-pathway and eligibility filters; (2) the conversion rate from pre-screen to consent to randomization; (3) the activation sequence by country and site that meets timelines with contingency; and (4) the risk profile from competing trials, resource conflicts, and seasonal/logistics constraints—with explicit thresholds that trigger redesign, re-sequencing, or vendor support. Each conclusion must be backed by sources and owners; opinions are not enough.
Inspection posture and ALCOA++ evidence. Auditors routinely ask, “Why these countries and sites for this protocol?” “Where is the evidence that the target population exists and can be converted within windows?” “How were competing trials considered?” Maintain an evidence pack for each decision: data sources, modeling assumptions, transformation steps, and the decision memo. Records should meet ALCOA++ attributes—attributable, legible, contemporaneous, original, accurate, complete, consistent, enduring, and available—so the path from protocol requirement → assumption → forecast → outcome can be retrieved in minutes.
Design the analysis as a system, not a one-time snapshot. Epidemiology and landscape signals should feed start-up timelines, country depots, IP resupply rules, and monitoring intensity. When an assumption changes—e.g., a competing pivotal launches in your population, or a reimbursement policy shifts your care pathway—the dashboard updates downstream plans automatically. Treat the model like controlled code: version it, redline the “what changed and why,” and rehearse five-minute retrieval drills using live examples.
Epidemiology That Predicts Conversion, Not Just Counts
Translate condition counts into an eligibility funnel. Start with country-level prevalence/incidence, but move quickly to the care-pathway-adjusted population. For each country, map where and when eligible patients appear (primary care, specialty clinics, tertiary centers), who influences referrals, and how long diagnosis and staging take relative to protocol windows. Build an eligibility funnel: total with condition → diagnosed and in care → reachable by participating sites → meet inclusion/exclusion → within visit/time windows → likely to consent → randomized. Document sources and assumption ranges at each step.
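The funnel above can be sketched as cumulative conversion steps. A minimal Python sketch follows; every rate and step name is an illustrative assumption for one country, not a benchmark, and real models should carry sourced ranges per step.

```python
# Care-pathway-adjusted eligibility funnel: each step applies an assumed
# conversion rate to the count remaining from the previous step.
# All rates below are illustrative placeholders, not benchmarks.
FUNNEL_STEPS = [
    ("diagnosed_and_in_care", 0.68),
    ("reachable_by_sites",    0.45),
    ("meets_incl_excl",       0.35),
    ("within_windows",        0.80),
    ("likely_to_consent",     0.60),
    ("randomized",            0.92),
]

def eligibility_funnel(total_with_condition: int) -> dict[str, int]:
    """Return the estimated count remaining after each funnel step."""
    counts = {}
    remaining = float(total_with_condition)
    for step, rate in FUNNEL_STEPS:
        remaining *= rate
        counts[step] = round(remaining)
    return counts
```

Starting from 50,000 patients with the condition, this toy funnel leaves on the order of 2,400 randomized participants; the value of the exercise is seeing where the funnel narrows most, because that step is where countermeasures pay off.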
Window feasibility and clinical workflow. Protocol windows drive feasibility more than raw counts. If your primary endpoint requires Week 12 imaging on the same scanner with specific parameters, quantify scanner availability, maintenance windows, and radiology staffing. If a baseline biopsy must precede randomization, map typical scheduling delays and pathology turnaround. Where tele-visits or home-health assessments are permitted, quantify how decentralized options reduce friction and increase conversion—and state identity verification and data-quality assumptions explicitly.
Subpopulations and representativeness. If the target label will span broad demographics, feasibility should measure the reachable mix by age, sex, race/ethnicity where legally appropriate, comorbidity clusters, and language needs. Model whether recruitment tactics and site mix can achieve representative enrollment without imposing undue burden. Where structural barriers exist—distance, childcare, internet access—cost them in time/budget and specify countermeasures (mobile nurses, local labs, travel support) so the forecast reflects reality.
Rare diseases and pediatrics. Treat small populations as logistics problems as much as epidemiology problems. Map patient advocacy networks, diagnostic odysseys, and the small number of investigators who see enough cases. Expect multi-country accrual with heterogeneous ethics paths and translations. For pediatric protocols, account for school calendars, assent rules, and caregiver availability. In both contexts, decentralization can unlock capacity, but identity verification, privacy, and data integrity controls must be spelled out in the feasibility assumptions.
From point estimates to ranges. Assign a base case and credible intervals for each step in the funnel. For example: diagnosed-and-in-care fraction 60–75%; eligibility 30–40%; consent 55–70%; randomization 90–95% of consented. Carry ranges through to monthly consents and randomizations per site. Publish the math so leaders understand risk, not just averages. Tie confidence levels to data quality; low-certainty inputs should produce wider bands and earlier contingency actions.
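Carrying ranges through the funnel can be done with simple interval arithmetic: multiply the low ends together for the lower bound and the high ends for the upper bound. The sketch below uses the illustrative ranges quoted above; the function shape and step names are assumptions.

```python
# Propagate credible intervals through the funnel: the low bound multiplies
# all low ends, the high bound all high ends. Ranges mirror the illustrative
# figures in the text and are not validated benchmarks.
STEP_RANGES = {
    "diagnosed_and_in_care": (0.60, 0.75),
    "eligible":              (0.30, 0.40),
    "consent":               (0.55, 0.70),
    "randomized":            (0.90, 0.95),
}

def randomization_range(reachable_population: int) -> tuple[int, int]:
    """Bound expected randomizations given a reachable population."""
    lo = hi = float(reachable_population)
    for low, high in STEP_RANGES.values():
        lo *= low
        hi *= high
    return round(lo), round(hi)
```

With 10,000 reachable patients this yields roughly 890 to 2,000 randomizations, a more than twofold spread. Publishing that spread, rather than the midpoint, is what lets governance price contingency honestly.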
Operational filters you should never skip. Confirm: (1) translation needs and cycle times for consent/assent and questionnaires; (2) availability of labs, imaging, and device configuration required by endpoints; (3) pharmacy cold-chain storage and alarm thresholds; (4) courier pick-up windows and dry-ice or hazardous-goods restrictions; and (5) privacy and cross-border data transfer constraints for telehealth and device telemetry. These aren’t afterthoughts—they determine whether your eligible population can actually convert within windows.
Equity as a feasibility variable. Equity is not only an ethics goal; it’s a predictor of ramp stability. A site panel that mirrors the population—urban and rural, academic and community—reduces volatility when a single center pauses. Quantify how many participants can come from community settings with supportive navigation (transport/childcare/translation). If the model depends on such support, include it in budgets upfront and specify what happens if uptake lags.
How to document epidemiology assumptions. Use a short template: “Assumption,” “Basis,” “Range,” “Owner,” “Date,” and “Downstream Impacts.” Example: “Baseline CT capacity supports ≤12 protocol scans/day (Basis: hospital PACS report; Range: 8–12; Owner: Imaging Lead; Impacts: visit windows, resupply cadence).” Store the template and citations with your country/site decision memo so inspectors can follow the logic quickly.
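If the team maintains its model as code, the same template can be captured as a small record type so assumptions stay machine-checkable. This is a sketch; the class and field names are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date

# Minimal record mirroring the documentation template:
# Assumption / Basis / Range / Owner / Date / Downstream Impacts.
@dataclass
class FeasibilityAssumption:
    assumption: str
    basis: str
    value_range: tuple[float, float]   # "Range" column from the template
    owner: str
    recorded: date
    downstream_impacts: list[str] = field(default_factory=list)

# The worked example from the text, as a record.
ct_capacity = FeasibilityAssumption(
    assumption="Baseline CT capacity supports <=12 protocol scans/day",
    basis="hospital PACS report",
    value_range=(8, 12),
    owner="Imaging Lead",
    recorded=date(2025, 11, 15),
    downstream_impacts=["visit windows", "resupply cadence"],
)
```

Storing these records alongside the decision memo gives inspectors a named owner and a dated basis for every number in the funnel.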
Competing Trials and Resource Conflicts—Seeing the Invisible Friction
Beyond patient overlap: the resource view. Most landscape reviews stop at counting trials that recruit patients with the same diagnosis and disease stage. Go further: list trials that compete for the same resources—investigators, coordinators, pharmacies, scanners, couriers, home-health capacity, and even sponsor attention—within a 50–100 km catchment of each site. A first-in-class device trial might not share your inclusion/exclusion, yet it can saturate the same MRI slots or divert the same unblinded pharmacist time, throttling your conversion.
Define a structured comparison grid. Capture, for each competing study: indication and stage, key inclusion/exclusion contrasts, visit burden, invasive procedures, DCT options, compensation levels (if publicly available), randomization ratios, and expected duration. Where feasible, include the trial’s likely “ease of saying yes” versus yours—if your protocol demands more invasive work or tighter windows, factor a lower consent probability when both studies are running at the same site.
Investigator network and panel fatigue. Investigators with overlapping trials may steer referrals based on perceived feasibility, personal interest, or site capacity. Chart investigator networks: who co-authors, who regularly partners with the same CROs, who runs the same imaging core protocols. If your protocol aligns with their workflows, you gain speed; if it conflicts (e.g., same-day imaging bottleneck), expect slower accrual. Build a plan for site enablement (navigator staffing, pre-scheduled imaging blocks) to compensate.
Vendor and depot capacity as a shared constraint. Multiple sponsors might rely on the same central lab couriers, depots, and cold-chain lanes. Monitor seasonal spikes (e.g., holidays, monsoon/winter storms), customs throughput, and hazardous-goods caps. Your depot strategy should include alternates and roll-forward rules when country starts slip, so inventory follows activation without waste. If competing launches consume courier capacity, build surge contracts or at-risk vendor fees into the plan before it becomes a crisis.
Device and diagnostic specifics. For device or diagnostic trials, track firmware/software version changes and human-factors training loads across local studies. If multiple trials need the same scanner sequences or probe types, coordinate acceptance testing and QC windows. A site that juggles incompatible software versions across studies will suffer error rates and re-scans; feasibility should quantify that risk and either simplify your configuration or pick sites with cleaner ecosystems.
Signals and thresholds. Define KRIs that trigger action: (1) ≥25% of pre-screens report “enrolled elsewhere”; (2) imaging slot utilization ≥85% across the catchment; (3) courier exception rates > threshold for two weeks; (4) unblinded pharmacy hours booked >80% for the month; (5) surge of newly posted trials recruiting overlapping populations. Red/amber levels should map to specific responses: add sites, shift activation sequence, fund mobile imaging, or refine the eligibility script to pre-qualify earlier in the pathway.
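Threshold logic like this is easy to encode so that classifications are consistent across countries and reviewers. A minimal sketch follows; the amber thresholds and metric names are assumptions layered on the red thresholds given above.

```python
# Map each KRI to red/amber thresholds. Red values come from the text;
# amber values are illustrative early-warning levels.
KRI_RULES = {
    "enrolled_elsewhere_rate":  {"red": 0.25, "amber": 0.15},
    "imaging_slot_utilization": {"red": 0.85, "amber": 0.75},
    "pharmacy_hours_booked":    {"red": 0.80, "amber": 0.70},
}

def kri_status(metrics: dict[str, float]) -> dict[str, str]:
    """Classify each observed metric as green, amber, or red."""
    status = {}
    for name, value in metrics.items():
        rule = KRI_RULES[name]
        if value >= rule["red"]:
            status[name] = "red"
        elif value >= rule["amber"]:
            status[name] = "amber"
        else:
            status[name] = "green"
    return status
```

Because each red and amber level maps to a named response, the classification function doubles as the trigger list for the countermeasure playbook.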
Scenarios, not slogans. Build three scenarios—Conservative, Base, Stretch—each with explicit assumptions on consent probability, screen-fail reasons, and resource constraints. Tie site targets and resupply logic to the Conservative plan; use Base to manage budgets; treat Stretch as upside for governance, not as the commitment in contracts. Revisit scenarios monthly; when a KRI is red, switch the operating plan to Conservative automatically until metrics stabilize.
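The automatic downgrade rule can be stated in a few lines. Scenario names follow the text; the function shape is an assumption, and reverting upward deliberately stays a human governance decision rather than an automatic one.

```python
# If any KRI is red, operate on the Conservative plan until metrics
# stabilize; otherwise keep the current scenario. Moving back up to
# Base or Stretch is left to governance, not automated here.
def operating_scenario(kri_colors: dict[str, str],
                       current: str = "Base") -> str:
    if "red" in kri_colors.values():
        return "Conservative"
    return current
```

Wiring this rule into the dashboard means site targets and resupply cadence follow the Conservative assumptions the moment a KRI turns red, with no meeting required.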
Countermeasures that actually work. If competing trials surge, you can: (a) re-sequence countries to open faster-approval regions first; (b) add community or regional centers to reduce reliance on one academic hub; (c) deploy decentralized options (home sample kits, mobile nurses, tele-consent) where permitted; (d) pre-book imaging blocks and negotiate service levels; (e) fund patient-navigation micro-budgets (transport/childcare/translation) within fair-market value limits; and (f) simplify eligibility by amendment if a non-critical criterion blocks many otherwise eligible candidates. Document “what changed and why,” train, and file evidence.
Documentation and transparency. Each competitive assessment should end with a one-page decision memo: the trials/resources that matter, your KRIs and thresholds, actions selected, owners, and review date. File with the data cut you used, so a reviewer can replicate your conclusion. Quiet edits—changing assumptions without recorded rationale—are a common inspection finding; avoid them by version-controlling your landscape model like controlled code.
Governance, Metrics, and a Ready-to-Use Checklist
Ownership and the meaning of approval. Keep the core team small and named: an Epidemiology Lead (funnel math), a Landscape Lead (competing resources), a Start-Up Lead (activation and contracts), a Supply/Depot Lead, and Quality (ALCOA++ verification). Approval signatures should state their meaning—“funnel validated,” “KRI thresholds approved,” “depot and courier constraints confirmed,” “documentation meets ALCOA++”—so accountability is explicit and traceable.
Dashboards that connect assumptions to outcomes. Your start-up dashboard should display: (1) country and site eligibility funnels with ranges; (2) consent, randomization, and screen-fail rates by reason; (3) resource KRIs (scanner utilization, unblinded pharmacy hours, courier exceptions); (4) activation milestones; (5) depot inventory and resupply risks; and (6) early query rates on endpoint windows (a leading indicator of operational mismatch). When a KRI flips to red, the dashboard should open a ticket with an owner and due date and adjust site targets or cadence per your Conservative scenario.
KPIs that predict control (review monthly).
- Timeliness: days from site activation to first consent; days from consent to randomization; percentage of visits within window for the first 10 randomized per site; median days to schedule required imaging/biopsy.
- Quality: first-pass acceptance of essential documents; early data query rate for primary endpoint fields; rate of re-scans or repeat procedures due to resource conflicts.
- Consistency: divergence between forecast and actual consents/randomizations; recurrence of the same screen-fail reason; site-to-site variability in conversion for the same pathway.
- Traceability: five-minute retrieval pass rate from protocol requirement → funnel assumption → evidence source → decision memo → KPI/KRI outcome.
- Effectiveness: time-to-green after a countermeasure; reduction in “enrolled elsewhere” screen-fails; inspection/audit observations tied to feasibility rationale.
30–60–90-day operating plan. Days 1–30: publish the epidemiology-funnel template and the resource-competition grid; set KRI thresholds; configure dashboard widgets; and define the meaning of approval for signatures. Days 31–60: run pilot funnels for two countries and a first wave of sites; stress-test resource KRIs (imaging slots, pharmacy hours, courier lanes); conduct a table-top simulation of a competing pivotal launch; rehearse the five-minute retrieval drill. Days 61–90: scale to full network; lock Conservative/Base/Stretch scenarios; wire KRIs to automatic ticketing and cadence changes; and finalize vendor SLAs (imaging blocks, courier surge capacity, depot alternates) with at-risk fees for persistent red metrics.
Common pitfalls—and durable fixes.
- Counting patients, not pathways. Fix by mapping diagnosis-to-randomization steps, owners, and delays; put time windows in the funnel, not just criteria.
- Landscape snapshots that age out. Fix by scheduling monthly refreshes and version-controlling assumptions with “what changed and why” memos.
- Optimistic consent math. Fix by benchmarking against comparable protocols and adjusting for competing studies, burden, and decentralized support.
- Ignoring resource conflicts. Fix by tracking scanner utilization, pharmacy hours, and courier exceptions; negotiate blocks and surge capacity early.
- Quiet edits to models. Fix with controlled change logs, approvals that capture meaning, and retrieval drills that expose drift.
Ready-to-use feasibility checklist (paste into your SOP).
- Epidemiology funnel built per country with ranges and sources; care-pathway timing mapped to protocol windows.
- Representativeness modeled; barriers and countermeasures costed (transport, childcare, interpreters, decentralized options).
- Resource-competition grid complete: investigators, coordinators, scanners, pharmacy hours, couriers, home-health capacity.
- KRIs and thresholds defined (enrolled-elsewhere rate, scanner utilization, courier exceptions, pharmacy hours saturation).
- Conservative/Base/Stretch scenarios approved; site targets tied to Conservative until KRIs green for two cycles.
- Depot and resupply plan validated with alternates; seasonal/logistics risks modeled; at-risk vendor fees defined.
- Decision memos filed with ALCOA++ evidence; change logs capture “what changed and why.”
- Dashboard live; tickets open automatically on red KRIs; cadence/targets adjust per scenario logic.
- Five-minute retrieval drill passed: protocol line → funnel assumption → evidence → action → KPI/KRI outcome.
- Inspection readiness confirmed: transparent rationale for country/site selection that matches study conduct and results.
Bottom line. Enrollment becomes predictable when epidemiology is translated into a pathway-aware eligibility funnel and when the trial landscape is treated as shared resources, not just overlapping patients. With explicit ranges, KRIs tied to countermeasures, and documentation that is easy to retrieve, sponsors can stage activations, price start-up honestly, and deliver timelines the team can actually meet—study after study, region after region.