Published on 15/11/2025
Recruitment Forecasting and Site Targets—From Epidemiology to Executable Ramps
Purpose, Principles, and the Global Frame
Recruitment forecasting and site target setting convert feasibility insights into an operational plan that investigators can actually deliver. A forecast is more than a spreadsheet; it is a governed model that ties epidemiology, care pathways, eligibility criteria, site capacity, and competing-trial pressure to a month-by-month ramp. When designed well, timelines hold, screen failures fall, and finance can commit realistic cash flows. When improvised, first-patient-in slips, site morale erodes, and inspection narratives become defensive because no one can trace a target back to its evidence.
Anchor in harmonized expectations. A proportionate, quality-by-design posture—controlling the steps that protect participant rights and endpoint integrity—aligns with high-level good-practice principles presented by the International Council for Harmonisation. In the United States, teams often calibrate operational expectations and investigator responsibilities using public materials within FDA clinical trial oversight resources. For EU and UK programs, authorization cadence, transparency requirements, and local operating realities are informed by resources hosted by the European Medicines Agency. Ethical touchstones—respect, voluntariness, confidentiality, and fairness—are reinforced by the World Health Organization’s research ethics materials. In Japan and Australia, ensure feasibility assumptions and outreach practices remain coherent with orientation published by the PMDA and Australia’s Therapeutic Goods Administration.
What a forecast must prove. The model should make four things explicit: (1) the addressable population exists and is reachable within protocol windows; (2) the conversion rates from pre-screen to consent to randomization reflect procedural burden, visit timing, and support for participation; (3) the activation sequence across countries and sites meets timeline goals with meaningful contingency; and (4) risks and buffers are transparent, trigger actions when crossed, and are documented with ALCOA++ attributes—attributable, legible, contemporaneous, original, accurate, complete, consistent, enduring, and available.
Design the forecast as a system. The recruitment model must connect to start-up tasks, depot planning, and monitoring intensity. If a feasibility assumption changes—say, scanner availability drops during a hospital upgrade—the dashboard should adjust site targets, resupply logic, and monitoring focus automatically. Treat the model like controlled code: version it, redline “what changed and why,” capture approval meanings (e.g., “clinical accuracy verified,” “statistical review complete,” “ALCOA++ check”), and rehearse five-minute retrieval drills from any number on the dashboard back to its evidence pack. The goal is not a perfect prediction; the goal is an engineered forecasting process that produces fast, defensible adjustments when reality moves.
From Epidemiology to a Credible Eligibility Funnel
Describe the pathway, then the math. Start with the care pathway: where eligible participants are diagnosed and by whom, how long staging takes, and which steps collide with your protocol windows. Build a funnel for each country and site: total with condition → diagnosed and in care → reachable by participating centers → meet inclusion/exclusion → within time windows → likely to consent → randomized. Document sources and assumption ranges at each step so leaders see risk bands, not just single numbers.
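The funnel arithmetic above can be sketched as a chain of conversion rates with low/high bounds, so each stage carries a risk band rather than a point estimate. The stage rates and the starting population below are hypothetical placeholders, not figures from any study.

```python
# Hypothetical eligibility funnel: each stage applies a conversion-rate
# range (low, high) to the previous stage's counts, producing risk bands
# instead of single numbers.
FUNNEL_STAGES = [
    ("diagnosed_and_in_care", (0.60, 0.75)),
    ("reachable_by_sites",    (0.30, 0.45)),
    ("meet_incl_excl",        (0.40, 0.55)),
    ("within_time_windows",   (0.70, 0.85)),
    ("likely_to_consent",     (0.50, 0.70)),
    ("randomized",            (0.80, 0.90)),
]

def funnel_bands(total_with_condition: int) -> dict:
    """Return (low, high) participant counts at each funnel stage."""
    low = high = float(total_with_condition)
    bands = {}
    for stage, (lo_rate, hi_rate) in FUNNEL_STAGES:
        low *= lo_rate
        high *= hi_rate
        bands[stage] = (int(low), int(high))
    return bands

bands = funnel_bands(50_000)
print(bands["randomized"])  # a (low, high) range, not a point estimate
```

Documenting the rate ranges alongside the counts keeps the "sources and assumption ranges at each step" requirement visible in the model itself.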
Burden matters more than counts. Consent probability and screen-fail rates are not constants; they change with visit length, invasive procedures, tele-visit availability, identity-verification friction, and reimbursement for travel or lost wages. Capture these drivers explicitly. If the protocol requires Week-12 imaging with specific sequences, quantify scanner availability, technologist coverage, and maintenance windows. If home assessments are permitted, state identity-verification and data-quality controls; decentralized options can raise conversion, but only if participants can realistically complete them.
Pre-screen logic that saves everyone time. Convert inclusion/exclusion text into a scripted pre-screen that any coordinator can run in under five minutes. Order questions by yield and cost: start with high-exclusion, low-burden items; defer expensive tests until later. Record the most common screen-fail reasons and update scripts monthly so sites stop re-contacting candidates who will predictably fail.
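The ordering rule above (high-exclusion, low-burden first; expensive tests last) amounts to sorting items by exclusion yield per unit of burden. The item names, exclusion rates, and burden estimates below are illustrative assumptions.

```python
# Hypothetical pre-screen items: (name, exclusion_rate, burden_minutes).
# Sorting by exclusion yield per minute puts high-exclusion, low-burden
# questions first and defers costly checks, as the text recommends.
items = [
    ("age_range",         0.25, 0.5),
    ("prior_therapy",     0.40, 1.0),
    ("egfr_lab_on_file",  0.15, 3.0),
    ("imaging_in_window", 0.10, 5.0),
]

ordered = sorted(items, key=lambda it: it[1] / it[2], reverse=True)
print([name for name, _, _ in ordered])
```

Re-running the sort monthly against observed screen-fail reasons keeps the script aligned with real yields rather than the original guesses.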
Representativeness and equity as forecast variables. Model subpopulations (age, sex, race/ethnicity where legally and ethically appropriate, language and literacy, comorbidity clusters) and the barriers they face. Translate barriers into countermeasures—patient navigators, interpreters, mobile nursing, evening clinics, travel support—and cost them. If the ramp depends on these supports, include them in site budgets and statements of work; otherwise, your “base case” is aspirational and will not withstand inspection.
Ranges, not single numbers. Publish Conservative, Base, and Stretch scenarios for each site with explicit assumptions for pre-screen yield, consent probability, and randomization rate. Tie buffers and go/no-go rules to the Conservative plan; treat Stretch as upside, not a promise. Use forecast volatility—the week-to-week change in expected monthly randomizations—as an early KRI for unrealistic assumptions or rising competition from other studies.
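One way to operationalize forecast volatility as a KRI is to track the mean absolute week-to-week change in a site's expected monthly randomizations. The series and threshold below are illustrative assumptions, not calibrated limits.

```python
# Hypothetical weekly re-forecasts of one site's expected monthly
# randomizations. Large week-to-week swings in the forecast itself are
# an early warning of unstable assumptions or rising competition.
weekly_forecast = [12.0, 11.5, 13.0, 9.0, 14.5]

def forecast_volatility(series: list) -> float:
    """Mean absolute week-to-week change in the forecast series."""
    deltas = [abs(b - a) for a, b in zip(series, series[1:])]
    return sum(deltas) / len(deltas)

VOLATILITY_KRI = 2.0  # illustrative trigger, tune per program
vol = forecast_volatility(weekly_forecast)
print(vol, vol > VOLATILITY_KRI)
```

When the trigger fires, governance reviews the assumptions behind the swing rather than silently re-baselining the ramp.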
Rare disease and pediatric nuance. In small populations, the funnel is as much logistics as epidemiology. Map referral hubs, advocacy networks, and the handful of investigators who see enough cases. Expect longer staging and travel planning; encode those lags into the ramp. For pediatrics, include assent procedures, caregiver schedules, school holidays, and child-appropriate visit lengths; a “standard adult ramp” is rarely realistic.
Common failure modes and durable fixes. If one eligibility criterion drives ≥25% of screen fails, refine pre-screen scripts, resequence tests (cheap before expensive), or—if non-critical—propose a protocol amendment. If participants decline due to time burden, bundle procedures, add mobile options, or broaden windows within scientific limits. If consent rates sag at particular sites, examine interpreter availability, privacy explanations, and the clarity of visit expectations; training alone rarely fixes design problems.
ALCOA++ evidence packs. Keep a short evidence pack per assumption: data cut, transformation steps, owner, and downstream impacts. Example: “Pre-screen→consent 55–70% (basis: two recent protocols with similar burden and navigator support; owner: Site Operations Lead; impacts: staffing and stipend budgets).” Link packs to decision memos so auditors can trace a recruitment number to its origin in minutes.
Converting the Funnel into Site Targets and Ramps
Capacity before ambition. A qualified site may still be overcommitted. Assign targets only after verifying coordinator FTEs, clinic slots, pharmacy bandwidth, imaging blocks, and unblinded pharmacist coverage. Require a capacity statement per site—monthly consents and randomizations supported by named staff and hours—and map backups for vacations or turnover. A site that cannot show clinic time, interpreter access, and courier windows is not ready to hold a target.
Ramp curves that reflect reality. Most sites ramp S-shaped: slow early weeks (training, first logistics), then steady state, then taper. Encode this shape rather than assuming a flat line. For fast starters, front-load kit shipments and imaging blocks; for slower centers, add navigation support and pre-book procedures. Tie resupply and depot logic to the Conservative scenario so stockouts are rare even if Stretch never materializes. If you rely on direct-to-patient shipments, include identity-verification throughput and courier dry-ice capacity as explicit constraints.
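The S-shaped ramp can be encoded with a logistic curve: slow early months, then steady state. The midpoint and steepness parameters below are illustrative; in practice they are fitted per site from historical activation data.

```python
import math

def s_curve_ramp(month: int, steady_rate: float, midpoint: float = 3.0,
                 steepness: float = 1.2) -> float:
    """Expected randomizations in a given month for one site, using a
    logistic ramp toward the steady-state rate (parameters illustrative)."""
    return steady_rate / (1.0 + math.exp(-steepness * (month - midpoint)))

# First six months for a site with a steady-state rate of 4 per month.
ramp = [round(s_curve_ramp(m, 4.0), 2) for m in range(1, 7)]
print(ramp)
```

Summing each site's curve, offset by its activation month, gives the program-level ramp; running the sum on Conservative-scenario rates is what sizes buffers and resupply.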
Balancing the panel. Blend academic hubs (high capability, slower contracting) with community sites (closer to the patient journey). Diversify by geography so public holidays, storms, or a single committee delay do not stall the program. In rare diseases, add cross-border referral agreements and concierge travel within ethical limits; then reflect the longer lead times in the ramp. If one site controls a scarce scanner or surgical slot, avoid making the global ramp hostage to that single resource.
Targets as contracts, not wishes. Convert monthly targets into SOW language: outreach executed, navigator hours staffed, clinic sessions reserved, tele-visit windows published, and specific conversion ratios monitored. Avoid incentives that look like inducement; link any performance payments to quality (on-time windows, clean first-pass data, timely SAE submissions) rather than to volume alone. Publish the evidence required for payment so disputes do not consume the same coordinators you rely on for recruitment.
Governance that owns the numbers. Keep ownership small and named: an Enrollment Lead (forecast owner), a Start-Up Lead (activation sequence), a Data Science Lead (model integrity), a Supply Lead (depot/resupply), and Quality (ALCOA++ verification). Signatures must state the meaning of approval—“funnel validated,” “operational feasibility confirmed,” “model math checked,” “ALCOA++ evidence verified”—so accountability is explicit and traceable.
KRIs and QTLs that force action. Define early-warning thresholds: “no consent within 21 days of activation,” “>20% deviation from monthly ramp for two consecutive months,” “>30% of screen fails due to one criterion,” “>10% missed endpoint windows in first 10 participants,” and “>15% courier exceptions.” Convert the most critical limits into Quality Tolerance Limits (QTLs) that trigger cross-functional review, documented remediation, and—if needed—country/site resequencing.
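The thresholds listed above can be wired directly into a rule table so a breach is detected mechanically rather than by inspection of a spreadsheet. The trigger logic below is a minimal sketch; the threshold values mirror the examples in the text.

```python
# KRI rules keyed by metric name; each returns True when breached.
KRIS = {
    "days_to_first_consent":     lambda v: v > 21,
    "months_ramp_deviation_20":  lambda v: v >= 2,    # consecutive months >20% off plan
    "screen_fail_top_criterion": lambda v: v > 0.30,
    "missed_endpoint_windows":   lambda v: v > 0.10,
    "courier_exceptions":        lambda v: v > 0.15,
}

def breached(metrics: dict) -> list:
    """Return the names of KRIs whose thresholds are crossed."""
    return [name for name, rule in KRIS.items()
            if name in metrics and rule(metrics[name])]

site_metrics = {"days_to_first_consent": 28, "screen_fail_top_criterion": 0.22}
print(breached(site_metrics))  # only the consent-delay threshold is crossed
```

Metrics promoted to QTLs would route through the same check but open a cross-functional review ticket automatically instead of only flagging a dashboard tile.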
Budgets that match the ramp. Price navigator roles, interpreters, travel support, and decentralized options explicitly. Pay partial screening for true effort and allow rescreens when logistics—not participant choice—caused delays. Align consent reimbursement language with budgets to avoid ethics queries and reconsent. When funds are tight, state what scope will be reduced (e.g., fewer mobile-nursing windows) and the expected impact on conversion; ambiguity is the enemy of timelines and credibility.
Vendors and specialty partners. If outreach, call centers, eConsent, or home-health workflows are outsourced, write role-based access, immutable logs, content approvals for participant-facing materials, and conversion reporting into statements of work. Require weekly feeds—from impressions to pre-screens to consents to randomizations—with explanations for anomalies. Persistent red metrics should trigger credits or at-risk fees, plus a corrective roadmap with dates and named owners.
Dashboards, Operating Cadence, and a Ready-to-Use Checklist
Dashboards that change behavior. Display the funnel and ramp by country/site with ranges; consent and randomization conversion; reason-coded screen-fail composition; early deviation types; imaging and pharmacy readiness; courier exceptions; navigator workload; and buffer position vs. the Conservative scenario. Every number should click through to the underlying artifact—evidence packs, training logs, UAT reports, courier bills of lading—in the eTMF/ISF. If it does not click through, it is not inspection-ready. Track a “click-through rate” (target ≥95%) to ensure traceability is real, not aspirational.
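The click-through rate itself is easy to compute if every dashboard tile records whether it resolves to a filed artifact. The tile names and link scheme below are hypothetical.

```python
def click_through_rate(tiles: list) -> float:
    """Fraction of dashboard numbers that resolve to a filed artifact.
    Each tile is (metric_name, artifact_link_or_None)."""
    linked = sum(1 for _, link in tiles if link is not None)
    return linked / len(tiles)

tiles = [
    ("consent_conversion", "etmf://evidence/consent-q3"),
    ("courier_exceptions", "etmf://evidence/courier-q3"),
    ("navigator_workload", None),  # filing gap to close before inspection
]
rate = click_through_rate(tiles)
print(round(rate, 2), rate >= 0.95)
```

Any tile with a `None` link becomes a filing task; the monthly retrieval drill then confirms the links actually open the right evidence, not just that they exist.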
Operating cadence—30/60/90 and beyond. Days 1–30: publish the funnel template, define scenario assumptions, set KRIs/QTLs, and wire dashboard tiles to artifacts. Days 31–60: run pilots in two countries, pre-book imaging blocks where constrained, launch navigator staffing, and dry-run decentralized identity verification and courier pickups. Days 61–90: activate the first wave; execute weekly risk huddles; switch automatically to the Conservative plan when a KRI turns red; file “what changed and why” memos for any forecast update; and rehearse five-minute retrieval from a dashboard number to its evidence chain.
Early-ramp indicators. Watch eligibility errors, missed windows, SAE clock misses, identity-verification failures, high “enrolled elsewhere” rates, and re-scan frequencies for imaging or device configuration issues. Treat persistent reds as design signals: simplify eligibility, add mobile services, adjust windows within scientific limits, or resequence countries/sites. Record changes with a brief rationale, retrain affected roles, and re-estimate within two cycles so governance sees cause and effect.
Five-minute retrieval drills. Once per month, select a site and retrieve the chain for a single month’s target: funnel assumptions → country/site decision memo → outreach-material approvals → navigator staffing roster → UAT and training evidence → courier and depot readiness → actuals. Time the retrieval and fix filing gaps immediately. Simple drills prevent complex inspections from going sideways.
Common pitfalls—and durable fixes.
- Point estimates presented as facts. Fix with scenario ranges and buffer ownership tied to the Conservative plan until KRIs are green for two cycles.
- Counting patients, not steps. Fix by mapping the pathway and timing; cost barriers and supports; make windows explicit in the funnel and in consent scripts.
- Quiet edits to models. Fix with version control, redline memos, and approval meanings; rehearse retrieval from any number to its evidence pack.
- Unfunded countermeasures. Fix by pricing navigators, interpreters, mobile nursing, and courier changes in the budget; otherwise the base case is fiction.
- Vendor opacity. Fix with end-to-end conversion reporting (impressions → pre-screens → consents → randomizations) and at-risk fees for persistent defects.
Ready-to-use checklist (paste into your SOP).
- Eligibility funnel per country/site with sources, ranges, owners, and explicit time windows and capacity constraints.
- Conservative/Base/Stretch scenarios approved; buffers and go/no-go rules tied to the Conservative plan.
- Site capacity statements filed; S-curve ramps encoded; depot/resupply aligned to the Conservative scenario.
- Representativeness barriers and countermeasures costed; budgets and consent reimbursement language aligned.
- Vendor SOWs include role-based access, immutable logs, participant-content approvals, and weekly conversion reporting.
- KRIs/QTLs active with auto-ticketing; early-ramp indicators monitored; resequencing rules documented.
- Dashboards click through to artifacts in the eTMF/ISF; five-minute retrieval drill passed monthly (≥95% click-through).
- Change control for models in place; “what changed and why” memos filed; CAPA uses design changes first, then retraining.
Bottom line. Recruitment becomes predictable when forecasting is treated as an engineered system: pathway-aware funnels, scenario-based targets, balanced site panels, budgets that fund the right countermeasures, and dashboards that click through to evidence. Build that system once, rehearse it often, and you will hit dates without cutting corners—study after study, region after region.