Published on 16/11/2025
Country and Site Feasibility—Evidence-Driven Selection That Withstands Inspection
Purpose, Principles, and the Global Frame for Feasibility
Country and site feasibility work is the earliest—and often most consequential—quality gate in a clinical program. Decisions made here determine whether eligible participants can be reached, whether start-up proceeds on schedule, and whether downstream monitoring and data integrity workloads are manageable. A disciplined feasibility process converts strategic intent into testable assumptions about epidemiology, regulatory and ethics cadence, logistics, staffing, and digital readiness. When the process is explicit and documented, sponsors earn predictable activation and defensible selection decisions.
Anchor in shared principles. A proportionate, risk-based posture—focusing controls on factors that protect participants and endpoint integrity—aligns with the internationally harmonized ICH Good Clinical Practice principles. In the United States, many sponsors orient feasibility criteria and documentation standards to public guidance and educational materials available from the FDA’s clinical trial oversight resources. For EU and UK programs, feasibility planning must respect authorization cadence and transparency obligations; teams often calibrate expectations using resources from the European Medicines Agency. Ethical touchstones—respect, fairness, confidentiality—are reinforced by the World Health Organization’s research ethics materials, which also help frame community and outreach strategies.
Multiregional implications. When a program includes Japan or Australia, ensure feasibility tools and checklists reflect terminology, ethics process nuances, and submission artifacts consistent with the PMDA’s clinical guidance and Australia’s Therapeutic Goods Administration clinical trial guidance. These anchors keep questions, thresholds, and required documents coherent across countries so that local start-up teams do not need to reinterpret expectations study by study.
What feasibility must decide—clearly and early. The process should produce yes/no decisions on: (1) which countries can reach the target population within the window; (2) which sites have both historical performance and available capacity to execute the protocol; (3) whether decentralized or hybrid procedures are necessary to meet timelines; and (4) what risk controls, budgets, and vendor support are required to make the plan executable. Outcomes must be captured as traceable assumptions with owners and review dates, not as unstructured opinions.
Inspection posture. Auditors and inspectors typically ask: Why were these countries and sites selected for this protocol? What data—epidemiology, competing-trial load, historical performance—support the choice? Did the sponsor evaluate data privacy, import/export and depot needs, language and translation burdens, and digital system readiness? Can the sponsor retrieve, within minutes, the chain from protocol requirement → feasibility assumption → evidence source → decision memo → start-up metrics? A regulator-ready feasibility system answers yes to all of the above.
Design feasibility as a system, not a survey. Replace one-off questionnaires with a connected set of tools: a country screen, a site screen, a capacity and readiness check, and a governance layer that turns red/amber indicators into explicit actions. The same assumptions should feed start-up timelines, recruitment forecasts, budgets, depot planning, and monitoring intensity. If an assumption changes (e.g., competing trials surge locally), dashboards and owners should update downstream plans automatically.
Country Feasibility—Inputs, Scoring, and Risk-Informed Decisions
Start from the participant population. Confirm that the target condition exists at the planned incidence/prevalence and within reachable care pathways. For each candidate country, model access points (tertiary hospitals, specialty clinics, community practices), referral patterns, and standard-of-care timing that intersect protocol windows. When the protocol requires rare diagnostics, complex imaging, or specialized equipment, count real-world availability and maintenance/QA capacity—assumptions about “upgradable” facilities often slip timelines by months.
Map the healthcare and regulatory context. Document the ethics and authority path (central vs. local committees, parallel vs. sequential reviews), expected turnaround, and common pitfalls. Capture privacy and data transfer constraints, sample export rules, import permits for investigational product (IP) and devices, and whether named depots exist or must be established. For decentralized elements, record what tele-health, home health, or eConsent is permissible and what identity verification or language rules apply.
Quantify competing-trial load and operational friction. Use public registries and commercial feeds to estimate the number of actively recruiting studies that share patients, investigators, or core vendors. Layer in macro-friction—visa requirements, seasonal constraints, holidays, courier reliability, customs clearance SLAs, cold-chain stability, and hazardous-goods restrictions. Feasibility should not only count patients; it should also measure a country’s ability to convert patients to on-time, protocol-adherent visits.
Define a transparent scoring model. Build a weighted score with four dimensions: (1) Population fit (epidemiology reach, care pathways, language coverage); (2) Regulatory/Ethics cadence (timeline predictability, document complexity, transparency duties); (3) Operational readiness (depot and import/export feasibility, courier network, central lab and imaging access, eSystems permissibility); and (4) Equity and representativeness (ability to enroll planned subgroups ethically and practically). Publish the weights and rationale so program leaders can challenge or adjust them intentionally rather than implicitly reweighting with anecdotes.
Turn scores into decisions with thresholds. A high composite score should not override a red critical factor (e.g., prohibited home phlebotomy in a DCT-heavy protocol). Document minimum thresholds (“no-go if IP import license lead time > X weeks after CTA approval” or “no-go if national reference lab cannot validate the biomarker within X weeks”). For “slow-go” countries, define what additional vendor support, budget, or protocol flex is required to proceed.
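The scoring-and-threshold logic above can be sketched in a few lines. The dimension names, weights, and cutoffs below are illustrative assumptions for the sketch, not prescribed values; the key behavior is that a red critical factor forces a no-go regardless of the composite score.

```python
# Minimal sketch of a weighted country score with hard no-go gates.
# Dimension names, weights, and cutoffs are illustrative assumptions.

WEIGHTS = {
    "population_fit": 0.35,
    "regulatory_cadence": 0.25,
    "operational_readiness": 0.25,
    "equity": 0.15,
}

def country_decision(scores, red_flags, slow_go_cutoff=0.6, go_cutoff=0.75):
    """scores: dict of dimension -> 0..1; red_flags: critical blockers."""
    composite = sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)
    # A red critical factor overrides any composite score.
    if red_flags:
        return composite, "no-go", list(red_flags)
    if composite >= go_cutoff:
        return composite, "go", []
    if composite >= slow_go_cutoff:
        return composite, "slow-go", ["define added vendor support/budget"]
    return composite, "no-go", ["composite below threshold"]
```

For example, a country scoring well on every dimension still returns "no-go" when `red_flags` contains an item such as "home phlebotomy prohibited"—the published weights never outvote a critical factor.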
Plan the activation sequence and depot strategy. Use country scores to stage activations. Early-wave countries should have fast regulatory cadence, high population fit, and strong logistics; late-wave countries can backfill if enrollment lags or support subgroups. Select depot locations with proven cold-chain reliability and predictable customs; pre-qualify alternates in case of geopolitical or seasonal disruption. Create roll-forward and roll-back rules for IP inventory so stock moves with minimal waste when country starts slip.
Budget realism and fair market value. Translate logistics and document complexity into country-level FMV ranges (translation counts, central vs. local ethics fees, courier premiums, import agent costs). Attach a confidence interval to each estimate; finance will assume precision unless uncertainty is explicit. When budgets are tight, state what scope will be dropped (e.g., fewer start-up site visits, later community outreach) and the probable impact on timelines or representativeness.
Capture assumptions and owners. Country screens should end with a one-page decision memo: the scorecard, the thresholds, the go/slow/no-go decision, the owner of each red/amber item, and the review date. Store the memo in a retrievable location with a short evidence pack (data sources, vendor quotes, regulatory references). This is the artifact you will show in audits to demonstrate rational, data-anchored selection.
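The one-page decision memo can be held as a structured record so dashboards and retrieval drills can query it. The field names below are assumptions for illustration, not a mandated template.

```python
from dataclasses import dataclass, field

# Illustrative structure for a country decision memo; field names are
# assumptions, not a regulatory template.

@dataclass
class CountryDecisionMemo:
    country: str
    composite_score: float
    decision: str                 # "go" / "slow-go" / "no-go"
    thresholds: dict              # named cutoffs applied to the score
    red_amber_owners: dict        # open item -> named owner
    review_date: str              # ISO date of the next scheduled review
    evidence_pack: list = field(default_factory=list)  # source references
```

Stored this way, the memo and its evidence pack can be retrieved and filtered programmatically during an audit rather than reconstructed from email threads.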
Signals that trigger reevaluation. Define early-warning indicators: extended customs delays, surges in competing trials in the same indication, courier exception rates above threshold, or a change in privacy law affecting cross-border transfers. A red signal should open a formal review: contain (mitigate locally), correct (adjust plan), or communicate (escalate and restage activations).
Site Feasibility—Evidence, Capacity, and Digital Readiness
Begin with protocol-critical capabilities. Feasibility questionnaires must mirror the schedule of activities and critical-to-quality factors: eligibility decision sources, primary endpoint measurements and timing, investigational product handling, imaging and lab specifics, device configuration/version controls, and decentralized options. Ask questions that produce decidable answers: “How many patients meeting inclusion A+B did you randomize in the last 12 months?” beats “Do you have eligible patients?” Avoid generic checklists that invite optimistic yeses.
Demand verifiable performance history. Request de-identified enrollment curves from recent, comparable studies; screen-fail compositions; deviation rates for endpoint windows; monitoring finding rates; and database lock readiness. Where possible, corroborate site-reported numbers with sponsor records or CRO databases. Reward sites that provide transparent histories by offering earlier activation; penalize unverifiable claims by requiring on-site validation before first shipment.
Assess capacity, not just capability. A site may be qualified but overcommitted. Capture active study load by coordinator and sub-investigator, planned vacations or turnover risk, and access to backup staff. Require a capacity statement (target monthly consents and randomizations) tied to named FTEs and clinic slots. For high-burden protocols, ask for a visit-slot map across the first three months to surface bottlenecks before they become deviations.
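The visit-slot map described above reduces to a simple capacity check: flag any month where planned protocol visits exceed stated clinic slots. The numbers here are illustrative.

```python
# Sketch of a first-quarter visit-slot map check; counts are illustrative.
slot_map = {
    "month_1": {"capacity": 20, "planned_visits": 14},
    "month_2": {"capacity": 20, "planned_visits": 23},  # over capacity
    "month_3": {"capacity": 18, "planned_visits": 17},
}

def bottlenecks(slots):
    """Return the months where planned visits exceed clinic capacity."""
    return [m for m, s in slots.items() if s["planned_visits"] > s["capacity"]]
```

Running the check before activation surfaces the month-two bottleneck while there is still time to add clinic slots or stagger consents, rather than discovering it as a cluster of visit-window deviations.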
Probe digital maturity and decentralized readiness. Document eConsent acceptance, remote source review policies, and ability to use ePRO/eCOA, tele-visits, wearables, and home health partners. Confirm identity verification steps, privacy protections, and audit trail integrity for remote activities. If the protocol requires data uploads (imaging, device telemetry), verify bandwidth, firewall rules, and help-desk contact paths; the absence of these basics is a leading indicator of slow data flow and elevated query rates.
Validate pharmacy and cold-chain logistics. Require evidence of temperature monitoring and alarm thresholds, excursion logs, and reconciliation practices that keep physical and IWRS/IRT stocks in sync. For DTP (direct-to-patient) shipments, check identity verification on delivery, tamper-evident seals, and reship criteria. When device kits are involved, confirm version tracking and quarantine/release rules. Ask to see the last three excursion case files; if documentation is thin, expect waste and audit risk.
Confirm ethics and document readiness. Collect actual approval timelines from the past year for the same ethics pathway; check translation needs and local privacy notice templates. Pre-stage essential documents: investigator CVs/licenses, GCP training attestations, financial disclosures, radiation or biosafety approvals if relevant. The time it takes a site to deliver complete, correct documents is often a proxy for the time it will take to deliver clean data.
Score with transparency and tie to targets. Build a site score with weighted dimensions (patient access, endpoint logistics, staffing capacity, digital readiness, pharmacy/supply, document timeliness, historical performance). Publish the weights and provide feedback to sites; a transparent score motivates improvement. Translate the score to a ramp target (consents and randomizations by month) and to monitoring intensity (e.g., targeted source verification for lower-scoring sites until metrics stabilize).
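Tying the score to a ramp target and monitoring intensity can be sketched as a simple mapping. The score bands, ramp ceiling, and monitoring labels are illustrative assumptions, not recommended values.

```python
# Illustrative mapping from a site's composite score (0..1) to a monthly
# randomization target and an initial monitoring intensity. Bands and
# the ramp ceiling are assumptions for the sketch.

def ramp_and_monitoring(site_score, max_monthly_randomizations=6):
    """Return (monthly ramp target, initial monitoring intensity)."""
    ramp = round(site_score * max_monthly_randomizations)
    if site_score >= 0.8:
        monitoring = "risk-based (targeted SDV on critical data only)"
    elif site_score >= 0.6:
        monitoring = "targeted SDV on endpoint-critical fields"
    else:
        monitoring = "enhanced SDV until metrics stabilize"
    return ramp, monitoring
```

Publishing the mapping alongside the weights lets sites see exactly how improving a weak dimension changes both their target and their monitoring burden.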
Greenlight criteria and conditional activation. Define what “ready” means: executed contract and budget, complete essential documents, trained staff, IWRS/IRT and EDC access, depot linkage tested, sample shipping validated, and a successful mock visit or table-top walk-through of the first dosing day. When evidence is partial, use conditional activation with clear time-boxed items and a pause rule if milestones slip.
Fair market value and budget alignment. Link visit burden and decentralized elements to per-patient fees and reimbursements. Plan small “friction-fix” budgets (parking vouchers, mobile minutes, translation checks) within ethics and FMV guardrails. Misaligned budgets create downstream deviation and retention problems—better to price realistically than to push sites into unfunded workarounds.
Document the decision trail. End every site screen with a short memo: the score, documented risks, activation decision (go/conditional/no-go), owner of each risk, and first review date. File with supporting evidence (questionnaire, performance history, screenshots of eSystem tests). This traceability is what turns a feasibility opinion into a defensible selection.
Governance, Metrics, and a Ready-to-Use Feasibility Checklist
Small-team ownership with defined approval meanings. Name a Country Feasibility Lead, a Site Feasibility Lead, a Start-Up Lead, and a Quality lead. Each approval should carry the meaning of the signature—“epidemiology validated,” “logistics verified,” “ethics path confirmed,” “ALCOA++ evidence checked”—so accountability is explicit. Keep the decision board small enough to move quickly but diverse enough to challenge assumptions.
Dashboards that connect assumptions to outcomes. A feasibility dashboard should display: country and site scores; activation forecasts vs. actuals; contract and ethics milestones; essential document completeness; courier exception rates; depot readiness; early enrollment velocity; and first-pass query rates. Red/amber indicators must open tickets with named owners and due dates. If an assumption (e.g., ethics turn-around) comes in slower than modeled, downstream plans (monitoring cadence, recruitment tactics, budget) should update automatically.
KPIs that predict control.
- Timeliness: median days from selection to ethics/authority submission; from greenlight to activation; from activation to first consent; and from first consent to first randomization.
- Quality: essential document first-pass acceptance; rate of endpoint-window deviations in first 10 participants per site; completeness of temperature/logger files in first shipment; first-pass eSystem access provisioning.
- Consistency: divergence between forecast and actual ramp; recurrence of the same feasibility defect category across sites/countries; proportion of conditional activations that become pauses.
- Traceability: five-minute retrieval pass rate for protocol requirement → feasibility assumption → evidence → decision memo → start-up metric.
- Effectiveness: reduction in screen-fail reasons after feasibility-driven script changes; time-to-green after CAPA; inspection observations related to selection or start-up.
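The traceability KPI above assumes each decision chain is stored as linked records. A minimal sketch, with illustrative record IDs and field names: a retrieval drill passes only when every link from protocol requirement to start-up metric resolves.

```python
# Sketch of the retrieval chain as linked records; IDs and field names
# are illustrative assumptions.

records = {
    "REQ-012": {"type": "protocol_requirement", "next": "ASM-034"},
    "ASM-034": {"type": "feasibility_assumption", "next": "EVD-101"},
    "EVD-101": {"type": "evidence_source", "next": "MEM-007"},
    "MEM-007": {"type": "decision_memo", "next": "MET-220"},
    "MET-220": {"type": "startup_metric", "next": None},
}

def retrieval_drill(start_id, store):
    """Walk the chain; return (visited IDs, True if every link resolved)."""
    chain, current = [], start_id
    while current is not None:
        rec = store.get(current)
        if rec is None:          # broken link -> the drill fails
            return chain, False
        chain.append(current)
        current = rec["next"]
    return chain, True
```

A drill that fails points to the exact broken link, which is far more actionable than a generic "traceability gap" finding.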
KRIs and escalation rules. Watch for persistent late essential documents, repeated import delays, courier exceptions > threshold, unverified performance histories, or high early query rates. For any red KRI over two cycles, convene a risk huddle and decide whether to add vendor support, restage activation, or pause enrollment at affected sites.
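The "red over two cycles" escalation rule reduces to a check over a KRI's review history. A minimal sketch, with the cycle count as a configurable assumption:

```python
# Illustrative KRI escalation check: a red status sustained for the last
# `red_cycles` consecutive review cycles triggers a risk huddle.

def needs_risk_huddle(kri_history, red_cycles=2):
    """kri_history: most-recent-last list of 'green'/'amber'/'red'."""
    recent = kri_history[-red_cycles:]
    return len(recent) == red_cycles and all(s == "red" for s in recent)
```

Note that an intermittent red (red, green, red) does not escalate under this rule; only sustained reds do, which keeps huddles focused on persistent problems rather than noise.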
Vendor oversight baked in. If CROs or specialty vendors run country or site screens, put obligations into quality agreements and SOWs: validated questionnaires, immutable edit logs, evidence packs for claims, synchronized clocks across eSystems, and participation in five-minute retrieval drills. Require weekly feeds that hit the dashboard and enforce at-risk fees or credits for persistent red metrics.
30–60–90-day operating plan. Days 1–30: publish country and site screen templates; finalize scoring weights and thresholds; connect dashboards to evidence repositories; confirm approval blocks with meaning of signature. Days 31–60: run two pilot countries and a first wave of sites; perform mock depot and courier tests; execute a table-top of first-day dosing; rehearse retrieval drills on three decision chains. Days 61–90: scale to the full network; add conditional activation rules; launch weekly red/amber reviews; close CAPA with design changes (question wording, thresholds, vendor support), not just retraining.
Ready-to-use feasibility checklist (paste into your SOP).
- Country screen complete with population fit, regulatory/ethics cadence, operational readiness, equity/representativeness, and budget realism; thresholds defined.
- Depot and import/export path validated; alternate depot identified; courier SLAs and dry-ice/hazardous-goods rules documented.
- Activation sequence staged (early/late waves) with explicit go/slow/no-go decisions and owners for red/amber items.
- Site screen mirrors protocol CtQ: eligibility sources, endpoint logistics, pharmacy/cold chain, digital readiness, decentralized options.
- Verifiable performance history on file (enrollment curves, deviation rates, lock readiness); capacity statement tied to named FTEs and slots.
- Essential documents pre-staged; eSystem access paths tested; translation needs and privacy notices mapped.
- Site score computed with published weights; ramp targets set; monitoring intensity tied to score until metrics stabilize.
- Greenlight criteria met or conditional activation with time-boxed items; pause rules defined for misses.
- Dashboards live; KPIs/KRIs monitored; red/amber items open tickets with owners and due dates; automatic plan updates on assumption drift.
- Five-minute retrieval drill passed: protocol requirement → assumption → evidence → decision → outcome; CAPA uses design changes first.
Bottom line. Feasibility that is explicit, evidence-based, and connected to start-up execution is a competitive advantage. When country and site choices are tied to real capacity and logistics, when scores and thresholds are transparent, and when assumptions flow directly into dashboards, budgets, and activation plans, sponsors realize predictable timelines, cleaner data, and inspection-ready selection decisions—study after study, region after region.