Published on 17/11/2025
Making the Tough Calls on Deviations, Re-draws, and Re-tests—Safely, Lawfully, and Defensibly
Define the playing field: deviation taxonomy, risk, and decision rights
A laboratory that works on clinical trials will inevitably see things go off script—mislabels, missed windows, temperature blips, instrument hiccups, out-of-range results, or sample losses. The difference between a compliant lab and a chaotic one is not “zero deviations”; it is a system that classifies, prioritizes, investigates, and documents departures consistently. Start by publishing a deviation taxonomy that staff can use without a meeting: operational (pack-out/receipt, chain of custody, storage), analytical (calibration, QC, method performance), and data/documentation (late entries, audit-trail gaps).
Clarity of ownership is non-negotiable. Site staff own collection and labeling; couriers own custody during transport; the receiving lab owns intake and storage; analytic sections own method performance and result release; quality owns the governance, oversight, and inspection-readiness evidence. For trials that use both central and local labs, define how disagreements are handled and who adjudicates. If a value triggers critical value notification, medical monitors and investigators must be in the loop with auditable timestamps regardless of paperwork status—safety outranks neatness.
Write a compact policy for laboratory deviations that sets three things: (1) what gets logged (all departures that could affect identity, integrity, or interpretability), (2) how fast (within the same shift for anything safety-relevant; within 24 hours otherwise), and (3) who triages (section lead or duty manager). Pair it with a “first response” card: contain, preserve, notify. Contain the risk (quarantine questionable specimens, stop a failing run), preserve evidence (don’t overwrite audit trails, keep printouts/snapshots, save data files), and notify the right roles. This is not bureaucracy; it is how you keep facts intact when stress is high.
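As a minimal sketch of that triage policy (a Python helper with illustrative field names, not a real QMS schema), the logging deadline can be computed rather than debated:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum

class Severity(Enum):
    SAFETY_RELEVANT = "safety_relevant"  # log within the same shift
    OTHER = "other"                      # log within 24 hours

@dataclass
class DeviationRecord:
    """One logged departure: what happened, how severe, who triages."""
    description: str
    severity: Severity
    detected_at: datetime
    triage_owner: str  # section lead or duty manager

    def logging_deadline(self, shift_end: datetime) -> datetime:
        """Safety-relevant: before the current shift ends; otherwise 24 h."""
        if self.severity is Severity.SAFETY_RELEVANT:
            return shift_end
        return self.detected_at + timedelta(hours=24)
```

The point of encoding the rule is that “how fast” stops being a judgment call under stress.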
A compliance spine matters. Systems that hold study-relevant electronic records must satisfy 21 CFR Part 11 audit trails and demonstrate ALCOA+ data integrity: attributable, legible, contemporaneous, original, accurate—plus complete, consistent, enduring, and available. Clinical trial labs should operate under GCLP principles, while clinical reporting labs must also meet jurisdictional accreditation (e.g., CLIA/CAP nonconformance management in the U.S.). “We fixed it” without traceable evidence is not a fix in regulated work; every action must be visible to an auditor.
Finally, decide early how your program will treat OOS/OOT (out-of-specification, out-of-trend) investigations. In bioanalytical or stability-linked contexts, OOS/OOT logic prevents “testing into compliance.” In safety labs, OOR (out-of-range) and delta checks are more common, but the principle is the same: changes are evaluated scientifically and governed procedurally. Capture these choices in SOPs so staff do not improvise under pressure. When this foundation exists, downstream calls—re-draw versus re-test—become predictable, fast, and defensible.
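A delta check of this kind is easy to pre-specify. A sketch, assuming a simple percent-change rule (the rule form and the limit are illustrative; real limits are analyte-specific):

```python
def delta_check(current: float, previous: float, pct_limit: float) -> bool:
    """Return True if the change from the prior result exceeds the
    pre-set delta limit and should trigger review or repeat testing."""
    if previous == 0:
        return current != 0  # any change from a zero baseline is flagged
    return abs(current - previous) / abs(previous) * 100 > pct_limit
```

Whether a flagged delta routes to repeat testing or to clinical review is itself an SOP decision, not an analyst's improvisation.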
Investigate with discipline: containment, root cause, and impact you can defend
The moment a deviation is logged, the clock starts. First response aims to prevent harm and preserve truth: stop the affected process, tag impacted samples/instruments, snapshot configurations and raw data, and secure audit trails. Next comes scoping: identify all potentially affected records by time window, instrument, method, analyst, lot, and shipment. Use manifests and LIMS filters to build a candidate list; err on inclusion and narrow with evidence.
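The scoping step above can be sketched as a filter over exported records (field and filter names are hypothetical, standing in for real LIMS queries):

```python
from datetime import datetime

def scope_candidates(records, start, end, **filters):
    """Build the candidate list of potentially affected records.
    Err on inclusion: keep any record inside the time window that also
    matches every filter actually supplied (instrument, lot, analyst...)."""
    hits = []
    for r in records:
        if not (start <= r["timestamp"] <= end):
            continue
        if any(r.get(key) != value for key, value in filters.items()):
            continue
        hits.append(r)
    return hits
```

Starting wide (time window only) and then narrowing with instrument, lot, or analyst filters mirrors the “err on inclusion, narrow with evidence” rule.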
Once you have a scope, move to structured investigation. Blend 5-Why root cause analysis with an Ishikawa fishbone (people, process, equipment, environment, materials, measurement) to keep hypotheses honest and to separate primary from contributing causes. For temperature problems, your temperature excursion assessment must reconcile logger traces, stability claims, and elapsed time since draw—calculate a stability budget in hours across pre-analytical, transit, and holding steps. For identity problems, test whether a chain-of-custody break occurred by replaying scans, signatures, and intake logs; a single unexplained gap may force invalidation or activation of the re-draw policy.
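The stability budget arithmetic can be written down once. A sketch, assuming the claim is a single cumulative hour limit (segment names are illustrative; the claim itself must come from validated stability data):

```python
def stability_hours_remaining(claim_hours, exposures):
    """Compare cumulative time at condition against the validated claim.
    `exposures` lists (segment, hours) across pre-analytical, transit,
    and holding steps; a negative result means the budget is exceeded."""
    used = sum(hours for _segment, hours in exposures)
    return claim_hours - used
```

For example, a 48-hour claim with 2.5 h pre-analytical, 30 h transit, and 10 h holding leaves 5.5 h of budget; a delayed pickup that consumes more than that forces the excursion assessment.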
Analytical deviations require method-specific logic. If calibration or QC fails, follow your method SOP: halt release; troubleshoot; repeat calibration with documented reasons; evaluate carryover/contamination; and perform a controlled, validated re-test (re-injection, re-extraction, or re-run) only when pre-defined criteria allow. In bioanalysis, re-injection without re-extraction may be valid for autosampler drift; extraction problems demand re-extraction. In central safety labs, reflex testing is appropriate when decision rules say so (e.g., confirmatory methods). Across both worlds, “testing into compliance” is prohibited—decisions are evidence-based and pre-specified.
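That pre-specification can live as a lookup rather than a night-shift judgment call. The mapping below is purely illustrative; the real criteria belong in the method SOP:

```python
# Illustrative re-test ladder: documented failure mode -> permitted repeat.
RETEST_LADDER = {
    "autosampler_drift": "re-injection",    # same extract, new injection
    "extraction_anomaly": "re-extraction",  # fresh preparation from the sample
    "calibration_failure": "full re-run",   # repeat calibration and batch
}

def permitted_retest(failure_mode: str) -> str:
    """Return the pre-defined repeat for a documented failure mode;
    anything not pre-specified must be investigated, not repeated."""
    try:
        return RETEST_LADDER[failure_mode]
    except KeyError:
        raise ValueError(
            f"no pre-defined criterion for {failure_mode!r}; investigate first"
        ) from None
```

The deliberate design choice is the exception path: an unrecognized failure mode blocks the repeat instead of defaulting to one, which is how the prohibition on testing into compliance is enforced in practice.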
Don’t skip the data layer. Conduct a sample integrity investigation on the records: verify time stamps, file checksums, and e-signatures; review 21 CFR Part 11 audit trails for edits, late entries, and reprocessing; and confirm ALCOA+ data integrity behaviors (contemporaneous recording, no orphan data). When outside labs are involved, demand the same artifacts; your oversight cannot end at the firewall.
Every investigation must end with a documented impact assessment. For patient safety, verify whether any critical value notification windows were compromised and whether treating clinicians were informed. For data reliability, state whether results are valid as-is, require qualification (e.g., “interpreted with caution”), or must be invalidated and replaced via re-draw/re-test. For regulatory exposure, record whether reporting obligations are triggered (e.g., protocol deviation to the sponsor, potential inspection disclosure). Close with clear next steps: corrective actions, preventive actions, and temporary controls while CAPA is executed.
Make the call: when to re-draw, when to re-test, and when to stand by the original
Great laboratories decide consistently because the rules are written before emotions run high. Codify your clinical trial re-draw policy with criteria staff can apply in minutes:
- Identity uncertain: label mismatch, unreadable code, or a provable chain of custody break → re-draw mandatory.
- Stability exceeded: cumulative time/temperature budget surpassed and no scientific justification to accept data → re-draw preferred; consider re-test only if another aliquot with preserved integrity exists.
- Insufficient volume/incorrect tube: if partial analysis invalidates the endpoint or reflex testing cannot be performed → re-draw as per protocol with documented subject safety consideration.
- Analytical anomaly with preserved pre-analytical chain: run-specific issue (carryover, calibration failure) → controlled re-test according to the validated method SOP (re-injection, re-extraction, or full re-run).
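The four criteria above collapse into a small decision function (a sketch; the parameter names are illustrative and the authoritative rules live in the protocol and SOPs):

```python
def redraw_decision(identity_confirmed: bool,
                    stability_ok: bool,
                    preserved_aliquot: bool,
                    analytical_issue_only: bool) -> str:
    """Apply the written re-draw criteria in priority order:
    identity first, stability second, analytical anomalies last."""
    if not identity_confirmed:
        return "re-draw mandatory"
    if not stability_ok:
        # Re-test only if another aliquot with preserved integrity exists.
        return "re-test preserved aliquot" if preserved_aliquot else "re-draw preferred"
    if analytical_issue_only:
        return "controlled re-test per method SOP"
    return "stand by original result"
```

The priority order matters: an identity failure ends the discussion regardless of how well the sample was handled afterwards.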
Spell out specimen rejection criteria at intake and at bench: hemolysis above acceptance for the assay, gross lipemia (where method is sensitive), wrong anticoagulant, broken seals, thawed “frozen” specimens, condensation-damaged labels, or missing requisition elements that block interpretation. If you reject, record reason codes, photos, and who made the call; then trigger a re-draw request with transport guidance that minimizes repeat failure (e.g., field-ready kits, clearer IFUs).
Design re-tests to answer a question, not to “get a better number.” Pre-define the re-test ladder for each method: when is re-injection allowed? when is re-extraction mandatory? what constitutes a valid repeat (same sample preparation vs fresh aliquot)? which acceptance rules apply (e.g., both results within total error; or second run supersedes if QC passes and investigation indicates the first was compromised)? For quantitative bioanalysis, align re-test logic with validation claims (matrix effects, recovery, stability) and with incurred sample reanalysis principles; for clinical chemistry, link to reflex/confirmatory pathways and delta checks.
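One common acceptance rule, both results within total allowable error, can be pre-specified numerically. A sketch, assuming agreement is judged as percent difference against the mean (whether it is the mean or the first result, and the limit itself, come from your validation):

```python
def repeats_agree(first: float, repeat: float, total_error_pct: float) -> bool:
    """True if the two results differ by no more than the method's total
    allowable error, expressed here as a percent of their mean."""
    mean = (first + repeat) / 2
    if mean == 0:
        return first == repeat
    return abs(first - repeat) / abs(mean) * 100 <= total_error_pct
```

When the repeats do not agree, the rule set, not the analyst, decides whether the second run supersedes or a third pre-defined step applies.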
In decentralized or home-health settings, re-draws carry cost and burden. Plan mitigation up front: better kits and labels, mobile phlebotomy training, controlled pack-out checklists, and contingency labels to avoid handwriting. When remote logistics delay courier pickup, your temperature excursion assessment must include pre-pickup room-temperature exposure and the actual shipper hold profile. A crisp stability budget and alternate routes (local lab for screening, central for endpoints) often save subjects an extra needle stick.
Remember proportionality. Not every deviation demands a re-draw or re-test. Where identity is confirmed, stability is defended, and the analytical issue is explainable with tight evidence, stand by the original result and document why. Over-correction harms timelines and subjects; disciplined OOS/OOT investigations separate real risk from noise and keep quality where it belongs—scientific and operational, not performative.
Close the loop: CAPA, trending, vendor oversight, and global alignment
Deviations teach; CAPA proves you learned. Write corrective actions that fix the proximate cause (e.g., revise the IFU photo; add a seal-through-zip visual; change the wash program), and preventive actions that reduce recurrence (training, supplier change, poka-yoke fixtures). Measure CAPA effectiveness with numbers, not adjectives: acceptance failure rate, repeat deviation rate by site, median time to critical value notification, logger excursion rate, and query aging. Close CAPA only after the trend holds for a defined window (e.g., 60–90 days). File everything—deviation, investigation, documented impact assessment, CAPA plan, and effectiveness check—in your TMF/QMS so inspection-readiness evidence is one click away.
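Effectiveness checks are just rates over a window. A sketch of the repeat-deviation metric (record fields are hypothetical):

```python
from datetime import date, timedelta

def repeat_deviation_rate(deviations, category, window_days, as_of):
    """Share of deviations in the effectiveness window that fall in the
    CAPA's root-cause category; close the CAPA only when this trend holds."""
    cutoff = as_of - timedelta(days=window_days)
    in_window = [d for d in deviations if cutoff <= d["date"] <= as_of]
    if not in_window:
        return 0.0
    repeats = sum(1 for d in in_window if d["category"] == category)
    return repeats / len(in_window)
```

Computing the same metric before and after the CAPA, over the same window length, is what turns “we trained everyone” into evidence.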
Trend across sources, not silos. Combine internal deviations, courier exceptions, intake rejections, QC failures, CLIA/CAP nonconformance logs (if applicable), and vendor audit findings. Visualize by process step and root-cause category to target high-leverage fixes. When “label unreadable after condensation” appears across programs, the solution is a label/laminate change, not more emails. When “carryover on instrument X” repeats, the fix may be hardware service and method edits, not admonitions.
Vendors extend your risk surface. Bake deviation and CAPA clauses into contracts with central labs, bioanalytical CROs, and couriers: what must be reported, how fast, with what evidence, and who can authorize re-draws or re-tests. During oversight, review ALCOA+ data integrity behavior, 21 CFR Part 11 audit trails, and GCLP compliance artifacts. Require partners to show their own trending and CAPA effectiveness metrics. If a partner’s practices diverge from your SOPs, align them or document controlled differences with risk justifications.
Train for reality. Short, scenario-based refreshers beat long slide decks: “logger missing,” “delta check triggers repeat,” “dry ice delayed,” “illegible label,” “wrong anticoagulant,” “chain of custody gap.” Teach staff how to fill a deviation form with just the facts, how to attach photos and files, and how to avoid opinionated prose. Reinforce that logging a small, honest deviation is a sign of maturity; hiding it is a firing offense in regulated work.
Ground your program in globally recognized anchors to keep expectations aligned across the USA, UK, EU, Japan, and Australia. Use one authoritative link per body in SOPs and governance packs so teams land on primary guidance: the U.S. Food & Drug Administration (FDA), the European Medicines Agency (EMA), the International Council for Harmonisation (ICH), the World Health Organization (WHO), Japan’s PMDA, and Australia’s TGA. With these anchors, your FDA-facing deviation management posture will read as competent and consistent to inspectors everywhere.
Operational checklist:
- Publish clear laboratory deviations SOPs with triage, containment, and logging rules; enforce ALCOA+ data integrity.
- Run disciplined 5-Why/Ishikawa root cause analysis and close every case with a documented impact assessment.
- Standardize specimen rejection criteria, the clinical trial re-draw policy, and validated re-test strategies by method.
- Operationalize OOS/OOT investigations and forbid testing into compliance; protect critical value notification.
- Qualify vendors; verify 21 CFR Part 11 audit trails, GCLP compliance, and partner trending.
- Trend KRIs and prove CAPA effectiveness metrics; keep inspection-readiness evidence current.