Published on 15/11/2025
Adverse Events of Special Interest (AESIs)—From Medical Logic to Repeatable, Audit-Ready Practice
Purpose, Scope, and the Global Compliance Frame
Adverse Events of Special Interest (AESIs) are pre-specified safety topics that demand enhanced capture, faster review, and consistent adjudication because they could materially alter the risk–benefit profile or the interpretability of endpoints. AESIs are not “all serious events”; they are a curated set of medically plausible risks derived from product class, mechanism, nonclinical signals, prior clinical experience, and context (population, route, procedure). When well designed, AESIs shorten the time from signal to decision.
Principles and harmonization. A proportionate, quality-by-design posture—controlling the steps that protect participants and endpoint integrity—is consistent with concepts discussed by the International Council for Harmonisation. Operational expectations for investigator responsibilities, adverse event assessment, and trustworthy records are explained in educational materials provided through FDA clinical trial safety resources. In Europe and the UK, pharmacovigilance practices and expedited pathways are described in publicly available content from the European Medicines Agency. Ethical guardrails—respect, fairness, and accessible communication—are repeatedly emphasized in World Health Organization research ethics guidance. For Japan and Australia, keep terminology and artifacts consistent with the orientation shared by PMDA and the Therapeutic Goods Administration so that AESI definitions and workflows translate cleanly across jurisdictions.
Why AESIs exist. AESIs give sponsors a disciplined way to (1) standardize definitions for risks that are particularly relevant to the product or indication, (2) ensure that case capture and follow-up make clinical sense for that risk, (3) accelerate adjudication and appropriate unblinding for safety when warranted, (4) enable consistent trending and signal detection, and (5) pre-commit to actions (e.g., pausing enrollment, enhanced monitoring) when certain patterns occur. AESIs sit between single-case vigilance and aggregate signal management; they link day-to-day case handling to a study’s Safety Monitoring Plan, Risk Management Plan, and statistical analysis of safety endpoints.
What an AESI is—and is not. An AESI is a concept + definition + workflow. Concepts might include immune-mediated events, drug-induced liver injury (DILI), anaphylaxis, QTc prolongation, venous thromboembolism (VTE), serious bleeding, neurologic demyelination, endocrinopathies (adrenal crisis, thyroiditis), ophthalmic inflammation, pancreatitis, or device-specific concerns (thermal injury, electrical shocks, software alarms that could lead to harm if repeated). An AESI is not just a MedDRA term list; it includes thresholds, labs, imaging, and decision rules that remove ambiguity. Equally, not every interesting event is an AESI; the list must be short enough to run fast and deep enough to be clinically meaningful.
ALCOA++ as the backbone. Every AESI artifact—definition tables, adjudication outcomes, lab trends, ECGs, imaging, device logs—must be attributable, legible, contemporaneous, original, accurate, complete, consistent, enduring, and available. Practically, that means immutable timestamps, version-locked definitions, one record-of-record per attachment, and a five-minute retrieval drill from any dashboard tile to the underlying evidence. If a reviewer cannot reproduce the AESI decision chain quickly, the system is not inspection-ready.
Blinding and independence. AESIs often pressure blinding because rapid action may be necessary. Protect the study with a minimal-disclosure unblinded safety unit and clear firewalls. The default is blinded review; unblinding occurs only when a predefined AESI rule says it is necessary to protect participants or preserve endpoint integrity. Record who learned what and why.
Designing AESIs—From Medical Logic to Reproducible Definitions
Start with the risk inventory. Pull from mechanism (on-target and off-target), class effects, nonclinical signals, prior clinical data, and disease-/procedure-specific hazards. Rank risks by clinical impact and plausibility, then pick a small set of AESIs (often 6–12) that would materially change clinical practice or program decisions. For each AESI, define objectives, case-finding methods, and actions.
Turn concepts into crisp definitions. Definitions must be diagnostic, not poetic. Examples:
- Anaphylaxis. Acute onset with skin/mucosal involvement and either respiratory compromise or hypotension, or two-system involvement after exposure; require tryptase within 1–2 hours when feasible; capture epinephrine use and response.
- DILI (Hy’s-law focus). ALT or AST ≥3× ULN and total bilirubin ≥2× ULN without cholestasis (ALP <2× ULN); require onset date/time, baseline labs, viral serologies, imaging, and alternative etiologies; define “possible” vs “probable” tiers.
- QTc prolongation. QTcF ≥500 ms or an increase of ≥60 ms from baseline, measured at a heart rate between 50 and 100 bpm with a prespecified method; require central over-read and electrolytes within 24 hours.
- VTE. Objectively confirmed DVT (compression ultrasound) or PE (CTPA/VQ scan); capture provoking factors (surgery, immobility), D-dimer, anticoagulant start and dose.
- Serious bleeding. Fatal bleeding, bleeding into a critical site, or bleeding with a ≥2 g/dL hemoglobin drop or transfusion of ≥2 units; record anticoagulant/antiplatelet exposure and procedures.
- Immune-mediated AESIs. Define organ-specific patterns (colitis, hepatitis, endocrinopathies) with grade thresholds and steroid initiation criteria.
- Device-specific AESIs. Thermal injury (depth/size), electrical shock with arrhythmia, software alarm patterns; require device model/firmware, environment, user role, and returned-unit logistics.
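Definitions written at this level of precision can be encoded as executable rules and wired into edit checks. The sketch below is illustrative only: it mirrors the DILI (Hy's-law) and QTc thresholds stated above, with hypothetical parameter names; a real system would pull these values from the eCRF and central lab.

```python
# Illustrative sketch: AESI thresholds as executable rules.
# Parameter names (alt_uln, qtcf_ms, ...) are hypothetical, not from any real eCRF.

def meets_hys_law(alt_uln: float, ast_uln: float,
                  tbili_uln: float, alp_uln: float) -> bool:
    """DILI (Hy's-law focus): ALT or AST >=3x ULN and total bilirubin >=2x ULN,
    without cholestasis (ALP < 2x ULN). Inputs are multiples of ULN."""
    return (alt_uln >= 3 or ast_uln >= 3) and tbili_uln >= 2 and alp_uln < 2

def qtc_prolongation(qtcf_ms: float, baseline_qtcf_ms: float) -> bool:
    """QTcF >= 500 ms, or an increase of >= 60 ms from baseline."""
    return qtcf_ms >= 500 or (qtcf_ms - baseline_qtcf_ms) >= 60

# ALT 4x ULN, bilirubin 2.5x ULN, ALP 1.2x ULN -> qualifies as a Hy's-law case
print(meets_hys_law(4.0, 1.0, 2.5, 1.2))   # True
# QTcF 480 ms with baseline 410 ms -> delta 70 ms, qualifies
print(qtc_prolongation(480, 410))          # True
```

Encoding rules this way also makes the definition version-lockable: the rule file changes only through change control, and old cases can be re-checked against the exact version in force at the time.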
Code plus context. Provide a curated list of MedDRA Preferred Terms (PTs) and, where appropriate, Standardised MedDRA Queries (SMQs) to guide coding and signal analytics. But definitions must also specify non-terminology requirements—lab thresholds, imaging modalities, ECG methods, device logs, or human-factors evidence. State whether one event maps to one primary PT (default) and how overlapping diagnoses are handled.
Expectedness and expedited interfaces. For each AESI, pre-map expectedness to the Reference Safety Information (RSI)/label categories. Clarify how a qualifying AESI intersects with expedited rules: “If serious + related + unexpected → expedited transmission,” and state which documents (ECG PDFs, lab panels) are required at first transmission vs follow-up. This prevents last-minute debates and missed clocks.
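The "serious + related + unexpected" rule is simple enough to express directly, which is exactly why it should be pre-mapped rather than debated case by case. A minimal sketch, assuming boolean flags already assessed on the case:

```python
def requires_expedited_report(serious: bool, related: bool, expected: bool) -> bool:
    """Expedited transmission applies when an AESI is serious, related,
    and unexpected against the current RSI/label version."""
    return serious and related and not expected

# Serious, related, unexpected -> expedited clock starts
print(requires_expedited_report(True, True, False))   # True
# Serious, related, but expected per RSI -> no expedited report
print(requires_expedited_report(True, True, True))    # False
```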
Adjudication charter. Establish a small, independent panel (or single safety physician for lower-complexity AESIs) with a charter that defines intake criteria, evidence packets, permissible unblinding, clinical questions, voting rules, expected turnaround time, and output categories (confirmed, probable, possible, not a case, insufficient data). Capture the meaning of approval with each signature (“case meets Hy’s-law; alternative etiologies excluded”).
Children, pregnancy, comorbidity, and diversity. AESIs can behave differently in pediatrics, pregnancy/lactation, renal/hepatic impairment, or under-represented populations. Add subgroup rules (e.g., age-adjusted QTc limits; pregnancy-specific DILI differential with cholestasis of pregnancy). Require interpreters and culturally appropriate materials where language and literacy may affect symptom reporting.
Keep the list short—and living. AESIs should be rare enough to warrant extra work. Re-evaluate definitions after the first 20–30 cases or at prespecified timepoints; update with a “what changed and why” memo and training addendum. Version-lock old cases; never overwrite history.
Operationalizing AESIs—Capture, Queries, Adjudication, and Unblinding for Safety
Design the forms to collect the right evidence the first time. eCRF modules should mirror the definition: auto-pull latest labs; enforce units and reference ranges; require ECG method/rate for QTc; require imaging modality for VTE; prompt for tryptase timing for anaphylaxis; and capture device model/firmware, alarm texts, and returned-unit IDs for device AESIs. Require clock times (not just dates) for onset, dose, ECG, and labs to support temporality. Where AESIs are symptom-driven (e.g., immune-related colitis), add targeted ePRO questions with plain-language descriptors.
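One way to "mirror the definition" in the eCRF is a per-AESI required-field map that fires a query at data entry. The field names and AESI keys below are hypothetical placeholders for illustration:

```python
# Hypothetical required-evidence map per AESI type; a real eCRF build would
# derive this from the version-locked definition table.
REQUIRED_FIELDS = {
    "anaphylaxis": ["onset_datetime", "epinephrine_given", "tryptase_datetime"],
    "qtc":         ["onset_datetime", "ecg_method", "heart_rate", "qtcf_ms"],
    "vte":         ["onset_datetime", "imaging_modality"],
}

def missing_fields(aesi_type: str, record: dict) -> list:
    """Return required fields that are absent or blank, so a targeted
    query can fire at first entry instead of weeks later."""
    required = REQUIRED_FIELDS.get(aesi_type, [])
    return [f for f in required if not record.get(f)]

rec = {"onset_datetime": "2025-03-02T14:35", "ecg_method": "12-lead, central"}
print(missing_fields("qtc", rec))  # ['heart_rate', 'qtcf_ms']
```

Note the clock time in `onset_datetime`: requiring date-times rather than dates is what makes temporality assessable later.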
Targeted queries beat fishing expeditions. Build a query catalog per AESI with short checklists: DILI (alcohol, acetaminophen, viral panels, autoimmune markers, RUQ ultrasound), QTc (electrolytes, drugs that prolong QT, repeat ECG with same method), VTE (imaging confirmation, provoking factor checklist), bleeding (Hb trend, transfusion, site), anaphylaxis (epinephrine use, tryptase). Each query states why it matters and the timeline for reply.
Coding and narrative discipline. Generate narrative shells from structured fields so the story and codes match. Require an explicit one-sentence causality rationale and an expectedness citation (RSI/label version/date). Store PDFs and device logs as single records of record linked from the case; avoid duplicates. Version additional information with a brief header: “Added ECG over-read; causality unchanged; expectedness unchanged.”
Adjudication workflow that moves fast. Cases meeting AESI triggers should auto-route to the adjudicator queue with a defined service level (e.g., 48–72 hours). The packet includes timeline, labs/imaging/ECG, device logs, and blinded treatment identifiers. Adjudicators record a category and rationale, plus recommendations (continue, hold, discontinue; dose modify; add monitoring). Where adjudication recommends unblinding for participant safety, the minimal-disclosure path is executed by the unblinded unit; blinded teams see only the clinical recommendation.
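The service level itself can be monitored mechanically. A minimal sketch, assuming each queued case carries an intake timestamp and using the 72-hour upper bound of the service level named above:

```python
from datetime import datetime, timedelta

ADJUDICATION_SLA = timedelta(hours=72)  # upper end of the 48-72 hour service level

def is_overdue(intake: datetime, now: datetime) -> bool:
    """True when a case has sat in the adjudicator queue past the SLA."""
    return now - intake > ADJUDICATION_SLA

def escalation_queue(cases: list, now: datetime) -> list:
    """Cases past the SLA, oldest first, feeding the red-tile auto-escalation."""
    overdue = [c for c in cases if is_overdue(c["intake"], now)]
    return sorted(overdue, key=lambda c: c["intake"])

cases = [
    {"case_id": "A-001", "intake": datetime(2025, 1, 1, 8, 0)},
    {"case_id": "A-002", "intake": datetime(2025, 1, 3, 8, 0)},
]
now = datetime(2025, 1, 10, 12, 0)
print([c["case_id"] for c in escalation_queue(cases, now)])  # ['A-001', 'A-002']
```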
Interfaces and reconciliation. Reconcile AESI cases between the safety database and EDC: onset date, PT, seriousness, relatedness, expectedness, adjudication outcome, and action taken. Device portfolios must also reconcile returned-unit tracking and engineering conclusions to the safety file. Discrepancies are closed with audit-trailed notes.
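Reconciliation over a fixed field list is straightforward to sketch. The field names below are hypothetical stand-ins for the reconciliation items listed above; each discrepancy would become an audit-trailed query in practice:

```python
# Fields reconciled between the safety database and EDC, per the list above.
# Key names are hypothetical placeholders for illustration.
RECON_FIELDS = ["onset_date", "pt", "serious", "related",
                "expected", "adjudication", "action_taken"]

def reconcile(safety_case: dict, edc_case: dict) -> dict:
    """Field-by-field comparison; returns {field: (safety_value, edc_value)}
    for every mismatch, so each discrepancy can be closed with a note."""
    return {f: (safety_case.get(f), edc_case.get(f))
            for f in RECON_FIELDS
            if safety_case.get(f) != edc_case.get(f)}

safety = {"onset_date": "2025-02-01", "pt": "Hepatitis", "serious": True,
          "related": True, "expected": False, "adjudication": "probable",
          "action_taken": "drug held"}
edc = dict(safety, pt="Hepatic failure")  # coded differently in EDC
print(reconcile(safety, edc))  # {'pt': ('Hepatitis', 'Hepatic failure')}
```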
Decentralized and time-sensitive logistics. Tele-visits require identity verification and synchronized clocks (local time and UTC). Labs and ECGs performed locally must be uploaded promptly with method metadata; build turnaround expectations into site agreements. Couriers of returned devices should use immutable logs; time drift undermines plausibility and root-cause analysis.
Training that changes behavior. Provide visual quick guides (e.g., Hy’s-law flow, anaphylaxis criteria), short case vignettes that differ by one fact, and a “three-click” knowledge check during SIV and refreshers after any definition update. Make clear that quality beats speed, but both are required; timeboxes exist to protect participants and timelines.
Privacy and respect. AESI modules often include sensitive information (pregnancy tests, HIV/hepatitis status, substance use). Store the minimum necessary data, document consent, and mask identifiers per local rules. Maintain a respectful tone in queries and narratives; plain-language explanations help participants understand next steps and why tests are needed.
Governance, Dashboards, KRIs/QTLs, and a Ready-to-Use Checklist
Ownership with the meaning of approval. Keep decision rights small and named: an AESI Medical Lead (accountable), Safety Operations (routing and timelines), Data Management (reconciliation), Device Engineer where applicable, and Quality (ALCOA++/traceability). Each signature records its meaning—“definition applied,” “evidence complete,” “expectedness checked,” “ALCOA++ verified.” Ambiguous signatures invite inspection questions.
Dashboards that drive action. Show AESI volumes by type; awareness-to-validity time; intake-to-adjudication time; narrative-field consistency rate; proportion with complete evidence packets at first pass; expedited clock burn-down for serious related unexpected AESIs; device returned-unit turnaround; and five-minute retrieval pass rate. Every tile must click to the evidence pack; if a number cannot click through, it is not inspection-ready.
Key Risk Indicators (KRIs) and Quality Tolerance Limits (QTLs). KRIs: spike in “unassessable” causality; missing RSI version/date in expedited AESIs; persistent narrative-code mismatches; overdue adjudications; device malfunction AESIs without recurrence-risk assessment; ECGs without method metadata; DILI panels incomplete at lock. Convert the highest-impact KRIs to QTLs, for example: “≥10% of AESI cases locked without mandatory evidence fields,” “≥5% of expedited AESIs missing explicit expectedness reference/version,” or “≥72-hour adjudication delay for ≥3 cases in a week.” Crossing a QTL triggers a documented review, containment, and a due-dated corrective plan.
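Once QTLs are stated numerically, breach detection is a table lookup. A minimal sketch using the illustrative thresholds from the text (metric names are hypothetical):

```python
# Illustrative QTLs taken from the examples above; metric names are hypothetical.
QTL_LIMITS = {
    "pct_locked_missing_evidence": 10.0,      # >=10% locked without mandatory fields
    "pct_expedited_missing_expectedness": 5.0,  # >=5% missing RSI reference/version
    "overdue_adjudications_week": 3,          # >=3 cases past 72 h in a week
}

def qtl_breaches(metrics: dict) -> list:
    """Return the names of every QTL whose limit is met or exceeded;
    each breach triggers documented review, containment, and a corrective plan."""
    return [name for name, limit in QTL_LIMITS.items()
            if metrics.get(name, 0) >= limit]

print(qtl_breaches({"pct_locked_missing_evidence": 12.0}))  # one breach
print(qtl_breaches({"overdue_adjudications_week": 2}))      # no breach
```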
Aggregate use of AESIs. AESIs feed signal management and periodic reports. Configure SMQ-based surveillance for the AESI families and align tables in interim analyses and the final CSR. Track cumulative incidence, exposure-adjusted rates, severity distributions, time-to-onset, and dechallenge/rechallenge outcomes. For immune-mediated AESIs, show steroid initiation, taper success, and recurrence after rechallenge; for DILI, show Hy’s-law flags and adjudication outcomes; for QTc, show central over-read distributions and electrolyte status at onset.
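Of the aggregate measures above, the exposure-adjusted event rate is the one most often miscomputed; the arithmetic is simply events divided by total exposure time, scaled to a convention such as per 100 patient-years. A minimal sketch with hypothetical numbers:

```python
def exposure_adjusted_rate(n_events: int, patient_years: float,
                           per: float = 100.0) -> float:
    """Exposure-adjusted event rate: events per `per` patient-years."""
    return n_events / patient_years * per

# Hypothetical: 12 confirmed VTE cases over 480 patient-years of exposure
print(exposure_adjusted_rate(12, 480.0))  # ~2.5 per 100 patient-years
```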
Common pitfalls—and durable fixes.
- Definitions that do not drive evidence collection. Fix by wiring definitions into eCRFs with mandatory fields and automated unit checks.
- Over-long AESI lists. Fix by pruning to high-impact risks and adding a living change-control process.
- Narratives that contradict coded fields. Fix with narrative shells generated from structured data and a pre-lock consistency check.
- Adjudication bottlenecks. Fix with service-level commitments, backup adjudicators, and a red tile that auto-escalates at 48–72 hours.
- Device AESIs without engineering closure. Fix with a returned-unit placeholder at intake and a 24-hour SLA for preliminary disposition.
- Unnecessary unblinding. Fix with a minimal-disclosure path and explicit rules stating when safety requires unblinding.
30–60–90-day plan. Days 1–30: finalize AESI list and crisp definitions; publish coding lists/SMQs; build eCRF modules; define adjudication charter; wire dashboards to artifacts; set KRIs/QTLs; train on quick guides. Days 31–60: pilot in two countries; run weekend drills for DILI, anaphylaxis, and QTc workflows; test adjudication turnaround; tune query catalogs; rehearse five-minute retrieval. Days 61–90: scale to all sites; enforce red tile escalations; integrate device returned-unit logistics; start monthly case rounds and quarterly definition reviews; close CAPA with design fixes, not reminders.
Ready-to-use AESI checklist (paste into your Safety Monitoring Plan/SOP).
- Short, ranked AESI list approved; definitions include PT/SMQs and non-terminology criteria (labs, imaging, ECG method, device logs).
- eCRF modules mirror definitions; mandatory fields and unit checks active; clock times captured for onset, doses, tests.
- Targeted query catalog per AESI; due dates and rationale included; translation support ready for site documentation.
- Adjudication charter in force; service levels defined; minimal-disclosure unblinding path documented; signatures state meaning of approval.
- Narrative shells generated from structured fields; one-sentence causality rationale; expectedness citation with RSI/label version/date.
- Safety–EDC reconciliation scheduled (onset, PT, seriousness, relatedness, expectedness, adjudication, action taken); discrepancies closed with audit trails.
- Device AESIs include model/firmware, alarm text, environment, human-factors notes, and returned-unit tracking with engineering disposition.
- Dashboards wired to artifacts; KRIs/QTLs monitored; auto-escalation for overdue adjudications; five-minute retrieval drill passed monthly.
- Aggregate views configured (incidence, EAERs, time-to-onset, dechallenge/rechallenge, steroid use for immune AESIs, Hy’s-law flags for DILI, QTc distributions).
- Change control active; “what changed and why” memos filed for definition updates; training addenda issued.
Bottom line. AESIs work when they are engineered as a small, disciplined system: a short list of high-impact risks, crisp definitions that drive evidence capture, fast adjudication with minimal-disclosure unblinding when necessary, and dashboards that click through to proof. Build that system once—definitions, forms, routing, KRIs/QTLs, and retrieval drills—and you will protect participants, produce clean aggregate outputs, and be ready to show why every decision made clinical and regulatory sense across regions and study types.