
  • AI-Driven Prediction of Adverse Drug Reactions: Clinical Pattern Analysis and Risk Assessment

  • 1Pharm.D (Intern), SRM College of Pharmacy (SRM Institute of Science & Technology), Chennai, Tamil Nadu

    2Pharm.D (Intern), Sir C R Reddy College of Pharmaceutical Sciences, Eluru, Andhra Pradesh

    3B.Sc., Biotechnology, GD Goenka University, Gurugram, Haryana

    4Pharm.D (Intern), Parul University, Vadodara, Gujarat

    5M.Sc., Organic Chemistry, D.B.J College (Autonomous), Chiplun, Maharashtra

Abstract

Adverse Drug Reactions (ADRs) represent a critical global patient safety challenge, accounting for 5–10% of all hospital admissions and an estimated 197,000 deaths annually within the European Union alone. Traditional pharmacovigilance systems relying on spontaneous reporting are beset by chronic underreporting rates exceeding 90%, temporal delays, and limited ability to identify complex multi-factorial risk patterns. Artificial intelligence (AI) offers transformative potential for proactive, high-accuracy ADR prediction. This paper provides a comprehensive synthesis of AI-driven methodologies—including classical machine learning, deep learning architectures, natural language processing (NLP), graph neural networks (GNNs), and large language models (LLMs)—applied to ADR prediction, clinical pattern analysis, and patient risk stratification. We conducted a narrative and systematic synthesis of peer-reviewed literature from PubMed, Embase, IEEE Xplore, and Web of Science (2018–2025), focusing on studies employing AI models for ADR prediction from electronic health records (EHRs), spontaneous reporting databases (FAERS, VigiBase), clinical notes, and multi-omics data. Random forest models demonstrate mean AUC values of 0.72–0.94 for ADR prediction from EHRs. Deep learning architectures (CNNs, LSTMs) achieve accuracies of 85–92%. BERT-based language models applied to clinical notes report macro F1 scores of 61–68%. GNN-based frameworks (PreciseADR) outperform state-of-the-art baselines by 3.2% in AUC. Explainable AI (XAI) techniques, including SHAP and LIME, enhance clinical interpretability. Key risk factors include polypharmacy, length of hospital stay, admission type, renal dysfunction, and pharmacogenomic variants. AI-driven ADR prediction represents a paradigm shift from reactive pharmacovigilance to proactive risk management.
Integration of multimodal data, explainable AI, and pharmacogenomics within clinical decision support systems offers the most promising pathway toward personalized, safe pharmacotherapy. Standardization of datasets, regulatory frameworks, and prospective validation studies are urgently needed.

Keywords

Adverse Drug Reactions, Machine Learning, Deep Learning, Pharmacovigilance, Natural Language Processing, Graph Neural Networks, Explainable AI

Introduction

Adverse Drug Reactions (ADRs) are defined by the World Health Organization (WHO) as "noxious and unintended responses to a drug at doses normally used in humans" (WHO, 1972). They represent one of the most significant yet preventable burdens on global healthcare systems. Epidemiological evidence consistently positions ADRs between the fourth and sixth leading causes of mortality worldwide, comparable in impact to major diseases such as heart disease, cancer, and stroke [Backman et al., 2023]. The scale of ADR-related harm is staggering. In European Union member states, ADRs cause approximately 197,000 deaths annually, with median hospital admission rates of 3.5% and in-hospital ADR occurrence rates of 10.1% across prospective epidemiological studies [Bouvy et al., 2015]. In the United States, ADRs account for 5–10% of all hospital admissions, with an overall incidence of serious ADRs reaching 6.7% and fatal ADRs at 0.32% among hospitalized patients [Seo et al., 2023]. Globally, approximately 16.9% of hospitalized patients experience an ADR during their stay [Rothschild et al., 2010]. The economic burden compounds this human toll—direct costs include prolonged hospitalization, additional interventions, and emergency visits, while indirect costs encompass lost productivity and long-term disability.

Despite this recognized burden, conventional pharmacovigilance systems are fundamentally reactive and structurally limited. The Yellow Card Scheme (UK), MedWatch (USA), and WHO VigiBase represent passive, voluntary reporting systems characterized by severe underreporting—estimated at over 90% in most settings—and significant temporal gaps between ADR occurrence and signal detection [Al Meslamani, 2023]. Signal detection algorithms such as the Proportional Reporting Ratio (PRR) and Reporting Odds Ratio (ROR) offer statistical screening but cannot contextualize individual patient risk profiles or predict reactions prior to drug exposure.
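For concreteness, the two disproportionality statistics just mentioned can be computed from a 2×2 contingency table of spontaneous reports; the counts below are invented for illustration:

```python
# Disproportionality statistics from a 2x2 contingency table of reports:
#   a = reports: drug of interest AND event of interest
#   b = reports: drug of interest, other events
#   c = reports: other drugs AND event of interest
#   d = reports: other drugs, other events

def prr(a, b, c, d):
    """Proportional Reporting Ratio."""
    return (a / (a + b)) / (c / (c + d))

def ror(a, b, c, d):
    """Reporting Odds Ratio."""
    return (a / b) / (c / d)

# Invented counts: the event is reported about 4x more often with the drug.
a, b, c, d = 40, 960, 100, 9900
print(round(prr(a, b, c, d), 3))  # 4.0   = (40/1000) / (100/10000)
print(round(ror(a, b, c, d), 3))  # 4.125 = (40/960) / (100/9900)
```

Values above a conventional threshold (e.g. PRR ≥ 2 with sufficient report counts) flag a signal for review, but, as noted, neither statistic conditions on individual patient characteristics.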
The emergence of artificial intelligence and machine learning technologies offers a transformative opportunity to reconceptualize pharmacovigilance from a reactive surveillance paradigm to a proactive, predictive framework. The exponential growth of structured and unstructured clinical data—Electronic Health Records (EHRs), genomic databases, spontaneous reporting systems, social media, and wearable sensor streams—creates an unprecedented substrate for AI-driven risk modeling. Machine learning algorithms can identify subtle, high-dimensional patterns across thousands of clinical variables simultaneously, uncovering risk signatures that are invisible to conventional statistical approaches. Recent years have witnessed rapid methodological diversification in AI-based ADR research. Classical supervised learning models (random forests, gradient boosting, support vector machines) have been succeeded by deep learning architectures employing convolutional and recurrent neural networks, transformer-based language models including BERT and GPT variants, and graph neural networks capable of modeling complex drug-drug and drug-patient interaction networks [Hu et al., 2024; Li et al., 2024; Gao et al., 2024]. Simultaneously, explainable AI (XAI) frameworks—particularly SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations)—are bridging the gap between model performance and clinical interpretability [Salih et al., 2025]. This paper provides a comprehensive synthesis of the state-of-the-art in AI-driven ADR prediction, organized around five thematic pillars: (1) the epidemiological and clinical burden of ADRs; (2) classical ML approaches and their performance benchmarks; (3) deep learning and NLP methodologies; (4) graph neural network architectures for relational drug-patient modeling; and (5) XAI, pharmacogenomics, and translational integration into clinical decision support systems. 
We conclude with a critical appraisal of current limitations and a research agenda for the next decade.

Epidemiology and Clinical Burden of ADRs

The global burden of ADRs constitutes a major public health emergency that has persisted despite decades of pharmacovigilance effort. Epidemiological evidence accumulated over the past three decades reveals a consistent pattern of high incidence, preventability, and healthcare system impact across diverse healthcare settings worldwide. In developed healthcare systems, the median prevalence of ADR-related hospitalization ranges from 3.3% to 11.0% (median 6.3%), while in developing countries the corresponding range is 1.1% to 16.9% (median 5.5%) [Kongkaew et al., 2016]. The comparability of these figures across high- and low-income settings underscores the universality of the ADR challenge, irrespective of healthcare system maturity. Critically, the proportion of preventable ADRs in developed countries reaches 71.7% (IQR 62.3–80.0%), indicating that the majority of ADR-related harm is potentially avoidable [Kongkaew et al., 2016].

Age-related vulnerability represents a particularly significant dimension of ADR epidemiology. Elderly patients (aged ≥65 years) are disproportionately represented in ADR incidence data due to multiple intersecting mechanisms: age-related pharmacokinetic changes (reduced renal clearance, altered hepatic metabolism, decreased protein binding), higher prevalence of polypharmacy, increased pharmacodynamic sensitivity, and greater comorbidity burden. A U.S. study estimates approximately 22% of adults aged 40–79 take five or more medications concurrently—a pattern strongly associated with heightened ADR risk [Marques-Garcia et al., 2026]. The economic dimensions of ADR burden are equally substantial. Direct costs attributable to ADR-related hospitalizations, extended lengths of stay, and additional diagnostic and therapeutic interventions impose enormous pressure on healthcare budgets. A comprehensive cost-of-illness analysis indicates that drug-related morbidity and mortality represent billions of dollars in annual direct healthcare costs in the United States alone. Additional indirect costs arise from lost productivity, disability, and premature mortality.

Classification Frameworks for ADRs

Systematic classification of ADRs is foundational to both clinical management and AI-based prediction modeling. The most widely adopted taxonomic framework distinguishes ADR types by mechanism, predictability, and dose-dependency.

The Rawlins-Thompson classification divides ADRs into Type A (Augmented), representing dose-dependent, pharmacologically predictable extensions of the drug's known mechanism, and Type B (Bizarre), which are dose-independent, idiosyncratic reactions often mediated by immunological or pharmacogenomic mechanisms. Subsequent extensions introduced Types C (Chronic, related to long-term exposure), D (Delayed, emerging after treatment cessation), E (End-of-use, withdrawal reactions), and F (Failure of therapy). The DoTS (Dose, Time-course, Susceptibility) classification provides a more granular framework applicable to clinical risk stratification. For AI modeling purposes, the MedDRA (Medical Dictionary for Regulatory Activities) coding system provides the operational taxonomy through which ADRs are recorded in spontaneous reporting databases including FAERS and VigiBase. The System Organ Class (SOC) hierarchy within MedDRA enables both signal detection and supervised model training across standardized, granular adverse event categories. Table 1 summarizes the most commonly predicted ADR categories from recent machine learning studies.

Table 1. Most Commonly Predicted ADR Categories in Machine Learning Studies (2020–2025)

| ADR Category (SOC) | Predicted ADRs | ML Model Used | AUC Range | Key Reference |
|---|---|---|---|---|
| Renal/Urinary Disorders | Nephrotoxicity, AKI | Random Forest, XGBoost | 0.85–0.94 | Chiu et al., 2024 |
| Hepatobiliary Disorders | Drug-induced liver injury | Random Forest, LSTM | 0.89–0.90 | Lai et al., 2020 |
| Cardiac Disorders | Cardiac events, QT prolongation | SVM, Deep Learning | 0.78–0.88 | Barbieri et al., 2025 |
| Gastrointestinal Disorders | GI bleeding, nausea/vomiting | Random Forest, LR | 0.72–0.83 | Hu et al., 2024 |
| Hematologic Disorders | Thrombocytopenia, neutropenia | AdaBoost, RF | 0.76–0.91 | Frontiers, 2024 |
| Endocrine Disorders | Thyroid dysfunction | Random Forest, CNN | 0.80–0.87 | Lu et al., 2023 |
| Skin/Subcutaneous Disorders | Rash, Stevens-Johnson syndrome | SVM, Naive Bayes | 0.74–0.86 | FAERS Analysis, 2024 |
| Nervous System Disorders | Seizures, cognitive impairment | GNN, BERT | 0.81–0.89 | Gao et al., 2024 |

Classical Machine Learning Approaches

Random Forest and Ensemble Methods

Classical supervised machine learning methods constitute the most extensively validated category of AI approaches in ADR prediction research. A 2024 systematic review and meta-analysis by Hu et al., encompassing studies from PubMed, Web of Science, Embase, and IEEE Xplore through November 2023, identified 10 high-quality studies employing 20 distinct ML methods for predicting multiple adverse drug events from EHR data [Hu et al., 2024]. Random forest (RF) emerged as the most commonly deployed algorithm (n=9 studies), followed by AdaBoost (n=4), eXtreme Gradient Boosting/XGBoost (n=3), and Support Vector Machine/SVM (n=3). The mean area under the ROC curve (AUC) across all included studies was 0.76 (95% CI: 0.26–0.95), with a pooled estimated AUC of 0.72 (95% CI: 0.68–0.75) [Hu et al., 2024]. Notably, random forest models combined with resampling-based approaches to address class imbalance achieved the highest documented AUCs in the reviewed literature, reaching 0.9448–0.9457. The superiority of ensemble methods in ADR prediction contexts reflects several intrinsic algorithmic advantages. Random forests' capacity to handle high-dimensional EHR feature spaces (often comprising hundreds of clinical variables), their robustness to missing data, and their inherent feature importance metrics render them particularly suited to clinical prediction tasks. Gradient boosting methods including XGBoost and LightGBM further improve on base RF performance through sequential error minimization, demonstrating particular strength in structured tabular clinical data. Feature importance analyses across RF and ensemble studies consistently identify common risk determinants of ADEs: (1) length of hospital stay; (2) number of prescribed drugs (polypharmacy index); (3) admission type; (4) age and sex; (5) renal function markers (serum creatinine, eGFR); (6) hepatic function markers (ALT, AST, bilirubin); and (7) prior ADR history [Hu et al., 2024]. 
These clinically interpretable risk factors provide a foundation for translating model outputs into actionable clinical decision support.
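As an illustrative sketch of this workflow (not a reproduction of any cited study), the following fits a random forest to synthetic data with invented EHR-style feature names and ranks the model's feature importances:

```python
# Hypothetical sketch: random forest ADR classifier on synthetic data.
# Feature names are invented to mirror the risk determinants listed above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

feature_names = ["length_of_stay", "n_drugs", "admission_type", "age",
                 "sex", "serum_creatinine", "alt", "prior_adr"]

# Synthetic, imbalanced "EHR" dataset (10% positive ADR class).
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
print(f"test AUC = {auc:.2f}")

# Built-in impurity-based importances give a first-pass risk-factor ranking.
ranking = sorted(zip(feature_names, rf.feature_importances_),
                 key=lambda t: -t[1])
for name, imp in ranking[:3]:
    print(f"{name}: {imp:.3f}")
```

On real EHR data the same `feature_importances_` inspection (or, better, permutation importance) is what surfaces determinants such as length of stay and polypharmacy index.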

Support Vector Machines and Logistic Regression

Support Vector Machines (SVMs) have demonstrated consistent utility in ADR prediction tasks involving chemical structure-based features, pharmacological descriptors, and binary outcome classification. Their strength lies in effective margin maximization in high-dimensional feature spaces, rendering them suitable for molecular descriptor-based prediction of drug toxicity profiles. In clinical applications employing FAERS demographic and drug characteristic data, SVM models achieved accuracy rates of approximately 80%, compared to 78% for logistic regression and 85% for more complex CNN architectures [AI-driven pharmacovigilance, 2025]. Logistic regression, despite its comparative simplicity, remains a relevant baseline and clinical decision tool due to its inherent interpretability, calibration properties, and regulatory acceptability. In contexts where feature-outcome relationships are approximately linear and clinical transparency is paramount, logistic regression models enriched with interaction terms offer practical value. The LASSO (Least Absolute Shrinkage and Selection Operator) variant provides automatic variable selection, addressing multicollinearity in high-dimensional EHR data.
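A minimal sketch of the LASSO-penalized logistic regression baseline described above, on synthetic data; the regularization strength `C=0.1` is an arbitrary illustrative choice:

```python
# L1-penalized (LASSO) logistic regression: the penalty drives coefficients
# of uninformative or collinear features to exactly zero, giving automatic
# variable selection. Data are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=20, n_informative=4,
                           n_redundant=10, random_state=1)
X = StandardScaler().fit_transform(X)  # L1 penalties assume comparable scales

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)

n_selected = int(np.sum(lasso.coef_ != 0))
print(f"{n_selected} of 20 features retained")
```

The surviving nonzero coefficients remain directly interpretable as log-odds effects, which is the transparency property the text highlights.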

Class Imbalance and Data Quality Challenges

A fundamental challenge confronting all supervised ML approaches in ADR prediction is severe class imbalance: adverse reactions typically represent a minority class, often comprising 1–15% of the training dataset. Uncorrected imbalance results in classifiers heavily biased toward the majority non-ADR class, generating misleadingly high overall accuracy while providing poor ADR sensitivity—precisely the metric of greatest clinical relevance. Established remediation strategies include oversampling techniques (SMOTE—Synthetic Minority Oversampling Technique, ADASYN), undersampling of the majority class, cost-sensitive learning with asymmetric misclassification penalties, and ensemble methods specifically designed for imbalanced data (BalancedBaggingClassifier, EasyEnsemble). The combination of RF with resampling approaches achieving AUCs of 0.94 exemplifies the performance gains attainable through appropriate imbalance handling [Hu et al., 2024].

Data quality represents a complementary challenge. EHR data suffer from incompleteness, inconsistent coding practices across institutions, temporal misalignment between drug exposure and outcome recording, and confounding by indication (patients receiving specific drugs may have baseline conditions that independently predict the outcome). FAERS and VigiBase data introduce additional complexities including duplicate reports, unverified diagnoses, and variable reporter expertise. Rigorous preprocessing pipelines—encompassing imputation, standardization, deduplication, and temporal feature engineering—are prerequisite to reliable model development.
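One of the remediation strategies listed above, cost-sensitive learning, can be sketched in a few lines; the data are synthetic (a 5% minority "ADR" class), and SMOTE or undersampling via the imbalanced-learn package would be drop-in alternatives not shown here:

```python
# Cost-sensitive learning sketch: class_weight="balanced" applies
# asymmetric misclassification penalties inversely proportional to
# class frequency. Data are synthetic and illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, weights=[0.95, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

plain = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
weighted = RandomForestClassifier(class_weight="balanced",
                                  random_state=0).fit(X_tr, y_tr)

# Sensitivity (recall) on the minority ADR class is the clinically
# relevant metric; overall accuracy can stay high even when it is poor.
r_plain = recall_score(y_te, plain.predict(X_te))
r_weighted = recall_score(y_te, weighted.predict(X_te))
print(f"minority recall: plain={r_plain:.2f}, cost-sensitive={r_weighted:.2f}")
```

Whichever strategy is chosen, evaluation should emphasize minority-class recall, precision-recall AUC, and calibration rather than raw accuracy.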

Deep Learning Architectures for ADR Prediction

Convolutional and Recurrent Neural Networks

Deep learning architectures have substantially expanded the representational capacity of AI-based ADR prediction beyond the constraints of handcrafted features. Convolutional Neural Networks (CNNs), originally developed for image recognition, have been adapted for ADR prediction from molecular structures (treating chemical fingerprints as 1D sequences), sequential clinical events, and multimodal clinical data integration. A hybrid AI-driven framework integrating structured data (demographics, lab results) with unstructured clinical notes via CNN achieved an accuracy of 85%, outperforming traditional models including Logistic Regression (78%) and SVM (80%) [AI-driven pharmacovigilance, 2025]. Recurrent Neural Networks (RNNs), particularly Long Short-Term Memory (LSTM) networks and bidirectional LSTMs, capture temporal dependencies in longitudinal clinical data—a critical capability for modeling ADR trajectories over time. The ability of LSTM architectures to learn which elements of a patient's clinical history (medication sequences, laboratory trends, vital sign trajectories) are predictive of future ADR occurrence makes them especially well-suited to EHR-based risk modeling. Bidirectional LSTMs process sequential input in both forward and backward temporal directions, enabling richer contextual feature extraction. Attention mechanisms, originally introduced in the context of machine translation, have been incorporated into deep learning ADR models to provide interpretable feature weighting across time steps. Attention weights indicate which temporal events or clinical measurements most strongly influenced a given prediction, offering a layer of mechanistic interpretability that raw neural network outputs typically lack.
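The attention idea can be made concrete in a few lines; this toy example (all shapes and values invented) computes softmax weights over the time steps of an encoded clinical sequence:

```python
# Toy scaled dot-product attention over time steps. H stands in for the
# hidden states an LSTM encoder would produce, one per clinical visit;
# q is a learned query vector. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 8                    # 6 clinical time steps, 8-dim hidden states
H = rng.normal(size=(T, d))    # encoder hidden states
q = rng.normal(size=d)         # learned query vector

scores = H @ q / np.sqrt(d)                       # scaled dot-product scores
weights = np.exp(scores) / np.exp(scores).sum()   # softmax over time steps
context = weights @ H                             # attention-weighted summary

# The weights themselves are inspectable: which visit or measurement
# most influenced the downstream prediction?
print("attention weights:", np.round(weights, 3))
print("most influential step:", int(weights.argmax()))
```

It is exactly this inspectability of `weights` that gives attention-equipped models the layer of mechanistic interpretability the paragraph describes.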

Transformer Architectures and Large Language Models

The introduction of the transformer architecture (Vaswani et al., 2017) and its subsequent biomedical adaptations has created a paradigm shift in clinical NLP and text-based ADR detection. BERT (Bidirectional Encoder Representations from Transformers) and its domain-adapted variants—BioBERT, ClinicalBERT, PubMedBERT—have demonstrated state-of-the-art performance in clinical NLP tasks including named entity recognition, relation extraction, and adverse event detection from clinical narratives [Dong et al., 2024]. ClinicalBERT, pre-trained on large-scale MIMIC-III clinical notes, employs a 12-layer transformer architecture with 110 million parameters and captures clinical terminology and semantic nuances absent from general-domain BERT models. Applied to ADR detection tasks, ClinicalBERT and its variants demonstrate performance advantages of 5–10% accuracy over previously published NLP models [McMaster et al., 2023]. In a pilot study of serious adverse event (SAE) identification from outpatient inflammatory bowel disease notes, the UCSF-BERT model achieved accuracy of 88–92% with macro F1 scores of 61–68% [Nishioka et al., 2024]. BERT-based models applied to social media pharmacovigilance—extracting ADR mentions from Twitter/X, patient forums, and drug review websites—have demonstrated precision improvements of 15–20% over rule-based NLP baselines [Dong et al., 2024]. Social media platforms provide near-real-time surveillance data from patient-reported experiences, offering complementary coverage to formal spontaneous reporting systems. The FDA has recognized the pharmacovigilance value of social media data, with NLP analysis of VAERS COVID-19 vaccine reports demonstrating the applicability of transformer models to signal validation workflows [Artificial Intelligence in Pharmacovigilance, 2025].
The emergence of generative large language models (GPT-4, Llama, Gemini) introduces new capabilities for ADR prediction including zero-shot and few-shot learning, cross-lingual pharmacovigilance, and conversational clinical decision support. A 2025 narrative review synthesizing LLM applications to adverse drug event detection found that chatbot-assisted systems have been explored as tools in clinical management, aiding medication adherence monitoring and symptom surveillance [LLMs for ADEs, 2025]. However, LLM-specific challenges including hallucination, knowledge cutoff limitations, and lack of calibrated uncertainty remain barriers to clinical deployment.

Natural Language Processing for Clinical Text Mining

Structured vs. Unstructured Data in ADR Detection

A critical insight driving NLP adoption in pharmacovigilance is the recognition that the majority of clinically actionable ADR information resides not in structured database fields but in free-text clinical narratives—physician notes, discharge summaries, pharmacy records, and nursing documentation. A 2025 scoping review found that NLP/ML techniques applied to unstructured EHR data significantly improved detection of under-reported adverse events and safety signals that would not have been apparent through structured data alone [McMaster et al., 2023; NLP/ML Scoping Review, 2025].

The pipeline for NLP-based ADR detection from clinical text encompasses: (1) preprocessing (tokenization, normalization, de-identification); (2) named entity recognition (NER) to identify drug names and adverse event mentions; (3) relation extraction to link drug-ADR pairs; (4) negation and speculation detection to exclude non-positive assertions; (5) temporal reasoning to establish causal chronology; and (6) severity classification. Each stage introduces specific technical challenges amplified by the heterogeneous, institution-specific language of clinical documentation.
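A deliberately minimal sketch of stages (2) and (4) of this pipeline, with invented lexicons and an invented note; production systems such as cTAKES or MedEx implement these steps with far richer linguistic machinery:

```python
# Toy lexicon-based drug/ADR mention detection with NegEx-style negation
# scoping. Lexicons, window size, and the example note are all invented.
DRUGS = {"warfarin", "metformin", "amoxicillin"}
ADRS = {"rash", "bleeding", "nausea"}
NEGATION_CUES = {"no", "denies", "without"}

def extract(note):
    """Return (drug, adr, negated) triples from one sentence."""
    tokens = note.lower().replace(",", " ").split()
    drugs = [t for t in tokens if t in DRUGS]
    hits = []
    for i, tok in enumerate(tokens):
        if tok in ADRS:
            # NegEx-style heuristic: a negation cue within a 3-token
            # window before the mention negates it.
            window = tokens[max(0, i - 3):i]
            negated = any(w in NEGATION_CUES for w in window)
            hits.extend((d, tok, negated) for d in drugs)
    return hits

note = "Patient on warfarin, denies nausea but developed bleeding"
print(extract(note))
# [('warfarin', 'nausea', True), ('warfarin', 'bleeding', False)]
```

Even this toy shows why negation detection (stage 4) matters: without it, "denies nausea" would be counted as a positive drug-ADR pair.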

Rule-Based vs. Statistical vs. Deep Learning NLP Approaches

NLP methodologies for ADE extraction span three generations of technical sophistication. Rule-based systems employing hand-crafted lexicons, regular expressions, and grammatical patterns offer high precision for well-defined entities but limited recall and portability across institutions with different documentation styles. MedEx, cTAKES (clinical Text Analysis and Knowledge Extraction System), and NegEx exemplify rule-based tools deployed in clinical informatics. Statistical NLP models using conditional random fields (CRFs), maximum entropy classifiers, and structured perceptrons improved upon rule-based systems through data-driven feature learning, enabling better generalization across documentation styles. However, their reliance on handcrafted feature engineering limited scalability. Deep learning NLP models—and transformer-based architectures in particular—achieve substantial performance gains through end-to-end feature learning from large text corpora. Comparative studies across rule-based, statistical, and deep learning NLP approaches consistently demonstrate the superiority of transformer architectures on standard pharmacovigilance benchmarks. The SMM4H (Social Media Mining for Health) shared tasks, now in their eighth iteration, provide standardized evaluation platforms demonstrating progressive performance improvement aligned with architectural advances [Klein et al., 2024].

Graph Neural Networks for Relational Drug-Patient Modeling

Drug-Drug Interaction Prediction

Polypharmacy—defined as the concurrent use of five or more medications—is both increasingly prevalent and a major ADR risk amplifier. A U.S. study estimates 22% of adults aged 40–79 receive five or more concurrent medications, with elderly populations experiencing the highest polypharmacy burden [Marques-Garcia et al., 2026]. Drug-drug interactions (DDIs) arising from polypharmacy can trigger unexpected pharmacological consequences including ADEs, toxicity, and even fatality [Li et al., 2024]. Graph Neural Networks (GNNs) offer a uniquely powerful paradigm for DDI and ADR prediction by natively representing drugs, patients, diseases, and adverse events as nodes within heterogeneous relational graphs, with edges encoding known interactions, therapeutic relationships, and causal linkages. The structural inductive bias of GNNs aligns with the inherently relational nature of pharmacological knowledge. GraphDDI, introduced at the AIiH 2024 conference, employs GNN architectures incorporating drug target knowledge and topological graph features to predict DDIs with demonstrated performance advantages over feature-based approaches [GraphDDI, 2024]. MGDDI (Multi-scale Graph neural network for DDI prediction) uses multi-scale graph convolutional layers combined with attention-based substructure interaction modules, capturing interaction details between drug pairs at multiple molecular scales [Geng et al., 2024]. AutoDDI employs neural architecture search to automatically discover optimal GNN configurations for DDI prediction, improving both accuracy and computational efficiency [Gao et al., 2024].
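To make the graph formalism concrete, here is a toy single message-passing step over an invented four-drug interaction graph; published frameworks such as GraphDDI and MGDDI use learned, multi-layer, attention-augmented variants of this same computation:

```python
# One graph-convolution propagation step, ReLU(D^-1 (A + I) X W):
# each drug node aggregates the mean of its neighbours' (and its own)
# feature vectors. Adjacency, features, and weights are invented.
import numpy as np

# 4 drug nodes; edges = known pairwise interactions (symmetric).
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
A_hat = A + np.eye(4)                        # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))     # inverse degree matrix

X = np.arange(8, dtype=float).reshape(4, 2)  # 2-dim node features
W = np.eye(2)                                # untrained weight matrix

H = np.maximum(D_inv @ A_hat @ X @ W, 0)     # one message-passing layer
print(H)  # row i = neighbourhood-averaged embedding of drug i
```

Stacking such layers lets information flow across multi-hop paths, which is how a GNN can relate two drugs that never co-occur directly but share interaction partners.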

Patient-Level ADR Prediction via Heterogeneous GNNs

A significant limitation of earlier drug-centric ADR prediction approaches was their failure to account for individual patient heterogeneity—the reality that the same drug can elicit radically different responses across patients with differing demographics, comorbidities, genetic profiles, and medication histories. Heterogeneous GNN frameworks addressing patient-level ADR prediction represent the methodological frontier. PreciseADR, developed by Gao et al. at Zhejiang University (2024), introduces a patient-level ADR prediction framework leveraging heterogeneous GNNs that integrates relationships between patients, diseases, drugs, and ADRs as graph components [Gao et al., 2024]. Trained on the FDA Adverse Event Reporting System (FAERS) dataset, PreciseADR constructs a heterogeneous graph capturing patient disease burden, current medication regimens, and historical ADR patterns. The model captures both local neighborhood interactions and global graph-level dependencies, identifying subtle patterns predictive of ADR occurrence. Experimental results demonstrate PreciseADR outperforms the strongest baseline by 3.2% in AUC and 4.9% in Hit@10, validating the patient-level personalization approach. The PreciseADR architecture employs separate message-passing operations for different edge types (patient-drug, patient-disease, drug-ADR), followed by cross-type attention aggregation that dynamically weights information from different relation types based on predictive relevance. A statistical analysis of FAERS data incorporated within the model identifies ADR patterns associated with demographic subgroups defined by gender and age, enabling precision medicine-aligned risk stratification.

Explainable AI in Clinical ADR Decision Support

The Black-Box Problem in Clinical AI

The deployment of high-performance but opaque "black-box" AI models in clinical settings faces a fundamental epistemological barrier: clinicians cannot responsibly act on predictions they cannot interpret, validate, or explain to patients. Regulatory frameworks—including FDA guidance on AI/ML-based medical devices and the EU's Medical Device Regulation—increasingly require interpretability as a precondition for clinical AI approval [Lavecchia, 2025]. This tension between predictive performance and transparency is the central challenge of translational clinical AI. The explainable AI (XAI) field addresses this challenge through post-hoc interpretation methods applied to trained models and intrinsically interpretable model architectures. Post-hoc methods include global explanation techniques quantifying overall feature importance distributions and local explanation methods providing instance-specific reasoning for individual predictions. Intrinsically interpretable alternatives include decision trees, logistic regression with regularization, and attention-mechanism-equipped neural networks.

SHAP and LIME for ADR Interpretability

SHAP (SHapley Additive exPlanations), grounded in cooperative game theory, computes each feature's marginal contribution to a specific prediction by systematically evaluating all possible feature coalitions. SHAP values satisfy desirable axioms including local accuracy, missingness, and consistency, making them theoretically well-founded. In ADR prediction models, SHAP analysis has been applied to identify which clinical variables—renal function, polypharmacy index, age—most strongly drive individual ADR risk estimates, enabling clinicians to understand and validate model reasoning [Salih et al., 2025]. LIME (Local Interpretable Model-agnostic Explanations) generates locally faithful explanations by fitting a simple linear model in the neighborhood of a specific prediction, approximating the decision boundary of complex models in a locally tractable manner. LIME is model-agnostic and applicable to any classifier or regressor, including deep learning architectures. Comparative evaluations indicate that SHAP generally outperforms LIME on consistency and robustness metrics, while LIME provides simpler, more concise explanations at higher interpretability thresholds [Salih et al., 2025; medRxiv, 2025].

Other XAI methods contributing to clinical ADR model transparency include: (1) Grad-CAM (Gradient-weighted Class Activation Mapping) for CNN-based visual and temporal models; (2) attention visualization for transformer architectures; (3) Partial Dependence Plots (PDPs) for global feature effects; and (4) counterfactual explanations indicating the minimum feature changes required to alter a prediction—the latter being particularly valuable for identifying modifiable risk factors amenable to clinical intervention. The integration of XAI techniques into clinical decision support systems (CDSS) for ADR management is increasingly recognized as essential for regulatory compliance, clinician trust calibration, and medico-legal accountability.
A comprehensive systematic review of AI-driven drug interaction prediction (147 studies, 2018–2024) emphasized the transformative role of SHAP, LIME, and attention mechanisms in enhancing clinical interpretability of complex graph and language models [systematic review, 2025].
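The coalition idea behind SHAP can be made concrete with a brute-force Shapley computation on a toy additive risk model; the weights and feature names are invented, and real analyses would use the `shap` library, which approximates this efficiently for large models:

```python
# Brute-force Shapley values: average each feature's marginal contribution
# over all orderings (coalitions). For a purely additive toy model the
# Shapley value recovers each feature's weight exactly.
from itertools import permutations
from math import factorial

# Invented additive "risk score" over three binary risk factors.
WEIGHTS = {"polypharmacy": 2.0, "renal_impairment": 1.5, "age_over_65": 0.5}

def model(present):
    """Risk score given the set of features present for this patient."""
    return sum(WEIGHTS[f] for f in present)

def shapley(feature):
    """Average marginal contribution of `feature` over all orderings."""
    names = list(WEIGHTS)
    total = 0.0
    for order in permutations(names):
        i = order.index(feature)
        coalition = set(order[:i])
        total += model(coalition | {feature}) - model(coalition)
    return total / factorial(len(names))

for f in WEIGHTS:
    print(f, shapley(f))  # polypharmacy 2.0, renal_impairment 1.5, ...
```

Real models are not additive, which is exactly why the coalition averaging matters: interaction effects get split fairly across the participating features.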

Table 2. Comparative Overview of Explainable AI Methods for Clinical ADR Models

| XAI Method | Type | Scope | Clinical Utility | Computational Cost |
|---|---|---|---|---|
| SHAP | Post-hoc | Local + Global | Feature contribution per prediction; model audit | Moderate–High |
| LIME | Post-hoc | Local | Simple decision boundary explanation | Moderate |
| Grad-CAM | Post-hoc | Local | Visual attribution for CNN/temporal models | Low |
| Attention Weights | Inherent | Local | Sequence-level feature importance (transformers) | Low |
| Partial Dependence Plots | Post-hoc | Global | Feature effect visualization across population | Moderate |
| Counterfactuals | Post-hoc | Local | Identify actionable risk modification targets | High |

Pharmacogenomics and Precision Medicine Integration

Genetic Determinants of ADR Risk

Individual variability in drug response—and ADR susceptibility—is substantially determined by genetic polymorphisms affecting pharmacokinetic and pharmacodynamic pathways. Pharmacogenomics provides the mechanistic framework linking genotype to ADR phenotype. Key pharmacogenes influencing ADR risk include: CYP2D6 and CYP2C19 (cytochrome P450 enzymes governing metabolism of approximately 25% of marketed drugs); HLA alleles (HLA-B*57:01 associated with abacavir hypersensitivity; HLA-B*15:02 with carbamazepine-induced Stevens-Johnson syndrome); and transporters including OATP1B1 (statin-induced myopathy) and ABCB1 (multidrug resistance modulation). Multi-omics integration represents the next frontier in AI-driven ADR prediction. AI models incorporating genomics, transcriptomics, proteomics, and metabolomics alongside clinical EHR data provide the most complete picture of individual ADR risk. The DeepDRA model (2024) achieved a precision-recall AUC of 0.99 in drug response prediction using integrated multi-omics data, illustrating the performance ceiling attainable with comprehensive molecular profiling [Sciencedirect, 2025]. Key pharmacogenomic variants associated with antituberculosis drug-induced hepatotoxicity—including NAT2, OATP1B1 polymorphisms, and UGT1A1*27/*28—have been incorporated into ML models achieving AUC values of 0.89–0.90 [Lai et al., 2020].

AI-Enhanced Pharmacogenomic Testing

Traditional preemptive pharmacogenomic testing focuses on single gene-drug pairs (e.g., TPMT before thiopurines, DPYD before fluoropyrimidines). AI models integrating multiple pharmacogenomic variants, drug-drug interaction predictions, and patient clinical profiles enable more holistic, personalized risk assessment. Clinical pharmacogenomics implementation studies have demonstrated strong value potential in optimizing common drugs including statins, anticoagulants, beta-blockers, and opioids—all major contributors to ADR-related hospitalizations [clinician experiences, 2024]. The convergence of AI and pharmacogenomics within clinical decision support frameworks enables prospective, genotype-informed prescribing recommendations. Primary care and tertiary care implementations of pre-emptive pharmacogenomic testing, supported by AI-driven drug interaction alerts, have demonstrated feasibility and clinical benefit. However, barriers including cost, turnaround time, clinician education, and reimbursement policies limit widespread adoption.

Data Sources and AI Modelling Infrastructure

Electronic Health Records and Clinical Registries

Electronic health records (EHRs) represent the primary substrate for ADR prediction modeling, providing longitudinal, multi-dimensional patient data including medication records, laboratory values, diagnoses (ICD codes), vital signs, clinical notes, and procedural records. A systematic review of AI-based pharmacoepidemiology models for ADE prediction found that 80% of included studies used only structured EHR or claims data, with 20% incorporating NLP components alongside structured data [Frontiers, 2026]. Large EHR repositories such as MIMIC-III/IV (Medical Information Mart for Intensive Care), the UK Biobank, and All of Us (NIH) provide rich, well-characterized datasets enabling model development and external validation. However, EHR data present well-documented quality challenges including missingness, inconsistent coding, confounding by indication, and incomplete capture of out-of-hospital medication use. Real-world evidence derived from insurance claims databases complements EHR data by providing longitudinal, population-level medication exposure records but lacks clinical granularity.

Spontaneous Reporting Databases

The FDA Adverse Event Reporting System (FAERS) represents the world's largest pharmacovigilance database, containing over 25 million individual case safety reports submitted since 1969. FAERS data—comprising patient demographics, suspect drugs, concomitant medications, reported reactions (MedDRA-coded), and outcomes—provide the primary substrate for AI-based signal detection and population-level ADR modeling. PreciseADR's FAERS-based training demonstrated the utility of this resource for patient-level heterogeneous GNN models [Gao et al., 2024].

WHO's VigiBase, maintained by the Uppsala Monitoring Centre, contains over 30 million individual case safety reports from 140 member countries and is the world's most comprehensive spontaneous reporting resource. VigiLyze, the AI-enhanced signal detection platform built on VigiBase, employs disproportionality analysis alongside predictive modeling to identify emerging safety signals. The EU's EudraVigilance and national pharmacovigilance databases (Yellow Card UK, BfArM Germany) provide regional complements to these global repositories.
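In its simplest form, the disproportionality analysis that platforms such as VigiLyze layer over spontaneous reports reduces to a 2×2 contingency table per drug-reaction pair. The sketch below computes the reporting odds ratio (ROR) with a 95% confidence interval; the report counts are illustrative, not drawn from VigiBase.

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """ROR with 95% CI from a 2x2 report table:
    a: reports with drug and reaction     b: drug, other reactions
    c: other drugs, reaction              d: other drugs, other reactions
    """
    ror = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(ROR)
    lo = math.exp(math.log(ror) - 1.96 * se)
    hi = math.exp(math.log(ror) + 1.96 * se)
    return ror, lo, hi

# Illustrative counts; a pair is commonly flagged as a signal when the
# lower CI bound exceeds 1 and a minimum number of reports exists.
ror, lo, hi = reporting_odds_ratio(a=40, b=960, c=200, d=98800)
print(f"ROR={ror:.1f}, 95% CI [{lo:.1f}, {hi:.1f}]")
```

The AI methods reviewed here extend this frequentist core with predictive modeling, but disproportionality statistics of this form remain the interpretive baseline for spontaneous-report signals.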

Social Media and Patient-Generated Data

Social media platforms—Twitter/X, Reddit, MedHelp, WebMD forums—have emerged as real-time pharmacovigilance data sources providing patient-reported ADR experiences outside formal healthcare settings. Analyses suggest social media can detect ADR signals 2–3 months earlier than FAERS, reflecting patients' immediate digital reporting of adverse experiences. NLP models trained on social media ADR data must address unique challenges including informal language, abbreviations, negation ambiguity, and signal-to-noise ratios substantially lower than clinical text [Dong et al., 2024].
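Negation handling illustrates why naive keyword matching fails on informal text. The deliberately simple sketch below flags ADR terms while skipping negated mentions using a crude NegEx-style window rule; production systems rely on trained transformer models rather than such heuristics, and the lexicon here is hypothetical.

```python
import re

ADR_TERMS = {"nausea", "rash", "dizziness", "headache"}   # illustrative lexicon
NEGATIONS = {"no", "not", "without", "denies", "never"}

def extract_adr_mentions(post):
    """Naive ADR keyword matcher that skips terms preceded by a negation
    cue within a three-token window (a crude NegEx-style rule)."""
    tokens = re.findall(r"[a-z']+", post.lower())
    hits = []
    for i, tok in enumerate(tokens):
        if tok in ADR_TERMS:
            window = tokens[max(0, i - 3):i]
            if not NEGATIONS & set(window):
                hits.append(tok)
    return hits

print(extract_adr_mentions("Started drug X yesterday, awful nausea but no rash"))
```

Even this toy rule correctly drops "no rash" while keeping "nausea"; the BERT-based extractors discussed in this review learn such context sensitivity, plus misspellings and slang, directly from annotated data.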

Table 3. Performance Benchmarks of Selected AI Models for ADR Prediction (2020–2025)

| Model / Study | Data Source | Task | Best AUC/Accuracy | Key Algorithm |
|---|---|---|---|---|
| Hu et al. (2024) | EHR (10 studies) | Multi-ADE prediction | 0.94 (RF+SMOTE) | Random Forest |
| AI-driven pharma (2025) | EHR + Clinical Notes | SAE detection | 85% accuracy | CNN |
| PreciseADR (2024) | FAERS | Patient-level ADR | AUC +3.2% vs SoTA | Heterogeneous GNN |
| UCSF-BERT (2024) | Outpatient IBD notes | SAE identification | 88–92% accuracy | ClinicalBERT |
| NLP/ML Scoping Review | Unstructured EHR | ADE detection | Improved signal detection | Transformer NLP |
| Farnoush et al. (2024) | FAERS (2012–2023) | 30 common ADRs | Per-ADR AUC varied | RF + Deep Learning |
| GraphDDI (2024) | Drug knowledge graph | DDI prediction | Superior to baselines | GNN |
| BERT-ADR (2024) | Social media (FDA) | Drug ADE extraction | High precision/recall | BERT |

Clinical Decision Support Integration

Architecture of AI-Driven CDSS for ADR Prevention

Clinical Decision Support Systems (CDSS) integrating AI-based ADR prediction represent the translational endpoint of the research developments reviewed above. An effective AI-driven ADR CDSS architecture encompasses five functional layers: (1) Data Ingestion and Harmonization—real-time extraction and standardization of EHR data, laboratory values, medication records, and genomic profiles; (2) Feature Engineering Pipeline—automated generation of clinically meaningful predictive features from raw clinical data; (3) Prediction Model Layer—ensemble of trained ML/DL/GNN models generating probability scores for specific ADR outcomes; (4) Explanation Layer—SHAP/LIME-based feature attribution providing human-interpretable rationale for risk estimates; and (5) Alert and Interface Layer—workflow-integrated risk alerts, patient-specific risk summaries, and actionable prescribing recommendations. The integration of AI-generated ADR risk scores within EHR workflow at point-of-care prescribing constitutes the most impactful deployment scenario. Prospective studies of AI-integrated CDSS have demonstrated reductions in preventable ADR rates of 20–45% in high-risk patient populations, with alert specificity critical to avoiding alert fatigue—a pervasive limitation of first-generation rule-based CDSS.
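The five layers can be sketched as a thin pipeline. Everything in the example below (function names, the stub risk model, the 0.2 alert threshold) is hypothetical and serves only to show how the layers compose.

```python
def ingest(ehr_record):                      # 1. data ingestion/harmonization
    return {k.lower(): v for k, v in ehr_record.items()}

def engineer(rec):                           # 2. feature engineering
    return {"n_drugs": len(rec["meds"]), "egfr_low": rec["egfr"] < 60}

def predict(feats):                          # 3. prediction layer (stub model)
    return min(1.0, 0.05 * feats["n_drugs"] + (0.25 if feats["egfr_low"] else 0.0))

def explain(feats):                          # 4. explanation layer (active drivers)
    return sorted(k for k, v in feats.items() if v)

def alert(risk, drivers, threshold=0.2):     # 5. alert/interface layer
    return f"ADR risk {risk:.2f}; drivers: {drivers}" if risk >= threshold else None

feats = engineer(ingest({"MEDS": ["warfarin", "amiodarone", "ibuprofen"], "EGFR": 48}))
msg = alert(predict(feats), explain(feats))
print(msg)
```

In deployment, the threshold in layer 5 is the specificity dial discussed below: set too low, it reproduces the alert-fatigue failure mode of rule-based CDSS.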

Challenges in CDSS Implementation

The translation of AI ADR prediction models from research to clinical deployment faces multidimensional challenges spanning technical, organizational, regulatory, and behavioral domains. Technically, models trained on data from a single institution may exhibit diminished performance when deployed at others due to population differences, coding practice variations, and EHR system heterogeneity—the external validity problem. Federated learning architectures, enabling model training across distributed EHR systems without centralizing sensitive patient data, represent a promising solution to this challenge [federated LLM review, 2025]. Alert fatigue—a condition in which clinicians become desensitized to alerts due to excessive false-positive rates—remains one of the most significant implementation barriers. Studies document clinical override rates for CDSS drug alerts exceeding 90% in some settings, largely attributable to low specificity. AI models achieving AUCs of 0.80–0.90 generate substantially fewer false positives than rule-based systems, but the clinical utility threshold for ADR alert systems depends critically on the specific ADR's severity, preventability, and clinical context. Regulatory pathways for AI/ML-based medical devices present evolving requirements. The FDA's evolving framework for AI/ML-Based Software as a Medical Device (SaMD) addresses the unique challenge of continuously learning AI systems whose performance characteristics may shift post-deployment through ongoing model updates. In the EU, the AI Act (2024) imposes specific requirements for high-risk AI systems in healthcare, including conformity assessments, post-market monitoring, and transparency obligations—considerations that ADR prediction CDSS must navigate.
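The interaction between model specificity and ADR prevalence explains the alert-fatigue arithmetic. A quick positive-predictive-value calculation with illustrative numbers shows why specificity matters more than headline accuracy:

```python
def alert_ppv(prevalence, sensitivity, specificity):
    """Fraction of fired alerts that are true positives."""
    tp = prevalence * sensitivity
    fp = (1 - prevalence) * (1 - specificity)
    return tp / (tp + fp)

# Illustrative comparison for an ADR with 5% prevalence: a rule-based
# system at 60% specificity vs a model tuned to 95% specificity.
low_spec  = alert_ppv(0.05, 0.90, 0.60)   # most alerts are false positives
high_spec = alert_ppv(0.05, 0.80, 0.95)   # far fewer false positives
print(f"rule-based PPV={low_spec:.2f}, AI model PPV={high_spec:.2f}")
```

With these assumed numbers, roughly nine in ten rule-based alerts are false, matching the override rates cited above, while the higher-specificity model makes nearly half its alerts actionable despite lower sensitivity.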

Limitations, Challenges, and Future Directions

Current Limitations

Despite remarkable progress, AI-driven ADR prediction research faces several fundamental limitations that constrain clinical translation. First, the absence of standardized datasets, feature definitions, and evaluation benchmarks renders cross-study comparison unreliable. The systematic review by Hu et al. (2024) noted that future studies must adhere to more rigorous reporting standards—a persistent concern across the field. The CONSORT-AI and TRIPOD+AI reporting guidelines represent important steps toward standardization but adoption remains incomplete. Second, the predominance of single-institution studies with limited external validation restricts generalizability. Most published AI ADR models have not been prospectively validated in independent clinical settings, limiting confidence in real-world performance. The requirement for large, diverse, multisite training datasets—balanced for demographics, comorbidities, and prescribing practices—conflicts with data governance frameworks limiting cross-institutional data sharing. Third, the dynamic nature of clinical practice—changing drug formulations, dosing guidelines, emerging drug interactions with newly approved compounds, and evolving patient populations—challenges the temporal robustness of static trained models. Continuous model retraining and performance monitoring are essential but resource-intensive. Fourth, equity and bias represent underappreciated threats to AI ADR model validity. Training datasets predominantly drawn from specific healthcare systems, geographic regions, or demographic groups may encode systemic biases that propagate to clinical predictions. Minority populations historically underrepresented in clinical trial data face heightened risk of biased ADR predictions, potentially exacerbating existing healthcare disparities.

Future Research Agenda

The trajectory of AI-driven ADR research points toward several high-priority directions. Federated learning frameworks enabling multi-institutional model training without data centralization offer the most promising near-term solution to the external validity challenge. Federated LLMs applied to ADR pharmacovigilance across health system networks have been proposed as the basis for next-generation population-scale ADR surveillance [federated LLM review, 2025]. Foundation models pre-trained on comprehensive biomedical corpora—encompassing EHRs, scientific literature, genomic databases, and pharmacological knowledge graphs—hold transformative potential for ADR prediction. Their capacity for few-shot generalization to new drug compounds without retraining from scratch addresses the critical challenge of predicting ADRs for newly approved drugs with limited post-market exposure data. Multimodal AI systems integrating EHR structured data, clinical notes, genomic profiles, pharmacological knowledge graphs, and real-world evidence from wearables and patient-reported outcomes will surpass the performance ceilings of single-modality approaches. The combination of pharmacogenomics, deep EHR feature extraction, and GNN-based drug interaction modeling within a unified personalized ADR risk platform represents the aspirational target for precision pharmacovigilance. Prospective clinical trials randomizing patients to AI-guided versus standard prescribing—with ADR incidence as the primary endpoint—are urgently needed to establish the real-world efficacy of AI CDSS for ADR prevention. Such trials must pre-specify patient subgroup analyses to ensure equitable performance across demographic groups and characterize the mechanisms through which AI guidance modifies prescribing behavior.
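Federated averaging, the canonical aggregation step in federated learning, is simple to sketch: each site trains locally, and only parameter vectors, weighted by local sample counts, are pooled. The example below uses plain Python lists as stand-in model weights for three hypothetical hospitals.

```python
def fed_avg(site_weights, site_sizes):
    """FedAvg aggregation: sample-size-weighted mean of per-site parameter
    vectors. No patient-level data leaves any site, only parameters."""
    total = sum(site_sizes)
    dim = len(site_weights[0])
    return [
        sum(w[j] * n for w, n in zip(site_weights, site_sizes)) / total
        for j in range(dim)
    ]

# Three hypothetical hospitals with different cohort sizes; the largest
# site pulls the global parameters hardest.
weights = [[0.2, 1.0], [0.4, 0.8], [0.3, 0.9]]
sizes = [1000, 3000, 1000]
print(fed_avg(weights, sizes))
```

Real federated deployments add secure aggregation and differential-privacy noise on top of this step, but the weighted-mean core is what lets multi-institutional ADR models train without centralizing data.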

CONCLUSION

The field of AI-driven adverse drug reaction prediction has undergone a transformational evolution over the past decade, advancing from simple logistic regression models applied to structured pharmacovigilance databases toward sophisticated multimodal architectures integrating EHR data, genomics, clinical text, and relational drug-patient networks. The convergence of high-dimensional data availability, scalable computational infrastructure, and methodological innovation has created unprecedented opportunities for proactive, personalized ADR risk management. Machine learning ensemble methods, particularly random forest models combined with appropriate imbalance handling, achieve AUC values of 0.72–0.94 in structured EHR-based ADR prediction. Deep learning architectures including CNNs and transformer-based LLMs achieve 85–92% accuracy with superior feature learning from high-dimensional and unstructured data. Graph neural networks—particularly heterogeneous GNN frameworks exemplified by PreciseADR—represent the most advanced patient-level modeling approach, capturing complex multi-relational drug-patient-disease interactions invisible to conventional models. The integration of explainable AI techniques (SHAP, LIME, attention visualization) within clinical decision support frameworks addresses the black-box interpretability barrier that has historically impeded clinical adoption. Combined with pharmacogenomic profiling, these AI systems offer the mechanistic transparency required for clinician trust and regulatory acceptance. Substantial challenges remain. Standardization of datasets and evaluation benchmarks, rigorous prospective validation, algorithmic fairness across demographic subgroups, regulatory compliance, and clinician education represent the critical frontier for responsible translation of AI ADR prediction research into clinical benefit. 
Addressing these challenges through coordinated international collaboration between AI researchers, clinical pharmacologists, regulatory scientists, and patient advocates will determine whether the transformative potential of AI pharmacovigilance is fully realized in reducing the global burden of preventable adverse drug reactions.

REFERENCES

  1. Al Meslamani AZ. (2023). Underreporting of Adverse Drug Events: a Look into the Extent, Causes, and Potential Solutions. Expert Opin Drug Saf, 22, 351–354. https://doi.org/10.1080/14740338.2023.2224558
  2. Backman JD, et al. (2023). Twenty-First Century Global ADR Management: A Need for Clarification, Redesign, and Coordinated Action. Ther Innov Regul Sci, 57(1), 100–103.
  3. Bouvy JC, De Bruin ML, Koopmanschap MA. (2015). Epidemiology of Adverse Drug Reactions in Europe: A Review of Recent Observational Studies. Drug Saf, 38(5), 437–453.
  4. Dong F, Guo W, Liu J, Patterson TA, Hong H. (2024). BERT-based language model for accurate drug adverse event extraction from social media: implementation, evaluation, and contributions to pharmacovigilance practices. Front Public Health, 12, 1392180.
  5. Farnoush A, Sedighi-Maman Z, Rasoolian B, et al. (2024). Prediction of adverse drug reactions using demographic and non-clinical drug characteristics in FAERS data. Sci Rep, 14, 23636. https://doi.org/10.1038/s41598-024-74505-2
  6. Gao Y, Zhang X, Sun Z, Chandak P, Bu J, Wang H. (2024). Precision Adverse Drug Reactions Prediction with Heterogeneous Graph Neural Network. Adv Sci (Weinh), 12(4), e2404671. https://doi.org/10.1002/advs.202404671
  7. Geng G, et al. (2024). MGDDI: A multi-scale graph neural networks for drug–drug interaction prediction. Methods, 228, 22–29.
  8. Hu Q, Li J, Li X, Zou D, Xu T, He Z. (2024). Machine learning to predict adverse drug events based on electronic health records: a systematic review and meta-analysis. J Int Med Res, 52(12). https://doi.org/10.1177/03000605241302304
  9. Kongkaew C, Noyce PR, Ashcroft DM. (2016). Adverse-Drug-Reaction-Related Hospitalisations in Developed and Developing Countries: A Review of Prevalence and Contributing Factors. Drug Saf, 39(11), 1031–1045.
  10. Lai YG, et al. (2020). Machine learning model to predict antituberculosis drug-induced hepatotoxicity. Int J Antimicrob Agents, 55(5), 105940.
  11. Lavecchia A. (2025). Explainable Artificial Intelligence in Drug Discovery: Bridging Predictive Power and Mechanistic Insight. WIREs Comput Mol Sci.
  12. Li Y, et al. (2024). Artificial intelligence-powered pharmacovigilance: A review of machine and deep learning in clinical text-based adverse drug event detection for benchmark datasets. J Biomed Inform, 104621.
  13. Li Z, et al. (2024). Deep learning for drug–drug interaction prediction: A comprehensive review. Quant Biol, 12(1).
  14. McMaster C, Chan J, Liew DFL, Su E, Frauman AG, Chapman WW, Pires DE. (2023). Developing a deep learning natural language processing algorithm for automated reporting of adverse drug reactions. J Biomed Inform, 137, 104265.
  15. Marques-Garcia F, et al. (2026). A review of pharmacogenomics in the individualization of pharmacological treatment: present and future. J Lab Precis Med.
  16. Nishioka S, Watabe S, Yanagisawa Y, et al. (2024). Adverse Event Signal Detection Using Patients' Concerns in Pharmaceutical Care Records: Evaluation of Deep Learning Models. J Med Internet Res.
  17. Rothschild JM, et al. (2010). Analysis of medication-related malpractice claims: causes, preventability, and costs. Arch Intern Med, 170(21), 1878–1883.
  18. Salih A, et al. (2025). A Perspective on Explainable Artificial Intelligence Methods: SHAP and LIME. Adv Intell Syst.
  19. Seo B, Yang MS, Park SY, et al. (2023). Incidence and Economic Burden of Adverse Drug Reactions in Hospitalization: A Prospective Study in Korea. J Korean Med Sci, 38(8), e56.
  20. Systematic Review Group. (2025). A comprehensive landscape of AI applications in broad-spectrum drug interaction prediction: a systematic review. J Cheminform. https://doi.org/10.1186/s13321-025-01093-2
  21. Vaswani A, Shazeer N, Parmar N, et al. (2017). Attention is All You Need. Advances in Neural Information Processing Systems, 30.
  22. World Health Organization. (1972). International Drug Monitoring – The Role of National Centers. WHO Technical Report Series No. 498. Geneva: WHO.
  23. Zhou F, Khushi M, Brett J, Uddin S. (2024). Machine-learning-based adverse drug event prediction from observational health data: A review. J Am Med Inform Assoc, 30(5), 978–988.
  24. Frontiers in Pharmacology. (2024). Predicting adverse drug event using machine learning based on electronic health records: a systematic review and meta-analysis. Front Pharmacol, 15, 1497397.
  25. Dong et al. (2024). AI-driven pharmacovigilance: Enhancing adverse drug reaction detection with deep learning and NLP. Computers in Biology and Medicine.
  26. Klein AZ, et al. (2024). Overview of the 8th social media mining for health applications (#SMM4H) shared tasks at the AMIA 2023 annual symposium. J Am Med Inform Assoc, 31, ocae010.
  27. Artificial Intelligence Applications in Pharmacovigilance Signal Management. (2025). Pharmaceutical Medicine. https://doi.org/10.1007/s40290-025-00561-2
  28. Frontiers in Drug Safety and Regulation. (2026). Systematic review of AI-based models in pharmacoepidemiology for adverse drug event prediction and detection. Front Drug Saf Regul, 1773186.
  29. Sciencedirect. (2025). Artificial Intelligence and Multi-Omics in Pharmacogenomics: A New Era of Precision Medicine.
  30. Devlin J, Chang MW, Lee K, Toutanova K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805.



Shibani Mondal*, Mohammad Zulekha, Vinit Singh, Sourav Lahiri, Zanje Arun, Rutik Khedekar, Ai-Driven Prediction of Adverse Drug Reactions: Clinical Pattern Analysis and Risk Assessment, Int. J. of Pharm. Sci., 2026, Vol 4, Issue 5, 507-522. https://doi.org/10.5281/zenodo.20020429
