
  • Artificial Intelligence in Modern Drug Discovery and Development: Leveraging Deep Informatics for Efficiency, Interpretability, and Regulatory Compliance

  • KMCH College of Pharmacy, Affiliated to The Tamil Nadu Dr. MGR Medical University, Coimbatore-641048, Tamil Nadu, India.

Abstract

This review examines the transformative impact of Artificial Intelligence (AI) on drug discovery and development. It details how AI methodologies—including machine learning, deep learning, and natural language processing—are overcoming systemic inefficiencies in the traditional pipeline. Key applications discussed encompass AI-driven target identification via multi-omics integration, de novo molecular design using advanced architectures like Graph Neural Networks (GNNs) and generative models, and predictive pharmacology for ADMET and toxicity profiling. The analysis underscores the critical importance of Explainable AI (XAI) for model interpretability and regulatory trust, the emerging role of Protein Language Models (PLMs) in protein science, and the foundational need for FAIR data principles. Finally, the evolving regulatory landscape, particularly the FDA's risk-based credibility framework, is highlighted as essential for the clinical integration of AI. The review concludes that while AI accelerates development and reduces costs, its sustained success depends on a paradigm shift from optimizing proxy metrics to elucidating biological mechanisms within robust data governance and regulatory frameworks.

Keywords

Artificial Intelligence, Drug Discovery, De Novo Molecular Design, Graph Neural Networks, Explainable AI, Multi-Omics Integration, ADMET Prediction, FAIR Data Principles, Regulatory Frameworks

Introduction

The traditional pathway for drug discovery and development is characterized by significant economic and temporal burdens, often requiring investment exceeding $2.5 billion and taking 10 to 15 years for a new chemical entity (NCE) to reach market approval. Compounding these resource constraints is a low probability of success: historical data indicates that fewer than 10% of compounds entering clinical trials achieve regulatory approval. These persistent challenges underscore the urgent need for innovative methodologies capable of accelerating the research pipeline and enhancing predictive accuracy.[1]

Artificial Intelligence (AI), encompassing Machine Learning (ML), Deep Learning (DL), Natural Language Processing (NLP), and sophisticated generative models, offers powerful computational solutions to overcome these systemic barriers. By applying these techniques, AI facilitates the rapid identification of druggable targets, accelerates lead discovery through virtual screening, and optimizes molecular design by predicting critical pharmacokinetic (PK), pharmacodynamic (PD), and toxicity profiles. Furthermore, AI plays a pivotal role in clinical translation, assisting in patient stratification, adaptive protocol development, and real-time safety monitoring.

1.1 Quantitative Impact and Efficiency Metrics

The integration of AI into pharmaceutical R&D represents a paradigm shift from traditional, labor-intensive processes toward computationally driven strategies. When compared to conventional high-throughput screening, AI methodologies reduce associated costs and time while markedly increasing the accuracy of identifying viable drug candidates. In the later stages of development, AI optimizes trial designs, particularly through improved patient recruitment and monitoring, contributing to a substantial reduction in operational duration. Analysis of these applications suggests that AI can cut the length of clinical trials by an estimated 15% to 30%. Moreover, the deployment of new data-driven AI models for ADMET and toxicity predictions allows for multifactorial biological analysis, enabling safer testing parameters and more reliable model implementation in pharmaceutical translation.[2,3]

Despite these immediate efficiency gains, the community must address a critical conceptual challenge: the warning that if AI adheres strictly to the current drug discovery paradigm—one optimized for intermediate, proxy metrics such as in vitro binding affinity—it may only serve to "make failures faster and cheaper" rather than significantly improving the ultimate success rate for effective treatments against complex, incurable ailments. For AI to provide sustained improvements in final regulatory approval rates, models must transition from optimizing proxy metrics to focusing on biological relevance and complex mechanism elucidation, leveraging richer, translational datasets such as multi-omics and real-world evidence.[4]

1.2 Scope and Structure of the Review

This review consolidates major advancements in AI within the pharmaceutical sector, focusing on technical architectures, data governance needs, and the evolving regulatory landscape. Specific emphasis is placed on the application of specialized deep learning architectures, the interpretability imperative driven by Explainable AI (XAI), the foundational role of FAIR data principles, and the regulatory frameworks recently issued by the FDA and EMA.[5]

2. AI-Driven Target Identification and Validation via Multi-Omics Integration

The success of drug development is predicated on accurately identifying and validating drug targets. Traditional approaches relying on single-omics technologies (e.g., genomics alone) often fail to establish a clear causal connection between a drug, its target, and the complex phenotypes characteristic of disease. Addressing complex biological systems requires an integrated approach that captures the regulatory and interactional nuances across molecular layers.

2.1 Limitations of Single-Omics and the Need for Integration

With the advancement of large-scale sequencing and high-throughput technologies, the trend in drug-target identification has decisively shifted toward integrated multi-omics techniques. This integrated approach combines heterogeneous data layers, including genomics, transcriptomics, proteomics, metabolomics, and epigenomics. This fusion provides an unparalleled holistic view of disease pathways, moving research from population-based to individualized therapeutic strategies.

2.2 Methodologies for Multimodal Data Fusion

The complexity and heterogeneity of multimodal data pose significant analytical challenges. AI and ML techniques are specifically employed to effectively integrate and interpret these multi-omics layers. Methodologies for fusing these high-dimensional datasets are broadly categorized into three types:

  • Concatenation-based strategies: These involve the simple merging of feature vectors, though this approach frequently fails to capture the intricate cross-modal relationships.
  • Transformation-based strategies: Techniques such as autoencoders are used to project different omics layers into a shared, lower-dimensional space, facilitating the identification of latent patterns.
  • Network-based strategies: These involve constructing sophisticated biological networks (e.g., protein interaction networks) where nodes and edges are dynamically informed by multiple omics sources. Graph Neural Networks (GNNs) are then utilized to analyze these networks, effectively identifying critical disease hubs.
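The first two strategies can be sketched concretely. The toy example below (Python with NumPy, entirely synthetic data) merges two omics layers by simple concatenation, then projects the merged features into a shared low-dimensional space, using a truncated SVD as a linear stand-in for an autoencoder bottleneck:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-omics matrices for 100 samples: a transcriptomics layer
# (2000 genes) and a proteomics layer (500 proteins). Real layers would
# be normalized per assay before fusion.
rna = rng.normal(size=(100, 2000))
prot = rng.normal(size=(100, 500))

# Concatenation-based fusion: simple merging of feature vectors.
concat = np.hstack([rna, prot])            # shape (100, 2500)

# Transformation-based fusion: project the merged features into a shared
# low-dimensional latent space. A truncated SVD serves here as a linear
# stand-in for an autoencoder's bottleneck.
centered = concat - concat.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
k = 16                                     # latent dimensionality
latent = U[:, :k] * S[:k]                  # shape (100, 16)

print(concat.shape, latent.shape)
```

The latent matrix is the kind of shared representation on which downstream target-prioritization models would operate.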

Deep learning architectures are uniquely suited to uncover the hidden, non-linear correlations across these integrated layers. Furthermore, Large Language Models (LLMs) are emerging as pivotal tools for synthesizing structured omics data with complex, unstructured text-based knowledge extracted from biomedical literature and electronic health records (EHRs), enhancing the precision of target prediction (Figure 1).

2.3 Mitigation of Clinical Attrition through Multi-Omics

The successful application of multi-omics integration with AI has been shown to reduce attrition rates in late-stage clinical trials. Late-stage failures are often rooted in unforeseen toxicity or lack of efficacy, which arise when targets are validated against an incomplete view of the biological system. By providing a holistic, pathway-level understanding of the disease, AI models validate targets against comprehensive, high-dimensional data. This mechanism allows for the earlier identification and abandonment of non-viable biological pathways, resulting in reduced late-stage failure rates and significantly improving the overall investment security of the pharmaceutical R&D pipeline.[6-8]

Figure 1 (Schematic representation of multimodal omics data analysis)

3. Advanced Architectures in De Novo Molecular Design and Lead Optimization

De novo molecular design leverages generative AI to explore and generate novel chemical space, moving beyond traditional virtual screening of pre-existing compound libraries. This advanced stage is dominated by specialized deep learning architectures that handle complex chemical representations.

3.1 Graph Neural Networks (GNNs) for Chemical Representation

GNNs are the predominant deep learning architecture for processing molecular graphs, representing molecules as nodes (atoms) and edges (bonds). This representation is superior to sequential representations like SMILES strings because it intrinsically captures the non-linear, topological relationships within the molecule. GNNs iteratively refine atom and bond representations from their initial chemical features through rounds of neighborhood message passing, enabling the creation of highly meaningful structural embeddings.

However, translating a latent vector back into a valid molecular graph (graph decoding) remains computationally challenging, particularly for larger molecules. The decoder must ensure strict adherence to chemical plausibility, including correct valency, aromaticity, and charge constraints. Strategies to mitigate the computational burden include using coarse-grained graphs, where molecular fragments act as nodes, or framing generation as a stepwise node extension process.[9]
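The message-passing idea can be illustrated with a minimal, untrained sketch (NumPy, using ethanol as the example graph; the feature encoding and weight matrix are illustrative choices, not a real trained model):

```python
import numpy as np

# One message-passing round over ethanol (CCO) as a molecular graph.
# Atoms are nodes; bonds are undirected edges. Node features are a
# crude one-hot element encoding; a real GNN would use richer,
# chemistry-aware features and learned weights.
atoms = ["C", "C", "O"]
features = np.array([[1.0, 0.0],   # C
                     [1.0, 0.0],   # C
                     [0.0, 1.0]])  # O
edges = [(0, 1), (1, 2)]           # C-C and C-O bonds

# Adjacency with self-loops, so each atom keeps its own signal.
n = len(atoms)
adj = np.eye(n)
for i, j in edges:
    adj[i, j] = adj[j, i] = 1.0

# Message passing: each atom aggregates (mean of) its neighbours'
# features, then applies an (untrained) linear transform + ReLU.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))
agg = (adj / adj.sum(axis=1, keepdims=True)) @ features
h = np.maximum(agg @ W, 0.0)       # updated atom embeddings, shape (3, 4)

print(h.shape)
```

Stacking several such rounds lets each atom's embedding absorb information from progressively larger neighborhoods of the molecule.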

3.2 Generative Adversarial Networks (GANs) and Deep Reinforcement Learning (DRL)

Generative Adversarial Networks (GANs) are frequently used in combination with GNNs or Variational Autoencoders (VAEs) to enhance molecule generation. GANs operate via a competitive dual-network structure, wherein a generator attempts to create realistic molecular instances while a discriminator scores the quality of those creations. Although effective in generating novel distributions, GANs are often difficult to train and prone to mode collapse, where the generator restricts its output to a narrow range of chemical structures, thereby limiting the desired chemical diversity.

Deep Reinforcement Learning (DRL) offers an alternative for conditional design. DRL combines a generative model (often an RNN or GNN) with a reinforcement learning agent that iteratively modifies molecular structures to maximize a predefined reward function. This reward function guides the system toward specific, desirable properties, such as optimal target specificity or favorable ADMET profiles, enabling highly targeted de novo design.[10]
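The reward-guided loop below is a deliberately simplified stand-in for DRL-based optimization: the "molecule" is a bit vector and both scoring functions are hypothetical, but it shows how a composite reward steers generation toward potency while penalizing liabilities:

```python
import random

random.seed(0)

# Toy reward-guided search standing in for DRL molecule optimization.
# A "molecule" is a 16-bit feature vector; the reward mixes a mock
# potency term with a mock ADMET/synthesizability penalty. Both scoring
# functions are hypothetical, not real property predictors.
def potency(mol):
    return sum(mol[:8])              # pretend the first 8 bits drive binding

def liability(mol):
    return sum(mol[8:])              # pretend the last 8 bits hurt ADMET/SA

def reward(mol):
    return potency(mol) - 0.5 * liability(mol)

mol = [random.randint(0, 1) for _ in range(16)]
start = reward(mol)
for _ in range(300):                 # greedy mutate-and-accept loop
    i = random.randrange(16)
    cand = mol.copy()
    cand[i] ^= 1                     # flip one feature bit
    if reward(cand) >= reward(mol):
        mol = cand

print(start, "->", reward(mol))      # best achievable reward here is 8.0
```

In real DRL systems the greedy loop is replaced by a learned policy over molecular edits, and the reward aggregates trained predictors for activity, ADMET, and synthesizability.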

3.3 Evaluation Protocols and the Synthesizability Gap

Evaluating generative models requires robust metrics that move beyond simple predictive activity, as no universally accepted guidelines currently exist. Evaluation must encompass multiple factors, including chemical validity (plausibility), diversity (novelty), and critical alignment with the intended design objectives, particularly synthetic accessibility.

A profound practical challenge in this domain is the gap between theoretical validity and practical synthesizability. While a generated molecule may be chemically plausible, traditional synthetic accessibility (SA) scores frequently fail to account for real-world constraints, such as reaction selectivity, complexity, and the commercial availability of necessary building blocks. A molecule that is optimized for biological activity but requires an extraordinarily complex, low-yield synthetic route holds little value. Therefore, the trajectory of generative AI development must involve integrating reaction planning and chemical synthesis knowledge directly into the optimization process (e.g., within the DRL reward function) to ensure that generated candidates are truly accessible to medicinal chemists.

Table 1: Deep Learning Architectures for De Novo Molecular Design

  • Graph Neural Networks (GNNs). Molecular representation: graphs (atoms and bonds). Primary function/advantage: handles non-linear structure via iterative structural updates. Key technical challenge: graph decoding complexity; preserving chemical validity during reconstruction.
  • Generative Adversarial Networks (GANs). Molecular representation: graphs or SMILES strings. Primary function/advantage: generates novel, high-quality molecular distributions. Key technical challenge: training instability; mode collapse.
  • Deep Reinforcement Learning (DRL). Molecular representation: SMILES strings or graphs. Primary function/advantage: maximizes specific desirable properties through iterative reward. Key technical challenge: defining an optimal reward function; ensuring the explored space is chemically relevant.

4. Predictive Pharmacology: ADMET and Toxicology Modeling

Preclinical evaluation of Absorption, Distribution, Metabolism, Excretion, and Toxicity (ADMET) properties is a critical bottleneck, contributing significantly to the high attrition rates in drug discovery. AI methodologies are transforming this phase by providing rapid and reliable predictions (Figure 2).

Figure 2 (Flow chart representing AI-driven ADMET studies in drug development)

4.1 Transition from QSAR to Deep Learning

Modern ML-based models, especially those utilizing deep learning, have demonstrated significant efficacy in predicting key ADMET endpoints, often surpassing the performance of traditional Quantitative Structure-Activity Relationship (QSAR) models. Robust ADMET prediction platforms now integrate multi-task deep learning methodologies and graph-based molecular embeddings to enhance efficiency and reduce experimental costs. These sophisticated systems establish a strong scientific foundation for early risk assessment and compound prioritization.[11-12]
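The shared-representation idea behind multi-task ADMET models can be sketched with a two-layer linear model on synthetic descriptors (NumPy; real platforms use deep, graph-based encoders, so this only illustrates joint training of two endpoint heads):

```python
import numpy as np

rng = np.random.default_rng(1)

# Multi-task sketch: a shared linear layer feeds two ADMET "heads"
# (toy solubility- and clearance-like endpoints). Data, weights, and
# endpoints are all synthetic illustrations.
X = rng.normal(size=(200, 32))                 # 200 compounds, 32 descriptors
M = rng.normal(size=(32, 8))
Z = X @ M                                      # hidden factors shared by tasks
y = np.stack([Z.sum(axis=1), Z[:, 0] - Z[:, 1]], axis=1)  # two endpoints

W1 = rng.normal(size=(32, 8)) * 0.1            # shared representation (learned)
W2 = rng.normal(size=(8, 2)) * 0.1             # per-task output heads
mse0 = float(np.mean((X @ W1 @ W2 - y) ** 2))
lr = 1e-3
for _ in range(1000):                          # joint gradient descent
    H = X @ W1
    err = H @ W2 - y                           # residuals for both tasks
    W2 -= lr * H.T @ err / len(X)
    W1 -= lr * X.T @ (err @ W2.T) / len(X)

mse = float(np.mean((X @ W1 @ W2 - y) ** 2))
print(mse0, "->", mse)                         # joint training lowers total error
```

Because both endpoints depend on the same hidden factors, training them jointly through the shared layer W1 is the multi-task effect the text describes.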

4.2 Application of GNNs in PK/PD Simulation

Graph Neural Networks, particularly when combined with transfer learning, have shown promise in predicting complex pharmacokinetic parameters, such as oral bioavailability, by automatically extracting nuanced features from molecular structures. The predictive power of AI in toxicology and PK/PD simulations supports the reduction of animal testing and proactively mitigates the risk of late-stage clinical failures.[13]

4.3 Predictive Power through Multi-Factorial Analysis

The advancements in AI-driven ADMET modeling extend beyond single-endpoint prediction to incorporate multifactorial biological analysis of toxicity and drug behaviors. Traditional toxicology studies often isolate single properties, but failure in human trials frequently results from complex, multi-systemic interactions (e.g., the combination of metabolic instability and off-target activity). AI facilitates the simultaneous integration and modeling of these diverse factors through techniques like multi-task learning, providing a holistic prediction of safety. This comprehensive perspective allows for the implementation of safer testing parameters and ensures more reliable model deployment during the transition of a compound from preclinical investigation to clinical implementation.[14]

5. The Interpretability Imperative: Explainable AI (XAI) in Drug Discovery

The reliance on complex deep learning architectures has introduced the "black box" problem, where predictions are made without clear human-understandable justification. Interpretability, delivered by Explainable AI (XAI), is crucial for building trust among clinicians and regulators and is necessary for scientifically justifying therapeutic mechanisms.

5.1 Why Interpretability is Essential

The lack of interpretable results presents a barrier to regulatory acceptance. In high-stakes environments like medicine, decision-making absent any transparent reasoning or justification poses ethical concerns and challenges the moral responsibilities of healthcare professionals. XAI moves research beyond simple correlative prediction by helping researchers identify and explain the precise mechanism of action for drugs, thereby establishing the necessary causal understanding required for scientific validation.

5.2 Model-Agnostic XAI Techniques

Post-hoc, model-agnostic XAI techniques are essential for providing explanations for complex predictive systems:

  • SHAP (Shapley Additive Explanations): SHAP utilizes Shapley values from cooperative game theory to provide rigorous, additive explanations for model predictions, unifying several earlier attribution methods (including LIME) under one framework. It is widely used for determining feature importance in complex ML models across the drug pipeline. For example, SHAP is employed in pharmacovigilance to explain predictions of adverse drug reactions (ADRs) derived from Electronic Health Record (EHR) data.
  • LIME (Local Interpretable Model-agnostic Explanations): LIME generates localized explanations for individual predictions. Its applications range from clinical diagnostics, such as classifying and providing explanations for Parkinson's disease diagnosis using imaging data, to assessing local feature relevance in drug property predictions.
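The game-theoretic principle behind SHAP can be demonstrated from scratch: for a tiny linear model with toy weights, exact Shapley values computed by enumerating feature coalitions reduce to the familiar closed form w * (x - background):

```python
from itertools import combinations
import math

import numpy as np

# Exact Shapley attributions for one prediction of a tiny linear model,
# illustrating the principle behind SHAP. Weights, compound features,
# and background values are toy numbers; "absent" features are set to
# the background mean, as in interventional SHAP.
w = np.array([0.8, -0.5, 0.3])       # model weights
x = np.array([2.0, 1.0, -1.0])       # compound being explained
bg = np.array([0.5, 0.5, 0.5])       # background (mean) feature values

def f(mask):
    """Model output when only features in `mask` take their true values."""
    return float(w @ np.where(mask, x, bg))

n = len(x)
phi = np.zeros(n)
for i in range(n):
    others = [j for j in range(n) if j != i]
    for size in range(n):
        for S in combinations(others, size):
            mask = np.zeros(n, dtype=bool)
            mask[list(S)] = True
            gain = -f(mask)          # value without feature i ...
            mask[i] = True
            gain += f(mask)          # ... minus value with feature i added
            weight = (math.factorial(size) * math.factorial(n - size - 1)
                      / math.factorial(n))
            phi[i] += weight * gain

# For a linear model these reduce to w * (x - bg) = [1.2, -0.25, -0.45],
# and by the efficiency property they sum to f(all) - f(background).
print(phi)
```

Production SHAP libraries approximate this enumeration efficiently for non-linear models, but the attribution being computed is the same quantity.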

5.3 XAI in ADMET and Drug-Target Interaction (DTI) Studies

XAI is indispensable for improving the transparency of ADMET predictions, enabling researchers to pinpoint the exact molecular features driving a predicted toxic or metabolic outcome. Furthermore, in Drug-Target Interaction (DTI) studies, GNNs often incorporate attention mechanisms. These mechanisms provide enhanced predictive accuracy while simultaneously improving interpretability by highlighting the specific molecular interactions (atoms, bonds, residues) that are critical for binding.
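The attention-as-interpretation idea can be shown in miniature (the per-atom scores below are illustrative, not the output of a real DTI model): a softmax over atom scores yields weights that can be read as which atoms the model deems most responsible for binding:

```python
import numpy as np

# Hypothetical per-atom attention scores for a drug-target complex.
atoms = ["N1", "C2", "O3", "C4"]
scores = np.array([2.1, 0.3, 1.7, -0.5])

# Softmax turns raw scores into weights that sum to one; the largest
# weight highlights the atom the model attends to most.
attn = np.exp(scores) / np.exp(scores).sum()
top = atoms[int(np.argmax(attn))]   # "N1" receives the largest weight

print(top, np.round(attn, 2))
```

In GNN-based DTI models, such weights are produced per atom, bond, or residue, making the binding-site hypothesis inspectable by chemists.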

The rapid adoption of XAI technology marks a transition in the thematic maturity of AI research. The field is moving beyond the early focus on maximizing raw predictive accuracy to prioritizing the establishment of regulatory trust and achieving mechanism elucidation. Because regulators and clinicians require justifiable, actionable evidence, XAI is becoming a critical, mandatory requirement for the wide-scale clinical and institutional integration of AI-driven methods.[15]

6. The Emerging Role of Large Language Models (LLMs) and Protein Language Models (PLMs)

The technological advancements seen in Natural Language Processing (NLP) are now being applied to biological sequences, giving rise to specialized architectures such as Protein Language Models (PLMs).

6.1 PLMs for Protein Structure and Function Prediction

PLMs, a subset of LLMs, are revolutionizing protein science by learning the complex "language" or grammar of amino acid sequences. These models enable more efficient and highly accurate prediction of protein structure, function annotation, and the de novo design of new protein sequences (Figure 3). Models like ProGen demonstrate the ability to generate novel protein sequences with predictable functions across large protein families, akin to generating grammatically correct sentences.

Figure 3 (Illustration of a contact-based protein structure prediction model)
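The "grammar" intuition can be made concrete with a deliberately tiny stand-in for a PLM: a Laplace-smoothed bigram model over amino-acid letters, trained on invented sequences. Transformer-based PLMs learn vastly richer, longer-range statistics, but the scoring principle is analogous:

```python
from collections import Counter
import math

# Toy "protein language model": bigram statistics over amino acids.
# Training sequences are invented for illustration only.
train = ["MKTAYIAKQR", "MKKLLPTAYA", "MKTLLAYIAK"]

pairs = Counter(p for s in train for p in zip(s, s[1:]))   # bigram counts
ctx = Counter(a for s in train for a in s[:-1])            # context counts

def log_likelihood(seq, alpha=1.0, vocab=20):
    """Laplace-smoothed bigram log-likelihood of an amino-acid sequence."""
    return sum(math.log((pairs[(a, b)] + alpha) / (ctx[a] + alpha * vocab))
               for a, b in zip(seq, seq[1:]))

# A sequence that follows the training "grammar" scores higher than the
# same residues in shuffled order.
print(log_likelihood("MKTAYIAK"), log_likelihood("KAYTIAKM"))
```

Scoring how "grammatical" a candidate sequence is underlies PLM applications from function annotation to filtering de novo designs.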

6.2 PLMs in Therapeutic Design

The applications of PLMs are broad, including the rapid identification of novel drug targets and the rational design of therapeutic antibodies. By understanding the sequence-function relationship, PLMs streamline protein engineering and therapeutic optimization.

6.3 Opening the PLM "Black Box"

Similar to other deep learning modalities, a key challenge involves understanding the internal decision-making process of PLMs—specifically, which protein features drive their structural or functional predictions. Researchers are actively developing techniques to "open the black box" of PLMs, revealing the specific protein features that the model prioritizes. This enhanced transparency allows researchers to select or refine models based on their mechanistic relevance to a specific biological task. Critically, identifying the features tracked by PLMs has the potential to reveal novel biological insights that were previously not apparent to researchers, accelerating fundamental biological discovery beyond mere predictive utility.

7. Data Governance, Infrastructure, and FAIRification

The reliability and scalability of AI in drug discovery are fundamentally dependent on the quality and structure of the underlying data. Unfortunately, the current pharmaceutical R&D environment is often characterized by data fragmentation, heterogeneity, and inconsistent metadata standards.

7.1 Data Quality and Infrastructure Challenges

Most biomedical datasets lack sufficient annotation, leading to critical issues in data quality and heterogeneity that impair model reliability. Data assets are frequently scattered across fragmented IT ecosystems, residing in diverse formats and proprietary systems. This fragmentation complicates data location and access, while interoperability issues—arising from incompatible software systems and non-standard vocabularies—hinder effective integration and analysis.

7.2 The Imperative of FAIR Data Principles

The FAIR (Findable, Accessible, Interoperable, Reusable) data principles provide a rigorous, structured methodology for organizing and sharing scientific data, enabling data to become a long-term scientific asset usable by both humans and machines. Achieving interoperability, in particular, requires the consistent structuring of data and metadata using standardized formats, shared vocabularies, and formal ontologies to facilitate seamless integration and analysis across different R&D teams and organizations.
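A minimal sketch of what a FAIR-aligned metadata record might look like in practice (the field names, identifiers, URL, and ontology term are illustrative assumptions, not a formal community standard):

```python
import json

# Illustrative FAIR-style metadata record for an assay dataset.
record = {
    "identifier": "doi:10.0000/example-dataset",   # persistent identifier
    "title": "Kinase panel IC50 measurements",
    "access_url": "https://repository.example/datasets/42",
    "format": "text/csv",                          # standardized format
    "vocabulary": {"assay_type": "BAO:0000190"},   # shared-ontology term (illustrative ID)
    "license": "CC-BY-4.0",
    "provenance": {"instrument": "plate-reader", "date": "2024-06-01"},
}

# Which metadata fields support which FAIR facet (a simplification).
REQUIRED = {
    "findable": ["identifier", "title"],
    "accessible": ["access_url", "license"],
    "interoperable": ["format", "vocabulary"],
    "reusable": ["provenance", "license"],
}

def fair_check(rec):
    """Report which FAIR facets have all their required fields present."""
    return {facet: all(k in rec for k in keys)
            for facet, keys in REQUIRED.items()}

print(json.dumps(fair_check(record)))
```

Real FAIRification relies on the services the text lists (persistent identifier registries, metadata registries, ontology services); the point here is only that machine-checkable metadata is what makes data usable by both humans and machines.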

7.3 Barriers to FAIR Adoption

The "FAIRification" process encounters substantial organizational and technical obstacles. Organizational barriers include the significant financial investment required for technical infrastructure and curation, legal compliance costs related to data protection regulations, and cultural hurdles regarding data sharing and management policies. Technical barriers include the lack of necessary tools, such as persistent identifier services, metadata registries, and standardized ontology services.

Adherence to the FAIR principles is not solely an internal IT requirement; it is a strategic gateway to regulatory credibility for AI-driven drug development. Regulatory bodies, as detailed below, demand rigorous documentation of AI model validation and traceability. If the foundational data is not Findable, Accessible, or Interoperable, sponsors cannot adequately demonstrate the necessary data quality, provenance, or lack of bias required by the FDA or EMA for high-risk submissions. Thus, investing in FAIR infrastructure is a mandatory component of ensuring regulatory compliance.[16]

8. Regulatory Frameworks for AI/ML Systems (FDA and EMA Perspectives)

The increasing integration of AI across the drug life cycle, encompassing manufacturing, nonclinical, clinical, and postmarketing phases, has spurred active development of regulatory frameworks by agencies like the FDA and EMA. These frameworks aim to establish appropriate guardrails for non-deterministic and continuously learning algorithms.

8.1 Evolving Regulatory Landscape (2024–2025)

Both the FDA and EMA are committed to guiding the responsible incorporation of AI, requiring validation, transparency, and continuous monitoring to ensure patient safety and data integrity. The FDA has demonstrated a coordinated approach across its centers to address AI in medical products. The FDA’s CDER, which has observed a substantial increase in drug applications using AI components, issued key draft guidance in January 2025. Similarly, the EMA has developed tools and guidelines emphasizing the continuous monitoring and validation of AI systems, particularly in pharmacovigilance.

This regulatory focus on continuous validation and monitoring directly addresses the challenge posed by the non-deterministic and dynamic nature of AI/ML systems. Unlike traditional static validation, AI systems can change behavior post-deployment. The regulatory response emphasizes a risk-based life cycle approach and the recommendation for pre-defined change control plans, ensuring that the dynamic evolution of AI models is managed responsibly under GMP and clinical safety standards.

8.2 FDA's Risk-Based Credibility Assessment Framework (2025 Guidance)

The FDA’s 2025 draft guidance, Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products, establishes a formal framework for evaluating the credibility of AI models used to generate information supporting regulatory decisions regarding safety, effectiveness, or quality. This framework applies across numerous applications, including novel clinical trial designs, AI-enabled digital health technology, pharmacovigilance, and manufacturing.

The framework’s core principle is that the required documentation detail varies based on the AI model’s assessed risk and its Context of Use (COU). Higher-risk models, where errors could significantly impact patient safety or regulatory outcome, necessitate the submission of extensive documentation regarding validation and performance.[17]

8.3 Detailed Steps of the Credibility Assessment Framework

Sponsors are mandated to follow a structured, seven-step process to establish AI model credibility:

  • Define the Question of Interest (QoI): Clearly state the specific problem or decision the AI model addresses.
  • Define the Context of Use (COU): Specify the intended application environment (e.g., population, data sources).
  • Assess the AI Model Risk: Determine risk based on potential patient harm or impact on the regulatory decision. This step dictates the required rigor of subsequent documentation.
  • Develop a Plan to Establish Credibility: Outline rigorous validation metrics, verification methods, and data handling protocols.
  • Execute the Plan: Implement the validation plan.
  • Document Results and Deviations: Provide full documentation of performance results and transparency regarding any deviations from the original plan.
  • Determine Adequacy: Conclude whether the AI model is sufficiently credible for its defined COU, leading to the regulatory decision.
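The steps above can be sketched as a structured record in which the assessed model risk dictates documentation rigor (the risk tiers and documentation levels below are illustrative simplifications, not regulatory text):

```python
from dataclasses import dataclass

# Sketch of the credibility-assessment inputs and the risk-based rule
# that higher risk demands more extensive documentation.
@dataclass
class CredibilityPlan:
    question_of_interest: str
    context_of_use: str
    patient_impact: str       # "low" | "medium" | "high"
    decision_impact: str      # "low" | "medium" | "high"

    def model_risk(self) -> str:
        levels = ["low", "medium", "high"]
        # Risk rises with either dimension; take the higher of the two.
        return levels[max(levels.index(self.patient_impact),
                          levels.index(self.decision_impact))]

    def documentation_rigor(self) -> str:
        return {"low": "summary validation report",
                "medium": "full validation plan and results",
                "high": "extensive validation, traceability, and change control",
                }[self.model_risk()]

plan = CredibilityPlan(
    question_of_interest="Does the model flag hepatotoxic candidates?",
    context_of_use="Preclinical compound triage, human-in-the-loop",
    patient_impact="medium",
    decision_impact="high",
)
print(plan.model_risk(), "->", plan.documentation_rigor())
```

Defining the Question of Interest and Context of Use up front, as the framework requires, is what makes the risk determination, and hence the documentation burden, well defined.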

9. Conclusion and Future Directions

Artificial intelligence has established itself as a central pillar of modern pharmaceutical R&D, moving from an auxiliary tool to a driver of fundamental innovation. It has demonstrated measurable success in accelerating timelines (reducing clinical trial duration by 15% to 30%) and streamlining early-stage processes through advanced generative chemistry and multi-omics integration.

The critical future challenges are managerial and structural: ensuring high data quality through FAIRification, establishing model transparency via XAI, and achieving regulatory compliance through formalized, risk-based frameworks. The thematic shift from maximizing predictive accuracy to focusing on mechanistic justification and regulatory credibility is non-negotiable for sustained clinical adoption.

Future advancements will continue to integrate multi-omics data, including emerging single-cell and spatial techniques, to enhance personalized medicine. Furthermore, the evolution of sophisticated AI architectures, particularly PLMs and LLMs, will accelerate de novo design and automate knowledge synthesis. To realize this potential, the industry must develop collaborative frameworks linking computational experts, pharmaceutical developers, and regulators to establish harmonized global guidelines that ensure the safe, transparent, and ethical implementation of AI technologies.

REFERENCES

  1. Artificial Intelligence (AI) Applications in Drug Discovery and Drug Delivery: Revolutionizing Personalized Medicine. PubMed Central (PMC).
  2. Zhavoronkov A, Ivanenkov YA, Aliper A, Veselov MS, Aladinskiy VA, Aladinskaya AV, et al. AI-Driven Drug Discovery: A Comprehensive Review. ACS Omega. 2020;5(36):21417–21425.
  3. Liu Y, Zhang L, Chen Z, et al. AI-driven multi-omics integration for multi-scale predictive modeling of causal genotype-environment-phenotype relationships. arXiv preprint. 2024.
  4. Walters WP, Barzilay R, Jaakkola T. A survey of generative AI for de novo drug design: new frontiers in molecule and protein modeling. J Cheminform. 2023;15(1):65.
  5. U.S. Food and Drug Administration. Key takeaways from FDA's draft guidance on use of AI in drug and biological product lifecycle. FDA. 2024.
  6. Advances in Integrated Multi-omics Analysis for Drug-Target Identification. PubMed Central (PMC).
  7. Multi-Omics Data Integration with AI for Drug Discovery. Research Paper.
  8. Harnessing Artificial Intelligence in Multimodal Omics Data Integration: Paving the Path for the Next Frontier in Precision Medicine. Front Genet. 2023;14:1152387.
  9. Generative Deep Learning for de Novo Drug Design. Drug Discov Today. 2022;27(11):103366.
  10. Xie L, Chen T, Li J, et al. De novo drug design by iterative multiobjective deep reinforcement learning with graph-based molecular quality assessment. Bioinformatics. 2023;39(4).
  11. Popova M, Isayev O, Tropsha A. Advances in De Novo Drug Design: From Conventional to Machine Learning Methods. Front Pharmacol. 2021;12:791572.
  12. Leveraging machine learning models in evaluating ADMET properties for drug discovery and development. Front Pharmacol. 2024;15:12205928.
  13. Beyond Black-Box Models: Advances in Data-Driven ADMET Modeling. Technology Networks. 2024.
  14. He Z, Zhang J, Wang Y, et al. Computational toxicology in drug discovery: applications of artificial intelligence in ADMET and toxicity prediction. Brief Bioinform. 2024;26(5).
  15. Explainable Artificial Intelligence in the Field of Drug Research. Front Pharmacol. 2024;15:12129466.
  16. Kose U, Yalcin I, Yalcin A. Using Artificial Intelligence for Drug Discovery: A Bibliometric Study and Future Research Agenda. Pharmaceuticals (Basel). 2022;15(12):1492.
  17. Sheridan RP. Interpretation of Compound Activity Predictions from Complex Machine Learning Models Using Local Approximations and Shapley Values. J Med Chem. 2019;62(16):8576–8586.

Reference

  1. Artificial Intelligence (AI) Applications in Drug Discovery and Drug Delivery: Revolutionizing Personalized Medicine. PubMed Central (PMC).
  2. Zhavoronkov A, Ivanenkov YA, Aliper A, Veselov MS, Aladinskiy VA, Aladinskaya AV, et al. AI-Driven Drug Discovery: A Comprehensive Review. ACS Omega. 2020;5(36):21417–21425.
  3. Liu Y, Zhang L, Chen Z, et al. AI-driven multi-omics integration for multi-scale predictive modeling of causal genotype-environment-phenotype relationships. arXiv preprint. 2024.
  4. Walters WP, Barzilay R, Jaakkola T. A survey of generative AI for de novo drug design: new frontiers in molecule and protein modeling. J Cheminform. 2023;15(1):65.
  5. U.S. Food and Drug Administration. Key takeaways from FDA's draft guidance on use of AI in drug and biological product lifecycle. FDA. 2024.
  6. Advances in Integrated Multi-omics Analysis for Drug-Target Identification. PubMed Central (PMC).
  7. Multi-Omics Data Integration with AI for Drug Discovery. Research Paper.
  8. Harnessing Artificial Intelligence in Multimodal Omics Data Integration: Paving the Path for the Next Frontier in Precision Medicine. Front Genet. 2023; 14:1152387.
  9. Generative Deep Learning for de Novo Drug Design. Drug Discov Today. 2022;27(11):103366.
  10. Xie L, Chen T, Li J, et al. De novo drug design by iterative multiobjective deep reinforcement learning with graph-based molecular quality assessment. Bioinformatics. 2023;39(4).
  11. Popova M, Isayev O, Tropsha A. Advances in De Novo Drug Design: From Conventional to Machine Learning Methods. Front Pharmacol. 2021;12:791572.
  12. Leveraging machine learning models in evaluating ADMET properties for drug discovery and development. Front Pharmacol. 2024;15:12205928.
  13. Beyond Black-Box Models: Advances in Data-Driven ADMET Modeling. Technology Networks. 2024.
  14. He Z, Zhang J, Wang Y, et al. Computational toxicology in drug discovery: applications of artificial intelligence in ADMET and toxicity prediction. Brief Bioinform. 2024;26(5).
  15. Explainable Artificial Intelligence in the Field of Drug Research. Front Pharmacol. 2024;15:12129466.
  16. Kose U, Yalcin I, Yalcin A. Using Artificial Intelligence for Drug Discovery: A Bibliometric Study and Future Research Agenda. Pharmaceuticals (Basel). 2022;15(12):1492.
  17. Sheridan RP. Interpretation of Compound Activity Predictions from Complex Machine Learning Models Using Local Approximations and Shapley Values. J Med Chem. 2019;62(16):8576–8586.

Vivian Raj A.
Corresponding author

KMCH College of Pharmacy, Affiliated to The Tamil Nadu Dr. MGR Medical University, Coimbatore-641048, Tamil Nadu, India.

Suganth G. S.
Co-author

KMCH College of Pharmacy, Affiliated to The Tamil Nadu Dr. MGR Medical University, Coimbatore-641048, Tamil Nadu, India.

S. Dhinesh Kumar
Co-author

KMCH College of Pharmacy, Affiliated to The Tamil Nadu Dr. MGR Medical University, Coimbatore-641048, Tamil Nadu, India.

Vivian Raj A.*, Suganth G. S., S. Dhinesh Kumar, Artificial Intelligence in Modern Drug Discovery and Development: Leveraging Deep Informatics for Efficiency, Interpretability, and Regulatory Compliance, Int. J. of Pharm. Sci., 2025, Vol 3, Issue 11, 2745-2756 https://doi.org/10.5281/zenodo.17645690
