Abstract

Clinical Data Management (CDM) is a vital component of clinical research, responsible for producing high-quality, reliable, and statistically sound data from clinical trials. This phase significantly shortens the time from drug development to market launch. CDM team members are engaged throughout all stages of a clinical trial, from its inception to its conclusion. They need to possess thorough process knowledge to uphold the quality standards of CDM procedures. Key activities in CDM, such as designing and annotating Case Report Forms (CRFs), database design, data entry, data validation, discrepancy management, medical coding, data extraction, and database locking, are routinely evaluated for quality during the trial. Currently, there is a growing need to enhance CDM standards to meet regulatory demands and to stay competitive by accelerating product commercialization. The adoption of data management tools that comply with regulatory standards enables CDM teams to address these challenges. Additionally, the requirement for electronic data submission is becoming increasingly mandatory for companies. CDM professionals must adhere to high standards for data quality, meet industry expectations, and remain agile in adapting to rapidly evolving technology. This article outlines the key processes involved, offering an overview of the tools, standards, roles, and responsibilities in CDM.

Keywords

Clinical Data Interchange Standards Consortium, clinical data management systems, data management, p-CRF, good clinical data management practices, validation, database

Introduction

Clinical trials aim to answer research questions by generating data to support or refute a hypothesis. The quality of this data is crucial to the study's outcomes. Many research students often wonder, "What is Clinical Data Management (CDM), and why is it important?" CDM is a vital component of clinical trials, and researchers, knowingly or unknowingly, engage in CDM activities during their research. Even without recognizing the technical stages, we often carry out some of the processes involved in CDM during our studies. This article outlines the key processes of CDM, providing an overview of how data is managed in clinical trials. CDM involves the systematic collection, cleaning, and management of subject data, ensuring compliance with regulatory standards. The main goal of CDM is to produce high-quality data by minimizing errors and missing information while maximizing the data available for analysis. To achieve this, best practices are followed to ensure data is complete, reliable, and accurately processed. The use of specialized software, which maintains an audit trail and facilitates the identification and correction of data discrepancies, has greatly enhanced CDM. Advanced innovations have further empowered CDM to manage large-scale trials and maintain data quality, even in complex studies.[1] High-quality data is defined as data that is completely accurate and suitable for statistical analysis. It must adhere to the parameters specified in the study protocol and comply with all protocol requirements. This means that if there is a deviation from these specifications, there may be a need to exclude the patient from the final dataset.[2] However, it is important to note that in certain situations, regulatory authorities may still wish to review such data. Similarly, missing data is a significant concern for clinical researchers, and high-quality data should have minimal or no gaps. 
Most importantly, high-quality data should exhibit only a minimal, acceptable level of variation that does not impact the study's conclusions during statistical analysis. Additionally, the data must comply with the relevant regulatory standards for data quality.

PHASES OF A CLINICAL TRIAL

The main aim of a clinical trial is to investigate whether an investigational new drug (IND) is efficacious and safe in human subjects and to bring the IND to market. Thousands of molecules are screened, yet only one or two INDs reach the market after the trials. A clinical trial has four main phases: Phase I, Phase II, Phase III, and Phase IV; the overarching purpose across phases is to investigate efficacy and safety. Phase 0, or micro-dosing, studies have recently been added to clinical trials to minimize cost and duration. These studies can be very effective in determining, very early in the drug development process, whether the IND has the expected biologic effect. Phase 0 trials thus serve as a good tool for clinical researchers to test the safety and efficacy of drugs at the micro-dose level in a small number of participants before the onset of Phase I. The clinical trial phases, their purpose, and the length of each study are given in Table 1. A Phase I trial involves 20-100 volunteers; the primary goal of this study is to investigate the safety and dosage of the drug, and about 70% of drugs move on to Phase II. The maximum tolerated dose, side effects, tolerability, pharmacokinetics, and pharmacodynamics are evaluated in this phase, and the investigator finds the dose that works best without causing severe side effects. The main aim of a Phase II study is to test the efficacy and side effects of the IND; 100-300 people with the disease or condition participate. This second phase of testing can last from several months to 2 years. Most Phase II studies are randomized clinical trials, and only about 33% of drugs move on to Phase III.
Phase III is a very important and long study phase, as its purpose is to evaluate efficacy and to monitor adverse events in patients. It is the pre-marketing phase of clinical trials, and its main goal is to find out whether the new drug is better than the standard drug. In Phase III, 300-3000 patients are involved in the study. Phase II and III clinical trials are usually randomized: one group of patients receives the experimental drug, while a second "control" group receives a standard treatment or placebo. The investigator does not choose which person gets the new drug or the current standard treatment. Randomization ensures that each patient has an equal chance of receiving any of the treatments under study and generates comparable intervention groups that are alike in all important respects except for the intervention each group receives. It also provides a basis for the statistical methods used in analyzing the data. Phase III studies are the most expensive and time-consuming trials, and about 25-30% of drugs move on to Phase IV. A Phase IV study is a post-marketing surveillance study involving several thousand volunteers who have the disease or condition.[6,12]

TOOLS OF CDM

Numerous software tools are available for managing clinical trial data, known as Clinical Data Management Systems (CDMS). In multicentric trials, the use of a CDMS is crucial for handling the large volumes of data generated. While most CDMS utilized by pharmaceutical companies are commercial, a few open-source options are also available. Commonly used commercial CDMS include ORACLE CLINICAL, CLINTRIAL, MACRO, RAVE, and eClinical Suite. These tools are functionally similar, offering no substantial advantages over one another, but they are costly and require advanced Information Technology infrastructure to operate. Furthermore, some large pharmaceutical corporations develop custom CDMS tailored to their specific operational needs. Among the open-source alternatives, the most notable are OpenClinica, openCDMS, TrialDB, and PhOSCo. These tools are available free of charge and are comparable to their commercial counterparts in terms of functionality. They can be downloaded directly from their respective websites. For studies that require regulatory submission, maintaining a thorough audit trail of data management activities is critical. CDMS tools provide this capability by tracking discrepancies and recording any changes made to the data. These systems allow the creation of multiple user IDs with specific access controls, limiting access to functions such as data entry, medical coding, database design, and quality checks. This ensures that users can only interact with the data according to their assigned roles, preventing unauthorized changes. When changes are permitted, the software logs the modification, along with the user ID, date, and time, for audit purposes. During a regulatory audit, these logs enable auditors to review the discrepancy management process and confirm that all changes were authorized and accurate.
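The audit-trail behavior described above can be sketched in a few lines of code. This is a minimal illustration, not the implementation of any real CDMS; the class and field names (`AuditedRecord`, `AuditEntry`, the sample user ID) are invented for this example:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Any


@dataclass
class AuditEntry:
    """One log record: who changed what, when, and from/to which value."""
    user_id: str
    field_name: str
    old_value: Any
    new_value: Any
    timestamp: str


class AuditedRecord:
    """A data record that logs every modification instead of overwriting silently."""

    def __init__(self, data: dict):
        self._data = dict(data)
        self.audit_trail: list[AuditEntry] = []

    def update(self, user_id: str, field_name: str, new_value: Any) -> None:
        # Record the change with user ID and a time stamp, as a CDMS would
        # for later review during a regulatory audit.
        old = self._data.get(field_name)
        self._data[field_name] = new_value
        self.audit_trail.append(AuditEntry(
            user_id=user_id,
            field_name=field_name,
            old_value=old,
            new_value=new_value,
            timestamp=datetime.now(timezone.utc).isoformat(),
        ))

    def __getitem__(self, key: str) -> Any:
        return self._data[key]


# Hypothetical usage: a data entry operator corrects a weight value.
record = AuditedRecord({"subject_id": "S001", "weight_kg": 70.25})
record.update(user_id="de_op_02", field_name="weight_kg", new_value=72.50)
```

A real system would also enforce the role-based access controls mentioned above before allowing `update` to run at all.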

REGULATION, GUIDELINES AND STANDARDS IN CDM

In clinical research, Clinical Data Management (CDM) follows strict guidelines and standards due to the pharmaceutical industry's reliance on electronically captured data for evaluating medicines. To ensure data integrity, electronic records must comply with 21 CFR Part 11, a Code of Federal Regulations that applies to electronic records created, modified, maintained, archived, retrieved, or transmitted. This regulation mandates the use of validated systems to guarantee the accuracy, reliability, and consistency of data, incorporating secure, computer-generated, time-stamped audit trails that independently record the date and time of user entries and actions involving electronic records.[3] Adequate procedures and controls must be established to maintain the integrity, authenticity, and confidentiality of data. If data need to be submitted to regulatory authorities, they should be processed within 21 CFR Part 11-compliant systems. Most CDM systems are designed to meet these requirements, and both pharmaceutical companies and contract research organizations ensure compliance. The Society for Clinical Data Management (SCDM) publishes the Good Clinical Data Management Practices (GCDMP) guidelines, setting the standard for good practices within CDM. Initially published in September 2000 and last revised in July 2009, the GCDMP outlines accepted practices in CDM that align with regulatory standards, covering the entire CDM process in 20 chapters and highlighting minimum standards and best practices. The Clinical Data Interchange Standards Consortium (CDISC), a multidisciplinary non-profit organization, has developed standards to support the acquisition, exchange, submission, and archival of clinical research data and metadata. Metadata refers to data about the data entered, including information about who made the entry or change, the date and time of the entry/change, and details of the modifications.
Among these standards, the Study Data Tabulation Model Implementation Guide for Human Clinical Trials (SDTMIG) and the Clinical Data Acquisition Standards Harmonization (CDASH) are particularly important. These standards, available free of charge from the CDISC website (www.cdisc.org), guide the organization of data (SDTMIG)[4] and define the basic standards for data collection in clinical trials (CDASH v 1.1),[5] covering the essential data from clinical, regulatory, and scientific perspectives.

THE CDM PROCESS

The CDM process, like a clinical trial, begins with the end in mind. This means that the whole process is designed keeping the deliverable in view. As a clinical trial is designed to answer the research question, the CDM process is designed to deliver an error-free, valid, and statistically sound database. To meet this objective, the CDM process starts early, even before the finalization of the study protocol.

REVIEW AND FINALIZATION OF STUDY DOCUMENT

The protocol is reviewed from a database designing perspective, for clarity and consistency. During this review, the CDM personnel will identify the data items to be collected and the frequency of collection with respect to the visit schedule. A Case Report Form (CRF) is designed by the CDM team, as this is the first step in translating the protocol-specific activities into data being generated. The data fields should be clearly defined and be consistent throughout. The type of data to be entered should be evident from the CRF. For example, if weight has to be captured in two decimal places, the data entry field should have two data boxes placed after the decimal, as shown in Figure 1. Similarly, the units in which measurements have to be made should also be mentioned next to the data field. The CRF should be concise, self-explanatory, and user-friendly (unless you are the one entering data into the CRF). Along with the CRF, the filling instructions (called CRF Completion Guidelines) should also be provided to study investigators for error-free data acquisition. CRF annotation is done wherein the variable is named according to the SDTMIG or the conventions followed internally. Annotations are coded terms used in CDM tools to indicate the variables in the study. An example of an annotated CRF is provided in Figure 1. In questions with discrete value options (like the variable gender having values male and female as responses), all possible options will be coded appropriately.
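The field definitions described above (annotated variable names, units, decimal places, coded discrete options) can be sketched as a small data structure. The variable names and code lists here are purely illustrative assumptions, not taken from SDTMIG or any real annotation convention:

```python
# Hypothetical CRF field specification: each annotated variable carries its
# label, type, and, where relevant, unit/precision or a code list.
CRF_FIELDS = {
    "SEX": {  # illustrative annotation-style variable name
        "label": "Gender",
        "type": "coded",
        "codes": {"M": "Male", "F": "Female"},  # discrete options, coded
    },
    "WEIGHT": {
        "label": "Body weight",
        "type": "numeric",
        "unit": "kg",        # unit shown next to the data field
        "decimals": 2,       # two data boxes after the decimal point
    },
}


def decode(field: str, raw: str) -> str:
    """Translate a coded response back to its display value."""
    spec = CRF_FIELDS[field]
    if spec["type"] == "coded":
        return spec["codes"][raw]
    return raw
```

Defining fields this way lets the database designer generate entry screens and edit checks from a single source of truth.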


       
            Picture1.png
       

    Figure 1: Annotated sample of a paper Case Report Form (CRF). Annotations are entered as text in this figure to differentiate them from the p-CRF questions. SBP = systolic blood pressure; DBP = diastolic blood pressure; pre-stabilization [clinically significant] = Yes/No; NA = not applicable; DT = date format (for example, dd-mmm-yyyy). [BD] indicates date of birth in the date format.


Table 1: List of clinical data management activities


       
            Picture2.jpg
       

    


DATABASE DESIGNING

Databases are clinical software applications built to facilitate the CDM tasks of multiple studies.[6] Generally, these tools have built-in compliance with regulatory requirements and are easy to use. "System validation" is conducted to ensure data security, during which system specifications,[7] user requirements, and regulatory compliance are evaluated before implementation. Study details like objectives, intervals, visits, investigators, sites, and patients are defined in the database, and CRF layouts are designed for data entry. These entry screens are tested with dummy data before being moved to real data capture.

DATA COLLECTION

Data collection is done using the CRF, which may exist in paper or electronic form. The traditional method employs paper CRFs to collect the data responses, which are transcribed into the database by in-house data entry. These paper CRFs are filled in by the investigator according to the completion guidelines. In e-CRF-based CDM, the investigator or a designee logs into the CDM system and enters the data directly at the site. With the e-CRF method, the chance of errors is lower and the resolution of discrepancies is faster. Since pharmaceutical companies try to reduce the time taken for drug development by speeding up the processes involved, many are opting for e-CRFs (also called remote data entry).

CRF TRACKING

The entries made in the CRF are monitored by the Clinical Research Associate (CRA) for completeness, and the completed CRFs are retrieved and handed over to the CDM team. The CDM team tracks the retrieved CRFs and maintains their record. CRFs are checked manually for missing pages and illegible data to ensure that no data are lost. In case of missing or illegible data, a clarification is obtained from the investigator and the issue is resolved.

DATA ENTRY

Data entry takes place according to the guidelines prepared along with the Data Management Plan (DMP). This is applicable only to paper CRFs retrieved from the sites. Usually, double data entry is performed, wherein the data is entered by two operators separately.[8] The second-pass entry (the entry made by the second person) helps in verification and reconciliation by identifying transcription errors and discrepancies caused by illegible data. Moreover, double data entry yields a cleaner database compared with single data entry. Earlier studies have shown that double data entry ensures better consistency with the paper CRF, as denoted by a lower error rate.[9]
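The reconciliation step of double data entry can be sketched as a straightforward field-by-field comparison of the two passes. The record layout and field names below are invented for illustration:

```python
def reconcile(first_pass: dict, second_pass: dict) -> list:
    """Compare two independent entry passes of the same CRF page and
    return the mismatched fields as (field, first_value, second_value)."""
    mismatches = []
    # Union of keys catches fields entered in one pass but missed in the other.
    for key in first_pass.keys() | second_pass.keys():
        v1, v2 = first_pass.get(key), second_pass.get(key)
        if v1 != v2:
            mismatches.append((key, v1, v2))
    return sorted(mismatches)


# Hypothetical example: the second operator mistyped the systolic pressure.
entry_1 = {"subject_id": "S001", "sbp": 120, "dbp": 80}
entry_2 = {"subject_id": "S001", "sbp": 126, "dbp": 80}
```

Each mismatch would then be resolved against the original paper CRF before the value is accepted into the database.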

DATA VALIDATION

Data validation is the process of testing the validity of data in accordance with the protocol specifications. Edit check programs are written to identify discrepancies in the entered data; these checks are embedded in the database to ensure data validity. The programs are written according to the logic conditions specified in the Data Validation Plan (DVP) and are initially tested with dummy data containing discrepancies. A discrepancy is defined as a data point that fails to pass a validation check. Discrepancies may be due to inconsistent data, missing data, out-of-range values, or deviations from the protocol. In e-CRF-based studies, the data validation process is run frequently to identify discrepancies, which the investigators resolve after logging into the system. Ongoing quality control of data processing is undertaken at regular intervals during the course of CDM. For example, if the inclusion criteria specify that the age of the patient should be between 18 and 65 years (both inclusive), an edit check will be written for two conditions, viz. age < 18 and age > 65. If either condition is TRUE for any patient, a discrepancy is generated. These discrepancies are highlighted in the system, and Data Clarification Forms (DCFs) can be generated. DCFs are documents containing queries pertaining to the identified discrepancies.
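The age example above can be written out as a small edit check. This is a sketch only; the message wording and record layout are assumptions, and a real CDMS would express the same logic in its own edit-check language:

```python
def age_edit_check(record: dict) -> list:
    """Edit check for the inclusion criterion 'age between 18 and 65,
    both inclusive'. Returns one discrepancy message per failing condition,
    of the kind a DCF query might be raised from."""
    discrepancies = []
    age = record.get("age")
    if age is None:
        discrepancies.append("age: value missing")
    elif age < 18:   # first condition from the text
        discrepancies.append(f"age: {age} is below inclusion minimum 18")
    elif age > 65:   # second condition from the text
        discrepancies.append(f"age: {age} is above inclusion maximum 65")
    return discrepancies
```

An in-range record returns an empty list; anything else becomes a discrepancy for the discrepancy database.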

DISCREPANCY MANAGEMENT

This is also called query resolution. Discrepancy management includes reviewing discrepancies, investigating the reason, and resolving them with documentary proof or declaring them as irresolvable. Discrepancy management helps in cleaning the data and gathers enough evidence for the deviations observed in data. Almost all CDMS have a discrepancy database where all discrepancies will be recorded and stored with audit trail.

Based on the types identified, discrepancies are either flagged to the investigator for clarification or closed in-house by Self-Evident Corrections (SEC) without sending a DCF to the site. The most common SECs are obvious spelling errors. For discrepancies that require clarification from the investigator, DCFs are sent to the site. CDM tools help in the creation and printing of DCFs. Investigators write the resolution or explain the circumstances that led to the discrepancy in the data. When a resolution is provided by the investigator, it is updated in the database. In the case of e-CRFs, the investigator can access the discrepancies flagged to them and provide the resolutions online. Figure 2 illustrates the flow of discrepancy management.


       
            Picture3.jpg
       

    Figure 2: Discrepancy management (DCF = Data clarification form, CRA = Clinical Research Associate, SDV = Source document verification, SEC = Self-evident correction)


The CDM team reviews all discrepancies at regular intervals to ensure that they have been resolved. Resolved data discrepancies are recorded as 'closed', meaning that those validation failures are no longer considered active and future data validation runs will not create a discrepancy for the same data point. However, closure of discrepancies is not always possible: in some cases the investigator will not be able to provide a resolution, and such discrepancies are marked 'irresolvable' and updated in the discrepancy database. Discrepancy management is the most critical activity in the CDM process; being the vital activity in cleaning up the data, utmost attention must be paid while handling discrepancies.
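The discrepancy life cycle described above can be pictured as a small state machine. The state names and allowed transitions below are illustrative assumptions distilled from the text, not the workflow of any particular CDMS:

```python
# Assumed states: an open discrepancy is either closed in-house by a
# self-evident correction, or a DCF is sent; a sent DCF ends as 'closed'
# (investigator resolved it) or 'irresolvable'.
ALLOWED = {
    "open": {"closed_sec", "dcf_sent"},
    "dcf_sent": {"closed", "irresolvable"},
}


class Discrepancy:
    """One validation failure tracked through the query-resolution workflow."""

    def __init__(self, data_point: str, description: str):
        self.data_point = data_point
        self.description = description
        self.status = "open"

    def transition(self, new_status: str) -> None:
        # Reject transitions the workflow does not permit (e.g. closing
        # a discrepancy that was never queried or already terminal).
        if new_status not in ALLOWED.get(self.status, set()):
            raise ValueError(f"cannot go from {self.status!r} to {new_status!r}")
        self.status = new_status


# Hypothetical usage: a DCF is sent and the investigator resolves it.
d = Discrepancy("S001/visit2/sbp", "SBP outside expected range")
d.transition("dcf_sent")
d.transition("closed")
```

Terminal states ('closed', 'closed_sec', 'irresolvable') have no outgoing transitions, which mirrors the rule that a closed validation failure is no longer active.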

MEDICAL CODING

Medical coding helps in identifying and properly classifying the medical terminologies associated with the clinical trial. For classification of events, medical dictionaries available online are used. Technically, this activity needs knowledge of medical terminology, an understanding of disease entities and the drugs used, and a basic knowledge of the pathological processes involved. Functionally, it also requires knowledge of the structure of electronic medical dictionaries and the hierarchy of classifications available in them. Adverse events occurring during the study, prior and concomitantly administered medications, and pre- or co-existing illnesses are coded using the available medical dictionaries. Commonly, the Medical Dictionary for Regulatory Activities (MedDRA) is used for coding adverse events as well as other illnesses, and the World Health Organization-Drug Dictionary Enhanced (WHO-DDE) is used for coding medications. These dictionaries contain the respective classifications of adverse events and drugs in proper classes. Other dictionaries are also available for use in data management (e.g., WHO-ART, a dictionary that deals with adverse reaction terminology). Some pharmaceutical companies use customized dictionaries to suit their needs and meet their standard operating procedures. Medical coding classifies the medical terms reported on the CRF to standard dictionary terms in order to achieve data consistency and avoid unnecessary duplication. For example, investigators may use different terms for the same adverse event, but it is important to code all of them to a single standard code and maintain uniformity in the process. The right coding and classification of adverse events and medications is crucial, as incorrect coding may mask safety issues or highlight the wrong safety concerns related to the drug.
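The core idea, mapping many verbatim investigator terms to one standard code, can be sketched with a toy dictionary. The synonyms and codes below are invented for illustration and are not real MedDRA terms or codes:

```python
from typing import Optional

# Hypothetical synonym table: different verbatim terms for the same event.
SYNONYMS = {
    "heart attack": "myocardial infarction",
    "mi": "myocardial infarction",
    "high blood pressure": "hypertension",
}

# Hypothetical preferred-term codes (invented, not real MedDRA codes).
PREFERRED_TERMS = {
    "myocardial infarction": "PT0001",
    "hypertension": "PT0002",
}


def code_term(verbatim: str) -> Optional[str]:
    """Normalize a verbatim CRF term and return its standard code,
    or None if it cannot be auto-coded and needs a medical coder's review."""
    term = verbatim.strip().lower()
    term = SYNONYMS.get(term, term)  # collapse synonyms to one preferred term
    return PREFERRED_TERMS.get(term)
```

In practice, auto-coding only handles exact or synonym matches; everything returning `None` goes to a human coder who understands the dictionary hierarchy.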

DATABASE LOCKING

After a proper quality check and assurance, the final data validation is run. If there are no discrepancies, the SAS datasets are finalized in consultation with the statistician. All data management activities should have been completed prior to database lock. To ensure this, a pre-lock checklist is used and completion of all activities is confirmed. This is done as the database cannot be changed in any manner after locking. Once the approval for locking is obtained from all stakeholders, the database is locked and clean data is extracted for statistical analysis. Generally, no modification in the database is possible. But in case of a critical issue or for other important operational reasons, privileged users can modify the data even after the database is locked. This, however, requires proper documentation and an audit trail has to be maintained with sufficient justification for updating the locked database. Data extraction is done from the final database after locking. This is followed by its archival.
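The pre-lock checklist logic above can be sketched as a simple completeness gate; the checklist item names here are illustrative assumptions, not a standard list:

```python
# Hypothetical pre-lock checklist: all activities must be confirmed complete
# before the database can be locked, since no changes are allowed afterward.
PRE_LOCK_CHECKLIST = [
    "all CRF pages received",
    "all discrepancies closed or marked irresolvable",
    "medical coding completed",
    "final data validation run",
    "stakeholder approvals obtained",
]


def can_lock(completed: set) -> tuple:
    """Return (ok, pending_items). Lock only when every item is complete."""
    pending = [item for item in PRE_LOCK_CHECKLIST if item not in completed]
    return (len(pending) == 0, pending)


# Example: one activity is still outstanding, so locking is blocked.
ok, pending = can_lock({
    "all CRF pages received",
    "medical coding completed",
    "final data validation run",
    "stakeholder approvals obtained",
})
```

Only when `can_lock` reports no pending items would the privileged lock step proceed, followed by data extraction and archival.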

ROLES AND RESPONSIBILITIES IN CDM

In a CDM team, different roles and responsibilities are attributed to the team members. The minimum educational requirement for a CDM team member is a degree in life sciences and knowledge of computer applications. Ideally, medical coders should be medical graduates, although in the industry paramedical graduates are also recruited as medical coders. Some key roles are essential to all CDM teams; the list below can be considered the minimum requirement for a CDM team:

  • Data Manager
  • Database Programmer/Designer
  • Medical Coder
  • Clinical Data Coordinator
  • Quality Control Associate
  • Data Entry Associate

The data manager is responsible for supervising the entire CDM process: he or she prepares the DMP, approves the CDM procedures and all internal documents related to CDM activities, and controls and allocates database access for team members. The database programmer/designer performs the CRF annotation, creates the study database, and programs the edit checks for data validation; he or she is also responsible for designing the data entry screens in the database and validating the edit checks with dummy data. The medical coder codes the adverse events, medical history, co-illnesses, and concomitant medications administered during the study. The clinical data coordinator designs the CRF, prepares the CRF completion instructions, and is responsible for developing the DVP and for discrepancy management; all other CDM-related documents, checklists, and guideline documents are also prepared by the clinical data coordinator. The quality control associate checks the accuracy of data entry and conducts data audits.[10] Sometimes a separate quality assurance person conducts the audit on the entered data. Additionally, the quality control associate verifies the documentation pertaining to the procedures being followed. The data entry personnel track the receipt of CRF pages and perform the data entry into the database.

CONCLUSION

Clinical Data Management (CDM) has evolved to meet the growing demands of pharmaceutical companies aiming to expedite drug development and the expectations of regulatory authorities to ensure quality systems are in place for generating reliable data for accurate drug evaluation. This evolution has seen a gradual transition from paper-based to electronic data management systems. Technological advancements have significantly improved the CDM process, leading to faster and higher-quality data generation. However, CDM professionals must maintain high standards to enhance data quality.[11] As a specialized field, CDM should be assessed based on the systems and processes implemented, along with the standards adhered to. The primary challenge from a regulatory standpoint is the standardization of data management processes across organizations and the establishment of regulations to define proper procedures and data standards. For the industry, the main obstacle lies in planning and implementing data management systems in an operational environment where rapid technological advancements can render existing infrastructure obsolete. Despite these challenges, CDM is evolving into a standardized clinical research discipline by balancing expectations with the limitations of current systems, driven by technological advancements and industry demands.

REFERENCES

  1. Gerritsen MG, Sartorius OE, vd Veen FM, Meester GT. Data management in multi-center clinical trials and the role of a nation-wide computer network. A 5 year evaluation. Proc Annu Symp Comput Appl Med Care. 1993:659–62.
  2. Lu Z, Su J. Clinical data management: Current status, challenges, and future directions from industry perspectives. Open Access J Clin Trials. 2010;2:93–105.
  3. CFR - Code of Federal Regulations Title 21 [Internet]. Maryland: Food and Drug Administration. [Updated 2010 Apr 4; Cited 2011 Mar 1]. Available from: http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?fr=11.10
  4. Study Data Tabulation Model [Internet]. Texas: Clinical Data Interchange Standards Consortium; c2011. [Updated 2007 Jul; Cited 2011 Mar 1]. Available from: http://www.cdisc.org/sdtm
  5. CDASH [Internet]. Texas: Clinical Data Interchange Standards Consortium; c2011. [Updated 2011 Jan; Cited 2011 Mar 1]. Available from: http://www.cdisc.org/cdash
  6. Fegan GW, Lang TA. Could an open-source clinical trial data-management system be what we have all been looking for? PLoS Med. 2008;5:e6.
  7. Kuchinke W, Ohmann C, Yang Q, Salas N, Lauritsen J, Gueyffier F, et al. Heterogeneity prevails: The state of clinical trial data management in Europe - results of a survey of ECRIN centres. Trials. 2010;11:79.
  8. Cummings J, Masten J. Customized dual data entry for computerized data analysis. Qual Assur. 1994;3:300–3.
  9. Reynolds-Haertle RA, McBride R. Single vs. double data entry in CAST. Control Clin Trials. 1992;13:487–94.
  10. Ottevanger PB, Therasse P, van de Velde C, Bernier J, van Krieken H, Grol R, et al. Quality assurance in clinical trials. Crit Rev Oncol Hematol. 2003;47:213–35.
  11. Haux R, Knaup P, Leiner F. On educating about medical data management - the other side of the electronic health record. Methods Inf Med. 2007;46:74–9.
  12. Clinical trials [Internet]. Available from: https://www.nhlbi.nih.gov/studies/clinicaltrials. [Last accessed on 2016 Jun 07].


Akash Bhagwan Aher
Corresponding author
Department of Technology, Savitribai Phule Pune University, Pune.

Atharv Chagan Gadhe
Co-author
Department of Technology, Savitribai Phule Pune University, Pune.

Prashant Vilas Bhakare
Co-author
Department of Technology, Savitribai Phule Pune University, Pune.

Prof. Shweta Kahar
Co-author
Department of Technology, Savitribai Phule Pune University, Pune.

Prof. Kranti Pakhare
Co-author
Department of Technology, Savitribai Phule Pune University, Pune.

Prof. Shweta Chavan
Co-author
Department of Technology, Savitribai Phule Pune University, Pune.

Akash Bhagwan Aher , Atharv Chagan Gadhe , Shweta Kahar , Shweta Atkar , Kranti Pakhare, Management Strategies And Treatment Modalities Of Oral Health Issues During Pregnancy – A Comprehensive Review, Int. J. of Pharm. Sci., 2024, Vol 2, Issue 9, 688-696. https://doi.org/10.5281/zenodo.13756082
