The document describes CDISC's Study Data Tabulation Model (SDTM), which provides a fundamental model for organizing clinical trial data based on observations of discrete pieces of information (variables). SDTM defines general classes of observations (Events, Findings, Interventions) and variable roles including topic, identifier, timing, and qualifier variables. It discusses CDISC SDTM domains as SAS dataset implementations with optimizations for data exchange and medical review. Controlled terminologies and code lists are also described, as well as the define.xml specification for machine-readable metadata submission.
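The variable roles above can be made concrete with a small sketch. This is a hypothetical, simplified example, not actual study data: the study and subject identifiers are invented, and only a handful of standard AE variable names are shown.

```python
# Hypothetical SDTM-style Adverse Events (AE) records illustrating the
# variable roles named above (identifiers, topic, timing, qualifier).
# Values such as "XYZ-001" are invented for the sketch.
ae = [
    {"STUDYID": "XYZ-001", "USUBJID": "XYZ-001-0007", "AESEQ": 1,   # identifiers
     "AETERM": "HEADACHE",                                          # topic
     "AESTDTC": "2008-03-14",                                       # timing (ISO 8601)
     "AESEV": "MILD"},                                              # qualifier
    {"STUDYID": "XYZ-001", "USUBJID": "XYZ-001-0007", "AESEQ": 2,
     "AETERM": "NAUSEA",
     "AESTDTC": "2008-03-20",
     "AESEV": "MODERATE"},
]

ROLES = {
    "STUDYID": "Identifier", "USUBJID": "Identifier", "AESEQ": "Identifier",
    "AETERM": "Topic",
    "AESTDTC": "Timing",
    "AESEV": "Qualifier",
}

def check_record(rec):
    """An observation is self-describing when it carries at least one
    identifier, one topic and one timing variable."""
    roles = {ROLES[v] for v in rec}
    return {"Identifier", "Topic", "Timing"} <= roles

assert all(check_record(r) for r in ae)
```

Each row is one observation; the qualifier variables (here AESEV) refine the topic without being needed to identify the record.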
SDTM (Study Data Tabulation Model) defines a standard structure for human clinical trial (study) data tabulations and for nonclinical study data tabulations that are to be submitted as part of a product application to a regulatory authority such as the United States Food and Drug Administration (FDA).
SDTM (Study Data Tabulation Model) defines a standard for organizing and formatting data to streamline the collection, management, analysis and reporting of human clinical trial data tabulations and of non-clinical study data tabulations that are to be submitted as part of a product application (IND or NDA) to a regulatory authority such as the United States Food and Drug Administration (FDA) or Japan's Pharmaceuticals and Medical Devices Agency (PMDA).
The document discusses the Clinical Data Interchange Standards Consortium (CDISC) and its Study Data Tabulation Model (SDTM). CDISC develops standards to support clinical research data exchange and submission. SDTM defines a standard structure for study data tabulations submitted to regulators. The document outlines key aspects of SDTM including its implementation guide, fundamentals, observation classes, special purpose domains, trial design model, relationship datasets, metadata, controlled terminology, and date/time variables.
Presentation on CDISC-SDTM guidelines, by Khushbu Shah
This document provides an overview of CDISC (Clinical Data Interchange Standards Consortium) and SDTM (Study Data Tabulation Model). It defines these standards, their purpose in establishing common data formats for clinical research, and key concepts in SDTM like domains, variables, qualifiers and time standards. The document also provides examples of how SDTM organizes data from a clinical trial, including adverse events, trial design, and standards for related records.
The document summarizes the Study Data Tabulation Model (SDTM), which defines a standard structure for submitting human clinical trial data to regulatory authorities. SDTM organizes data into domains based on three general classes: Interventions, Events, and Findings. Each observation within a domain contains identifier, topic, timing, and qualifier variables to describe essential details. Qualifier variables are further categorized, and the Submission Metadata Model specifies seven attributes for each variable to ensure consistent interpretation.
Professor Jon Patrick
Health Information Technology Research Laboratory (HITRL - www.it.usyd.edu.au/~hitru)
School of Information Technologies
University of Sydney
(P39, 17/10/08, Systems & Methods stream, 1.50pm)
Integrated Summary of Safety and Integrated Summary of Effectiveness, by Sayan Das
In my presentation, I discuss what both these summaries are, the potential challenges of creating these summaries, and how these summaries can be incorporated into the Common Technical Document (CTD).
For those interested in learning more about this vital topic, I invite you to check out my presentation for an in-depth, comprehensive analysis.
The Logical Model Designer - Binding Information Models to Terminology, by Snow Owl
This presentation demonstrates the functionality provided by the Logical Model Designer (LMD) and Snow Owl tools, which enables terminology to be bound to the Singapore Logical Information Model.
Abstract:
A critical enabler in the journey towards semantic interoperability in Singapore is the Singapore 'Logical Information Model' (LIM). The LIM is a model of the healthcare information shared within Singapore, and is defined as a set of reusable 'archetypes' for each clinical concept (e.g. Problem/Diagnosis, Pharmacy Order). These archetypes are then constrained and composed into 'templates' to support specific use cases.
The Singapore LIM harmonises the semantics of the information structures with the terminology, using multiple types of terminology bindings, including semantic, value domain and constraint bindings. Value domain bindings are defined both to national 'reference terminology' (used for querying nationally-collated data) and to a variety of 'interface terminologies' used within local clinical systems (required to enforce conformance-compliance rules over message specifications generated from the LIM). To support the diversity of pre-coordination captured in local interface terms, 'design patterns' are included in the LIM, based on the SNOMED CT concept model. These design patterns represent a logical model of meaning for a specific concept, and allow more than one split between the information model and the terminology model to be represented in a semantically-consistent manner.
This presentation will demonstrate the 'Logical Model Designer' (LMD) - an Eclipse-based tool that is being used to maintain Singapore's Logical Information Model. A number of features of the LMD tooling will be demonstrated, with a specific focus on how the information structure is bound to the terminology via an interface to the Snow Owl platform. Value Domains are defined as reference sets within Snow Owl and then linked to the information structures defined in the LMD.
Please see our website http://b2i.sg for further information.
Standards for clinical research data - steps to an information model (CRIM), by Wolfgang Kuchinke
Standards for clinical research data: Introduction to the CDISC standards CDASH, SHARE, PRM and BRIDG and their evaluation to create an information model for clinical research (CRIM). In particular, CRIM should allow the integrative usage of medical care data together with clinical research data; it should support the processes of the Learning Health System (LHS).
CDASH is Clinical Data Acquisition Standards Harmonization; it identifies the basic data collection fields needed from a clinical, scientific and regulatory perspective to enable more efficient data collection at the Investigator sites. SHARE is a globally accessible electronic library built on a common information model, which enables precise and standardized data element definitions that can be used in studies and applications to improve biomedical research. SHARE is intended to be a healthcare‐biomedical research enriched data dictionary. The Protocol Representation Model (PRM) focuses on the characteristics of a clinical study and the definitions and association of activities within the protocols and defines over 100 common protocol elements. The BRIDG Model is an instance of the Domain Analysis Model. The dynamic component of BRIDG defines the various processes and dynamic behaviour of the domain; the static component describes the concepts, attributes, and relationships of the static constructs which collectively define a domain-of-interest.
The CRIM was developed based on activity models and use cases. CRIM specifies the necessary information objects, their relationships and associated activities. It is required to fully support the development of the TRANSFoRm project's tools for the Learning Health System. All activity objects of the workflows were defined and characterized according to their data requirements and information needs, and mapped to the concepts of established information models, including the above-mentioned CDISC standards.
The best mapping results were achieved with PCROM, and it was decided to use PCROM as the basis for the development of CRIM. The comparison of PCROM with BRIDG found a significant overlap of concepts, but also several areas important to research that were either not yet represented or represented quite differently in BRIDG. Adaptation of PCROM to the needs of CRIM was achieved by adding 14 information object types from BRIDG, two extensions of existing objects, and the introduction of two new high-ranking concepts (CARE area and ENTRY area).
The New Health Catalyst 2.0 Platform and Products, by Health Catalyst
Listen to Health Catalyst Co-Founder Tom Burton explain Health Catalyst 2.0 -- a new platform and application framework that significantly increases the scalability of the Late-Binding (TM) Data Warehouse and the number of Analytic Applications available from Health Catalyst.
In particular, see product demos and learn more about the specific suite of analytic products under the three main categories: Foundational Applications, Discovery Applications, and Advanced Applications.
Leveraging Oracle's Life Sciences Data Hub to Enable Dynamic Cross-Study Anal..., by Perficient
This document discusses leveraging Oracle's Life Sciences Data Hub to enable dynamic cross-study analysis. It provides an overview of dynamic analytics and a systematic four-stage approach: 1) data preparation, 2) data selection and exploration, 3) model building and analytics, and 4) deployment and reuse. Key aspects of each stage are described, including conforming data, interactively subsetting data, selecting and building analytical models, and creating reusable analysis components. The proposed environment incorporates the Oracle Life Sciences Data Hub, SAS, and other tools. BioPharm Services are also briefly described to support integration and analytics.
This document discusses an adaptive clinical trial design that was used in a phase III oncology study. The particular adaptation was an unblinded sample size re-estimation based on interim analysis results. This required changes to the SDTM and ADaM data models to account for the interim analysis cut-off dates. The reviewer guides were also updated to explain how to identify patients in the interim analysis and which analysis datasets to use for re-calculating results based on the interim and final cut-offs.
This document discusses considerations for creating SDTM trial design datasets. It provides an overview of the trial design model, including the trial elements, arms, epochs and visits datasets. It also presents case studies of relatively simple and more complex clinical trial designs. Challenges in modeling more complicated trial designs are examined, such as a study with a double-blind period followed by an open-label extension. The paper aims to help sponsors and vendors properly represent clinical trial protocols in SDTM format.
The document discusses the Analysis Data Model (ADaM), which is used to standardize the organization of clinical trial data for statistical analysis. ADaM has two main data structures: the Subject-Level Analysis Dataset (ADSL), which contains one record per subject, and the Basic Data Structure (BDS), which can have multiple records per subject. BDS includes variables for subject identifiers, treatments, timings, analysis parameters, and other metadata. Using ADaM makes clinical trial data analysis-ready and traceable. It allows statisticians to perform various analyses, such as survival analysis and comparisons between treatment groups, using standard SAS procedures without additional data manipulation.
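The ADSL/BDS distinction can be sketched with invented data. Subject IDs, parameter codes and values below are illustrative placeholders, and the change-from-baseline derivation is a minimal stand-in for the kind of pre-derived, analysis-ready variable the summary describes.

```python
# Hypothetical sketch of the two ADaM structures: ADSL holds exactly one
# record per subject; BDS holds one record per subject, parameter and
# analysis timepoint. All values are invented.
adsl = [
    {"USUBJID": "001", "TRT01P": "DRUG A",  "AGE": 54},
    {"USUBJID": "002", "TRT01P": "PLACEBO", "AGE": 61},
]

bds = [  # e.g. a vital-signs-like analysis dataset
    {"USUBJID": "001", "PARAMCD": "SYSBP", "AVISIT": "BASELINE", "AVAL": 142.0},
    {"USUBJID": "001", "PARAMCD": "SYSBP", "AVISIT": "WEEK 4",   "AVAL": 131.0},
    {"USUBJID": "002", "PARAMCD": "SYSBP", "AVISIT": "BASELINE", "AVAL": 128.0},
]

# ADSL rule: one record per subject.
subjects = [r["USUBJID"] for r in adsl]
assert len(subjects) == len(set(subjects))

# BDS: multiple records per subject; (subject, parameter, timepoint)
# identifies each analysis value.
keys = [(r["USUBJID"], r["PARAMCD"], r["AVISIT"]) for r in bds]
assert len(keys) == len(set(keys))

# "Analysis-ready": derive change from baseline once, so downstream
# analyses read it off rather than recomputing it.
baseline = {(r["USUBJID"], r["PARAMCD"]): r["AVAL"]
            for r in bds if r["AVISIT"] == "BASELINE"}
for r in bds:
    r["CHG"] = r["AVAL"] - baseline[(r["USUBJID"], r["PARAMCD"])]
```

The pre-derived CHG column is what makes the dataset traceable and directly usable by standard procedures.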
Standardized MedDRA Queries (SMQs) are groupings of MedDRA terms related to a defined medical condition or area of interest. They are intended to help identify medical cases in a standardized way. SMQs benefits include being applicable across therapeutic areas and providing consistent retrieval of safety information. Some limitations are that not all topics are covered and SMQs continue to be refined. Terms in SMQs have statuses and SMQs can be hierarchical with subordinate levels. SMQs undergo testing before release and continue to be updated. They can be used for clinical trials, post-marketing safety monitoring, and signal detection.
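The narrow/broad retrieval idea can be sketched as follows. The SMQ name and member terms here are invented placeholders rather than actual MedDRA content (MedDRA terms are licensed); real SMQs also carry per-term status and can nest into subordinate SMQs.

```python
# Hypothetical SMQ-style grouping: each query lists member terms at two
# scopes. "narrow" favours specificity; "broad" adds less specific terms.
SMQ = {
    "Example hepatic SMQ": {
        "narrow": {"TERM_A", "TERM_B"},
        "broad":  {"TERM_A", "TERM_B", "TERM_C"},
    },
}

cases = [
    {"CASEID": 1, "PT": "TERM_B"},
    {"CASEID": 2, "PT": "TERM_C"},
    {"CASEID": 3, "PT": "TERM_X"},
]

def retrieve(cases, smq_name, scope="narrow"):
    """Return IDs of cases whose coded term falls in the chosen SMQ scope."""
    terms = SMQ[smq_name][scope]
    return [c["CASEID"] for c in cases if c["PT"] in terms]

assert retrieve(cases, "Example hepatic SMQ", "narrow") == [1]
assert retrieve(cases, "Example hepatic SMQ", "broad") == [1, 2]
```

Because the grouping is defined once and shared, two organizations running the same SMQ at the same scope retrieve the same cases, which is the standardization benefit the summary describes.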
In this presentation, Principal Statistical Scientist Ben Vaughn explains how clinical trial data moves from collection in the case report form to its presentation to FDA.
Considerations for an SDTM Compliant Study Definition, by Perficient
This document summarizes a presentation on considerations for creating an SDTM-compliant study definition in Oracle Clinical. It discusses the history of the SDTM standard and challenges of modeling SDTM domains vertically in Oracle Clinical, such as the VS domain. It proposes solutions like modeling domains horizontally then transforming, or using separate collection and reporting question groups connected by a derivation routine. It also notes issues around SAS naming conventions and date formats that require transformation for SDTM compliance.
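The horizontal-then-transform approach mentioned above can be sketched with a minimal reshape. Column names and test codes below follow common SDTM conventions but the data and field selection are invented for illustration.

```python
# Sketch of reshaping a horizontally collected vital-signs row (one column
# per measurement, as a data-entry screen might capture it) into the
# vertical SDTM VS layout (one row per measurement). Values are invented.
horizontal = [
    {"USUBJID": "001", "VISIT": "SCREENING",
     "HEIGHT": 172, "WEIGHT": 68, "SYSBP": 120},
]

TESTS = {  # collected column -> test label
    "HEIGHT": "Height",
    "WEIGHT": "Weight",
    "SYSBP": "Systolic Blood Pressure",
}

vertical = [
    {"USUBJID": row["USUBJID"], "VISIT": row["VISIT"],
     "VSTESTCD": code, "VSTEST": label, "VSORRES": row[code]}
    for row in horizontal
    for code, label in TESTS.items()
]

for rec in vertical:
    print(rec)
```

One collected row fans out into three vertical records keyed by VSTESTCD, which is the transformation the derivation routine in the presentation would perform.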
Introduction to Oracle Clinical Overview in Clinical Data Management in Clinical Trials of Pharmaceuticals, Bio-Pharmaceuticals, Medical Devices, Cosmeceuticals and Foods.
Lessons Learned from the DICOM Standardization Effort, by MedicineAndDermatology
DICOM is an international standard for communicating medical imaging information and related data between devices. It has been developed over 18 years through collaboration between industry and clinical experts. Maintaining and evolving standards like DICOM requires significant organizational infrastructure and ongoing effort. Key aspects of the DICOM standard include its use of tagged data elements, object orientation, network services, privacy/security features, and support for structured reporting.
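The tagged-data-element idea can be illustrated briefly. The tags below are real public DICOM tags, but the in-memory representation here is a simplification for exposition, not the actual DICOM binary encoding.

```python
# Sketch of DICOM's tagged data elements: each element is identified by a
# (group, element) tag and carries a value representation (VR) plus value.
# Tags shown are standard public tags; the encoding is simplified.
elements = {
    (0x0010, 0x0010): ("PN", "DOE^JANE"),  # Patient's Name
    (0x0008, 0x0060): ("CS", "MR"),        # Modality
    (0x0028, 0x0010): ("US", 512),         # Rows
}

def get(tag):
    """Look up an element's value by its (group, element) tag."""
    vr, value = elements[tag]
    return value

assert get((0x0008, 0x0060)) == "MR"
```

Because every element is self-describing via its tag and VR, a receiver can parse data it does not fully understand, which is one reason the tag scheme has proved durable.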
Designing and launching the Clinical Reference Library, by Kerstin Forsberg
Presentation for the European Clinical Data Forum conference, 24 May 2011, describing the business problems and drivers behind the design of an ISO 11179-based metadata registry for clinical data, and introducing the features of the CRL application.
This document provides guidance on starting ADaM specification development and dataset programming. It recommends starting with ADaM subject matter experts and a well-defined specification template. It also recommends understanding the SDTM datasets, analysis keys, and Occurrence Data Structure requirements. The document outlines considerations like variable attributes and traceability when developing specifications and programming datasets. It emphasizes adhering to the ADaM Implementation Guide.
This document provides an overview of ADaM (Analysis Data Model) and recommendations for getting started with ADaM specification development, programming, and quality control (QC). It discusses:
1) Starting ADaM specification development by identifying analysis datasets needed based on the SAP and using a clear template to define required variables and attributes.
2) Beginning ADaM programming by understanding the source SDTM datasets and ADaM specifications, including how complex algorithms and derivations are defined.
3) Initiating ADaM QC by checking that analysis variables can be traced back to SDTM, comply with the ADaM IG, and support required analysis results. Simple programming and use of
Interpreting CDISC ADaM IG through Users Interpretation, by Angelo Tinazzi
This document summarizes a presentation given at the PhUSE 2013 conference titled "Interpreting CDISC ADaM IG through Users Interpretation". The presentation aimed to systematically review publications on the CDISC Analysis Data Model (ADaM) standard in order to evaluate how different organizations have implemented and interpreted ADaM. Over 100 presentations focused on ADaM implementation were identified from conferences like PharmaSUG and PhUSE. Key topics of discussion included how to map non-standard clinical domains to ADaM, determining how "analysis-ready" datasets should be, and handling listings/derived values not supported in ADaM. The presentation provided qualitative summaries of user interpretations and applications of various ADaM guidelines and
Planning And Development Of The Iss Ise Webinar Final, by Jay1818mar
This document provides a summary of a presentation on planning and developing integrated summaries of safety and efficacy data from multiple clinical trials. It discusses the purpose and requirements of integrated summaries, the planning process, special analysis considerations, and guidance documents. Key points covered include defining analysis populations and treatment groups, handling adverse events and laboratory data consistently across studies, and obtaining regulatory agency input on analysis plans.
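One point above, handling adverse events consistently across studies, can be sketched as pooling per-study records under shared treatment groups. The study names and the treatment-group mapping are invented for illustration.

```python
# Hedged sketch of pooling AE records from several studies into a single
# integrated-safety analysis set, keeping the study identifier and mapping
# per-study treatment labels onto consistent pooled groups (all invented).
study_a = [{"USUBJID": "A-001", "TRT": "Drug 10 mg", "AETERM": "HEADACHE"}]
study_b = [{"USUBJID": "B-001", "TRT": "Drug 20 mg", "AETERM": "NAUSEA"},
           {"USUBJID": "B-002", "TRT": "Placebo",    "AETERM": "HEADACHE"}]

POOLED_TRT = {"Drug 10 mg": "Any Drug", "Drug 20 mg": "Any Drug",
              "Placebo": "Placebo"}

pooled = []
for study_id, records in [("STUDY-A", study_a), ("STUDY-B", study_b)]:
    for rec in records:
        pooled.append({**rec, "STUDYID": study_id,
                       "TRTPOOL": POOLED_TRT[rec["TRT"]]})

# Consistent pooled groups make cross-study AE counts meaningful.
counts = {}
for rec in pooled:
    key = (rec["TRTPOOL"], rec["AETERM"])
    counts[key] = counts.get(key, 0) + 1
```

Defining the pooled groups up front, and getting regulatory input on them, is exactly the kind of planning decision the presentation covers.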
My presentation at http://neuroinformatics2017.org (Kuala Lumpur, Malaysia) on FAIR and FAIRsharing (previously BioSharing): metadata standards, their implementation by databases/repositories, and their adoption by journals' and funders' data policies.
Model PPT-Proposal Presentation for ou.pptx, by MadeeshShaik
This document describes the design and evaluation of buccal patches containing the drug Acrivastine. It begins with an introduction to buccal drug delivery systems, their advantages, and formulation components. The aim is then stated as preparing and evaluating buccal patches containing Acrivastine. The materials, methodology, and evaluation methods are described. A literature review discusses previous studies on buccal patches and their formulation. The document provides information on the drug Acrivastine and concludes with a list of references.
Nitrosamine impurities like NDMA and NDEA have been found in several generic drugs like ARBs which are used to treat hypertension. These impurities are classified as Class 1 mutagens and carcinogens. They can form during drug manufacturing through reactions between amines and nitrites in raw materials, solvents, or other process components. To prevent nitrosamine formation, manufacturers should avoid using amines and nitrites together, ensure raw materials and equipment are free of contamination, modify processes to remove potential impurities, and implement controls to detect and limit nitrosamines in drugs.
Similar to HCLSIG$$Drug_Safety_and_Efficacy$CDISCs_SDTM_basics.ppt (20)
Model PPT-Proposal Presentation for ou.pptxMadeeshShaik
This document describes the design and evaluation of buccal patches containing the drug Acrivastine. It begins with an introduction to buccal drug delivery systems, their advantages, and formulation components. The aim is then stated as preparing and evaluating buccal patches containing Acrivastine. The materials, methodology, and evaluation methods are described. A literature review discusses previous studies on buccal patches and their formulation. The document provides information on the drug Acrivastine and concludes with a list of references.
Nitrosamine impurities like NDMA and NDEA have been found in several generic drugs like ARBs which are used to treat hypertension. These impurities are classified as Class 1 mutagens and carcinogens. They can form during drug manufacturing through reactions between amines and nitrites in raw materials, solvents, or other process components. To prevent nitrosamine formation, manufacturers should avoid using amines and nitrites together, ensure raw materials and equipment are free of contamination, modify processes to remove potential impurities, and implement controls to detect and limit nitrosamines in drugs.
PROJECT REVIEW FINAL PPT 2018-2022 TEAM FINAL.pptxMadeeshShaik
1. The document discusses the development and validation of an analytical method to detect and quantify levels of three genotoxic nitrosamine impurities - N-nitrosodimethylamine (NDMA), N-nitrosodiethylamine (NDEA), and N-nitrosodiisopropylamine (NDIPA) - in the drug rifapentine and its dosage forms.
2. Rifapentine is an antibiotic used to treat tuberculosis, but certain samples were found to contain unacceptable levels of nitrosamine impurities. The FDA set interim limits of 0.3 ppm for nitrosamines in rifapentine.
3. The project aims to develop an accurate analytical method and
This document provides an overview of planning and design considerations for building construction. It discusses general principles like providing adequate front, rear, and side open spaces per regulations. It also outlines structural elements of buildings like columns, beams, slabs, and footings. Analysis involves determining internal forces on members from loads. Design can be done using working stress, ultimate load, or limit state methods, with the latest code emphasizing limit state. Software like Staad and AutoCAD are useful tools for analysis and drawing plans.
Solubility refers to how well a substance dissolves in a solvent. There are different types of solubilities as defined by the USP. A saturated solution contains as much solute as possible dissolved at a given temperature, while an unsaturated solution can still dissolve more solute. Solubility depends on properties like polarity and intermolecular forces. Miscibility refers to the ability of two liquids to mix in all proportions without separating into layers. Terms like freely soluble and sparingly soluble describe relative solubility. Solubility testing is done to determine product quality and specifications.
This document presents the development and validation of a new RP-UPLC method for the simultaneous estimation of lamivudine, tenofovir, and efavirenz in tablet dosage forms. The optimized method uses a C18 BEH column, a mobile phase of 35% phosphate buffer (pH 3.0) and 65% methanol, and detects the drugs at 260 nm. The method was validated per ICH guidelines and found to be specific, precise, accurate, linear, robust, and stability-indicating for the simultaneous analysis of the drugs without interference from excipients. The developed and validated RP-UPLC method provides a simple, rapid, and economical approach for the routine quality control analysis of fixed
This document discusses manipulating character and numeric values in SAS using functions. It introduces SAS functions and their syntax. Specific functions discussed include SUBSTR to extract characters, LENGTH to return the length of a character string, and PROPCASE to convert words to proper case. Examples are provided to subset data to only charities, extract ID values from account codes, and format organization names properly. The goal is to create a new dataset with the essential charity information needed for a manager's report.
This document provides instructions for the safe and proper use of common laboratory equipment. It describes items such as safety goggles, safety showers, beakers, crucibles, dessicators, dropping bottles, drying ovens, Erlenmeyer flasks, evaporating dishes, forceps, fume hoods, funnels, graduated cylinders, pipets, microscopes, mortar and pestles, spot plates, ring stands, test tubes, tongs, tripods, utility clamps, volumetric flasks, watch glasses, weighing paper, wash bottles, wire gauzes, balances, and filtering procedures. Proper techniques are outlined for measuring volumes, masses, heating substances, and performing experiments.
This document lists and describes common lab equipment used in general and dissection labs, including their purposes. It includes basic equipment like rulers, goggles, graduated cylinders, flasks, forceps, droppers, pipets, funnels, racks, test tubes, beakers, thermometers, balances, magnifying glasses, petri dishes, timers, coverslips, microscopes, wash bottles, stirring rods, scissors, scalpels, dissection pins, probes, and trays. Safety equipment mentioned are safety showers, eyewashes, fire extinguishers, fire blankets, and fume hoods. Resources for quizzes on lab equipment are also provided. The document discusses classroom work and gives homework
How to Prepare for Fortinet FCP_FAC_AD-6.5 Certification?NWEXAM
Begin Your Preparation Here: https://bit.ly/3VfYStG — Access comprehensive details on the FCP_FAC_AD-6.5 exam guide and excel in the Fortinet Certified Professional - Network Security certification. Gather all essential information including tutorials, practice tests, books, study materials, exam questions, and the syllabus. Solidify your knowledge of Fortinet FCP_FAC_AD-6.5 certification. Discover everything about the FCP_FAC_AD-6.5 exam, including the number of questions, passing percentage, and the time allotted to complete the test.
A Guide to a Winning Interview June 2024Bruce Bennett
This webinar is an in-depth review of the interview process. Preparation is a key element to acing an interview. Learn the best approaches from the initial phone screen to the face-to-face meeting with the hiring manager. You will hear great answers to several standard questions, including the dreaded “Tell Me About Yourself”.
Job Finding Apps Everything You Need to Know in 2024SnapJob
SnapJob is revolutionizing the way people connect with work opportunities and find talented professionals for their projects. Find your dream job with ease using the best job finding apps. Discover top-rated apps that connect you with employers, provide personalized job recommendations, and streamline the application process. Explore features, ratings, and reviews to find the app that suits your needs and helps you land your next opportunity.
Jill Pizzola's Tenure as Senior Talent Acquisition Partner at THOMSON REUTERS...dsnow9802
Jill Pizzola's tenure as Senior Talent Acquisition Partner at THOMSON REUTERS in Marlton, New Jersey, from 2018 to 2023, was marked by innovation and excellence.
5 Common Mistakes to Avoid During the Job Application Process.pdfAlliance Jobs
The journey toward landing your dream job can be both exhilarating and nerve-wracking. As you navigate through the intricate web of job applications, interviews, and follow-ups, it’s crucial to steer clear of common pitfalls that could hinder your chances. Let’s delve into some of the most frequent mistakes applicants make during the job application process and explore how you can sidestep them. Plus, we’ll highlight how Alliance Job Search can enhance your local job hunt.
Resumes, Cover Letters, and Applying OnlineBruce Bennett
This webinar showcases resume styles and the elements that go into building your resume. Every job application requires unique skills, and this session will show you how to improve your resume to match the jobs to which you are applying. Additionally, we will discuss cover letters and learn about ideas to include. Every job application requires unique skills so learn ways to give you the best chance of success when applying for a new position. Learn how to take advantage of all the features when uploading a job application to a company’s applicant tracking system.
IT Career Hacks Navigate the Tech Jungle with a RoadmapBase Camp
Feeling overwhelmed by IT options? This presentation unlocks your personalized roadmap! Learn key skills, explore career paths & build your IT dream job strategy. Visit now & navigate the tech world with confidence! Visit https://www.basecamp.com.sg for more details.
Joyce M Sullivan, Founder & CEO of SocMediaFin, Inc. shares her "Five Questions - The Story of You", "Reflections - What Matters to You?" and "The Three Circle Exercise" to guide those evaluating what their next move may be in their careers.
1. CDISC submission standard
• CDISC SDTM
unfolding the core model that is the basis
for the specialised dataset templates
(SDTM domains) optimised for medical
reviewers
• CDISC Define.xml
metadata describing the data exchange
structures (domains)
2. Background: CDISC SDTM’s fundamental
model for organizing clinical data
[Diagram] An Observation on a Subject has a generic structure:
• Unique identifiers
• Topic variable or parameter
• Timing variables
• Qualifiers
Observations fall into three general classes (Interventions, Findings, Events),
implemented as SDTM Domains (dataset structures) such as CM, EX, EG, IE, LB, PE, AE, DS, …
The patient/subject-focused information model of the clinical ‘reality’ (general classes of
observations on subjects: interventions, findings, events). This model has been developed by
the CDISC/SDS team and exists today only as a text description.
3. CDISC SDTM’s Domains (* new in Version 3)
Interventions: ConMeds, Exposure
Events: AE, MedHist, Disposition
Findings: ECG, PhysExam, Labs, Vitals, Demog
Other: Subj Char*, Subst Use*, Incl Excl*, RELATES*, SUPPQUAL*,
Study Sum*, Study Design*, QS*, MB*, Comments*, CP*, DV*
From CDISC SDTM Overview & Impact to AZ, 2004, by Dan Godoy, presented
at the first CDISC/SDM meeting 20 October 2004
4. Basic Concepts in CDISC SDTM
Observations and Variables
• The SDTM provides a general framework for describing the
organization of information collected during human and animal
studies.
• The model is built around the concept of observations, which
consist of discrete pieces of information collected during a study.
Observations normally correspond to rows in a dataset.
• Each observation can be described by a series of named
variables. Each variable, which normally corresponds to a
column in a dataset, can be classified according to its Role.
• Observations are reported in a series of domains, usually
corresponding to data that were collected together. A domain is
defined as a collection of observations with a topic-specific
commonality about a subject.
From the Study Data Tabulation Model document
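To make the observation/variable/domain vocabulary above concrete, here is a minimal sketch (illustrative only, not official CDISC material; the study and subject identifiers are invented) of two Vital Signs observations, one row per observation and one named variable per column, with each variable's Role noted in a comment:

```python
# Illustrative sketch: each dict is one row (observation) in a
# hypothetical Vital Signs (VS) domain; each key is a named variable,
# annotated with its Role.
vs = [
    {
        "STUDYID": "ABC-001",                 # Identifier: the study
        "DOMAIN": "VS",                       # Identifier: the domain
        "USUBJID": "ABC-001-0001",            # Identifier: the subject
        "VSSEQ": 1,                           # Identifier: record sequence number
        "VSTESTCD": "SYSBP",                  # Topic: short test code
        "VSTEST": "Systolic Blood Pressure",  # Synonym qualifier of VSTESTCD
        "VSORRES": "120",                     # Result qualifier: original result
        "VSORRESU": "mmHg",                   # Variable qualifier: units of VSORRES
        "VSDTC": "2004-07-14",                # Timing: collection date (ISO 8601)
    },
    {
        "STUDYID": "ABC-001", "DOMAIN": "VS", "USUBJID": "ABC-001-0001",
        "VSSEQ": 2, "VSTESTCD": "DIABP", "VSTEST": "Diastolic Blood Pressure",
        "VSORRES": "80", "VSORRESU": "mmHg", "VSDTC": "2004-07-14",
    },
]

# A domain is a collection of observations with a topic-specific commonality:
def rows_in_domain(rows, domain):
    """Return the observations belonging to one domain."""
    return [r for r in rows if r["DOMAIN"] == domain]
```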
5. Basic Concepts in CDISC/SDTM
Variable Roles
• A Role determines the type of information conveyed by the variable
about each distinct observation and how it can be used.
– A common set of Identifier variables, which identify the study, the subject
(individual human or animal) involved in the study, the domain, and the
sequence number of the record.
– Topic variables, which specify the focus of the observation (such as the
name of a lab test), and vary according to the type of observation.
– A common set of Timing variables, which describe the timing of an
observation (such as start date and end date).
– Qualifier variables, which include additional illustrative text, or numeric
values that describe the results or additional traits of the observation (such
as units or descriptive adjectives). The list of Qualifier variables included
with a domain will vary considerably depending on the type of observation
and the specific domain.
– Rule variables, which express an algorithm or executable method to define
start, end, or looping conditions in the Trial Design model.
From the Study Data Tabulation Model document
6. Example: Mapping Vital Signs
From CDISC End to End Tutorial, DIA Amsterdam, 7 Nov 2004, Pierre-Yves Lastic,
Sanofi-Aventis, and Philippe Verplancke, CRO24
7. CDISC’s Submission standard
• Underlying Models:
CDISC Study Data Tabulation Model
Clinical Observations
• General Classes: Events, Findings, Interventions
– Trial Design Model
• Elements, Arms, Trial Summary Parameters etc.
• Domains, submission dataset templates:
CDISC SDTM Implementation Guide
8. CDISC SDTM fundamental model for organizing data collected in
clinical trials
Concept of Observations, which consist of discrete pieces of information
collected during a study described by a series of named variables.
General Classes of Observations: Events, Findings, Interventions
Variable Roles: determine the type of information conveyed by the
variable about each distinct observation: Topic variables, Identifier
variables, Timing variables, Rule variables, and Qualifiers (Grouping,
Result, Synonym, Record, Variable)
General principles and standards
9. CDISC SDTM fundamental model for organizing data collected in
clinical trials
Concept of Observations, which consist of discrete pieces of information
collected during a study described by a series of named variables.
General Classes of Observations: Events, Findings, Interventions
Variable Roles: determine the type of information conveyed by the
variable about each distinct observation: Topic variables, Identifier
variables, Timing variables, Rule variables, and Qualifiers (Grouping,
Result, Synonym, Record, Variable)
General principles and standards
CDISC SDTM Domains
SAS Dataset implementations
(dataset templates)
e.g. Vital Signs domains
Optimisations for Data Exchange per
study and to help Medical Reviewers
understand the data more easily
Specific principles and standards such
as ISO8601 for dates/timings, and both
Original & Standard values expected
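The ISO 8601 requirement for dates/timings mentioned above can be sketched in a few lines (the raw input format here is an assumption for illustration; actual collected formats vary by sponsor, and partial/unknown dates need extra handling not shown):

```python
from datetime import datetime

def to_sdtm_dtc(raw: str, fmt: str = "%d%b%Y %H:%M") -> str:
    """Render a collected date/time string as the ISO 8601 text that
    SDTM --DTC variables expect. The input format is a made-up example."""
    return datetime.strptime(raw, fmt).strftime("%Y-%m-%dT%H:%M")

print(to_sdtm_dtc("14JUL2004 09:30"))  # 2004-07-14T09:30
```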
10. CDISC SDTM fundamental model for organizing data collected in
clinical trials
Concept of Observations, which consist of discrete pieces of information
collected during a study described by a series of named variables.
General Classes of Observations: Events, Findings, Interventions
Variable Roles: determine the type of information conveyed by the
variable about each distinct observation: Topic variables, Identifier
variables, Timing variables, Rule variables, and Qualifiers (Grouping,
Result, Synonym, Record, Variable)
General principles and standards
CDISC SDTM Domains
SAS Dataset implementations
(dataset templates)
e.g. Vital Signs domains
Optimisations for Data Exchange per
study and to help Medical Reviewers
understand the data more easily
Specific principles and standards such
as ISO8601 for dates/timings, and both
Original & Standard values expected
Identifiers of records
per dataset and study
Decoded format, that is, the
textual interpretation of
whichever code was
selected from the code list.
11. CDISC SDTM fundamental model for organizing data collected in
clinical trials
Concept of Observations, which consist of discrete pieces of information
collected during a study described by a series of named variables.
General Classes of Observations: Events, Findings, Interventions
Variable Roles: determines the type of information conveyed by the
variable about each distinct observation: Topic variables, Identifier
variables, Timing variables, Rule variables, and Qualifiers (Grouping,
Result, Synonym, Record, Variable)
General principles and standards
Optimisations for Data Exchange per
study and to help Medical Reviewers
understand the data more easily
Specific principles and standards such
as ISO8601 for dates/timings, and both
Original & Standard values expected
CDISC SDTM Domains
SAS Dataset implementations
(dataset templates)
e.g. Vital Signs domains
Controlled Terminologies
CT Packages for SDTM
e.g. Codelist Patient
Position and proposed
terms for VSTESTCD
12. CDISC SDTM fundamental model for organizing data collected in
clinical trials
Concept of Observations, which consist of discrete pieces of information
collected during a study described by a series of named variables.
General Classes of Observations: Events, Findings, Interventions
Variable Roles: determine the type of information conveyed by the
variable about each distinct observation: Topic variables, Identifier
variables, Timing variables, Rule variables, and Qualifiers (Grouping,
Result, Synonym, Record, Variable)
General principles and standards
Optimisations for Data Exchange per
study and to help Medical Reviewers
understand the data more easily
Specific principles and standards such
as ISO8601 for dates/timings, and both
Original & Standard values expected
CDISC SDTM Domains
SAS Dataset implementations
(dataset templates)
e.g. Vital Signs domains
Controlled Terminologies
CT Packages for SDTM
e.g. Codelist Patient
Position and proposed
terms for VSTESTCD
CDISC Codelist Specification: VSTEST
  Codelist_Name: VSTEST
  Codelist_Label: Vital Signs Test Name
  Upper_Case: Y
  Restriction_8char: Y
  Extensible_NY: Y
  Reference_Description: Organization Name: CDISC; Document Title: Study Data
    Tabulation Model Implementation Guide: Human Clinical Trials; Document
    Version: 1.01; Date: 2004-07-14; Chapter: 10.3.3 Vital Signs Test Codes;
    Page: 169
  Reference_URL: http://www.cdisc.org/models/sds/v3.1/index.html
CDISC Codelist Values (--TESTCD / --TEST):
  WEIGHT   Weight
  HEIGHT   Height
  HR       Heart Rate
  PULSE    Pulse Rate
  SYSBP    Systolic Blood Pressure
  DIABP    Diastolic Blood Pressure
  RESP     Respiratory Rate
  TEMP     Temperature
  FRMSIZE  Frame Size (comment: SIZECD)
  BMI      Body Mass Index
  BSA      Body Surface Area
  BODYFAT  Body Fat
  MAP      Mean Arterial Pressure

CDISC Codelist Specification: VSRESU
  Codelist_Name: VSRESU
  Codelist_Label: Units for Vital Signs Results
  Upper_Case: N
  Restriction_8char: N
  Extensible_NY: Y
  Reference_Description: Not applicable
  Reference_URL: Not applicable
CDISC Codelist Values:
  kg, lb, cm, in, mmHg, beats/min, kg/m2, m2, C, F, breath/min, g, %, Ohm

CDISC Codelist Specification: RACE
  Codelist_Name: RACE
  Codelist_Label: Race
  Upper_Case: Y
  Restriction_8char: N
  Extensible_NY: N
  Reference_Description: Organization Name: FDA; Document Title: Draft Guidance
    for Industry: Collection of Race and Ethnicity Data in Clinical Trials;
    Date: January 2003; Chapter: III Collecting Race and Ethnicity Data in
    Clinical Trials; Codelist_Name: Race; Page: 5
  Reference_URL: http://www.fda.gov/cder/guidance/5054dft.pdf
CDISC Codelist Values (each an FDA code):
  AMERICAN INDIAN OR ALASKA NATIVE
  ASIAN
  BLACK OR AFRICAN AMERICAN
  NATIVE HAWAIIAN OR OTHER PACIFIC ISLANDER
  WHITE
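A sketch of how such a codelist might be used programmatically (the test codes are transcribed from the VSTEST list above; the helper name is mine, not CDISC's):

```python
# Controlled terms transcribed from the VSTEST codelist above (extensible).
VSTESTCD_TERMS = {
    "WEIGHT", "HEIGHT", "HR", "PULSE", "SYSBP", "DIABP", "RESP",
    "TEMP", "FRMSIZE", "BMI", "BSA", "BODYFAT", "MAP",
}

def unknown_terms(values, codelist):
    """Return submitted values not present in the controlled terminology.
    For an extensible codelist (Extensible_NY = Y) these may be legitimate
    sponsor-defined additions; for a non-extensible one they are errors."""
    return sorted({v for v in values if v not in codelist})

print(unknown_terms(["SYSBP", "DIABP", "GIRTH"], VSTESTCD_TERMS))  # ['GIRTH']
```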
13. CDISC SDTM fundamental model for organizing data collected in
clinical trials
Concept of Observations, which consist of discrete pieces of information
collected during a study described by a series of named variables.
General Classes of Observations: Events, Findings, Interventions
Variable Roles: determine the type of information conveyed by the
variable about each distinct observation: Topic variables, Identifier
variables, Timing variables, Rule variables, and Qualifiers (Grouping,
Result, Synonym, Record, Variable)
General principles and standards
Optimisations for Data Exchange per
study and to help Medical Reviewers
understand the data more easily
Specific principles and standards such
as ISO8601 for dates/timings, and both
Original & Standard values expected
CDISC SDTM Domains
SAS Dataset implementations
(dataset templates)
e.g. Vital Signs domains
define.xml
Case Report Tabulation Data Definition Specification
to submit the Data Definition Document (submission
dataset metadata) in a machine-readable format
Controlled Terminologies
CT Packages for SDTM
e.g. Codelist Patient
Position and proposed
terms for VSTESTCD
14. CDISC SDTM fundamental model for organizing data collected in
clinical trials
Concept of Observations, which consist of discrete pieces of information
collected during a study described by a series of named variables.
General Classes of Observations: Events, Findings, Interventions
Variable Roles: determine the type of information conveyed by the
variable about each distinct observation: Topic variables, Identifier
variables, Timing variables, Rule variables, and Qualifiers (Grouping,
Result, Synonym, Record, Variable)
General principles and standards
Optimisations for Data Exchange per
study and to help Medical Reviewers
understand the data more easily
Specific principles and standards such
as ISO8601 for dates/timings, and both
Original & Standard values expected
CDISC SDTM Domains
SAS Dataset implementations
(dataset templates)
e.g. Vital Signs domains
define.xml
Case Report Tabulation Data Definition Specification
to submit the Data Definition Document (submission
dataset metadata) in a machine-readable format
Controlled Terminologies
CT Packages for SDTM
e.g. Codelist Patient
Position and proposed
terms for VSTESTCD
CRTDDS = Case Report Tabulation Data Definition
Specification (an ODM extension, also known
as "define.xml") will replace define.pdf in e-CTD
[Diagram: define.xml structure, in which ItemGroups contain Items and an
Item may reference a ValueList]
<ItemDef OID="SU.SUTRT.SMKCLASS" Name="SMKCLASS" DataType="integer" Length="8"
         Origin="CRF Page" Comment="Substance Use CRF Page 4" def:Label="Smoking classification">
  <CodeListRef CodeListOID="SMKCLAS"/>
</ItemDef>
<CodeList OID="SMKCLAS" Name="SMKCLAS" DataType="integer">
  <CodeListItem CodedValue="1">
    <Decode>
      <TranslatedText xml:lang="en">NEVER SMOKED</TranslatedText>
    </Decode>
  </CodeListItem>
  <CodeListItem CodedValue="2">
    <Decode>
      <TranslatedText xml:lang="en">SMOKER</TranslatedText>
    </Decode>
  </CodeListItem>
  <CodeListItem CodedValue="3">
    <Decode>
      <TranslatedText xml:lang="en">EX SMOKER</TranslatedText>
    </Decode>
  </CodeListItem>
</CodeList>
define.xml as machine-readable replacement for define.pdf
(previously called Data Definition Tables, item 11)
> Needs complete syntax to reference external lists
> And to reference sponsor-defined code lists across studies
From Randy Levin’s presentation, see
http://www.cdisc.org/publications/interchange2005/session8/JANUS2005.pdf
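As a rough illustration of why machine-readable metadata matters, a CodeList like the SMKCLAS example above can be consumed with a few lines of standard-library Python. This is a hedged sketch: real define.xml files declare the ODM namespace, which these lookups would also have to include, and the xml:lang attributes are omitted for brevity.

```python
import xml.etree.ElementTree as ET

# CodeList fragment modeled on the SMKCLAS example above.
snippet = """
<CodeList OID="SMKCLAS" Name="SMKCLAS" DataType="integer">
  <CodeListItem CodedValue="1">
    <Decode><TranslatedText>NEVER SMOKED</TranslatedText></Decode>
  </CodeListItem>
  <CodeListItem CodedValue="2">
    <Decode><TranslatedText>SMOKER</TranslatedText></Decode>
  </CodeListItem>
  <CodeListItem CodedValue="3">
    <Decode><TranslatedText>EX SMOKER</TranslatedText></Decode>
  </CodeListItem>
</CodeList>
"""

# Build a coded-value -> decode mapping from the CodeList element.
decodes = {
    item.get("CodedValue"): item.findtext("Decode/TranslatedText")
    for item in ET.fromstring(snippet).iter("CodeListItem")
}
print(decodes)  # {'1': 'NEVER SMOKED', '2': 'SMOKER', '3': 'EX SMOKER'}
```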
15. CDISC SDTM fundamental model for organizing data collected in
clinical trials
Concept of Observations, which consist of discrete pieces of information
collected during a study described by a series of named variables.
General Classes of Observations: Events, Findings, Interventions
Variable Roles: determine the type of information conveyed by the
variable about each distinct observation: Topic variables, Identifier
variables, Timing variables, Rule variables, and Qualifiers (Grouping,
Result, Synonym, Record, Variable)
General principles and standards
Optimisations for Data Exchange per
study and to help Medical Reviewers
understand the data more easily
Specific principles and standards such
as ISO8601 for dates/timings, and both
Original & Standard values expected
CDISC SDTM Domains
SAS Dataset implementations
(dataset templates)
e.g. Vital Signs domains
SDTM’s fundamental model is also the basis for:
• SEND Domains for Nonclinical Data (generated
from animal toxicity studies)
• Future domains of derived data, capturing
metadata to describe derivations and analyses.
16. Basic Concepts in CDISC/SDTM
Subclasses of Qualifiers
• Grouping Qualifiers are used to group together a collection of observations within the
same domain.
– Examples include --CAT, --SCAT, --GRPID, --SPEC, --LOT, and --NAM. The latter three grouping qualifiers can
be used to tie a set of observations to a common source (i.e., specimen, drug lot, or laboratory name,
respectively).
• Synonym Qualifiers specify an alternative name for a particular variable in an
observation.
– Examples include --MODIFY and --DECOD, which are equivalent terms for a --TRT or --TERM topic variable,
and --LOINC which is an equivalent term for a --TEST and --TESTCD.
• Result Qualifiers describe the specific results associated with the topic variable for a
finding. It is the answer to the question raised by the topic variable.
– Examples include --ORRES, --STRESC, and --STRESN.
• Variable Qualifiers are used to further modify or describe a specific variable within an
observation and are only meaningful in the context of the variable they qualify.
– Examples include --ORRESU, --ORNHI, and --ORNLO, all of which are variable qualifiers of --ORRES; and --
DOSU, --DOSFRM, and --DOSFRQ, all of which are variable qualifiers of --DOSE.
• Record Qualifiers define additional attributes of the observation record as a whole
(rather than describing a particular variable within a record).
– Examples include --REASND, AESLIFE, and all other SAE flag variables in the AE domain; and --BLFL, --POS,
and --LOC.
From the Study Data Tabulation Model document
17. Basic Concepts in CDISC/SDTM
Variable Roles
• Topic variables
which specify the focus of the
observation (such as the name of
a lab test), and vary according to
the type of observation.
From the Study Data Tabulation Model document
• Grouping qualifiers
are used to group together a collection of observations
within the same domain.
– Examples include --CAT, --SCAT, --GRPID, --SPEC, --LOT,
and --NAM. The latter three grouping qualifiers can be used
to tie a set of observations to a common source (i.e.,
specimen, drug lot, or laboratory name, respectively)
• Synonym Qualifiers
specify an alternative name for a particular variable in
an observation.
– Examples include --MODIFY and --DECOD, which are
equivalent terms for a --TRT or --TERM topic variable,
and --LOINC which is an equivalent term for
a --TEST and --TESTCD.
[Diagram: an Observation Record with Topic, Grouping Qualifier, and Synonym
Qualifier variables]
18. Basic Concepts in CDISC/SDTM
Variable Roles
• Identifier variables
which identify the study, the subject
(individual human or animal) involved
in the study, the domain, and the
sequence number of the record.
• Timing variables
which describe the timing of an
observation (such as start date and
end date).
From the Study Data Tabulation Model document
• Result Qualifiers
describe the specific results associated
with the topic variable for a finding. It is the
answer to the question raised by the topic
variable. Depending on the type of result
(numeric or character) different variables
are being used. Includes variables for both
original (as supplied values) and for
standardised values (for uniformity).
– Examples include --ORRES,
--STRESC, and --STRESN.
[Diagram: the Observation Record extended with Identifier, Timing, Topic, and
Result Qualifier variables]
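The original-versus-standardized distinction described above (--ORRES/--ORRESU kept as collected, --STRESN/--STRESU derived for uniformity) can be sketched as a small conversion step. The conversion table, target units, and function name below are illustrative assumptions, not CDISC-mandated values:

```python
# Illustrative standardization of original findings results.
# Maps (test code, original unit) -> (standard unit, conversion factor).
TO_STANDARD = {
    ("WEIGHT", "lb"): ("kg", 0.45359237),
    ("WEIGHT", "kg"): ("kg", 1.0),
    ("HEIGHT", "in"): ("cm", 2.54),
    ("HEIGHT", "cm"): ("cm", 1.0),
}

def standardize(testcd, orres, orresu):
    """Return (STRESN, STRESU) for one finding; the original values are
    left untouched, so both collected and standardized results can be
    submitted side by side."""
    stresu, factor = TO_STANDARD[(testcd, orresu)]
    return round(float(orres) * factor, 2), stresu

print(standardize("HEIGHT", "70", "in"))   # (177.8, 'cm')
print(standardize("WEIGHT", "154", "lb"))  # (69.85, 'kg')
```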
19. Basic Concepts in CDISC/SDTM
Variable Roles
From the Study Data Tabulation Model document
• Variable Qualifiers
are used to further modify or describe a specific
variable within an observation and is only
meaningful in the context of the variable they
qualify.
– Examples include --ORRESU, --ORNHI,
and --ORNLO, all of which are variable
qualifiers of --ORRES; and --DOSU, --
DOSFRM, and --DOSFRQ, all of which are
variable qualifiers of --DOSE.
– Indicators of where the result falls with
respect to the reference range
[Diagram: the Observation Record extended with Variable Qualifier variables]
20. Basic Concepts in CDISC/SDTM
Variable Roles
From the Study Data Tabulation Model document
• Record Qualifiers
define additional attributes of the observation
record as a whole (rather than describing a
particular variable within a record).
– Examples include --REASND, AESLIFE, and
all other SAE flag variables in the AE domain; and
--BLFL, --POS, and --LOC.
[Diagram: the complete Observation Record with Identifier, Timing, Topic, and
all Qualifier variables (Result, Variable, Record)]
21. Basic Concepts in CDISC/SDTM
Subclasses of Qualifiers
• Topic variables
• Identifier variables
• Timing variables
• Rule variables
From the Study Data Tabulation Model document
• Qualifier variables
– Grouping Qualifiers
– Result Qualifiers
– Synonym Qualifiers
– Record Qualifiers
– Variable Qualifiers
[Diagram: the complete Observation Record showing all variable roles: Topic,
Identifier, Timing, and the Qualifier subclasses]