This document provides guidance on starting ADaM specification development and dataset programming. It recommends starting with ADaM subject matter experts and a well-defined specification template. It also recommends understanding the SDTM datasets, analysis keys, and Occurrence Data Structure requirements. The document outlines considerations like variable attributes and traceability when developing specifications and programming datasets. It emphasizes adhering to the ADaM Implementation Guide.
ADaM - Where Do I Start?
1. ADaM - Where Do I Start?
By: Krupali Ladani & Dr. Sangram Parbhane
2. Disclaimers
The opinions in this presentation are those of the presenters and may not necessarily reflect the views of doLoopTech, PhUSE or CDISC.
All the examples displayed as 'Table no. x' are taken from ADaMIG v1.1.
3. Contents
ADaM Introduction
ADaM Specification Development
  - Where Do I Start?
  - Considerations
  - Importance
  - Challenges
  - Recommendation
ADaM Dataset Programming
  - Where Do I Start?
  - Considerations
  - Importance
  - Challenges
  - Recommendation
ADaM QC
  - Where Do I Start?
  - Considerations
  - Importance
  - Challenges
  - Recommendation
4. Analysis Data Model (ADaM)
"Framework that enables analysis of the data, while at the same time allowing reviewers and other recipients of the data to have a clear understanding of the data's lineage from collection to analysis to results"
[Diagram: the ADaM-related process - Protocol, SAP and Mock-Up Tables, together with EDC/CDASH and SDTM, feed ADaM Metadata and ADaM, which in turn feed the TLF]
5. CDISC ADaM V2.1 - Analysis Data Flow
[Diagram: Analysis Data Flow Diagram Showing One Scenario for the Flow of Data and Information]
6. Fundamental Principles of ADaM
- Provide Traceability
- Readily Usable (Analysis-Ready Datasets)
- Associated with Metadata
- Communicate Clearly and Unambiguously
7. The ADaM Data Structures
ADSL
  1) Known as "Subject Level Analysis Data"
  2) Structure: one record per subject
  3) Contains subject-level information
  4) Example: ADSL
BDS
  1) Known as "Basic Data Structure"
  2) Structure: contains one or more records per subject, per analysis parameter, per analysis time point
  3) Contains PARAM, AVAL and AVALC and related variables
  4) Examples: ADLB, ADVS
OCCDS
  1) Known as "Occurrence Data Structure"
  2) Structure: one record per record in the SDTM domain (optional: per coding path, per Analysis Period and/or Phase)
  3) Supports Occurrence Data Models such as Medical History, Concomitant Medications, and Lab Events
  4) Examples: ADAE, ADMH
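The structural rules above are mechanically checkable: ADSL must be unique on USUBJID, while a BDS dataset must be unique on subject, parameter and analysis time point. A minimal sketch, using hypothetical miniature datasets (plain Python records, standing in for SAS datasets):

```python
# Hypothetical miniature datasets, for illustration only.
adsl = [
    {"USUBJID": "1001", "TRT01P": "Placebo", "SAFFL": "Y"},
    {"USUBJID": "1002", "TRT01P": "Active",  "SAFFL": "Y"},
]
advs = [
    {"USUBJID": "1001", "PARAM": "Systolic BP (mm Hg)", "AVISIT": "Baseline", "AVAL": 120},
    {"USUBJID": "1001", "PARAM": "Systolic BP (mm Hg)", "AVISIT": "Week 2",   "AVAL": 118},
    {"USUBJID": "1002", "PARAM": "Systolic BP (mm Hg)", "AVISIT": "Baseline", "AVAL": 130},
]

def keys(records, names):
    """Extract the key tuple of each record."""
    return [tuple(r[n] for n in names) for r in records]

# ADSL: one record per subject, so USUBJID must be unique.
assert len(set(keys(adsl, ["USUBJID"]))) == len(adsl)
# BDS: one record per subject, per parameter, per analysis time point.
assert len(set(keys(advs, ["USUBJID", "PARAM", "AVISIT"]))) == len(advs)
```

The same uniqueness checks are what validators such as Pinnacle21 apply when verifying dataset structure.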
11. Values of the ADaM "Core" Attribute
Required: The variable must be included in the dataset.
Conditionally Required: The variable must be included in the dataset in certain circumstances.
Permissible: The variable may be included in the dataset, but is not required.
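A minimal sketch of how the "Core" attribute can drive an automated check, using hypothetical specification metadata (the variable list and core values here are illustrative, not a complete ADSL specification):

```python
# Hypothetical core metadata, keyed by variable name.
# "Req" = required, "Cond" = conditionally required, "Perm" = permissible.
core = {
    "STUDYID": "Req",
    "USUBJID": "Req",
    "TRTSDT":  "Cond",  # required only in certain circumstances
    "AGEGR1":  "Perm",
}

def missing_required(columns, core_metadata):
    """Return required variables absent from a dataset's column list."""
    present = set(columns)
    return sorted(v for v, c in core_metadata.items()
                  if c == "Req" and v not in present)

# Required variables present: no findings.
assert missing_required(["STUDYID", "USUBJID", "AGE"], core) == []
# A missing required variable is reported.
assert missing_required(["USUBJID", "AGE"], core) == ["STUDYID"]
```

Conditionally required variables cannot be checked this mechanically; the condition itself (e.g. "when any subject is treated") has to be evaluated against the data.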
13. ADaM Specification Development: Where Do I Start?
ADaM Specification Development is the starting point of the ADaM process after SDTM. Here are the ABCs of specification development:
- ADaM Subject Matter Expert
- Well-defined specification template
- SDTM datasets
- Analysis keys
14. ADaM Specification Development: Considerations
Let's assume we live in an ideal world and create our specifications before starting programming!
- Identify the number of datasets required based on the SAP
- Readiness of SDTM data
- A clear specification template that defines all required components
- ADSL as a base for the other datasets
- Understanding of OCCDS and its requirements, e.g. MedDRA coding
- Must-have variables to support traceability and analysis results
15. ADaM Specification Development: Importance
- Path from SDTM to analysis results
- Programming guide
- Helps to understand derivations and complex algorithms
- For partial automation via SAS
- Base for the Reviewer's Guide
- Documentation for traceability
- Provides clarity and consistency between and within datasets
- To generate Define.xml
Can't think about dense datasets like ADaM without specifications!
16. ADaM Specification Development: Challenges
There's never a road without a turning - we can't expect ADaM to be as simple as SDTM.
- Maintaining a document that requires frequent updates, as the ADaM specification is an important source of traceability
- Keeping the specification compatible with metadata, as it is usually the basis for generating the ADaM Define.xml and Define.pdf
- Adherence to the ADaM IG for dataset structures and variable attributes
- Defining complex analysis variables well in advance
17. ADaM Specification Development: Recommendations
- Identify any discrepancies in SDTM at the specification development stage
- Follow the standard process: prepare specifications first, then start programming
- Adhere to the ADaM Implementation Guide and write specifications for ADaM datasets, not ADaM-like datasets, from the beginning
- ADaM is an extended process and changes or updates are expected at any point in time; document and track updates applicable to the specification without delay
18. Example
Domain Sheet:
Order | Active | Dataset | Description | Structure | Purpose | Keys | Location
1 | Y | ADSL | Subject Level Characteristics | One record per subject | Analysis | STUDYID, USUBJID | ADSL.xpt
2 | Y | ADAE | Adverse Events | One record per event per subject | Analysis | STUDYID, USUBJID, AETERM, AEDECOD, AEBODSYS, ASTDT, AENDT | ADAE.xpt

ADSL Sheet:
Order | Domain Name | Active | Variable Name | Variable Label | Type | Codelist | Origin of Variable (Protocol, Assigned, Derived, eDT, CRF Page no.) | Internal Variable Length | Mapping Rules | Notes provided by CDISC v1.1 | Type of CDISC Variable
1 | ADSL | Y | STUDYID | Study Identifier | text | | | 12 | DM.studyid; | Must be identical to the SDTM variable DM.STUDYID. | Req
2 | ADSL | Y | DOMAIN | Domain Abbreviation | text | DOMAIN | | 4 | domain = 'ADSL'; | | Req
20. ADaM Dataset Programming: Where Do I Start?
ADaM is definitely a combined and parallel effort between the specification developer and the programmer. Instead of following the specifications blindly, give the developer feedback on inconsistencies in the specifications as and when required.
- Look into the source datasets
- Go through the SDTM datasets provided and build a basic understanding of them, e.g. the total number of subjects in DM, the number of screen-failure subjects, the treatment arms and the trial design
- The ADaM Implementation Guide does not define how to write the 'Mapping Rules' column in the specification
- Understand how your specifications are written: in simple English, or mixed with programming code
- Are there separate columns or indicators for directly copied SDTM variables versus complex algorithms?
21. ADaM Dataset Programming: Considerations
A basic understanding of the ADaMIG and ADaM variables is the key guide alongside the specification document.
Example 1 - Naming conventions for variables
Table 1 - Naming conventions for date and flag variables:
FL: suffix used in names of character flag variables
DT: suffix used in names of numeric date variables
TM: suffix used in names of numeric time variables
DTM: suffix used in names of numeric datetime variables
DTF: suffix used in names of date imputation flag variables
TMF: suffix used in names of time imputation flag variables
DY: suffix used in names of relative day variables that do not include day 0
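The suffix conventions in Table 1 lend themselves to an automated naming check. A minimal sketch (the suffix list is taken from the slide and is not the full ADaMIG set; the function name `classify` is illustrative):

```python
# Suffix conventions from Table 1 (slide subset, not exhaustive).
SUFFIXES = {
    "DTM": "numeric datetime",
    "DTF": "date imputation flag",
    "TMF": "time imputation flag",
    "FL":  "character flag",
    "DT":  "numeric date",
    "TM":  "numeric time",
    "DY":  "relative day",
}

def classify(name):
    """Return the convention a variable name matches, or None."""
    # Check longer suffixes first so ADTM is not misread as a *TM variable.
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        if name.upper().endswith(suffix):
            return SUFFIXES[suffix]
    return None

assert classify("ANL01FL") == "character flag"
assert classify("ASTDT") == "numeric date"
assert classify("ADTM") == "numeric datetime"
```

Checking longest suffixes first matters: DTM, DTF and TMF all end in two-character suffixes that have their own, different meaning.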
22. Example 2 - Visit windows and unscheduled visits
- The record that falls closest to the scheduled visit day is the one that will be analysed, indicated using ANL01FL
- AWTARGET and AWTDIFF are used to indicate more clearly how the analysed rows were selected from among the candidate rows
Table 2 - Identification of rows used for analysis in ADVS:
Row | USUBJID | VISIT | AVISIT | ADY | PARAM | AVAL | DTYPE | ANL01FL | AWTARGET | AWTDIFF
1 | 1001 | Screening | Baseline | -5 | SUPINE SYSBP (mm Hg) | 144 | | | 1 | 5
2 | 1001 | Baseline | Baseline | 1 | SUPINE SYSBP (mm Hg) | 145 | | Y | 1 | 0
3 | 1001 | Week 1 | Week 1 | 7 | SUPINE SYSBP (mm Hg) | 130 | | Y | 7 | 0
4 | 1001 | Week 2 | Week 2 | 12 | SUPINE SYSBP (mm Hg) | 133 | | Y | 14 | 2
5 | 1001 | Week 3 | Week 2 | 17 | SUPINE SYSBP (mm Hg) | 125 | | | 14 | 3
6 | 1001 | Week 4 | Week 4 | 30 | SUPINE SYSBP (mm Hg) | 128 | | Y | 28 | 2
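The windowing rule above can be sketched as follows, with hypothetical records: within each subject/visit group, derive AWTDIFF as the distance from the target day and flag the closest row with ANL01FL='Y'.

```python
# Hypothetical candidate records for one parameter.
records = [
    {"USUBJID": "1001", "AVISIT": "Week 2", "ADY": 12, "AWTARGET": 14},
    {"USUBJID": "1001", "AVISIT": "Week 2", "ADY": 17, "AWTARGET": 14},
    {"USUBJID": "1001", "AVISIT": "Week 4", "ADY": 30, "AWTARGET": 28},
]

def flag_closest(rows):
    """Set ANL01FL='Y' on the row nearest AWTARGET in each subject/visit."""
    for r in rows:
        r["AWTDIFF"] = abs(r["ADY"] - r["AWTARGET"])
        r["ANL01FL"] = ""
    groups = {}
    for r in rows:
        groups.setdefault((r["USUBJID"], r["AVISIT"]), []).append(r)
    for grp in groups.values():
        min(grp, key=lambda r: r["AWTDIFF"])["ANL01FL"] = "Y"
    return rows

flag_closest(records)
assert [r["ANL01FL"] for r in records] == ["Y", "", "Y"]
```

Note that `min` breaks ties by taking the first candidate; a real SAP would state an explicit tie-breaking rule (e.g. the earlier or the later assessment), which must be implemented as specified.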
23. Example 3 - How to handle missing values
At Week 8 there is a scheduled visit (visit number 6) at which blood pressure should be collected; however, no blood pressure data were collected. The SAP says that missing post-baseline data should be imputed by two methods: LOCF (last observation carried forward) and WOCF (worst observation carried forward).
Table 3 - Creation of new rows in ADVS to handle imputation of missing values by LOCF and WOCF:
Row | PARAM | AVISIT | AVISITN | VISITNUM | VSSEQ | ABLFL | AVAL | BASE | CHG | DTYPE | ADY | AWTARGET | AWTDIFF | ANL01FL
1 | Systolic BP (mm Hg) | Screening | -4 | 1 | 3821 | | 120 | 114 | . | | -30 | -28 | 2 | Y
2 | Systolic BP (mm Hg) | Run-In | -2 | 2 | 3822 | | 116 | 114 | . | | -16 | -14 | 2 | Y
3 | Systolic BP (mm Hg) | Week 0 | 0 | 3 | 3823 | Y | 114 | 114 | 0 | | -2 | 1 | 2 | Y
4 | Systolic BP (mm Hg) | Week 2 | 2 | 4 | 3824 | | 118 | 114 | 4 | | 13 | 14 | 1 | Y
5 | Systolic BP (mm Hg) | Week 2 | 2 | 4.1 | 3825 | | 126 | 114 | 12 | | 17 | 14 | 3 |
6 | Systolic BP (mm Hg) | Week 4 | 4 | 5 | 3826 | | 122 | 114 | 8 | | 23 | 28 | 5 | Y
7 | Systolic BP (mm Hg) | Week 8 | 8 | 5 | 3826 | | 122 | 114 | 8 | LOCF | 23 | 56 | 33 | Y
8 | Systolic BP (mm Hg) | Week 8 | 8 | 4.1 | 3825 | | 126 | 114 | 12 | WOCF | 17 | 56 | 39 | Y
9 | Systolic BP (mm Hg) | Week 12 | 12 | 7 | 3827 | | 134 | 114 | 20 | | 83 | 84 | 1 | Y
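The LOCF and WOCF derivations in Table 3 can be sketched as below, with hypothetical post-baseline records for one subject and parameter; for blood pressure, "worst" is taken here as the highest value, which is an assumption a real SAP would state explicitly.

```python
# Hypothetical post-baseline records, in chronological order.
observed = [
    {"AVISIT": "Week 2", "AVAL": 118, "DTYPE": ""},
    {"AVISIT": "Week 2", "AVAL": 126, "DTYPE": ""},
    {"AVISIT": "Week 4", "AVAL": 122, "DTYPE": ""},
]

def impute_week8(rows):
    """Add derived rows for a missing Week 8 assessment, marked via DTYPE."""
    last = rows[-1]                              # LOCF: chronologically last value
    worst = max(rows, key=lambda r: r["AVAL"])   # WOCF: worst = highest BP here
    return rows + [
        {"AVISIT": "Week 8", "AVAL": last["AVAL"],  "DTYPE": "LOCF"},
        {"AVISIT": "Week 8", "AVAL": worst["AVAL"], "DTYPE": "WOCF"},
    ]

advs = impute_week8(observed)
assert advs[-2] == {"AVISIT": "Week 8", "AVAL": 122, "DTYPE": "LOCF"}
assert advs[-1] == {"AVISIT": "Week 8", "AVAL": 126, "DTYPE": "WOCF"}
```

This mirrors rows 7 and 8 of Table 3: the LOCF row carries the Week 4 value (122) forward and the WOCF row carries the worst observed value (126); DTYPE records which imputation produced each derived row, preserving traceability.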
24. Example 4 - All flag values are not the same
Subject-level character population flag values can be 'Y'/'N' or null, while parameter-level and record-level character population flags are 'Y' or null. How about date imputation flag values?
Table 4 - ADaM dataset (BDS) with subject-level and record-level indicator variables:
Row | USUBJID | ITTFL | PPROTFL | VISIT | AVISIT | PARAMCD | AVAL | ANL01FL | PPROTRFL
1 | 1001 | Y | Y | Week 0 | Week 0 | TEST1 | 500 | Y | Y
2 | 1001 | Y | Y | Week 1 | Week 1 | TEST1 | 400 | Y | Y
3 | 1001 | Y | Y | Week 2 | Week 2 | TEST1 | 600 | Y | Y
4 | 1002 | Y | N | Week 0 | Week 0 | TEST1 | 500 | Y |
5 | 1002 | Y | N | Week 2 | Week 1 | TEST1 | 48 | Y |
6 | 1002 | Y | N | Week 2 | Week 2 | TEST1 | 46 | Y |
7 | 1003 | Y | Y | Week 0 | Week 0 | TEST1 | 999 | Y | Y
8 | 1003 | Y | Y | Week 1 | Week 1 | TEST1 | 999 | | Y
9 | 1003 | Y | Y | Retest | Week 1 | TEST1 | 49 | Y |
10 | 1003 | Y | Y | Week 2 | Week 2 | TEST1 | 499 | Y | Y
25. ADaM Dataset Programming: Importance
- Helps to understand how derivations and algorithms work on the data
- Well-written programs can also be reused in the next study
- Helps programmers to self-QC the output dataset
- Helps find any loopholes in the specifications
26. ADaM Dataset Programming: Challenges
The FDA CDER draft guidance, the Study Data Technical Conformance Guide, states that "One of the expected benefits of analysis datasets that conform to ADaM is that they simplify the programming steps necessary for performing an analysis".
Does this mean the programming effort is absorbed into ADaM? Maybe YES!
- Correct interpretation and implementation of the specification document
- Considering all situations in advance without having such data, e.g. missing or partial values during early development
- ADaM requires updates when SDTM datasets are updated, or when TLF shells are updated with new requirements from ADaM
28. ADaM Dataset Programming: Recommendations
- Basic knowledge of the ADaM Implementation Guide
- Running the Pinnacle21 Validator individually
- Simple programming that can be edited easily at any point without major changes
- Develop generic SAS macro utilities which can be utilized across all programming
- Wherever possible, go for automation
30. ADaM QC: Where Do I Start?
The QC goal should be:
- Tracing ADaM data back to SDTM
- Checking compliance with the ADaM Implementation Guide
- Checking compliance of analysis variables and their derivations with the required analysis results
The QC process may involve:
- A Pinnacle21 Validator report
- Using a standardized macro which can also generate a report
- Identifying an independent QC resource with ADaM implementation knowledge
- Creating a study-specific QC checklist which checks basic ADaM principles
- Documentation of the QC process and QC findings
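The first QC goal, tracing ADaM data back to SDTM, can be sketched as a join-and-compare check. In this hypothetical example each ADVS row carries the source VSSEQ, so a non-derived row can be joined back to its VS record and AVAL compared to VSSTRESN; the record layout and the function name `trace_mismatches` are assumptions for illustration.

```python
# Hypothetical SDTM VS records and the ADaM rows derived from them.
vs = [
    {"USUBJID": "1001", "VSSEQ": 1, "VSSTRESN": 120.0},
    {"USUBJID": "1001", "VSSEQ": 2, "VSSTRESN": 118.0},
]
advs = [
    {"USUBJID": "1001", "VSSEQ": 1, "AVAL": 120.0, "DTYPE": ""},
    {"USUBJID": "1001", "VSSEQ": 2, "AVAL": 118.0, "DTYPE": ""},
    {"USUBJID": "1001", "VSSEQ": 2, "AVAL": 118.0, "DTYPE": "LOCF"},  # derived row
]

# Index the SDTM source by its natural key.
source = {(r["USUBJID"], r["VSSEQ"]): r["VSSTRESN"] for r in vs}

def trace_mismatches(adam_rows):
    """Non-derived ADaM rows whose AVAL differs from the SDTM source value."""
    return [r for r in adam_rows
            if not r["DTYPE"]
            and source.get((r["USUBJID"], r["VSSEQ"])) != r["AVAL"]]

assert trace_mismatches(advs) == []
```

Derived rows (DTYPE populated, e.g. LOCF) are deliberately excluded: they have no one-to-one SDTM counterpart and must instead be checked against their imputation rule.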
31. ADaM QC: Considerations
Traceability is a major concern for the FDA, and so for the QC process within the organization as well.
QC of the ADaM specification against the SAP:
- Here the SAP is the master and the specification is the support
- Fitness of the computational algorithms for the required analysis results
QC of the ADaM datasets:
- Pinnacle21 is simply a blessing for checking CDISC compliance (for the FDA as well)!
- An additional checklist is needed beyond the standard CDISC checks
QC of the ADaM Define file:
- Computational algorithms, comments, dataset and controlled terminology definitions
- Links to the supporting documents and links within the documents
- Requirement of a Reviewer's Guide
32. ADaM QC: Importance
- Minimal review comments from regulatory reviewers
- Minimize future rework by resolving issues at the ADaM level
- Save time in TLF generation
33. ADaM QC: Challenges
- Understanding the source data and tracing it to the ADaM
- Statistical basis of analysis variables
- Accuracy of computational and complex algorithms
- Adherence to the ADaM model
34. ADaM QC: Recommendations
- Planning of timelines
- Simultaneous QC as datasets are generated
- Double-check your programming to avoid major issues
- Prioritize datasets considering interdependency
- An ADaM expert as the ADaM QC resource, not a beginner
35. References
- Analysis Data Model Implementation Guide, Version 1.1
- ADaM Structure for Occurrence Data (OCCDS), Version 1.0
- CDISC Analysis Data Model, Version 2.1
- How to Build ADaM from SDTM: A Real Case Study (Jian Hua (Daniel) Huang, Forest Laboratories, NJ; PharmaSUG 2010, Paper CD06)
- Challenges of Questionnaire Data from Collection to SDTM to ADaM and Solutions using SAS® (Karin LaPann and Terek Peterson, MBA, PRA International, Horsham, PA; PharmaSUG 2014, Paper DS08)
- An Innovative ADaM Programming Tool for FDA Submission (Xiangchen (Bob) Cui and Min Chen, Vertex Pharmaceuticals, Cambridge, MA)
- Challenges in Validation of ADaM Data (presented at the PhUSE SDE, April 18, 2013)
36. Reach out to us @
Krupali Ladani
Senior Clinical SAS Programmer
doLoop Technologies India Pvt. Ltd.
Email: krupali.ladani@dolooptech.com
www.dolooptech.com

Dr. Sangram Parbhane
Associate Clinical SAS Programmer
doLoop Technologies India Pvt. Ltd.
Email: sangram.parbhane@dolooptech.com
www.dolooptech.com