PharmaSUG Single Day Event
Extending CDISC Standards: Spare Parts and Turbo Chargers
Philadelphia University campus, Philadelphia, PA
October 22, 2015, 8:00am - 5:00pm

The PharmaSUG 2015 Single-Day Event for Philadelphia has now concluded. Thanks to Philadelphia University for hosting the event, and to all who presented and attended. Don't forget that all paid registrants will receive a $75 discount for our annual conference, coming to Denver in May 2016!

Presentations
Title (click for abstract) | Presenter(s) | Presentation
Data Standards, Considerations and Conventions within the Therapeutic Area User Guides (TAUGs) | Jerry Salyers and Kristin Kelly, Accenture | PDF (329 KB)
New Enhanced Expectations of the Use of Controlled Terminology | David Izard, Accenture | PDF (912 KB)
It’s All About EPOCH | Karin LaPann, Chiltern | PDF (673 KB)
Maximizing the Value and Utility of ADaM for Pharmacokinetic Analyses and Reporting: Much More Than Just ADPC and ADPP | James R. Johnson, PhD, Summit Analytical | PDF (1.7 MB)
What is the “ADAM OTHER” Class of Datasets, and When Should it be Used? | John Troxell, Accenture | PDF (395 KB)
To IDB or Not to IDB | Beth Seremula, Chiltern | PDF (2.7 MB)
Usage of OpenCDISC Community Toolset 2.x.x for Clinical Programmers | Mike DiGiantomasso, Pinnacle 21 | PDF (3.5 MB)

Thank You to Our SDE Sponsors

Presentation Abstracts

Data Standards, Considerations and Conventions within the Therapeutic Area User Guides (TAUGs)
Jerry Salyers and Kristin Kelly, Accenture

One of the major standards initiatives in the pharmaceutical industry is the development of Therapeutic Area User Guides (TAUGs) under the CFAST (Coalition for Accelerating Standards and Therapies) initiative. CFAST is a collaborative effort led by CDISC, with participation from TransCelerate BioPharma Inc., the FDA, and the National Institutes of Health. Currently, there are fifteen TAUGs (one at Version 2, fourteen at Version 1) available for download from the CDISC website, and a number of others in development are moving toward public review and eventual publication. The development of these standards includes the design of case report forms and the resulting metadata using the CDASH standard, the mapping of the collected data to SDTM-based datasets, and examples of how the SDTM-based data would be used in the production of ADaM (analysis) datasets. This design and mapping of specialized data to current standards provides many opportunities to implement new and different submission strategies while remaining compliant with the published standard.


New Enhanced Expectations of the Use of Controlled Terminology
David Izard, Accenture

The FDA has raised the bar with respect to controlled terminology, articulating its expectations for its use across a broad range of domains as part of the planning, execution, archiving, and eventual submission of clinical data and related assets to the agency. This presentation will review current controlled terminology expectations and explore the challenges typically encountered in this work, with a focus on the recently introduced terminologies described in the FDA Study Data Technical Conformance Guide.


It’s All About EPOCH
Karin LaPann, Chiltern

In the world of SDTM, the FDA has had multiple tools built to help it process study data more rapidly. One of its new initiatives is the Datafit tool, which requires the variable EPOCH. These tools allow the agency to quickly determine whether the data is compliant and in good order for its data warehouse, so it is to a sponsor's benefit to incorporate EPOCH into all versions of SDTM. In this paper I will discuss why it is important to include EPOCH, the basic concepts and controlled terminology (CT) associated with EPOCH, and the minimum requirements to meet the needs of the FDA and other regulatory agencies. I will provide guidance on how to derive EPOCH depending on the class of domain: Findings, Events, or Interventions. Finally, I will discuss the added benefit of EPOCH in our own processing of the data for analysis.

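To illustrate the kind of derivation discussed above, the sketch below (not taken from the presentation) assigns EPOCH to records in a Findings domain such as LB by joining to the SE (Subject Elements) domain and checking whether the collection date falls within an element's start and end dates. Dataset and variable names follow SDTM conventions, but this is only one possible approach, and it assumes complete ISO 8601 date/time values of comparable precision so that character comparison is valid.

    /* Illustrative sketch only: assign EPOCH to LB (Findings) records by   */
    /* joining to the SE (Subject Elements) domain. Assumes complete        */
    /* ISO 8601 LBDTC, SESTDTC, and SEENDTC values of comparable precision, */
    /* so character comparison of the dates is valid.                       */
    proc sql;
      create table lb_epoch as
      select lb.*, se.epoch
      from lb
           left join se
           on  lb.usubjid  = se.usubjid
           and se.sestdtc <= lb.lbdtc
           and lb.lbdtc    < se.seendtc;
    quit;

For Events and Interventions domains, which carry their own start and end dates, the comparison is typically keyed off the record's start date (for example AESTDTC or EXSTDTC) rather than a single collection date.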

Maximizing the Value and Utility of ADaM for Pharmacokinetic Analyses and Reporting: Much More Than Just ADPC and ADPP
James R. Johnson, PhD, Summit Analytical

CDISC SDTM and ADaM are the de facto standards for clinical trials and are now required by binding regulatory guidance. In pharmacokinetic studies, these standards include SDTM.PC (plasma concentrations) and SDTM.PP (pharmacokinetic parameters), with most organizations then creating the ADaM datasets ADPC and ADPP. Pharmacokinetic parameters are a combination of observed endpoints (e.g., Cmax, Tmax, Clast, Tlast) and calculated endpoints (e.g., AUC0-t, AUC0-INF, terminal half-life: T1/2). PK parameters are derived primarily from non-compartmental analysis (NCA) models using external software such as Phoenix 64 WinNonlin. Standard descriptive summaries of plasma concentrations or NCA-derived PK parameters can be completed with ADPC and ADPP, with very little manipulation of SDTM.PC or SDTM.PP, to support analysis and reporting.

However, more complex PK analyses originating from cross-over studies, which examine by-subject endpoints such as ratios of observed or predicted analyte concentrations or ratios of observed or predicted PK parameters, require a more advanced use of the ADaM data standard to maximize the value and utility of the PK data for analysis and reporting. Population PK modeling (PopPK) and physiological modeling present even more complex challenges that require a more advanced use of the ADaM data standard, especially for time-dependent predicted endpoints. In this paper, we will present examples and suggestions for maximizing the ADaM standard for the analysis and reporting of more complex PK analyses. We will introduce the extension of ADPC and ADPP to include ADPKcccc and ADPCcccc datasets, where these additional PK ADaM datasets provide greater value and utility for reporting more complex derived endpoints from PK analyses.

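As a concrete illustration of the observed endpoints mentioned above, the sketch below (not taken from the presentation) derives observed Cmax and Tmax per subject and parameter from an ADPC-style concentration dataset. AVAL (analysis value) and ATPTN (numeric analysis timepoint) are standard ADaM variable names, but their use here, along with the dataset name adpc, is an assumption; a given study's structure may differ.

    /* Illustrative sketch only: derive observed Cmax and Tmax per subject */
    /* and parameter from an ADPC-style dataset. AVAL holds the measured   */
    /* concentration and ATPTN the numeric timepoint; both names are       */
    /* standard ADaM conventions but are assumptions for this example.     */
    proc sort data=adpc out=adpc_srt;
      by usubjid paramcd descending aval atptn;
    run;

    data cmax_tmax;
      set adpc_srt;
      by usubjid paramcd;
      if first.paramcd;   /* highest concentration (earliest timepoint on ties) */
      cmax = aval;
      tmax = atptn;
      keep usubjid paramcd cmax tmax;
    run;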

What is the “ADAM OTHER” Class of Datasets, and When Should it be Used?
John Troxell, Accenture

As is well known by now, the CDISC ADaM team has defined four classes of ADaM datasets: ADSL (Subject-Level Analysis Dataset), BDS (Basic Data Structure), ADAE (Adverse Events Analysis Dataset), and OTHER. ADAE has been generalized into the provisional OCCDS (Occurrence Data Structure) class. The ADSL, BDS and ADAE/OCCDS structures are defined and widely used. However, the OTHER class is by nature relatively unspecified and mysterious. This presentation explores philosophical and practical questions surrounding the OTHER class of ADaM datasets, including considerations for deciding when its use is appropriate, and when it is not.


To IDB or Not to IDB
Beth Seremula, Chiltern

In Shakespeare’s Hamlet we hear Prince Hamlet ask the now-clichéd “To be or not to be” question as he contemplates suicide. How does this relate to ADaM integrated databases (IDBs)? As Hamlet weighs the pros and cons of death, we too must decide whether it is better to stick with the status quo or venture into the unknown world of integrating our ADaMs. We shall examine the pros and cons of ADaM IDBs as well as some of the basic pitfalls we have come across while undertaking this daunting task. Along this journey we will show why we think the IDB is the future and why it is better to be on the cutting edge.

Act 1: In the Beginning: One important use of integrated clinical data is to support the safety and efficacy analyses for new and supplemental drug and device applications, as required by regulatory agencies. There are several options when deciding how to submit ADaM data: each study can be submitted individually, one large integrated database (IDB) can be created, or a combination of both can be used. There are advantages and disadvantages to each approach, and the IDB presents the most challenges.

Act 2, Scene 1: Considerations: Just like any submission, careful planning is the key to success with an ADaM IDB. It is vitally important to pull all the stakeholders together and plan, plan, plan. What is the purpose of your submission? What is the design of each study you want to integrate, and what are the indications, endpoints, durations, etc.? What is the story you are trying to tell, and is an IDB the right approach for your submission? You want to start with the end in mind: how will all of this data be displayed on the TLFs? Do you need to pool treatment groups together? What about the visit information, and how will that be displayed?

All of these questions and many more will help drive the content of your IDB. Harmonization of your data needs to be examined closely, as does how you are going to maintain traceability. These are important pieces and take time to consider and address. There are several approaches that can be taken when looking at how to create the integration: the IDB can be created from study-level SDTM, integrated SDTM, or study-level ADaM. Again, each of these has pros and cons, and all take careful planning to ensure a quality product is created.

Act 2, Scene 2: Advantages: Integrating ADaM for submission has many advantages, of which the most obvious is consistency. An IDB helps your reviewer compare apples to apples and clearly see the road you have mapped out for them. An IDB also makes it easier to create ad hoc analyses as you move through the process.

Act 2, Scene 3: Disadvantages: IDB work is very time consuming, and this is probably the number one disadvantage of the process. There is a lot of up-front work that needs to happen and many people who need to be involved. You need to create a very detailed map which outlines exactly where you are starting, where you are going, and how you are getting there. All of that takes a tremendous amount of time and effort.

Act 3: Decision Time: Hamlet is a tragedy, but the ADaM IDB does not need to be. An ADaM IDB gives us greater consistency and helps us clearly examine the results of our submission. Don’t be afraid of the future; be brave and embrace the challenge head-on.


Usage of OpenCDISC Community Toolset 2.x.x for Clinical Programmers
Mike DiGiantomasso, Pinnacle 21

All programmers have their own toolsets: a collection of macros, helpful applications, favorite books, or websites. OpenCDISC Community is a free, easy-to-use toolset for clinical programmers who work with CDISC standards. In this presentation, we'll provide an overview of installation, tuning, usage, and automation of the OpenCDISC Community applications, including:

Validator: ensure your data is CDISC compliant and FDA submission ready
Define.xml Generator: create metadata in the standardized define.xml v2.0 format
Data Converter: generate Excel, CSV, or Dataset-XML format from SAS XPT
ClinicalTrials.gov Miner: find information across all existing clinical trials