PharmaSUG Single Day Event
Using Standards, Even for Non-Standard Needs
Gilead offices, Foster City, CA
February 10, 2015, 8:00am - 5:00pm

The PharmaSUG 2015 Single-Day Event for the San Francisco Bay Area has now concluded. Thanks to Gilead for hosting the event and to all who presented and attended. Don't forget that all paid registrants will receive a $75 discount for our annual conference, coming to Orlando in May!

Presentations
Title (click for abstract) | Presenter(s) | Presentation
Keynote Presentation: Maximizing the Value of Clinical Trials Data: A Collaborative Framework for Data Standards Governance from Data Definition to Knowledge Management | Jim Johnson, Summit Analytical | PDF (5.3 MB)
Every Study is Special! - Governing Data Standards | Elizabeth Nicol, Genentech | PDF (1.0 MB)
Non-Standard Variables – Are they Supplemental Qualifiers, FA records, or a Custom Findings Domain? | Jerry Salyers, Accenture | PDF (1.1 MB)
What is the “ADaM OTHER” Class of Datasets, and When Should it be Used? | John Troxell, Accenture | PDF (2.0 MB)
Useful Tools from the Computational Sciences Symposium | Sandra Minjoe, Accenture; John Brega, PharmaStat | PDF (1.4 MB)
Interpreting and Using the Validation Results from Automated Tools | Dave Borbas, Jazz Pharmaceuticals | PDF (2.4 MB)
Beyond OpenCDISC: Using Define.xml Metadata to Ensure End-to-End Submission Integrity | John Brega, PharmaStat; Linda Collins, PharmaStat | PDF (2.4 MB)

Thank You to Our SDE Sponsors

Presentation Abstracts

Maximizing the Value of Clinical Trials Data: A Collaborative Framework for Data Standards Governance from Data Definition to Knowledge Management
Jim Johnson, Summit Analytical

The global arena in which pharmaceutical and device manufacturers operate today requires, now more than ever, regulatory-compliant, high-quality, transparent, and reliable standardized clinical trial data. The value of the standardized data from a development program is directly related to a sponsor's ability to leverage them for life-cycle management, and to a regulatory authority's ability to use the information to review and approve licensure applications. It is through this mutually beneficial, collaborative framework for the collection and use of standardized data that both sponsors and regulatory authorities can maximize the value of clinical trial data. This presentation introduces a clinical trial data value chain and discusses how a collaborative framework for data standards governance within a sponsor's organization can maximize the value of clinical trials data for sponsors, regulatory authorities, and ultimately patients.


Every Study is Special! - Governing Data Standards
Elizabeth Nicol, Genentech

Defining data standards aligned with CDISC is just the start of the journey. Once standards have been defined, they need to be implemented on each study, according to the protocol, in data collection, tabulation, and analysis. Roche governs its Global Data Standards for all of these areas with a cross-functional committee managed by the Data Standards Office (DSO). The committee is made up of five core teams, each supporting a different area of the standards. Where necessary, each study team submits requests for approval of study-level deviations from the current standards; these cover both new and existing items. In addition to deviations for individual studies, requests are also received for the development of new standards. All requests are evaluated by the DSO, brought before the Governance Committee for a decision, and actioned as appropriate.


Non-Standard Variables – Are they Supplemental Qualifiers, FA records, or a Custom Findings Domain?
Jerry Salyers, Accenture

As discussed by Fred Wood at PharmaSUG in 2013, the SDTM Implementation Guide (SDTMIG) provides a standard mechanism and structure for submitting non-standard variables. His paper discussed common issues seen when submitting SUPP-- datasets, from using an inappropriate IDVAR to the practice of submitting data “as collected” (i.e., “coded”) even though such data are often largely uninterpretable. Examples highlighted some of the unexpected outcomes when the parent domain and the supplemental qualifiers are merged (as happens during review) on the “merge key” identified in the IDVAR variable. With the advent and growing use of the Findings About (FA) domain, many sponsors are challenged to determine where non-standard data best fit. It’s not uncommon to see sponsors submitting data in FA when supplemental datasets would have been preferred and would not have required the creation of RELREC records. Similarly, we’ve seen FA used when a custom Findings domain would have sufficed, as the --OBJ variable would not have added any clarity to the data. This paper will highlight several criteria that can be used to determine how best to represent such non-standard data.
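For context, the merge the abstract refers to looks roughly like the following SAS sketch (illustrative only, not taken from the presentation). It re-attaches qualifiers from a hypothetical work.suppae dataset to the parent work.ae domain, assuming every SUPPAE record uses IDVAR='AESEQ': the QNAM/QVAL pairs are transposed into columns, then merged back on the key named in IDVAR.

    /* Illustrative sketch, not from the presentation: re-attach       */
    /* supplemental qualifiers (work.suppae) to the parent domain      */
    /* (work.ae), assuming all records have IDVAR='AESEQ'.             */

    proc sort data=suppae out=suppae_srt;
       by usubjid idvarval;
    run;

    /* Turn the QNAM/QVAL pairs into one column per qualifier */
    proc transpose data=suppae_srt out=supp_wide(drop=_name_ _label_);
       by usubjid idvarval;
       id qnam;
       idlabel qlabel;
       var qval;
    run;

    /* IDVARVAL is character in SUPP--; convert to match numeric AESEQ */
    data supp_wide;
       set supp_wide;
       aeseq = input(idvarval, best12.);
       drop idvarval;
    run;

    proc sort data=ae;        by usubjid aeseq; run;
    proc sort data=supp_wide; by usubjid aeseq; run;

    data ae_all;
       merge ae(in=in_parent) supp_wide;
       by usubjid aeseq;
       if in_parent;  /* keep only records present in the parent domain */
    run;

Choosing an inappropriate IDVAR, or leaving QVAL values coded, is exactly what makes a merge like this produce the unexpected or uninterpretable results the abstract describes.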


What is the “ADaM OTHER” Class of Datasets, and When Should it be Used?
John Troxell, Accenture

As is well known by now, the CDISC ADaM team has defined four classes of ADaM datasets: ADSL (Subject-Level Analysis Dataset), BDS (Basic Data Structure), ADAE (Adverse Events Analysis Dataset), and OTHER. The ADAE class is soon to be generalized into the OCCDS (Occurrence Data Structure) class. The ADSL, BDS and ADAE/OCCDS structures are defined and widely used. However, the OTHER class is by nature relatively unspecified and mysterious. This presentation explores philosophical and practical questions surrounding the OTHER class of ADaM datasets, including considerations for deciding when its use is appropriate, and when it is not.


Useful Tools from the Computational Sciences Symposium
Sandra Minjoe, Accenture
John Brega, PharmaStat

The Computational Sciences Symposium (CSS) has several Working Groups, including one called “Optimizing the Use of Data Standards”. Projects within this working group have developed tools that are useful whether or not a study is “standard”. This presentation covers the white papers created by the “Traceability and Data Flow” project, noting especially how to handle cases other than the linear flow from collection data -> tabulation data -> analysis data -> analysis results. It also walks through the two reviewer guides, for tabulation data and analysis data, and describes how they are used to document study-specific information not adequately covered in the data or the define.xml. These CSS tools can be used with or without CDISC to help us tell our study’s story.


Interpreting and Using the Validation Results from Automated Tools
Dave Borbas, Jazz Pharmaceuticals

The development and enhancement of industry standards, and of automated tools to check conformance to those standards, is a major improvement in the pharmaceutical industry. It reduces manual work and allows human experts to focus on the important elements of the content of a submission. Automated tools can check and confirm dataset structure: formats, lengths, and variable names and labels. They can also provide some data-integrity checks on the content and the metadata. These tools allow you to focus on a higher-order review of the data content, but they are not a complete substitute for validating the results themselves: error messages and other problems still occur and must be interpreted. This presentation will focus on the understanding needed to determine the correct action to take when these instances occur.
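To give a flavor of the structural checks such tools automate, here is a minimal SAS sketch (illustrative only, not from the presentation). It compares the actual attributes of WORK datasets, read from the DICTIONARY tables, against a hypothetical specification dataset work.spec with columns MEMNAME, NAME, TYPE, LENGTH, and LABEL, where TYPE uses the same 'num'/'char' coding as DICTIONARY.COLUMNS:

    /* Illustrative sketch, not from the presentation: flag structural */
    /* differences between a spec (work.spec, hypothetical) and the    */
    /* actual metadata of datasets in the WORK library.                */
    proc sql;
       create table conformance_issues as
       select coalesce(s.memname, a.memname) as dataset,
              coalesce(s.name,    a.name)    as variable,
              case
                 when s.name is null       then 'Variable not in specification'
                 when a.name is null       then 'Specified variable is missing'
                 when s.type   ne a.type   then 'Type mismatch (num vs char)'
                 when s.length ne a.length then 'Length mismatch'
                 when s.label  ne a.label  then 'Label mismatch'
              end as issue
       from spec as s
            full join
            (select memname, upcase(name) as name, type, length, label
               from dictionary.columns
              where libname = 'WORK') as a
            on s.memname = a.memname and upcase(s.name) = a.name
       where calculated issue is not null;
    quit;

A report like conformance_issues is the easy part; the presentation's point is the judgment needed afterward, deciding which flagged items are real defects and which are acceptable.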


Beyond OpenCDISC: Using Define.xml Metadata to Ensure End-to-End Submission Integrity
John Brega, PharmaStat
Linda Collins, PharmaStat

The Define.xml standard is a flexible method for documenting both standard and non-standard tabulation and analysis data. Define.xml and the data it describes have to form a coherent package, whether the data are in CDISC standard formats or not. CDISC standards (define.xml, SDTM, and/or ADaM) describe what the outputs should look like, but not how to create them. Most sponsors use some form of metadata for dataset specifications; there can be big advantages to making those metadata an integral, automated part of the process. We will show how the effective use of metadata can drive production and quality control for both data and documentation across an entire submission. With the advent of results metadata in Define-XML 2.0, the same discipline can be extended to documenting tables, listings, and graphs.
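As a simple illustration of that idea (a sketch under assumed inputs, not the presenters' actual system), the same variable-level metadata that feeds define.xml can also assign dataset attributes, so the data and the documentation cannot drift apart. Here work.var_meta is a hypothetical specification with columns DOMAIN, NAME, LENGTH (e.g. '$200' or '8'), and LABEL, and work.dm_raw is a hypothetical input dataset:

    /* Sketch under assumed inputs, not the presenters' actual system: */
    /* generate an ATTRIB statement from the metadata behind           */
    /* define.xml, so variable order, lengths, and labels in the       */
    /* output always match the specification.                          */
    data _null_;
       set var_meta(where=(domain = 'DM')) end=last;
       if _n_ = 1 then call execute('data dm_final; attrib');
       call execute(catx(' ', name,
                         'length=' || strip(length),
                         'label="' || strip(label) || '"'));
       if last then call execute('; set dm_raw; run;');
    run;

CALL EXECUTE stacks the generated DATA step and runs it after this step completes; because the generated ATTRIB statement precedes the SET statement, the metadata, not the incoming data, controls the structure of work.dm_final.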