Paper presentations are the heart of a SAS users group meeting. PharmaSUG 2016 will feature over 200 paper presentations, posters, and hands-on workshops. Papers are organized into 14 academic sections and cover a variety of topics and experience levels. You can also view this information in our Interactive Schedule Grid.
Note: This information is subject to change. Last updated 28-Apr-2016.
Click on a section title to view abstracts for that section, or scroll down to view them all.
- Applications Development
- Beyond the Basics
- Career Planning
- Data Standards
- Data Visualizations & Graphics
- Hands-on Training
- Healthcare Analytics
- Industry Basics
- Management & Support
- Quick Tips
- Statistics & Pharmacokinetics
- Submission Standards
- Techniques & Tutorials
Career Planning
|Paper No.||Author(s)||Paper Title (click for abstract)|
|CP01||Kirk Paul Lafler||What's Hot - Skills for SAS® Professionals|
|CP02||Janet Stuelpner||Don't be a Diamond in the Rough: Tips to Employment|
|CP03||Kathy Bradrick & Dawn Edgerton||Are you thinking about becoming an independent contractor? Things to consider as you plan for your new entrepreneurship.|
|CP04||James Meiliang Yue||Career Path for SAS profession in Pharmaceutical Industry|
|CP05||Viktoriia Vasylenko||Journey from the student to the full time programmer|
|CP06||Assir Abushouk & Thu-Nguyen Nguyen||My First Job Dos-and-Don'ts A Survival Guide for Your First Statistical Programming Job in the Industry|
|CP08||Rajinder Kumar & Sudarshan Reddy Shabadu & Houde Zhang||There is No Time Like Present-Being from Elf to the True Self|
|CP09||Kirk Paul Lafler||A Review of "Free" Massive Open Online Content (MOOC) for SAS® Learners|
Hands-on Training
|Paper No.||Author(s)||Paper Title (click for abstract)|
|HT01||Kirk Paul Lafler||Hands-on SAS® Macro Programming Essentials for New Users|
|HT02||Art Carpenter||PROC REPORT: Compute Block Basics|
|HT03||Vince Delgobbo||New for SAS® 9.4: A Technique for Including Text and Graphics in Your Microsoft Excel Workbooks, Part 1|
|HT04||Sergiy Sirichenko||Usage of Pinnacle 21 Community Toolset 2.1 for Clinical Programmers|
|HT05||Art Carpenter||Building and Using User Defined Formats|
|HT06||Bill Coar||Combining TLFs into a Single File Deliverable|
|HT07||Bob Hull||Cool Tool School|
|HT08||Eric Herbel||Hands On Training-Automated Patient Narratives and Tabular and Graphical Patient Profiles using JReview|
Healthcare Analytics
|Paper No.||Author(s)||Paper Title (click for abstract)|
| ||& Pari Hemyari||PrecMod: An Automated SAS® Macro for Estimating Precision via Random Effects Models|
|HA03||Pushpa Saranadasa||Working with composite endpoints: Constructing Analysis Data|
|HA04||Aran Canes||What's the Case? Applying Different Methods of Conducting Retrospective Case/Control Experiments in Pharmacy Analytics|
|HA05||Stephen Ezzy||Four "Oops" Moments While Using Electronic Health Records to Identify a Cohort of Medication Users|
Management & Support
Statistics & Pharmacokinetics
Techniques & Tutorials
Applications Development
AD02 : Efficient safety assessment in clinical trials using the computer-generated AE narratives of JMP Clinical
Richard Zink, JMP Life Sciences, SAS Institute
Drew Foglia, JMP Life Sciences, SAS Institute
Monday, 1:45 PM - 2:35 PM, Location: Centennial G
ICH Guideline E3 recommends that sponsors provide written narratives describing each death, serious adverse event (SAE), and other significant AE of special interest to the disease under investigation. Narratives summarize the details surrounding these events to enable understanding of the circumstances that may have led to the occurrence and its subsequent management and outcome. Ultimately, narratives may shed light on factors associated with severe events, or describe effective means for managing patients for appropriate recovery. Narratives are written from the original SAE report in combination with summary tables and listings that are generated as part of the study deliverables. Information contained in the typical narrative requires the medical writer to review these many disparate sources. This is time consuming and often requires additional review and quality control. Too often, narratives may not be composed until the full data for a patient becomes available. This may cause narratives to become a rate-limiting factor in completing the CSR. Further, while changes to the study database are easily reflected in statistical tables by re-running programs, changes to narratives occur manually which may result in incorrect reporting. Finally, patients in therapeutic areas for severe conditions likely experience numerous SAEs; the volume of events to summarize can consume a great deal of resources. In this talk, we describe how AE narratives can be generated directly from study data sets using JMP Clinical. Further, we discuss the importance of standards and templates for narrative text, as well as utilizing CDISC data submission standards for narrative content.
AD03 : The Benefits and Know-how of Building a central CDISC Terminology Dictionary
Solomon Lee, K Solomon LLC
Monday, 1:15 PM - 1:35 PM, Location: Centennial G
CDISC terminology compliance is an important part of a CDISC-format submission. Without a good macro system to process it, compliance can be messy and time-consuming. This paper addresses the benefits and know-how of building a CDISC terminology processing macro system. First, a study-level format dictionary can be efficiently established to map the non-compliant entries of those variables that fall within the coverage of CDISC terminology. Second, the study-level C-term dictionary can be used as a building block for a central C-term dictionary, either at the project level or at the therapeutic level. As a result, processing efficiency will further improve as more studies are processed. The same method and code framework could be applied to other kinds of dictionaries in clinical trial processing, such as building a central lab name dictionary or a lab test unit conversion dictionary.
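As a flavor of the approach, here is a minimal sketch (not the author's system) of a study-level format dictionary built with PROC FORMAT's CNTLIN= option; the data set CT_SEX and its columns COLLECTED_VALUE and SUBMISSION_VALUE are hypothetical stand-ins for a codelist.

```sas
/* Hypothetical codelist: collected terms mapped to CDISC submission values */
data ctfmt;
   retain fmtname 'SEXCT' type 'C';
   set ct_sex(rename=(collected_value=start submission_value=label));
run;

proc format cntlin=ctfmt;    /* turn the dictionary into a character format */
run;

data dm;
   set raw_dm;
   sex = put(sex_raw, $sexct.);   /* map non-compliant entries to controlled terminology */
run;
```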
AD04 : SAS integration with NoSQL data
Kevin Lee, Clindata Insight
Monday, 3:30 PM - 3:50 PM, Location: Centennial G
AD06 : All Aboard! Next Stop is the Destination Excel
William E Benjamin Jr, Owl Computer Consultancy LLC
Monday, 4:00 PM - 4:20 PM, Location: Centennial G
Over the last few years both Microsoft Excel file formats and the SAS® interfaces to those Excel formats have changed. SAS® has worked hard to make the interface between the two systems easier to use, starting with "Comma Separated Values" (CSV) files and moving to PROC IMPORT and PROC EXPORT, LIBNAME processing, SQL processing, SAS® Enterprise Guide®, JMP®, and then on to the HTML and XML tagsets like MSOFFICE2K and EXCELXP. Well, there is now a new entry into the processes available for SAS users to send data directly to Excel. This new entry into the ODS arena of data transfer to Excel is the ODS destination called EXCEL. This process is included within SAS ODS and produces native format Excel files for version 2007 of Excel and later. It was first shipped as an experimental version with the first maintenance release of SAS® 9.4. This ODS destination has many features similar to the EXCELXP tagset.
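For readers who have not yet tried the destination, a minimal sketch follows; the file name and options shown are illustrative, not taken from the paper.

```sas
/* Write a native .xlsx file with the ODS EXCEL destination */
ods excel file="class.xlsx"
          options(sheet_name="Class Report" embedded_titles="yes");
title "Native .xlsx output from the ODS EXCEL destination";
proc print data=sashelp.class noobs;
run;
ods excel close;
```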
AD07 : Enhanced OpenCDISC Validator Report for Quick Quality review!
Ajay Gupta, PPD Inc
Tuesday, 1:15 PM - 1:35 PM, Location: Centennial G
OpenCDISC validator provides great compliance checks against CDISC outputs like SDTM, ADaM, SEND and Define.xml. The validator provides a report in Excel or CSV format which contains errors, warnings, and notices. At the initial stage of clinical programming, when the data are not very clean, this report can sometimes be very large and tedious to review. Also, if there are data or code list issues in the report, the user needs to check the physical SAS data sets or SDTM controlled terminology separately, which can be very time-consuming. In order to expedite quality review time, this paper will introduce an enhanced version of the OpenCDISC validator report. This enhanced report has the SDTM data (only rows with issues) and SDTM terminology added as separate worksheets in the original report. Hyperlinks between each message in the report and the related SDTM data or SDTM code list worksheets are then added using Excel formulas and Visual Basic for Applications (VBA). These hyperlinks provide point-and-click options to check data and code list related issues immediately in the enhanced report, which saves significant time with minimal coding. This enhanced report can be further developed to cover ADaM and SEND databases.
AD08 : The Power of Perl Regular Expressions: Processing Dataset-XML documents back to SAS Data Sets.
Joseph Hinson, inVentiv Health
Tuesday, 1:45 PM - 2:05 PM, Location: Centennial G
The seemingly intimidating syntax notwithstanding, Perl Regular Expressions (PRE) are so powerful they can overcome and parse the most complex non-uniform textual data. A "regular expression" is a string of characters that defines a particular pattern of data, and is used for matching, searching, and replacing text. In SAS, PRE is implemented via the PRX functions such as PRXPARSE, PRXMATCH, PRXCHANGE, and PRXPOSN. Consider a situation where a date has to be extracted from data and the date can be present in a wide variety of forms: "Jan 1, 1960", "January 1st, 1960", "1st January, 1960", "1/1/60", "01:01:1960". With PRE, all the above forms of dates can be deciphered with the same, single PRXPARSE code, much as the human eye can quickly glance through all those date formats and instantly know they are always referring to the same "first day of the first month of the year 1960". Thus it comes as no surprise that the XML data format, with its disparate forms of tags and elements, can easily be processed using PRE techniques. With PRE, all the extraneous non-SDTM text can be "ignored" as the records of XML data are read. The SAS version 5 XPT file format is scheduled to be replaced, according to the FDA, by the new CDISC Dataset-XML standard, and various techniques are currently being developed for processing the XML data structure. The present paper will show how to easily convert SDTM data in XML format back to a regular SAS data set, using the PRE technique.
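A minimal sketch of the single-pattern idea (not the paper's code): the expression below handles the delimited forms like 1/1/60 and 01:01:1960 with one compiled pattern; the spelled-out forms would need a wider pattern.

```sas
data dates;
   length raw $40;
   retain re;
   input raw :$40.;
   if _n_ = 1 then
      re = prxparse('/(\d{1,2})[\/:.](\d{1,2})[\/:.](\d{2,4})/');
   if prxmatch(re, raw) then do;
      month = input(prxposn(re, 1, raw), 8.);   /* capture buffer 1 */
      day   = input(prxposn(re, 2, raw), 8.);
      year  = input(prxposn(re, 3, raw), 8.);
   end;
   datalines;
1/1/60
01:01:1960
;
```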
AD09 : The Devil is in the Details - Reporting from Pinnacle 21 (OpenCDISC) Validation Report
Amy Garrett, Novella Clinical
Chris Whalen, Clovis Oncology
Tuesday, 2:15 PM - 3:05 PM, Location: Centennial G
Pinnacle 21 Community Validator (P21, previously known as OpenCDISC Community) is a valuable tool for SDTM implementers, but the resulting report has some limitations. The P21 report does not display full information about erroneous records, making it difficult for the reviewer to discern which issues are caused by dirty data versus incorrect programming or data mapping. This limitation means programmers must manually look up records based on observation number (provided on the details tab of the P21 report), which is extremely time-consuming. This cumbersome process creates the need for a detailed listing report to expedite the review of P21 validation findings. The resulting report includes a series of customizable listings organized by domain and issuer ID (FDA Publisher ID). The final output saves time, helps programmers understand issues more thoroughly, and provides a tangible product that can be delivered to other team members for further investigation, including data management for querying. This paper will focus on how to create a detailed report of data issues requiring further inquiry and is intended for use by the data management team.
AD11 : Best Practice Programming Techniques for SAS® Users
Kirk Paul Lafler, Software Intelligence Corporation
Mary Rosenbloom, Alcon, a Novartis Company
Tuesday, 3:30 PM - 4:20 PM, Location: Centennial G
It's essential that SAS® users possess the necessary skills to implement "best practice" programming techniques when using the Base-SAS software. This presentation illustrates core concepts with examples to ensure that code is readable, clearly written, understandable, structured, portable, and maintainable. Attendees learn how to apply good programming techniques including implementing naming conventions for datasets, variables, programs and libraries; code appearance and structure using modular design, logic scenarios, controlled loops, subroutines and embedded control flow; code compatibility and portability across applications and operating platforms; developing readable code and program documentation; applying statements, options and definitions to achieve the greatest advantage in the program environment; and implementing program generality into code to enable its continued operation with little or no modifications.
AD12 : Building a Better Dashboard Using Base-SAS® Software
Kirk Paul Lafler, Software Intelligence Corporation
Roger Muller, Data To Events, Inc
Josh Horstman, Nested Loop Consulting
Tuesday, 4:30 PM - 4:50 PM, Location: Centennial G
Organizations around the world develop business intelligence dashboards to display the current status of "point-in-time" metrics and key performance indicators. Effectively designed dashboards often extract real-time data from multiple sources for the purpose of highlighting important information, numbers, tables, statistics, metrics, and other content on a single screen. This presentation introduces basic rules for "good" dashboard design, identifies the metrics frequently used in dashboards, and shows how to build a simple drill-down dashboard using the DATA step, PROC FORMAT, PROC PRINT, PROC MEANS, ODS, ODS Statistical Graphics, PROC SGPLOT and PROC SGPANEL in Base-SAS® software.
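As a taste of the building blocks named above, here is a minimal sketch of a single dashboard panel; the data set KPI and its variables MONTH and ENROLLED are hypothetical, and the full drill-down design layers several such panels with ODS.

```sas
ods html file="dashboard.html" style=htmlblue;
title "Enrollment by Month";
proc sgplot data=kpi;
   vbar month / response=enrolled stat=sum datalabel;  /* one KPI panel */
run;
ods html close;
```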
AD13 : The Little-Known DOCUMENT Procedure, a Utility for Manipulating Output Delivery System (ODS) Content
Roger Muller, Data To Events, Inc
Monday, 4:30 PM - 4:50 PM, Location: Centennial G
The DOCUMENT procedure is a little-known procedure that can save you vast amounts of time and effort when managing the output of your SAS® programming efforts. This procedure is deeply associated with the mechanism by which SAS controls output in the Output Delivery System (ODS). Have you ever wished you didn't have to modify and rerun the report-generating program every time there was some tweak in the desired report? PROC DOCUMENT enables you to store one version of the report as an ODS Document Object and then call it out in many different output forms, such as PDF, HTML, listing, RTF, and so on, without rerunning the code. Have you ever wished you could extract those pages of the output that apply to certain "BY variables" such as State, StudentName, or CarModel? With PROC DOCUMENT, you have WHERE capabilities to extract these pages. Do you want to customize the table of contents that assorted SAS procedures produce when you make frames for the table of contents with HTML, or use the facilities available for PDF? PROC DOCUMENT enables you to get to the inner workings of ODS and manipulate them. This paper addresses PROC DOCUMENT from the viewpoint of end results, rather than providing a complete technical review of how to do the task at hand. The emphasis is on the benefits of using the procedure, not on detailed mechanics.
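The store-once, replay-many pattern the abstract describes looks roughly like this minimal sketch (destination names and file paths are illustrative):

```sas
ods document name=work.rpt(write);   /* capture output once as an ODS document */
proc means data=sashelp.class;
   class sex;
run;
ods document close;

ods pdf file="report.pdf";
proc document name=work.rpt;
   replay;                           /* re-render to PDF without rerunning PROC MEANS */
run;
quit;
ods pdf close;
```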
AD15 : Automating Patient Narratives - The Medical Writer Loves Me!
Scott Burroughs, PAREXEL International
Wednesday, 8:00 AM - 8:20 AM, Location: Centennial G
Patient narratives have long been part of the safety review process for studies that go on to submission. For much of this time, they have been done by Medical Writers who pored through the CRFs and entered each subject's data by hand. With larger and longer studies, this can be quite the arduous task. Newer methodologies have toyed with the use of a mail merge to a spreadsheet, but with the data available to us, why can't these be programmed/automated by us? They can! As long as the wording of each sentence and paragraph follows a script of sorts, it should be easy, right? Not so fast: formatting and length can be issues to tackle. This paper goes into detail on the various issues that arose in doing my first patient narratives for a Medical Writer.
AD17 : The New STREAM Procedure as a Virtual Medical Writer
Joseph Hinson, inVentiv Health
Wednesday, 8:30 AM - 8:50 AM, Location: Centennial G
One of the really cool features of SAS® 9.4 is the STREAM procedure. This new tool allows free-form text containing macro elements to be streamed directly to an external file, completely bypassing the SAS® compiler. The SAS® word scanner allows the macro processor to resolve the macro elements, after which text and graphical elements are streamed directly to the external file without engaging the compiler. This means SAS® syntax violators like HTML and XML tags do not trigger errors. This exception permits greater freedom in composing text for processing, and is thus very well suited for reports such as patient narratives, patient profiles, reviewer's guides, and investigator's brochures. Such documents typically contain data in a variety of formats: textual paragraphs interspersed with bulleted lists, tables, images, graphs, schematics, and listings, all of which are easily incorporated by the use of macro elements like %INCLUDEs, macro variables, and even macro calls. The key feature of PROC STREAM is an input stream of free text embedded with macro elements, where the macros are executed and expanded while the remaining text in the input stream is preserved and not validated as SAS® syntax. This flexibility of PROC STREAM is demonstrated in this paper with the creation of patient narratives and investigator brochures.
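A minimal sketch of the input-stream idea, with illustrative macro variables standing in for study data:

```sas
%let subjid = 1001-003;
%let aeterm = anaphylaxis;

filename narr "narrative.txt";
/* the input stream begins at BEGIN and ends with four semicolons;
   macro references resolve, the rest is passed through unvalidated */
proc stream outfile=narr;
BEGIN
Subject &subjid experienced a serious adverse event of &aeterm,
which led to permanent discontinuation of study drug.
;;;;
```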
AD18 : Consider Define.xml Generation during Development of CDISC Dataset Mapping Specifications
Vara Prasad Sakampally, Softworld, Inc.
Bhavin Busa, Softworld, Inc.
Wednesday, 9:45 AM - 10:05 AM, Location: Centennial G
As per the submission data standards set by the FDA for a new drug application, the sponsor has to provide a complete and informative define.xml as part of the dataset submission packet, along with other required components. The FDA specifies that the metadata of a submitted dataset be provided per the CDISC Define-XML standard, by virtue of its being both machine and human readable. Most sponsors consider generating or receiving the define document from their vendor during the final steps of dataset submission. There are multiple papers discussing different approaches to creating the define.xml file. In this paper, we present an approach where we have used the Pinnacle21® validator to generate define.xml during the specification and dataset development phase. The paper also provides insight into using the Pinnacle21® validator as a tool, with a constructive approach, to generate define.xml and validate the datasets during the SDTM development lifecycle.
AD19 : SAS® Office Analytics: An Application In Practice Monitoring and Ad-Hoc Reporting Using Stored Process
Kamal Chugh, Roche Molecular Systems, Inc.
Wednesday, 10:15 AM - 10:35 AM, Location: Centennial G
There are always time constraints when it comes to ad-hoc reporting while the project work is in full swing. There are always numerous and urgent requests from various cross-functional groups regarding the study progress, e.g., enrollment rates. Typically a programmer has to work on these requests along with the study work, which can be stressful. To address this need for monitoring data in real time and tailoring client requirements into portable reports, SAS® has come out with a powerful tool: SAS Office Analytics/Visual Analytics. SAS Office Analytics with the Microsoft Add-In provides excellent real-time data monitoring and report generation capabilities with which a SAS programmer can take reporting and monitoring to the next level. Using this powerful tool, a programmer can build interactive customized reports, which can be saved as a "stored process"; anyone can then view, customize, and comment on these reports using Microsoft Office. This paper will show how to create these customized reports in SAS, convert them into stored processes using SAS Enterprise Guide® Software, and run them anytime using the Microsoft Office Add-In. Anyone with knowledge of Microsoft Office can then harness the power of SAS running in the background to generate these reports once a programmer has converted the client's needs into stored processes.
Beyond the Basics
BB01 : Color, Rank, Count, Name; Controlling it all in PROC REPORT
Art Carpenter, CA Occidental Consultants
Monday, 1:15 PM - 2:05 PM, Location: Centennial A
Managing and coordinating various aspects of a report can be challenging. This is especially true when the structure and composition of the report is data driven. For complex reports the inclusion of attributes such as color, labeling, and the ordering of items complicates the coding process. Fortunately we have some powerful reporting tools in SAS® that allow the process to be automated to a great extent. In the example presented in this paper we are tasked with generating an EXCEL® spreadsheet that ranks types of injuries within age groups. A given injury type is to receive a constant color regardless of its rank and the labeling is to include not only the injury label, but the actual count as well. Of course the user needs to be able to control such things as the age groups, color selection and order, and number of desired ranks.
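A minimal sketch of the data-driven styling mechanism involved (CALL DEFINE in a compute block); the data set RANKED, its variables, and the threshold are hypothetical, and the fully data-driven version the paper describes would generate such assignments programmatically.

```sas
proc report data=ranked;
   column injury n;
   define injury / display  'Injury Type';
   define n      / analysis 'Count';
   compute n;
      /* data-driven highlighting: flag any count over 100 */
      if n.sum > 100 then
         call define(_col_, 'style', 'style={background=yellow}');
   endcomp;
run;
```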
BB02 : Performing Pattern Matching by Using Perl Regular Expressions
Arthur Li, City of Hope
Monday, 2:15 PM - 3:05 PM, Location: Centennial A
SAS® provides many DATA step functions to search and extract patterns from a character string, such as SUBSTR, SCAN, INDEX, TRANWRD, etc. Using these functions to perform pattern matching often requires utilizing many function calls to match a character position. However, using the Perl Regular Expression (PRX) functions or routines in the DATA step will improve pattern matching tasks by reducing the number of function calls and making the program easier to maintain. In this talk, in addition to learning the syntax of Perl Regular Expressions, many real-world applications will be demonstrated.
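As a minimal sketch of the call-count savings, the single PRXCHANGE call below replaces what would otherwise take several TRANWRD/INDEX passes; the text and pattern are illustrative.

```sas
data _null_;
   text   = 'Call 919-555-1234 or 919.555.9876 for details.';
   /* one pattern masks both phone formats; -1 means replace all matches */
   masked = prxchange('s/\d{3}([-.])\d{3}\1\d{4}/###-###-####/', -1, text);
   put masked=;
run;
```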
BB03 : An Intersection of Pharma and Medical Devices - Development of Companion Diagnostics in Conjunction with Targeted Therapies
Carey Smoak, Portola Pharmaceuticals
Monday, 3:30 PM - 3:50 PM, Location: Centennial A
The intersection of pharma and devices is growing. For example, devices can be used to deliver drugs, e.g., drug-eluting heart stents. A growing area of drugs and devices is targeted therapies and companion diagnostics. This means that a subject who tests positive for a gene mutation (the companion diagnostic test) will receive the study drug, while subjects who test negative will not. The clinical validation of the companion diagnostic test is done in conjunction with the conduct of a pharma clinical trial (Phase II and/or III). Thus, collection of data on screen failures (subjects who test negative) is necessary for the calculation of sensitivity and specificity, e.g., using SAS PROC FREQ. Sensitivity and specificity are key analyses for companion diagnostic tests. This paper will demonstrate the clinical development of a companion diagnostic test during testing of a drug in Phase II and/or III.
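Sensitivity and specificity can be read from the row percentages of the 2x2 table, as in this minimal sketch; the data set TESTS and the variables TRUTH and CDX are hypothetical.

```sas
proc freq data=tests;
   tables truth*cdx / nocol nopercent;
   /* row % where TRUTH=Positive and CDX=Positive -> sensitivity */
   /* row % where TRUTH=Negative and CDX=Negative -> specificity */
run;
```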
BB04 : Go Compare: Flagging up some underused options in PROC COMPARE
Michael Auld, Independent
Monday, 4:00 PM - 4:50 PM, Location: Centennial A
A brief overview of the features of PROC COMPARE, together with a demonstration of a practical example of this procedure's output data set. Identify observations that have changed during the lifetime of a study when different cuts have been made to varying snapshots of the same database, then create flags from this metadata that can be used in safety update CSR listings.
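One of the underused features alluded to is the OUT= data set; here is a minimal sketch, assuming two snapshot data sets SNAP1 and SNAP2 keyed by USUBJID.

```sas
proc compare base=snap1 compare=snap2
             out=diffs outnoequal outbase outcomp noprint;
   id usubjid;
run;
/* DIFFS holds only unequal records (_TYPE_ = BASE/COMPARE),
   ready to drive "new/changed" flags in CSR listings */
```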
BB05 : DOSUBL and the Function Style Macro
John King, Ouachita Clinical Data Services, Inc.
Tuesday, 8:00 AM - 8:50 AM, Location: Centennial A
The introduction of the SAS function DOSUBL has made it possible to write certain function-style macros that were previously impossible or extremely difficult. This Beyond the Basics talk will discuss how to write a function-style macro that uses DOSUBL to run SAS code and return an "Expanded Variable List" as a text string. While this talk is directed toward a specific application, the techniques can be more generally applied.
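A minimal sketch of the pattern (not the author's macro), using DOSUBL to run PROC SQL in a side session and hand an expanded variable list back as text; the macro name and arguments are hypothetical.

```sas
%macro varlist(lib, ds);
   %local rc vlist;
   /* DOSUBL runs the code in a side session; macro variables it
      creates are passed back to this calling environment */
   %let rc = %sysfunc(dosubl(%str(
      proc sql noprint;
         select name into :vlist separated by ' '
            from dictionary.columns
            where libname = %upcase("&lib") and memname = %upcase("&ds");
      quit;
   )));
   &vlist.
%mend varlist;

%put Expanded list: %varlist(sashelp, class);
```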
BB06 : Superior Highlighting: Identifying New or Changed Data in Excel Output using SAS
Kim Truett, KCT Data, Inc
Tuesday, 9:00 AM - 9:20 AM, Location: Centennial A
Non-SAS-users often review data in Excel. As these outputs grow larger, the ability to identify new or changed data is invaluable to the reviewer. This is easily identified in SAS using procedures such as PROC COMPARE. However, it is not obvious how to automate representing this in Excel. Just prior to PharmaSUG 2013, I was approaching a solution to get SAS to automate this but could not figure out the final steps. So I took this puzzler to Coders' Corner. During the hour that followed, John King, Gary Moore, and I devised a solution. This paper is a presentation of that brainstorming session. It will show how a SAS program can automatically identify new or changed data using RTF codes that change the cell background color to indicate new or changed data in an Excel output.
BB07 : Meaningful Presentation of Clinical Trial Data with Multiple Y Axes Graph
Mina Chen, Roche
Tuesday, 9:45 AM - 10:05 AM, Location: Centennial A
Clear and informative graphics are a popular way to explore, analyze and display data in clinical trials. Graphical representation of drug safety or efficacy is typically easier to understand than a statistical analysis table. In many cases it is useful to add a secondary y-axis to display data with different scales on the same plot, for example, to illustrate the relationship between mean values of the weight/height-for-age percentile and a biomarker of interest. With the release of SAS® 9.4, there are a variety of new options to create such sophisticated analytical graphs. The goal of this paper is to show how to create graphs with multiple axes using PROC TEMPLATE and the SAS/GRAPH® SG procedures in SAS® 9.4. Some commonly used graphs in our daily work will be introduced.
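A minimal sketch of a dual-axis plot with PROC SGPLOT's Y2AXIS option; the data set VITALS and its variables are hypothetical.

```sas
proc sgplot data=vitals;
   series x=avisitn y=wtpct;
   series x=avisitn y=biomark / y2axis;   /* second scale on the right */
   xaxis  label="Analysis Visit";
   yaxis  label="Mean weight-for-age percentile";
   y2axis label="Mean biomarker (ng/mL)";
run;
```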
BB08 : Novel Programming Methods for Change from Baseline Calculations
Peter Eberhardt, Fernwood Consulting Group Inc
Mina Chen, Roche
Tuesday, 10:15 AM - 11:05 AM, Location: Centennial A
In many clinical studies, change from baseline is a common measure of safety and/or efficacy in clinical data analysis. There are several ways to calculate change from baseline in a vertically structured data set, such as the RETAIN statement, arrays, or DO loops in DATA steps, or PROC SQL. However, most of these techniques require operations such as sorting, searching, and comparing. As it turns out, these types of techniques are among the more computationally intensive and time-consuming. Consequently, an understanding of these techniques and a careful selection of the specific method can often save the user a substantial amount of computing resources. This paper demonstrates a novel way of calculating change from baseline using hash objects.
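A minimal sketch of the hash-lookup idea (not necessarily the authors' implementation), assuming a vertical data set LB with USUBJID, PARAMCD, a baseline flag ABLFL, and AVAL.

```sas
data chg;
   if _n_ = 1 then do;
      if 0 then set lb(keep=aval rename=(aval=base));  /* host BASE in the PDV */
      declare hash h(dataset: "lb(where=(ablfl='Y') rename=(aval=base))");
      h.defineKey('usubjid', 'paramcd');
      h.defineData('base');
      h.defineDone();
   end;
   set lb;
   if h.find() = 0 then chg = aval - base;   /* no sort, no merge */
run;
```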
BB09 : I Object: SAS® Does Objects with DS2
Peter Eberhardt, Fernwood Consulting Group Inc
Tuesday, 1:15 PM - 2:05 PM, Location: Centennial A
The DATA step has served SAS® programmers well over the years, and although it is powerful, it has not fundamentally changed. With DS2, SAS has introduced a significant alternative to the DATA step by introducing an object-oriented programming environment. In this paper, we share our experiences with getting started with DS2 and learning to use it to access, manage, and share data in a scalable, threaded, and standards-based way.
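For orientation, a minimal DS2 "hello world" showing the method-based, declared-type style the paper explores:

```sas
proc ds2;
   data _null_;
      /* methods and declared variable types are among the
         object-oriented additions DS2 brings to the DATA step */
      method init();
         dcl varchar(32) greeting;
         greeting = 'Hello from DS2';
         put greeting;
      end;
   enddata;
run;
quit;
```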
BB10 : Name the Function: Punny Function Names with Multiple MEANings and Why You Do Not Want to be MISSING Out.
Ben Cochran, The Bedford Group
Art Carpenter, CA Occidental Consultants
Tuesday, 2:15 PM - 3:05 PM, Location: Centennial A
The SAS DATA step is one of the best (if not the best) data manipulators in the programming world. One of the areas that gives the DATA step its power is the wealth of functions available to it. This paper takes a PEEK at some of the functions whose names have more than one MEANing. While the subject matter is very serious, the material will be presented in a humorous way that is guaranteed not to BOR the audience. With so many functions available, we had to TRIM our list so that the presentation could be made within the TIME allotted. This paper also discusses syntax as well as illustrates several examples of how these functions can be used to manipulate data.
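A tiny illustration of three of the punned functions (the values are illustrative):

```sas
data _null_;
   avg  = mean(2, 4, .);                     /* MEAN ignores missing values: 3 */
   miss = missing(' ');                      /* MISSING returns 1 for a missing value */
   name = trim('Carpenter   ') || ', Art';   /* TRIM drops trailing blanks */
   put avg= miss= name=;
run;
```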
BB11 : Surviving the SAS® Macro Jungle by Using Your Own Programming Toolkit
Kevin Russell, SAS
Tuesday, 3:30 PM - 4:20 PM, Location: Centennial A
Almost every night, there is a reality show on TV that shows someone trying to survive in a harsh environment by using just their wits. Although your environment might not be as harsh, you might feel you do not have all the tools you need to survive in the macro programming jungle. This paper provides you with ten programming tools to expand your macro toolkit and to make life in the jungle much easier. For example, the new DOSUBL function is the flint that ignites your code and enables you to dynamically invoke your macro using DATA step logic. Another tool is the stored compiled macro facility, which can provide protection from the elements by enabling you to keep your code safe from prying eyes. Also included is a discussion of the macro debugging system options, MLOGIC, SYMBOLGEN, and MPRINT. These options act as a compass to point you in the right direction. They can help guide you past a log riddled with errors to an error-free log that puts you back on the right path. CALL SYMPUTX, the new and improved version of CALL SYMPUT, automatically converts numeric values to character strings and even automatically removes trailing blanks. This improved routine saves you steps and gets you out of the jungle faster. These tools, along with macro comments, enhancements to %PUT, the MFILE system option, read-only macro variables, the IN operator, and SASHELP.VMACRO, help equip all macro programmers with the tools that they need to survive the macro jungle.
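Two of the tools mentioned, sketched minimally: the macro debugging options and CALL SYMPUTX's automatic conversion and trimming.

```sas
options mprint mlogic symbolgen;   /* the "compass": trace macro execution in the log */

data _null_;
   cutdays = 30;
   call symputx('cutdays', cutdays);   /* converts the number, trims blanks */
run;

%put NOTE: cutoff window is &cutdays days;
```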
BB12 : Preparing the data to ensure correct calculation of Relative Risk
Ravi Kankipati, Seattle Genetics
Abhilash Chimbirithy, Merck & Co.
Tuesday, 4:30 PM - 4:50 PM, Location: Centennial A
Relative Risk (RR), used frequently in statistical analysis, can be easily obtained by using a SAS® procedure. However, how the data are organized before calling the SAS procedure will affect the outcome and could result in an incorrect RR. From a programmer's perspective, when asked to provide RR for a group, or for subgroups within that group, one should be very careful to have the data organized correctly. In this paper we will walk through one such approach to derive the desired classification variables based on the data, focusing on organizing the data before finally calling SAS procedures to obtain a correct RR calculation.
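A minimal sketch of the procedure call involved; the data set ADAE and the variables TRTGRP and EVENT are hypothetical, and the comment marks exactly the organization issue the paper warns about.

```sas
proc freq data=adae;
   tables trtgrp*event / relrisk;   /* risk ratio estimates for the 2x2 table */
   /* check the table orientation: the risk ratio compares row 1 to row 2,
      so control row order (e.g., ORDER=DATA or formatted values) */
run;
```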
BB13 : The Power of Data Access Functions: An example with Dataset-XML Creation
Joseph Hinson, inVentiv Health
Wednesday, 8:00 AM - 8:20 AM, Location: Centennial A
Data Access Functions are part of the SAS® Component Language or "SCL" (formerly called "Screen Control Language"). Currently, they are not so widely used in clinical programming, except for the OPEN function which is commonly used in SAS® macros when one might need access to data without dealing with a step boundary. But many more functions are available, such as FETCHOBS, VARNUM, and GETVARC. As the name suggests, they provide access to the SAS® data set in a far more flexible way than one can obtain with the SET statement of the DATA step. They are particularly useful in converting tables of dissimilar structures. The SAS® data set is two-dimensional, whereas data structure of the Extensible Markup Language, XML, is hierarchical. Thus converting data sets to XML can be quite challenging. This paper will show that utilizing Data Access Functions can make the conversion of SAS® data sets to XML quite straightforward and easy. The paper uses the creation of the new CDISC Dataset-XML standard as an example.
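A minimal sketch of these functions in a DATA _NULL_ step (FETCH is used here; the FETCHOBS function mentioned above reads a specific observation number instead):

```sas
data _null_;
   length name $8;
   dsid = open('sashelp.class');    /* OPEN: get a data set id */
   pos  = varnum(dsid, 'Name');     /* VARNUM: column position by name */
   do while (fetch(dsid) = 0);      /* FETCH: returns 0 until end of file */
      name = getvarc(dsid, pos);    /* GETVARC: character value from the row */
      put name=;
   end;
   rc = close(dsid);
run;
```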
BB14 : Macro to get data from specific cutoff period
Kiranmai Byrichetti, SCRI
Jeffrey Johnson, SCRI
Wednesday, 8:30 AM - 8:50 AM, Location: Centennial A
During the clinical development of an investigational drug, periodic analysis of safety information is of great importance to the ongoing assessment of risk to trial subjects. It is also important to inform regulators and other interested parties (e.g., ethics committees) at regular intervals about the results of such analyses and the evolving safety profile of an investigational drug, and to apprise them of actions proposed or being taken to address safety concerns. For periodic analysis, data need to be pulled based on a specific time period, and doing this on each and every output or dataset is time-consuming; a macro for this task is a definite time saver. Manual selection of data based on particular dates is also prone to error. For this purpose, a macro has been developed, and this paper discusses a macro that applies a cutoff based on specific dates to all SDTM datasets available in both production and validation folders. The macro is developed in SAS 9.3, and users with basic SAS knowledge will be able to use it.
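A minimal sketch of what such a cutoff macro might look like; the macro, its parameters, and the partial-date handling are illustrative, not the authors' code.

```sas
%macro cutoff(inds=, outds=, dtcvar=, cutdt=);
   data &outds;
      set &inds;
      _dt = input(substr(&dtcvar, 1, 10), ?? yymmdd10.);  /* ISO 8601 --DTC */
      /* keep records on/before the cutoff; partial or missing dates are
         kept here for review and need study-specific rules in practice */
      if missing(_dt) or _dt <= "&cutdt"d;
      drop _dt;
   run;
%mend cutoff;

%cutoff(inds=sdtm.ae, outds=ae_cut, dtcvar=aestdtc, cutdt=01JAN2016)
```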
BB15 : The Impact of Change from wlatin1 to UTF-8 Encoding in SAS Environment
Hui Song, PRA HEALTH SCIENCES
Anja Koster, PRA HEALTH SCIENCES
Wednesday, 9:45 AM - 10:05 AM, Location: Centennial A
As clinical trials become globalized, there has been a steadily growing need to support multiple languages in the collected clinical data. The default encoding for a dataset in SAS is "wlatin1", which is used in the "western world" and can only handle ASCII/ANSI characters correctly. UTF-8 encoding can fulfill such a need: it is a universal encoding that can handle characters from all possible languages, including English, and it is backward compatible with ASCII. However, UTF-8 is a multi-byte character set while wlatin1 is a single-byte character set. This major difference in data representation imposes several challenges for SAS programmers when they (1) import and export files to and from wlatin1 encoding, (2) read wlatin1-encoded datasets in a UTF-8 SAS environment, and (3) create wlatin1-encoded datasets to meet clients' needs. In this paper, we will present concrete examples to help readers understand the difference between UTF-8 and wlatin1 encoding and provide practical solutions to address the challenges above.
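One practical pattern for challenge (2), sketched minimally with illustrative paths: the CVP engine expands character variable lengths so transcoding to UTF-8 does not truncate values.

```sas
/* UTF-8 session reading wlatin1 data */
libname src cvp "/studies/legacy";                 /* wlatin1-encoded library, lengths padded */
libname tgt "/studies/utf8" outencoding="utf-8";   /* write output as UTF-8 */

data tgt.dm;
   set src.dm;    /* transcoding happens automatically on the way through */
run;
```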
BB16 : UTF What? A Guide to Using UTF-8 Encoded Data in a SDTM Submission
Michael Stackhouse, Chiltern
Wednesday, 10:15 AM - 10:35 AM, Location: Centennial A
The acronyms SBCS, DBCS, and MBCS (i.e., single-, double-, and multi-byte character sets) mean nothing to most statistical programmers. Many do not concern themselves with the encoding of their data, but what happens when data encoding causes SAS to error? The errors produced by SAS and some common workarounds for the issue may not answer important questions about what the issue was or how it was handled. Though encoding issues can apply to any set of data, this presentation will be geared towards an SDTM submission. Additionally, a common origin of the transcoding error is UTF-8 encoded source data, whose use is rising in popularity among database providers, making it likely that this error will continue to appear with greater frequency. Therefore, the ultimate goal of this presentation will be to provide guidance on how to obtain fully compliant SDTM data, with the intent of submission to the FDA, from source datasets provided natively in UTF-8 encoding. Among other topics, we will first explore UTF-8 encoding, explaining what it is and why it is used. Furthermore, we will demonstrate how to identify the issues not explained by SAS, and recommend best practices dependent on the situation at hand. Lastly, we will review some preventative checks that may be added to SAS code to identify downstream impacts early on. By the end of this presentation, the audience should have a clear vision of how to proceed when their clinical database uses an encoding different from their native system.
Career Planning
CP01 : What's Hot - Skills for SAS® Professionals
Kirk Paul Lafler, Software Intelligence Corporation
Monday, 9:45 AM - 10:35 AM, Location: Centennial B
As a new generation of SAS® user emerges, current and prior generations of users have an extensive array of procedures, programming tools, approaches and techniques to choose from. This presentation identifies and explores the areas that are hot and not-so-hot in the world of the professional SAS user. Topics include Enterprise Guide, PROC SQL, PROC REPORT, Macro Language, DATA step programming techniques such as arrays and hash, SAS University Edition software, support.sas.com, sasCommunity.org®, LexJansen.com, JMP®, and Output Delivery System (ODS).
CP02 : Don't be a Diamond in the Rough: Tips to Employment
Janet Stuelpner, SAS
Monday, 8:00 AM - 8:50 AM, Location: Centennial B
No one can foresee a merger or acquisition. When it happens, big changes happen as well. So, what can you do to make yourself employable? What puts you ahead of the rest so they choose you instead of someone else? What do you need to do to get your next job? What are the resources that are available to you to help in the pursuit of your next opportunity? In this presentation, I will explore the types of things that will help you to succeed without spending time on the bench.
CP03 : Are you thinking about becoming an independent contractor? Things to consider as you plan for your new entrepreneurship.
Kathy Bradrick, Chiltern International, Inc.
Dawn Edgerton, Edgerton Data Consulting, LLC
Tuesday, 8:00 AM - 8:50 AM, Location: Centennial B
Have you contemplated starting your own business as an independent contractor? There are many things to consider such as financial planning, establishing a business infrastructure, and support activities you will need such as IT.
CP04 : Career Path for SAS profession in Pharmaceutical Industry
James Meiliang Yue, inVentiv Health
Monday, 9:00 AM - 9:20 AM, Location: Centennial B
This presentation provides the career path for the SAS profession in the pharmaceutical industry: the education requirements for clinical data managers, statistical or clinical programmers, and statisticians, and the specialized subject-knowledge areas for career development. The intended audience is new SAS professionals, or anyone considering a career plan and development in the pharmaceutical industry.
CP05 : Journey from the student to the full time programmer
Viktoriia Vasylenko, Experis, Statistical Programming Analyst, Ukraine
Monday, 10:45 AM - 11:05 AM, Location: Centennial B
Suppose you want to work in the pharmaceutical industry, but you are studying in the field of applied mathematics. Of course, one natural question that may arise: what are the chances of me getting into such an industry? For me the answer turned out to be: 'pretty high!' In hindsight, I understand that pursuing a degree in applied mathematics is a bit of a risk; particularly for those of us who enrolled in the Clinical SAS University Program 2013-2014. Our class was the first experiment with this kind of program, so we were not sure about its success rate. Furthermore, we entered the program without fully understanding the benefits of pursuing such a field. Additionally, there comes a time when we as students question whether the substantial effort we put into our education and the knowledge we have accumulated so far will result in a better opportunity to secure a career in the field we want. In this talk, drawing from some of my experience, I will discuss how recent graduates without any solid work experience in IT may start their professional careers in the clinical research industry. In particular, I will put an emphasis on the path of a student in the Clinical SAS University program: his/her journey through the internship period, the company's hiring process, and eventually a professional career as a statistical programmer.
CP06 : My First Job Dos-and-Don'ts A Survival Guide for Your First Statistical Programming Job in the Industry
Assir Abushouk, PAREXEL
Thu-Nguyen Nguyen, PAREXEL
Tuesday, 9:00 AM - 9:20 AM, Location: Centennial B
Are you a statistical programmer who's new to the industry? Have no fear: some useful tips and tricks are here! After working a little over a year for a CRO, we wanted to share some things we've learned along the way, as well as include some personal anecdotes of our successes and failures. We hope that these ideas will be a surefire way to help you keep your first job and wow your boss. Throughout our presentation, we will discuss how to take charge of your own career, program more efficiently, and stay organized in the workplace. Although many of these skills can be applied to any first job, our focus is on helping new industry programmers. These suggestions may seem very simple and straightforward, but building good habits from the beginning is a great way to get your career off to a wonderful start.
CP08 : There is No Time Like Present-Being from Elf to the True Self
Rajinder Kumar, Novartis Healthcare Private Limited
Sudarshan Reddy Shabadu, Inventiv International Pharma Services Pvt. Ltd.
Houde Zhang, Novartis Pharmaceuticals
Tuesday, 10:45 AM - 11:05 AM, Location: Centennial B
In the past few years, the clinical industry has made great strides in the ethics of conducting clinical trials to improve the quality of human life. People such as biostatisticians and clinical SAS programmers play a vital role in any clinical trial study, raising the quality of reports by enhancing existing analyses, macrotizing tedious tasks, and so on; yet in most cases they fail to bring forward the hands-on work that is truly worth sharing with a global audience. Even with good situations and surroundings, and feeling inspired from inside, they fail to act because something downgrades their spirits. People often fail to think about "what exactly is needed to write a paper" rather than "what is overrated by themselves"; this paper covers the points that clear the air between these two questions. Going by human perceptions, people always expect some motivation, support, or helping hand to reach their goals. To fill in this motivation and bring about the transformation from being ignorant to enlightened, the paper includes forthright notes that uplift the spirit to take the self-initiative that helps one become a successful writer. Besides this, it speaks to the traits one needs to exercise in daily life to improve one's basic level of thinking about things that are simple by nature but made expensive by overthinking.
CP09 : A Review of "Free" Massive Open Online Content (MOOC) for SAS® Learners
Kirk Paul Lafler, Software Intelligence Corporation
Tuesday, 9:45 AM - 10:35 AM, Location: Centennial B
Leading online providers are now offering SAS® users "free" access to content for learning how to use and program in SAS. This content is available to anyone in the form of massive open online content (or courses) (MOOC). Not only is all the content offered for "free", but it is designed with the distance learner in mind, empowering users to learn using a flexible and self-directed approach. As noted on Wikipedia.org, "A MOOC is an online course or content aimed at unlimited participation and made available in an open access forum using the web." This presentation illustrates how anyone can access a wealth of learning technologies including comprehensive student notes, instructor lesson plans, hands-on exercises, PowerPoints, audio, webinars, and videos.
Data Standards
DS02 : Tips and tricks when developing Trial Design Model Specifications that provide Reductions in Creation time
Ruth Marisol Rivera Barragan, Chiltern International Inc.
Monday, 8:00 AM - 8:20 AM, Location: Centennial F
This presentation will give you some key points for developing trial design model specifications under SDTM versions up to 3.2. After the presentation, you will have a clearer idea of what inputs need to be gathered and understood in advance of starting work on the trial design domains, in order to move efficiently through the definition process. We will see, for example, that it is important to have: the latest protocol; a summary of the inclusion/exclusion criteria changes; the SDTM specification template; the latest version of the electronic Case Report Form (eCRF); as well as the specifications for the DM, IE, SE and SV domains. The presentation will also identify the main differences between SDTM IG version 3.1.3 and older versions for the TDM domains. A discussion will present the critical cross-checking between TDMs and other SDTM domains that is necessary for the most efficient specification development, for example: a. TA (Trial Arms) and the DM domain. b. TA (Trial Arms) and TE (Trial Elements). c. TE (Trial Elements) and the SE (Subject Elements) domain. d. TI (Trial Inclusion/Exclusion) and the IE domain. e. TV (Trial Visits) and the SV domain. Finally, the presentation will give the audience useful and specific examples of how to find the required information in the external references shown in the SDTM IG, which may not be as straightforward as it initially appears when working with the Trial Summary domain in version 3.1.3 (e.g., REGID, UNII, TSVERPCLAS, SNOMED, etc.).
DS03 : Results-Level Metadata: What, How, and Why
Frank Diiorio, CodeCrafters, Inc.
Jeffrey Abolafia, Rho
Monday, 8:30 AM - 9:20 AM, Location: Centennial F
The power and versatility of well-designed metadata has become widely recognized in recent years. Organizations creating standard-compliant FDA deliverables such as SDTM and ADaM datasets have found that metadata and associated tools both simplified workflow and improved output quality. They also facilitate creation of define.xml, which describes the "traceability," or flow of derivation, of a data point. The scope of this traceability is widening to include a heretofore neglected part of submission deliverables - analysis displays. The FDA's increasing emphasis on traceability and CDISC's 2015 release of the Analysis Results Metadata (ARM) schema extension to ODM have focused attention on this critical piece of the submission package. The schema extension provides sponsors with the ability to submit rich, descriptive metadata about key results. The schema also represents the missing link for end-to-end traceability: the ability to trace a variable's flow from SDTM to ADaM dataset to its use in analysis results. This paper is an overview of the ARM schema and its associated metadata and tools. It describes what is meant by results-level metadata; discusses collection techniques; illustrates how it can be used during the creation of analysis displays; and summarizes what is needed for results-level define.xml to be compliant. The paper should give the reader an appreciation of both the scope of the metadata and its usage beyond simply creating define.xml. Used to its full potential, it can be one of the many benefits of an end-to-end, metadata-driven system.
DS04 : Moving up! - SDTM 3.2 - What is new and how to use it
Alyssa Wittle, Chiltern International
Christine Mcnichol, Chiltern International
Tony Cardozo, Chiltern International Ltd.
Monday, 9:45 AM - 10:05 AM, Location: Centennial F
In December of 2013, the CDISC world blessed us with SDTM version 3.2. Now that the FDA has approved the use of the new version of the implementation guide and the requirement is on the horizon, we need to consider all that the new IG has in store for our clinical trials and what handling or considerations should be made for the noted exceptions. Don't be overwhelmed by the file size or the fancy new format; this session will lay out the new IG in easy-to-understand ways and show you how to apply it to your studies today! It will cover new domains, significant changes to our existing domains, and applied examples to walk you through how to transition to this new version. We will cover the bigger changes in depth, such as the EX/EC adjustment, but will also highlight the smaller changes to make sure you don't miss them as you become familiar with the new document. This session will have you SDTM 3.2 fluent in no time!
DS05 : Getting Started with Data Governance
Greg Nelson, ThotWave
Monday, 10:15 AM - 11:05 AM, Location: Centennial F
While there has been tremendous progress in technologies related to data storage, high performance computing and advanced analytic techniques, organizations have only recently begun to comprehend the importance of parallel strategies that help manage the cacophony of concerns around access, quality, provenance, data sharing and use. While data governance is not new, the drumbeat around it, along with master data management and data quality, is approaching a crescendo. Intensified by the increase in consumption of information, expectations about ubiquitous access, and highly dynamic visualizations, these factors are also circumscribed by security and regulatory constraints. In this paper we provide a summary of what data governance is and why it is important. However, we go beyond the obvious and provide practical guidance on what it takes to build out a data governance capability appropriate to the scale, size and purpose of the organization and its culture. Moreover, we will discuss best practices in the form of requirements that highlight what we think is important to consider as you provide that tactical linkage between people, policies and processes to the actual data lifecycle. To that end, our focus will include the organization and its culture, people, processes, policies and technology. Further, our focus will include discussions of organizational models as well as the role of the data steward(s), and provide guidance on how to formalize data governance into a sustainable set of practices within your organization.
DS06 : "It is a standard, so it is simple, right?": Misconceptions and Organizational Challenges of Implementing CDISC at a CRO
Susan Boquist, PAREXEL
Adam Sicard, PAREXEL
Monday, 11:15 AM - 11:35 AM, Location: Centennial F
Implementing CDISC standards into the clinical trial process presents several challenges, ranging from understanding the complexity of the task to deciding who should be doing which part of it, but it can yield many positive effects. There are many possible approaches, all with advantages and disadvantages. After experimenting with several different options, we have adopted a two-team model of database build and "end to end" SAS. While the approach is relatively new, by beginning to include CDISC standards in both teams, we are starting to see anecdotal evidence of positive effects in terms of reduced timelines, increased quality of outputs, and employee engagement. This presentation will review the path we have followed in the quest to achieve full CDISC implementation, with additional information on possible misconceptions, suggestions for streamlining processes, employee recruitment and training, resourcing models, and sponsor participation, in hopes that others may benefit from our mistakes and successes.
DS07 : Data Standards, Considerations and Conventions within the Therapeutic Area User Guides (TAUGs)
Jerry Salyers, Accenture Life Sciences
Kristin Kelly, Accenture Life Sciences
Fred Wood, Accenture Life Sciences
Monday, 1:15 PM - 2:05 PM, Location: Centennial F
One of the major initiatives in the pharmaceutical industry involving standards is the development of Therapeutic Area User Guides (TAUGs) under the CFAST (Coalition for Accelerating Standards and Therapies) initiative. CFAST is a collaborative effort led by CDISC, with participation from TransCelerate Biopharma, Inc., the FDA, and the National Institutes of Health. Currently, there are seventeen TAUGs (two at Version 2, fifteen at Version 1) available for download from the CDISC website. There are also a number currently in development that are moving towards public review and eventual publication. The development of these standards includes the design of case report forms and resulting metadata using the CDASH standard, the mapping of the collected data to SDTM-based datasets, and examples of how the SDTM-based data would be used in the production of ADaM (analysis) datasets. This design and mapping of specialized data to current standards provides many opportunities to implement new and different submission strategies while remaining compliant with the published standard. The hope is that, with this effort, data from clinical trials within these therapeutic areas will be more standardized across sponsors, allowing for easier and potentially quicker regulatory review.
DS08 : Deconstructing ADRS: Tumor Response Analysis Data Set
Steve Almond, Bayer Inc.
Tuesday, 8:00 AM - 8:20 AM, Location: Centennial F
From the perspective of someone new to both the oncology therapy area and working with ADaM, this paper describes the process of designing a data set to support the analysis of tumor response data. Rather than focus on the programmatic implementation of particular response criteria (e.g., RECIST, Cheson, etc.), we instead concentrate on the structural features of the ADaM data set. Starting from a simple description of a study's primary analysis needs, we explore the design impact of additional protocol features such as multiple response criteria algorithms, multiple medical evaluators, and adjudication. With an understanding of the necessary derived parameters to support our analyses, we have a better conceptual link between the collected SDTM RS data and our ADRS analysis data set -- regardless of response criteria being used (the implementation of which is beyond the scope here). We will also touch on some other practical considerations around missing data, interim analyses, further variations in response criteria, and alternate summary table formats which can have an impact on the design of the data set.
DS09 : Prepare for Re-entry: Challenges and Solutions for Handling Re-screened Subjects in SDTM
Charity Quick, Rho, Inc.
Paul Nguyen, Rho, Inc.
Tuesday, 10:15 AM - 10:35 AM, Location: Centennial F
A common problem for SDTM is tabulating data for subjects who enroll multiple times in a single trial. Currently, the FDA advises that as long as there is one USUBJID for each specific subject, a different subject identifier (SUBJID) value can be used for each screening attempt. It can be challenging to handle this data while conforming to both the SDTM Implementation Guide and FDA-published validation rules. Here, I suggest retaining one record in the Demographics domain (DM) for a trial subject using the latest SUBJID value and saving the previous subject identifiers in SUPPDM. From there, the ability to link observations with previous ID values in Supplemental data and adequate usage of VISIT and EPOCH values will facilitate mapping of most trials involving re-screened subjects. Specific considerations will include: 1. Identifying the subset of domains which will contain data from multiple enrollments. 2. Managing possible duplicate records when the same events or treatments can be reported at multiple screening visits, such as for the Adverse Events (AE), Concomitant Medications (CM), or Medical History (MH) domains. 3. Populating Findings domains as well as Subject visits (SV) and Disposition (DS) domains for multiple enrolled subject records.
DS10 : Associated Persons Domains - Who, What, Where, When, Why, How?
Michael Stackhouse, Chiltern
Alyssa Wittle, Chiltern International
Tuesday, 9:45 AM - 10:05 AM, Location: Centennial F
Many types of clinical studies collect information on people other than the person in the study. Family medical history - MH or a custom domain? Care-giver information? Until now, there was no home in the CDISC SDTM structure to standardize this information. With the publication of the Associated Persons SDTM IG, this standardization has arrived! However, since it is so new, many don't know that it exists and/or how to apply it. This presentation will explore the Associated Persons SDTM Implementation Guide through six investigational questions that make the application of the AP SDTM IG logical and even easy. During the presentation we will discuss "What" the Associated Persons IG is and "Who" these domains are intended for, "When" these domains apply in a study, and "Where" to put all the information. One of the biggest questions to be discussed is "How" to apply the information from the IG. Finally, we will discuss "Why" these domains are a necessary addition and why they should be applied to all studies wherever applicable. Many applied examples will be included, progressing from study structure to CRFs and annotations to translating the information from the raw data into the Associated Persons domain structure. While the transition to using Associated Persons domains may seem complex, by the end of this presentation attendees will have a thorough understanding of these domains and how to use them in their next clinical study.
DS11 : SDTM Trial Summary Domain: Putting Together the TS Puzzle
Kristin Kelly, Accenture Life Sciences
Jerry Salyers, Accenture Life Sciences
Fred Wood, Accenture Life Sciences
Monday, 2:15 PM - 3:05 PM, Location: Centennial F
The SDTM Trial Summary (TS) domain was updated with new variables and implementation strategies with the publication of SDTM v1.3/SDTMIG v3.1.3, in an effort to make the domain more useful to reviewers and machine readable to facilitate data warehousing. The FDA now considers TS an essential domain to include in a study submission and has developed tools that check whether the TS domain is populated according to the guidance in the SDTMIG and the FDA's Study Data Technical Conformance Guide (TCG). Even so, constructing an informative and complete TS domain can be challenging for sponsors in terms of interpreting current guidance and using the correct controlled terminology. This paper will discuss some of the challenges associated with populating specific TS parameters and provide some solutions for addressing them.
DS12 : Conformance, Compliance, and Validation: An ADaM Team Lead's Perspective
John Troxell, Accenture
Monday, 3:30 PM - 4:20 PM, Location: Centennial F
Conformance, compliance, and validation checks have become a hot topic. In 2008 as I became ADaM Team Lead, I observed that SDTM computer validation checks had not been developed by CDISC. I believe that CDISC alone should define what constitutes CDISC compliance and, specifically, the algorithms for testing those aspects of compliance that are checkable by computer. Compliance rules, in my view, are naturally part of the standard. The ADaM Team published the first version of the ADaM validation checks in 2010, based on the ADaM Implementation Guide published in 2009. Updated versions have been released to reflect subsequent ADaM publications. However, all is not yet perfect. This paper provides a suggested map of the space comprising various aspects of regulatory agency conformance, CDISC compliance, and computer validation checks. To facilitate discussion, the paper also attempts to define vocabulary to refer to various components of the space. The paper looks at the chain of things that have to go right with validation checks, starting from the standards themselves, through rule definition and implementation. Finally, the paper discusses recent efforts toward improving the accuracy of computer validation checks.
DS13 : Implementation of ADaM Basic Data Structure for Cross-over Studies
Songhui Zhu, A2Z Scientific Inc
Monday, 4:30 PM - 5:20 PM, Location: Centennial F
The Basic Data Structure (BDS) is the most widely used class of ADaM data sets and is suitable for most safety and efficacy analyses, such as laboratory, drug accountability, exposure, time-to-event, and questionnaire analyses. For seasoned ADaM programmers, implementing BDS is relatively easy when the study is a parallel design. However, implementing BDS data sets can be challenging when the trial is a cross-over study. The major challenges include selecting appropriate variables, creating extra sets of records when necessary, identifying baseline records, defining change from baseline when multiple baselines exist, defining period/phase, and defining record-level planned or actual treatment for post-baseline and/or baseline records. This paper illustrates how to deal with these issues using examples from cross-over studies.
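As a sketch of one such issue, period-specific baselines: assuming a BDS data set with ABLFL flagging each period's baseline record (variable names per ADaM; the derivation itself is illustrative), change from baseline can be reset within each period:

proc sort data=adlb;
  by usubjid paramcd aperiod adt;
run;

data adlb_chg;
  set adlb;                                 /* assumes BASE/CHG not already present */
  by usubjid paramcd aperiod;
  retain base;
  if first.aperiod then base = .;           /* reset per period: multiple baselines */
  if ablfl = 'Y' then base = aval;          /* period-specific baseline value */
  if ablfl ne 'Y' and base > . then chg = aval - base;
run;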
DS14 : TRTP and TRTA in BDS Application per CDISC ADaM Standards
Maggie Ci Jiang, Teva Pharmaceuticals
Tuesday, 8:30 AM - 9:20 AM, Location: Centennial F
The CDISC ADaM Implementation Guide v1.0 (IG) defines standards for using the TRTP and TRTA variables when developing ADaM BDS datasets, and provides examples illustrating those standards. However, the definitions and examples in the ADaM IG are limited to situations where the data are well behaved. Challenges arise in crossover studies when repeated visit records, unscheduled visit records, or records outside the treatment visit windows occur. This paper presents a review of experiences implementing TRTP and TRTA in ADaM BDS datasets and discusses the use of the TRTP and TRTA variables step by step with practical examples.
DS15 : Transforming Biomarker Data into an SDTM based Dataset
Kiran Cherukuri, Seattle Genetics
Tuesday, 10:45 AM - 11:35 AM, Location: Centennial F
Biomarkers play an increasingly important role in drug discovery and development. They are used as a tool for understanding the mechanism of action of a drug, investigating efficacy and toxicity signals at an early stage of pharmaceutical development, and identifying patients likely to respond to a treatment. This paper provides an introduction to the implementation of SDTM standards for data that define a genetic biomarker and data about genetic observations. The draft CDISC SDTM Pharmacogenomics/Genetics Implementation Guidance will be referenced, and the rationale for using specific aspects of the draft guidance, or for suggesting a modification, will be explained. The variables used, considerations taken, and the process for setting up the pharmacogenomics/genetics biomarker domains will be described.
DS16 : Codelists Here, Versions There, Controlled Terminology Everywhere
Shelley Dunn, Regulus Therapeutics
Tuesday, 1:15 PM - 2:05 PM, Location: Centennial F
Programming SDTM and ADaM data sets for a single study based on a single quarterly version of NCI Controlled Terminology (CT) can give a false sense that implementing CT correctly is a straightforward process. For a single study, the most complex issues may be extending extensible codelists sufficiently and ensuring all possible values on a CRF are accounted for within the study metadata. However, when looking beyond a single study toward processes that support end-to-end management, maintaining controlled terminology involves more complexity, requiring ongoing upkeep, compliance checking, archiving, and updating. To ensure FDA compliance, it behooves sponsors to develop processes for the maintenance and organization of CT. It is not enough to hire an external vendor and hope they apply CT correctly, or to assume the CT used is correct based on a clean vendor compliance report, primarily because industry terminology changes iteratively over time. Additionally, sponsor-defined terms must constantly be re-evaluated and compared to regularly published CDISC terminology. Some best practices include maintaining a CT repository/library; running checks to compare current sponsor CT to new CDISC CT, codelist codes, and other metadata; and adopting strategies for version control and processes for up-versioning. This paper provides an in-depth look at the requirements and governance needed to ensure consistent and compliant use of controlled terminology across an entire company. Example issues and workable solutions are also provided to illustrate a number of the challenges requiring this recommended rigor.
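As a simple illustration of such a check (data set and variable names are hypothetical), PROC SQL's EXCEPT operator can list terms present in a new CDISC CT release but missing from the sponsor library:

proc sql;
  title "New CDISC terms not yet in the sponsor CT library";
  select codelist_code, submission_value
    from ct_cdisc_new          /* hypothetical: latest quarterly CT release */
  except
  select codelist_code, submission_value
    from ct_sponsor;           /* hypothetical: extract of the sponsor repository */
quit;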
Data Visualizations & Graphics
DG01 : Now You Can Annotate Your GTL Graphs!
Dan Heath, SAS
Monday, 1:15 PM - 2:05 PM, Location: Centennial H
For some users, an annotation facility is an integral part of creating polished graphics for their work. To meet that need, we created a new annotation facility for the SG procedures in SAS 9.3. Now, with SAS 9.4, the Graph Template Language (GTL) supports annotation as well! In fact, the GTL annotation facility has some unique features not available in the SG procedures, such as using multiple sets of annotation in the same graph and the ability to bind annotation to a particular cell in the graph. This presentation covers basic annotation concepts common to both GTL and the SG procedures, and then applies those concepts to demonstrate the unique abilities of GTL annotation. Come see how you can take your GTL development to the next level!
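A minimal sketch of the GTL annotation workflow described here (the data and annotation text are illustrative, not from the presentation):

proc template;
  define statgraph annodemo;
    begingraph;
      layout overlay;
        scatterplot x=height y=weight;
        annotate;                        /* draw the SG annotation in this cell */
      endlayout;
    endgraph;
  end;
run;

data anno;
  function = "text"; label = "Annotated with GTL";
  x1 = 50; y1 = 95; drawspace = "graphpercent";
  textcolor = "red"; textsize = 10; width = 50;
run;

proc sgrender data=sashelp.class template=annodemo sganno=anno;
run;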
DG02 : Clinical Graphs using SAS
Sanjay Matange, SAS
Monday, 2:15 PM - 3:05 PM, Location: Centennial H
Graphs are essential in many clinical and health care domains, including the analysis of clinical trial safety data and of treatment efficacy measures such as change in tumor size. Creating such graphs is a breeze with the SAS 9.4 SG procedures. This paper shows how to create many industry-standard graphs, such as the lipid profile, swimmer plot, survival plot, forest plot with subgroups, waterfall plot, and patient profile, using SDTM data with just a few lines of code.
DG03 : Waterfall plot: two different approaches, one beautiful graph
Ting Ma, Pharmacyclics, An AbbVie Company
Monday, 3:30 PM - 3:50 PM, Location: Centennial H
This paper discusses two approaches to presenting oncologic data with a waterfall plot. One approach uses the statistical graphics (SG) family of procedures (SGPANEL) and the other the Graph Template Language (GTL). Both approaches sit within the ODS Graphics system available with SAS 9.2 and later releases. The paper compares the specific SAS procedures used to generate a visually effective waterfall plot. Basic plots are built as a first step, and customized features are then added. These procedures should provide guidance for presenting stratified information from a complex clinical trial.
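A minimal SGPLOT-side sketch of a waterfall plot, assuming a data set TUMOR with hypothetical variables PCHG (best percent change) and BESTRESP (best response):

proc sort data=tumor out=tumor_srt;
  by descending pchg;
run;

data tumor_srt;
  set tumor_srt;
  subjord = _n_;                            /* x position in descending order */
run;

proc sgplot data=tumor_srt;
  vbarparm category=subjord response=pchg / group=bestresp;
  refline -30 / lineattrs=(pattern=dash);   /* e.g., RECIST partial-response line */
  xaxis discreteorder=data display=(novalues noticks) label="Subjects";
  yaxis label="Best % Change from Baseline";
run;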
DG04 : Fifty Ways to Change Your Colors (in ODS Graphics)
Shane Rosanbalm, Rho, Inc
Monday, 4:00 PM - 4:50 PM, Location: Centennial H
Back in the good ole days (think GPLOT) there was one primary way to change the colors of your symbols and lines: the COLOR= option of the SYMBOL statement. But now that ODS graphics are taking over (think SGPLOT and GTL), color management is not so straightforward. There are plot statement attribute options, style modifications, the %modstyle macro, discrete attribute maps, and more. Sometimes it feels like there must be 50 ways to change your colors. In this paper we will explore the various ways to manage colors in ODS graphics. Complex topics will be demystified. Strengths and weaknesses will be examined. Recommendations will be made. And with any luck, you will come away feeling less confused and more confident about how to manage colors in ODS graphics.
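Two of those many ways, sketched against SASHELP data: the STYLEATTRS statement (SAS 9.4) and a discrete attribute map that ties colors to data values rather than data order:

/* 1) STYLEATTRS: set the group color rotation directly */
proc sgplot data=sashelp.class;
  styleattrs datacontrastcolors=(blue red);
  scatter x=height y=weight / group=sex;
run;

/* 2) Discrete attribute map: colors follow values, not data order */
data attrmap;
  length id $8 value fillcolor linecolor $10;
  id="sexmap"; value="F"; fillcolor="pink";      linecolor="pink";      output;
  id="sexmap"; value="M"; fillcolor="lightblue"; linecolor="lightblue"; output;
run;

proc sgplot data=sashelp.class dattrmap=attrmap;
  vbar sex / group=sex attrid=sexmap;
run;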
DG05 : Stylish Kaplan-Meier Survival Plot using SAS(R) 9.4 Graph Template Language
Setsuko Chiba, Pharmacyclics
Tuesday, 1:15 PM - 1:35 PM, Location: Centennial H
An eye-catching graph provides a visual display of the data summaries reported in tables. In oncology clinical trials, the common method for displaying the results of a time-to-event analysis is to combine the Kaplan-Meier survival plot in PROC LIFETEST with ODS Graphics to compare treatment groups based on the log-rank test. The new release of SAS/STAT® 14.1 added many options to control the appearance and format of survival outputs that were not available in previous versions. Nevertheless, SAS programmers still find that the options in PROC LIFETEST are not sufficient to produce the outputs data reviewers desire. If you are in this situation, the solution is the SAS® 9.4 Graph Template Language (GTL). This paper provides a step-by-step approach to modifying the macros and macro variables used in the graph template. SAS programmers at all levels will be able to control the appearance of survival plots in both Windows and Unix environments.
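For context, the starting point before any GTL customization is typically the default survival plot from PROC LIFETEST; a sketch assuming an ADaM ADTTE data set with AVAL, CNSR, and TRTP:

ods graphics on;
proc lifetest data=adtte plots=survival(atrisk cb=hw);
  time aval * cnsr(1);          /* CNSR=1 marks censored observations */
  strata trtp / test=logrank;
run;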
DG06 : Creating Customized Graphs For Oncology Lab Test Parameters
Sean Liu, Novartis
Tuesday, 1:45 PM - 2:05 PM, Location: Centennial H
Lab data is among the most complex and messy data in oncology clinical trials, and it is hard for researchers to review it and find data issues. To make data review easier and data cleaning more efficient, we developed more powerful tools: summary visual graphics. This paper discusses a method for combining and listing lab test values, CTC grades, normal ranges, study days, and standard units in one plot for each test parameter. At a glance, researchers can see which values fall outside normal ranges or carry higher CTC grades, and can then easily identify potential data issues. In addition, we explore a more efficient way to generate the plots for all available test parameters automatically.
DG07 : Annotating Graphs from Analytical Procedures
Warren Kuhfeld, SAS
Tuesday, 2:15 PM - 3:05 PM, Location: Centennial H
You can use annotation, modify templates, and change dynamic variables to customize graphs in SAS. Standard graph customization methods include template modification (which most people use to modify graphs that analytical procedures produce) and SG annotation (which most people use to modify graphs that procedures such as PROC SGPLOT produce). However, you can also use SG annotation to modify graphs that analytical procedures produce. You begin by using an analytical procedure, ODS Graphics, and the ODS OUTPUT statement to capture the data that go into the graph. You use the ODS document to capture the values that the procedure sets for the dynamic variables, which control many of the details of how the graph is created. You can modify the values of the dynamic variables, and you can modify graph and style templates. Then you can use PROC SGRENDER along with the ODS output data set, the captured or modified dynamic variables, the modified templates, and SG annotation to create highly customized graphs. This paper shows you how and provides examples using the LIFETEST and GLMSELECT procedures.
DG08 : Swimmer Plot by Graphic Template Language (GTL)
Baiming Wang, Pharmaceutical Product Development, Inc. (PPD)
Tuesday, 3:30 PM - 3:50 PM, Location: Centennial H
Time-to-event analysis is one of the key analyses for oncology studies. Besides the Kaplan-Meier survival step curve that summarizes overall drug performance, investigators and sponsors are also very interested in individual subject responses. Many oncology clinical trials request swimmer plots to show individual response trends on a continuing basis. The Graph Template Language (GTL) is a powerful approach for all kinds of visual clinical data presentations. A swimmer plot can present all of a subject's time-to-event data, such as complete response start, partial response start, continued response after study drug exposure, disease progression, death, and tumor type. All of these data can be shown on a swimmer bar with different symbol/color combinations: different colors can represent different tumor types or stages, subject IDs can be shown on each bar for easy review, and each event can be labeled with the exact day on which it occurred. Each component of the plot is programmed as a separate data module, built step by step.
DG09 : What HIGHLOW Can Do for You
Kristen Much, Rho, Inc.
Kaitlyn Steinmiller, Rho, Inc.
Tuesday, 4:00 PM - 4:20 PM, Location: Centennial H
Longitudinal plots that quickly, creatively, and informatively summarize study data are powerful tools for understanding a clinical trial. The HIGHLOW statement not only creates a plot with floating bars that represent high and low values but also includes the ability to add markers and text annotations. With a wide variety of options, the HIGHLOW statement can produce a plot that summarizes a complex story from a multitude of different data in a single graphic. This paper introduces this relatively new plot statement (in production since SAS 9.3) with an example-based approach, exploring possible applications and plot capabilities. Focus is placed on how to create HIGHLOW plots using both PROC SGPLOT and the Graph Template Language (GTL). The examples provided demonstrate the power and flexibility of the HIGHLOW statement.
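A minimal HIGHLOW sketch, assuming a data set SWIM with one record per subject and hypothetical variables STARTDAY, ENDDAY, STAGE, and CAPVAR:

proc sgplot data=swim;
  highlow y=subjid low=startday high=endday /
          type=bar barwidth=0.5 group=stage highcap=capvar;
          /* capvar = "FILLEDARROW" for ongoing subjects, blank otherwise */
  xaxis label="Study Day" grid;
  yaxis label="Subject" display=(noticks);
run;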
DG10 : Empowering Users by creating data visualization applications in R/Shiny
Sudhir Singh, Pharmacyclics Inc
Brian Munneke, Pharmacyclics Inc
Amulya Bista, Pharmacyclics Inc
Jeff Cai, Pharmacyclics Inc
Tuesday, 4:30 PM - 5:20 PM, Location: Centennial H
A statistical programmer often receives requests from different functions for exploratory data analysis, which results in many programming hours spent on unplanned exploratory analyses. We believe a better and more efficient approach is to build data visualization applications that allow our users to interact with the data directly. R and Shiny help us develop simple, scalable, web-based data applications that access the data in real time. This paper demonstrates several applications that we have developed to assist various functional groups. Our applications help us communicate information and provide analyses clearly, efficiently, and in real time. The system is scalable, and more customized applications can be built within short development cycles.
DG11 : Displaying data from NetMHCIIPan using GMAP: the SAS System as Bioinformatics Tool
Kevin Viel, Histonis, Incorporated
Wednesday, 8:00 AM - 8:50 AM, Location: Centennial H
The binding of peptides to HLA Class II molecules is a seminal event in adaptive immunology. From vaccine development to immunogenicity, the prediction of binding affinity (strength) is extremely important information. The NetMHCIIpan 3.1 Server provides estimates of the binding affinity for specific peptides and given HLA molecules. The goals of this paper are to describe the basics of immunology with respect to HLA, to describe how to obtain binding affinity estimates interactively using NetMHCIIpan, and to show how to present these data in the SAS System using the GMAP procedure complemented by the ANNOTATE facility.
DG13 : Get a Quick Start with SAS® ODS Graphics By Teaching Yourself
Roger Muller, Data To Events, Inc
Wednesday, 9:45 AM - 10:35 AM, Location: Centennial H
SAS® Output Delivery System (ODS) Graphics started appearing in SAS® 9.2. When first starting to use these tools, the traditional SAS/GRAPH® software user might come upon some very significant challenges in learning the new way to do things. This is further complicated by the lack of simple demonstrations of capabilities: most graphs in training materials and publications are rather complicated graphs that, while useful, are not good teaching examples. This paper contains many examples of very simple ways to get very simple things accomplished. Over 20 different graphs are developed using only a few lines of code each, using data from the SASHELP data sets. The use of the SGPLOT, SGPANEL, and SGSCATTER procedures is shown. In addition, the paper addresses those situations in which the user must instead use a combination of the TEMPLATE and SGRENDER procedures to accomplish the task at hand. Most importantly, the use of ODS Graphics Designer as a teaching tool and a generator of sample graphs and code is covered. The emphasis in this paper is the simplicity of the learning process. Users will be able to take the included code and run it immediately on their personal machines to achieve an instant sense of gratification.
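In that self-teaching spirit, the simplest kind of example runs as-is against SASHELP data:

proc sgplot data=sashelp.class;
  scatter x=height y=weight / group=sex;
run;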
DG14 : A Different Approach to Create Swimmer Plot Using Proc Template and SGRENDER
Jui-Fu Huang, Baxalta
Wednesday, 9:00 AM - 9:20 AM, Location: Centennial H
Creating figures for the statistical analysis of clinical trials can be a challenge when you are limited to a specific SAS version and internal SOPs. Previous work (Stacey Phillips, 2014; Sanjay Matange) has been done mainly with PROC SGPLOT. In this paper, I introduce a different way to create a swimmer plot using PROC TEMPLATE and PROC SGRENDER that conveys additional information such as dose level, cancer type, or genetic type. Additionally, instead of marking "ongoing response" with annotation, I assign a subject-specific format via the PROC TEMPLATE option BARLABELFORMAT to simplify the plot and automatically update the output figures when new data arrive. This paper demonstrates the detailed steps of creating a swimmer plot and compares the advantages and disadvantages of the different methods.
DG15 : Elevate your Graphics Game: Violin Plots
Spencer Childress, Rho, Inc.
Wednesday, 10:45 AM - 11:05 AM, Location: Centennial H
If you've ever seen a box-and-whisker plot, you were probably unimpressed. It lives up to its name, providing a basic visualization of the distribution of an outcome: the interquartile range (the box), the minimum and maximum (the whiskers), the median, and maybe a few outliers if you're (un)lucky. Enter the violin plot. This data visualization technique harnesses density estimates to describe the outcome's distribution. In other words, the violin plot widens around larger clusters of values (the upper and lower bouts of a violin) and narrows around smaller clusters (the waist of the violin), delivering a nuanced visualization of an outcome. With the power of SAS/GRAPH®, the savvy SAS® programmer can reproduce the statistics of the box-and-whisker plot while offering improved data visualization through the addition of the probability density 'violin' curve. This paper covers the various SAS techniques required to produce violin plots.
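One possible construction (a sketch, not necessarily the paper's approach): estimate the density with PROC KDE, then mirror it around each group's axis position with a BAND plot; TRTPN and the half-width constant are hypothetical:

proc sort data=adlb out=adlb_s;
  by trtpn;
run;

proc kde data=adlb_s;
  by trtpn;
  univar aval / out=dens;       /* OUT= holds VALUE and DENSITY per group */
run;

proc sql;
  create table violin as
  select d.*,
         trtpn - 0.35*density/max(density) as low,   /* mirror left  */
         trtpn + 0.35*density/max(density) as high   /* mirror right */
  from dens as d
  group by trtpn;               /* remerges each group's max density */
quit;

proc sgplot data=violin;
  band y=value lower=low upper=high / group=trtpn fill outline;
  xaxis label="Treatment"; yaxis label="Analysis Value";
run;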
Demo Theater
DT01 : Spotfire & SAS
Monday, 9:00 AM - 9:30 AM, Location: Demo Theater
DT02 : Best in Class vs. One Size Fits All
Monday, 10:00 AM - 10:30 AM, Location: Demo Theater
DT03 : SAS® Studio: The Best of All Worlds
Monday, 10:30 AM - 11:00 AM, Location: Demo Theater
DT04 : Interview Preparation: Company and Candidate Perspectives
Monday, 11:00 AM - 11:30 AM, Location: Demo Theater
DT05 : Creating Define-XML v2 including Analysis Results Metadata with CST
Monday, 1:30 PM - 2:00 PM, Location: Demo Theater
DT06 : Get smart about resourcing with DOCS flexible outsourcing models that drive business outcomes.
Monday, 2:00 PM - 2:30 PM, Location: Demo Theater
DT07 : Empowering Self-Service Capabilities with Agile Analytics in Pharma
Monday, 2:30 PM - 3:00 PM, Location: Demo Theater
DT08 : Transparency in Pharma-CRO Relationship
Monday, 3:30 PM - 4:00 PM, Location: Demo Theater
DT09 : Introducing SAS® Life Science Analytics Framework
Monday, 4:00 PM - 4:30 PM, Location: Demo Theater
DT10 : The Fourth Lie, False Resumes. How we screen against a plague of false resumes
Monday, 4:30 PM - 5:00 PM, Location: Demo Theater
DT11 : Clinical Graphs Using SAS®
Tuesday, 9:00 AM - 9:30 AM, Location: Demo Theater
DT12 : Get smart about resourcing with DOCS flexible outsourcing models that drive business outcomes.
Tuesday, 10:00 AM - 10:30 AM, Location: Demo Theater
DT13 : Leading without Authority
Tuesday, 10:30 AM - 11:00 AM, Location: Demo Theater
DT14 : Real-World Evidence Analysis … REALLY?
Tuesday, 11:00 AM - 11:30 AM, Location: Demo Theater
DT15 : Concepts and Strategies for Developing Effective Data Visuals
Tuesday, 1:30 PM - 2:00 PM, Location: Demo Theater
DT16 : Best in Class vs. One Size Fits All
Tuesday, 2:00 PM - 2:30 PM, Location: Demo Theater
DT17 : Therapeutic Applications of Patient Data
Tuesday, 2:30 PM - 3:00 PM, Location: Demo Theater
DT18 : Assessing Data Integrity in Clinical Trials Using JMP Clinical
Tuesday, 3:30 PM - 4:30 PM, Location: Demo Theater
DT20 : How to become the most desirable pharma programmer in the marketplace? Helping people with their career development and job search
Tuesday, 4:30 PM - 5:00 PM, Location: Demo Theater
DT23 : How to create submission-ready Define.xml 2.0 in 30 minutes
Wednesday, 8:30 AM - 9:00 AM, Location: Demo Theater
DT24 : An Investment in the Future – A Global Data Standards Organization to Drive End-to-End Standards
Wednesday, 9:00 AM - 9:30 AM, Location: Demo Theater
DT25 : A Better, Clearer View into Patient Data
Wednesday, 10:00 AM - 10:30 AM, Location: Demo Theater
DT26 : Metadata-Driven Dataset Generation
Wednesday, 10:30 AM - 11:00 AM, Location: Demo Theater
DT27 : Chiltern – Designed Around You
Wednesday, 11:00 AM - 11:30 AM, Location: Demo Theater
Hands-on Training
HT01 : Hands-on SAS® Macro Programming Essentials for New Users
Kirk Paul Lafler, Software Intelligence Corporation
Monday, 8:00 AM - 9:30 AM, Location: Quartz
The SAS® Macro Language is a powerful tool for extending the capabilities of the SAS System. This hands-on workshop teaches essential macro coding concepts, techniques, tips and tricks to help beginning users learn the basics of how the Macro language works. Using a collection of proven Macro Language coding techniques, attendees learn how to write and process macro statements and parameters; replace text strings with macro (symbolic) variables; generate SAS code using macro techniques; manipulate macro variable values with macro functions; create and use global and local macro variables; construct simple arithmetic and logical expressions; interface the macro language with the SQL procedure; store and reuse macros; troubleshoot and debug macros; and develop efficient and portable macro language code.
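In the spirit of the workshop, a tiny parameterized macro (names are illustrative):

%macro summarize(ds=, var=);
  proc means data=&ds n mean std min max;
    var &var;
  run;
%mend summarize;

%summarize(ds=sashelp.class, var=height)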
HT02 : PROC REPORT: Compute Block Basics
Art Carpenter, CA Occidental Consultants
Monday, 10:00 AM - 11:30 AM, Location: Quartz
One of the unique features of the REPORT procedure is the Compute Block. Unlike most other SAS® procedures, PROC REPORT has the ability to modify values within a column, to insert lines of text into the report, to create columns, and to control the content of a column. Through compute blocks it is possible to use a number of SAS language elements, many of which can otherwise only be used in the DATA step. While powerful, the compute block can also be complex and potentially confusing. This tutorial introduces basic compute block concepts, statements, and usages. It discusses a few of the issues that tend to cause folks consternation when first learning how to use the compute block in PROC REPORT.
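A minimal compute-block example of the kind the tutorial builds on:

proc report data=sashelp.class nowd;
  column name age height;
  define age / analysis mean format=5.1;
  compute after;
    line "Mean age: " age.mean 5.1;   /* insert a text line into the report */
  endcomp;
run;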
HT03 : New for SAS® 9.4: A Technique for Including Text and Graphics in Your Microsoft Excel Workbooks, Part 1
Vince Delgobbo, SAS
Monday, 1:15 PM - 2:45 PM, Location: Quartz
A new ODS destination for creating Microsoft Excel workbooks is available starting in the third maintenance release of SAS® 9.4. This destination creates native Microsoft Excel XLSX files, supports graphic images, and offers other advantages over the older ExcelXP tagset. In this presentation you learn step-by-step techniques for quickly and easily creating attractive multi-sheet Excel workbooks that contain your SAS® output. The techniques can be used regardless of the platform on which SAS software is installed. You can even use them on a mainframe! Creating and delivering your workbooks on-demand and in real time using SAS server technology is discussed. Although the title is similar to previous presentations by this author, this presentation contains new and revised material not previously presented. Using earlier versions of SAS to create multi-sheet workbooks is also discussed.
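The core pattern, sketched (the file path is illustrative):

ods excel file="/tmp/class.xlsx"
          options(sheet_name="Listing" embedded_titles="yes");
title "Class Data";
proc print data=sashelp.class noobs;
run;
ods excel close;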
HT04 : Usage of Pinnacle 21 Community Toolset 2.1 for Clinical Programmers
Sergiy Sirichenko, Pinnacle 21
Monday, 3:30 PM - 5:00 PM, Location: Quartz
All programmers have their own toolsets like a collection of macros, helpful applications, favorite books or websites. Pinnacle 21 Community is a free and easy to use toolset, which is useful for clinical programmers who work with CDISC standards. In this Hands-On Workshop (HOW) we'll provide an overview of installation, tuning, usage and automation of Pinnacle 21 Community applications including: Validator - ensure your data is CDISC compliant and FDA/PMDA submission ready, Define.xml Generator - create metadata in standardized define.xml v2.0 format, Data Converter - generate Excel, CSV or Dataset-XML format from SAS XPT, and ClinicalTrials.gov Miner - find information across all existing clinical trials.
HT05 : Building and Using User Defined Formats
Art Carpenter, CA Occidental Consultants
Tuesday, 8:00 AM - 9:30 AM, Location: Quartz
Formats are powerful tools within the SAS System. They can be used to change how information is brought into SAS and how it is displayed, and can even be used to reshape the data itself. The Base SAS product comes with a great many predefined formats, and it is even possible for you to create your own specialized formats. This paper will very briefly review the use of formats in general and will then cover a number of aspects of user-generated formats. Since formats have a number of uses that are not at first apparent to the new user, we will also look at some of their broader applications. Topics include building formats from data sets, using picture formats, transformations using formats, value translations, and using formats to perform table look-ups.
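A small taste of the topics: a user-defined format applied at display time, and a format built from a data set via CNTLIN (the PARAMLIST data set is hypothetical):

proc format;
  value agegrp low -< 18 = "Pediatric"
               18  -  64 = "Adult"
               65 - high = "Elderly";
run;

data ctrl;                        /* build a format from a data set */
  retain fmtname "$param" type "C";
  set paramlist;                  /* hypothetical: one row per PARAMCD/PARAM pair */
  start = paramcd;
  label = param;
run;

proc format cntlin=ctrl;
run;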
HT06 : Combining TLFs into a Single File Deliverable
Bill Coar, Axio Research
Tuesday, 10:00 AM - 11:30 AM, Location: Quartz
In the day-to-day operations of a biostatistics and statistical programming department, we are often tasked with generating reports in the form of tables, listings, and figures (TLFs). A common setting in the pharmaceutical industry is to develop SAS® code in which individual programs generate one or more TLFs in a standard formatted output such as RTF or PDF. As trends move toward electronic review and distribution, there is an increasing demand for producing a single file as the deliverable rather than sending each output individually. Various techniques have been presented over the years, but they typically require post-processing of individual RTF or PDF files, require a knowledge base beyond SAS, and may require additional software licenses. The use of item stores has more recently been presented as an alternative for TLFs. With item stores, SAS stores the data and instructions used to create each report from PROC REPORT or the SG procedures. Individual item stores are restructured and replayed at a later time within an ODS sandwich to obtain a single-file deliverable. This single file is well structured, with either a hyperlinked table of contents in RTF or proper bookmarks in PDF, all defined in a meaningful way so that the end user can easily navigate the document. This Hands-on Workshop will introduce the user to creating, replaying, and restructuring item stores to obtain a single file containing a set of TLFs. The application requires ODS, using SAS 9.4 in a Windows environment.
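The basic capture-and-replay pattern, sketched with placeholder data and file names:

ods document name=work.tlfs(write);   /* capture output into an item store */
proc report data=sashelp.class nowd;
  column name sex age;
run;
ods document close;

ods pdf file="deliverable.pdf";       /* the ODS "sandwich" */
proc document name=work.tlfs;
  replay;                             /* re-emit the stored output */
run;
quit;
ods pdf close;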
HT07 : Cool Tool School
Bob Hull, Synteract
Tuesday, 1:15 PM - 2:45 PM, Location: Quartz
In this fun Hands-on Training session you will learn how to use some cool SAS tools to expand your SAS toolkit. The session emphasizes the most useful and frequently used tools that I've found over the course of my career. Whether simple or complex, these are tools you will benefit from. How do you find specific data points when the data is not documented? How do you compare across multiple rows for a patient? How do you check a new data delivery against an old one? All tools used in the class will be provided to attendees.
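For example, checking a new data delivery against an old one is a one-step job for PROC COMPARE (librefs and key variables are hypothetical):

proc compare base=old.ae compare=new.ae listall;
  id usubjid aeseq;    /* key variables; both data sets must be sorted by these */
run;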
HT08 : Hands On Training-Automated Patient Narratives and Tabular and Graphical Patient Profiles using JReview
Eric Herbel, Integrated Clinical Systems, Inc.
Tuesday, 3:30 PM - 5:00 PM, Location: Quartz
This hands-on training session reviews methods of defining and generating automated patient narratives, based on the ability to define a flexible narrative template within JReview and to embed pertinent mini-reports or graphs. The session also includes hands-on definition of graphical patient profiles: telling JReview which data to include in the profiles and the relevant dates, then having the system generate the graphic profiles directly from the data source, calculating days since a specified reference date for each patient on the fly. Lastly, we'll define tabular patient profiles, telling the system which datasets/tables and items to include, then review the default output and explore different ways to modify the output format before generating PDF output for each selected patient.
Healthcare Analytics
HA01 : PrecMod: An Automated SAS® Macro for Estimating Precision via Random Effects Models
Jesse Canchola, Roche Molecular Systems, Inc.
Pari Hemyari, Roche Molecular Systems
Monday, 8:00 AM - 8:50 AM, Location: Centennial A
Typical random effects model estimation involves fitting a linear model with main factor effects that may or may not include nesting with other study factors (e.g., operator nested within site). Part of the challenge is calculating the confidence intervals for the variance components using the correct effective degrees of freedom (Satterthwaite, 1946) and then iterating the macro over different grouping levels (if more than one exists). The PrecMod precision macro surmounts these challenges and provides a clear and concise path toward efficient and timely calculations ready for reporting.
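The underlying model fit resembles the following sketch (not the macro itself); in PROC MIXED, the CL option requests Satterthwaite-based confidence limits for the variance components:

proc mixed data=precision method=reml cl covtest;
  class site operator;
  model result = / solution;     /* intercept-only model; data set is hypothetical */
  random site operator(site);    /* operator nested within site */
run;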
HA03 : Working with composite endpoints: Constructing Analysis Data
Pushpa Saranadasa, Merck & Co.
Monday, 9:00 AM - 9:20 AM, Location: Centennial A
A composite endpoint in a randomized clinical trial consists of multiple single endpoints that are combined in order to evaluate an investigational drug with a higher number of events expected during the trial. For example, a primary composite endpoint may include mortality, myocardial infarction, and stroke. The use of a composite endpoint in a clinical trial is usually justified if the individual components are clinically meaningful and of similar importance to the patient, the expected effects on each component are similar, and treatment will be beneficial. The major advantages of a composite endpoint are statistical precision and efficiency, and smaller, less costly trials. All components of a composite endpoint should be separately defined as secondary endpoints and reported with the results of the primary analysis for the purpose of assessing relative effectiveness. The objective of this paper is to explain how analysis datasets are created to include not only the composite endpoint but also the secondary and tertiary endpoints for individual patients. The dataset should be ADaM-like in structure and readily usable for statistical procedures such as Kaplan-Meier estimation (PROC LIFETEST) and proportional hazards regression (PROC PHREG). The challenge in creating such a dataset is handling the censoring of events without losing the information associated with survival rates.
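A sketch of the time-to-first-event derivation, assuming one record per subject with hypothetical event-day variables MI_DY, STROKE_DY, DEATH_DY and last follow-up LASTFUP_DY:

data adtte;
  set events;
  paramcd = "COMPTTE";
  aval = min(mi_dy, stroke_dy, death_dy);   /* MIN ignores missing arguments */
  if aval > . then cnsr = 0;                /* composite event occurred */
  else do;
    aval = lastfup_dy;                      /* censor at last follow-up */
    cnsr = 1;
  end;
run;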
HA04 : What's the Case? Applying Different Methods of Conducting Retrospective Case/Control Experiments in Pharmacy Analytics
Aran Canes, Cigna
Monday, 9:45 AM - 10:35 AM, Location: Centennial A
Retrospective Case/Control matching is an increasingly popular approach to evaluating the effect of a given treatment. There is some theoretical literature comparing different methods of case/control matching, but there is a lack of empirical work in which each of these methods is employed. In this paper, I use SAS to conduct a retrospective case/control experiment on the efficacy and safety of Eliquis and Warfarin using Propensity Score Matching, Mahalanobis Metric Matching with Propensity Score Caliper and Coarsened Exact Matching. I then compare outcomes from all three methods to try to develop a sense of the advantages and disadvantages of each. In this example, despite a considerable lack of overlap in the output datasets, all three methods led to similar results: evidence of increased effectiveness for Eliquis with little to no difference in safety. More generally, when a researcher needs to choose one method over another, I conclude that the choice should be guided by their understanding of how close the pairwise distance needs to be between cases and controls and the degree to which it is appropriate to measure the treatment on only a part of the pre-matched case population.
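The step shared by the propensity-based methods is estimating the score itself; a minimal sketch with hypothetical variables:

proc logistic data=cohort;
  class sex (ref="F") / param=ref;
  model treat(event="Eliquis") = age sex chads2;   /* hypothetical covariates */
  output out=ps p=pscore;        /* propensity score for each patient */
run;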
HA05 : Four "Oops" Moments While Using Electronic Health Records to Identify a Cohort of Medication Users
Stephen Ezzy, Optum Epidemiology
Monday, 10:45 AM - 11:05 AM, Location: Centennial A
In life, and when using electronic health record (EHR) data, things don't always go the way you plan. Random quirks in software, metadata, and clinical data can lead you down rabbit trails that take hours to discover, diagnose, and correct. I present four quirks encountered while identifying a cohort of users of a particular medication within an EHR and describe procedures to help safeguard against them. 1. Use of mapped fields, such as medication codes, can simplify a myriad of local EHR code issues, but detailed knowledge of how the mapping was performed is essential. 2. Determining which dates are best to use from Rx data is complicated: Issue Date, Patient-Report Date, Action Date, Med Reported Date, Update Date, Discontinue Date, or Expiration Date? The choice of date should be determined by whether the research question is aimed at topics such as prescribing behaviors, adherence, or drug utilization. 3. Duplicates or not? Some clinical Rx records are complete duplicates of each other, distinct only by an Rx ID field. We can de-duplicate rows ignoring the Rx ID, but must consider the risk of under-reporting prescriptions. 4. Non-breaking spaces: few programmers know these special characters exist, but when they occur in data, they can confound your SAS code if you're not expecting them. For example, you may encounter data selection errors when using IF statements with text strings that include these characters.
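For the fourth quirk, a one-line defense (the variable name is hypothetical): translate the non-breaking space ('A0'x in Latin-1 encodings) to a regular blank before comparing text:

data rx_clean;
  set rx_raw;
  drugname = translate(drugname, ' ', 'A0'x);   /* non-breaking space to blank */
  drugname = compbl(strip(drugname));           /* then tidy remaining spacing */
run;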
Industry Basics
IB01 : What makes a "Statistical Programmer" different from a "Programmer"
Arun Raj Vidhyadharan, inVentiv Health
Sunil Jairath, inVentiv Health
Monday, 1:15 PM - 1:35 PM, Location: Centennial C
In the clinical SAS programming world, programmers come from different backgrounds, such as engineering, biotechnology, and biomedical sciences, and everyone knows how to write a program to specifications. At the same time, programmers who understand statistics can look beyond individual records: their statistical awareness helps them understand the data better. Statistics is not a pile of crazy formulas; it is a set of tools that helps us understand, analyze, and present clinical trial data. We all produce a lot of tables, and a basic understanding of statistics lets us contribute more than just TLF creation. So the next time the statistician asks for standard error instead of standard deviation, or suggests a different formula for calculating a p-value, we can provide informed input. The goal of this paper is to give programmers insight into the basic statistics involved in creating tables so that they can understand and check their numbers, provide input that improves the quality of the final product, and help the company save time and resources.
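For instance, the standard error the statistician asks for is already a PROC MEANS keyword (data set and variables hypothetical):

proc means data=adlb n mean std stderr maxdec=2;   /* STDERR = STD/sqrt(N) */
  class trtp;
  var chg;
run;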
IB02 : Introduction of Semantic Technology for SAS programmers
Kevin Lee, Clindata Insight
Monday, 1:45 PM - 2:05 PM, Location: Centennial C
Semantic technology is a new way to express and search data that captures more meaning and relationships, and it can easily add, change, and implement meaning and relationships in current data. Companies such as Facebook and Google are already using semantic technology; for example, Facebook Graph Search uses it to provide more meaningful searches for users. This paper introduces the basic concepts of semantic technology and its graph data model, the Resource Description Framework (RDF). RDF links data elements in a self-describing way as subject, predicate, and object triples. The paper introduces the application of these RDF elements with examples, and covers three different representations of RDF: RDF/XML, Turtle, and N-Triples. It also introduces the "CDISC Standards RDF Representation, Reference and Review Guide" published by CDISC and PhUSE CSS, showing how CDISC standards are represented and displayed in RDF format. The paper then introduces the SPARQL query language, which can retrieve and manipulate data in RDF format, and shows how programmers can use SPARQL to transform CDISC standards metadata from RDF into a structured tabular format. Finally, the paper discusses the benefits and future of semantic technology, what it means to SAS programmers, and how programmers can take advantage of it.
IB04 : Programming checks: Reviewing the overall quality of the deliverables without parallel programming
Shailendra Phadke, Baxalta
Veronika Csom, Baxalta
Monday, 2:15 PM - 2:35 PM, Location: Centennial C
The pharmaceutical and biotech industry is slowly shifting from programming deliverables in house to outsourcing programming and data management responsibilities to CROs and FSPs, drawing on their knowledge and programming expertise. While the programming is handled by these external sources, the programming lead at the pharmaceutical company is still accountable for the quality, accuracy, and compliance of the results. Although it is a steep task to confirm the accuracy of the results on a tight timeline without doing parallel programming, certain key programming checks help assess the overall quality of the deliverable. This paper presents a list of such programming and CDISC compliance checks that help the reviewer assess a deliverable efficiently. After these checks, the reviewer can judge the quality and compliance of the deliverable with much more confidence and pinpoint issues and errors. While these checks help in reviewing the deliverable, more specific checks might be needed depending on the complexity of the study.
IB05 : Compilation of Errors, Warnings and Notes!
Rajinder Kumar, Novartis Healthcare Private Limited
Anusuiya Ghanghas, Novartis Healthcare Private Limited
Houde Zhang, Novartis Pharmaceuticals
Monday, 2:45 PM - 3:05 PM, Location: Centennial C
Errors and warnings are part and parcel of programming, but for someone new to programming these unwanted log messages can prove a big headache. Sometimes more effort is required to clear these messages (errors, warnings, and notes) from the log than was originally required to develop the entire code. This paper sheds light on some of the most common errors, warnings, and notes in the log and provides solutions along with their likely causes. When an experienced programmer provides the solution, it often looks simple in hindsight; but experience cannot be gained overnight, and the examples shown in this paper should help beginner and intermediate programmers resolve most of the common issues raised by these log messages.
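One habit that helps: scan saved logs programmatically instead of eyeballing them (the path and message fragments are illustrative):

data logcheck;
  infile "/study/logs/ae_table.log" truncover;
  input line $256.;
  if index(line, 'ERROR') = 1
     or index(line, 'WARNING') = 1
     or index(line, 'uninitialized')
     or index(line, 'repeats of BY values');
run;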
IB06 : Practical Implications of Sharing Data: A Primer on Data Privacy, Anonymization, and De-Identification
Greg Nelson, ThotWave
Monday, 3:30 PM - 4:20 PM, Location: Centennial C
Researchers, patients, clinicians, and other healthcare industry participants are forging new models for data sharing in hopes that the quantity, diversity, and analytic potential of health-related data for research and practice will yield new opportunities for innovation in basic and translational science. Whether we are talking about medical records (e.g., EHR, lab, notes), administrative information (claims and billing), social contacts (on-line activity), behavioral trackers (fitness or purchasing patterns), or about contextual (geographic, environmental) or demographic (genomics, proteomics) data, it is clear that as healthcare data proliferates, threats to security grow. Beginning with a review of the major healthcare data breaches in our recent history, this paper highlights some of the lessons that can be gleaned from these incidents. We will talk about the practical implications of data sharing and how to ensure that only the right people will have the right access to the right level of data. To that end, we will not only explore the definitions of concepts like data privacy but also discuss, in detail, various methods that can be used to protect data - whether inside an organization or beyond its walls. In this discussion, we will cover the fundamental differences between encrypted data, "de-identified", "anonymous", and "coded" data, and the methods to implement each. We will summarize the landscape of maturity models that can be used to benchmark your organization's data privacy and protection of sensitive data.
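As one concrete illustration of "coded" data (a sketch only; MD5 is shown for brevity, not as a security recommendation), replace the identifier with a salted one-way hash and hold the salt separately:

%let salt = 9f3a7c;              /* hypothetical secret, stored separately */

data coded;
  set patients;
  length id_code $32;
  id_code = put(md5(cats(usubjid, "&salt")), $hex32.);   /* one-way coded ID */
  drop usubjid;
run;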
IB08 : Good versus Better SDTM: Data Listings
Henry Winsor, Relypsa Inc
Mario Widel, Eli Lilly
Monday, 4:30 PM - 5:20 PM, Location: Centennial C
What does SDTM have to do with data listings? The popular answer is not much, if all you have is "good" SDTM. SDTM (or its predecessor with FDA, the Item 11 data sets) has been supposed to replace data listings for about 15 years now. However, data listings aren't going away anytime soon, so while they are still here, why not make the best of them? Remember, they are still a required component of an ICH-compliant CSR. Too many people think of data listings as a simplistic dump of the data, but when properly developed they can be a very useful tool for data review and CSR preparation. The authors will design and prepare an Abnormal Lab Listing (which is actually referenced as a table in ICH-compliant CSRs) and show some ways in which SDTM creation can be used to store lab metadata that improves the listing's appearance and utility while simplifying its creation.
IB10 : Moving from Data Collection to Data Visualization and Analytics: Leveraging CDISC SDTM Standards to Support Data Marts
Steve Kirby, Chiltern International Ltd
Terek Peterson, Chiltern International
Tuesday, 4:30 PM - 5:20 PM, Location: Centennial C
Data from clinical trials supports a wide range of clinical, safety, regulatory, and analytic groups who all share the same basic need: to efficiently access, analyze, and review the data. When clinical data from multiple studies are combined into a "data mart" and linked to visualization and analytical tools, data consumers are able to efficiently find the information they need to make informed decisions. The raw data as collected in individual studies will vary (at a minimum) based on the specific collection system and forms used. Because of that variability, a foundational step in creating a data mart is ensuring that the data from across studies share a consistent, standard format. We will share our experience leveraging CDISC SDTM standards to support data marts containing data from many studies across several therapeutic areas. We will discuss practical considerations related to ensuring 1) that the SDTM implementation is consistent across studies, 2) that the data made available will support all consumer needs, and 3) that the data are made available when consumers need them. We will also share thoughts on how the industry shift toward integrating CDASH standards into collection forms will benefit the future state of visualizations and analytics based on data marts.
IB11 : AE an Essential Part of Safety Summary Table Creation
Rucha Landge, inVentiv Health Clinical
Tuesday, 3:30 PM - 3:50 PM, Location: Centennial C
Adverse event summary tables are an imperative part of a study and depict a clear picture of a drug's safety for patients. Hence, it is essential that we are clear and careful when creating these tables and that we display correct counts. This paper will illustrate some basic concepts of the ADaM adverse event dataset (ADAE), describe common AE tables created in a study, and provide some quick tips to self-validate and cross-check the counts that we display.
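The most common self-check is the subject count behind "any AE"; a sketch using the standard ADaM treatment-emergent flag:

proc sql;
  select count(distinct usubjid) as subjects_with_ae
  from adae
  where trtemfl = 'Y';           /* treatment-emergent analysis flag */
quit;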
IB12 : Handling Interim and Incomplete Data in a Clinical Trials Setting
Paul Stutzman, Axio Research
Tuesday, 2:15 PM - 3:05 PM, Location: Centennial C
This paper discusses potential problems that may arise when working with interim and incomplete data. It identifies ways to mitigate these issues through preparation and ongoing reassessment. Techniques and programmatic tools for performing these preparation and reassessment tasks are also presented. Programmers and statisticians are frequently presented with interim and incomplete data. This is often the case when preparing reports for Data Monitoring Committees (DMCs) or Data Safety Monitoring Boards (DSMBs), performing interim analyses, or developing standards-compliant dataset programs. Thus, it is important to understand the constraints and potential pitfalls of working with preliminary data, to write programs that handle both anticipated and unanticipated values, and to develop tools and techniques for understanding how data change over time.
IB14 : Access OpenFDA A Cloud Based Big Data Portal Using SAS®
Jie Zhou, University of Bridgeport
James Sun, Insmed Inc.
Tuesday, 4:00 PM - 4:20 PM, Location: Centennial C
With the rise of data sharing and cloud data portals, big data and analytics are gradually being adopted across the pharmaceutical industry. They first made an impact in areas of the business, such as sales and marketing, where data are readily available; now big data analytics are gradually reaching the core of the pharma business: R&D. One newly emerging example is OpenFDA. Because FDA collects all NDA submission and pharmacovigilance data, its importance should never be overlooked. This paper briefly introduces OpenFDA and its API for extracting data. We explore ways of handling such online data in JSON format with SAS data steps and procedures, compare the strengths of the different methods, and discuss our findings. Keywords: big data, OpenFDA, online data retrieval, JSON, PROC HTTP, PROC DS2, PROC JSON, Groovy. Outline: the landscape of big data analytics in the pharmaceutical industry and emerging trends in the public sharing of clinical study data; JSON and how it compares with XML; OpenFDA's history, current state, data contents, and REST API; SAS methods for retrieving data (traditional methods and their limitations, PROC HTTP, PROC JSON, data step parsing of JSON, PROC DS2, Groovy and other hash-based programming); and a worked example accessing OpenFDA adverse event data, from access to analytics.
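The retrieval step looks like this sketch (the endpoint and query follow the public OpenFDA API; parsing the returned JSON is where the paper's methods differ):

filename resp temp;

proc http
  url='https://api.fda.gov/drug/event.json?search=receivedate:[20040101+TO+20150101]&limit=10'
  method="GET"
  out=resp;
run;

data _null_;                     /* peek at the raw JSON response */
  infile resp lrecl=32767;
  input;
  put _infile_;
run;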
IB15 : Importance of Niche Provider for Successful NDA Submission: Rescue Case Study
Aparna Poona, Softworld, Inc. (Life Sciences)
Bhavin Busa, Softworld, Inc. (Life Sciences Division)
Tim Southwick, Softworld, Inc. (Life Sciences Division)
Tuesday, 1:15 PM - 2:05 PM, Location: Centennial C
Of the several outsourcing models, the Functional Service Provider (FSP) model has become most successful and significant, especially in the data management, statistical programming, and biostatistics service areas of clinical drug development. Most sponsors prefer to have a single full-service contract research organization (CRO) offering end-to-end services for an entire study, including data submission components. However, with the stringent requirements for CDISC-compliant datasets from the regulatory agencies, sponsors are at a higher risk of receiving a refuse to file (RTF) due to compliance issues with the submitted data. A full-service CRO may not have all-round expertise in each service area, which can lead to poor quality submission deliverables and delays. An FSP model is a perfect fit for these services, as it gives the sponsor access to the optimal functional expertise, in addition to the quality and operational support, to meet expedited submission timelines. In this paper, we discuss a case study in which we, as a niche provider, worked with a sponsor on their final NDA submission package even though the sponsor had outsourced these services to a global full-service CRO. We outline the significant gaps identified in the deliverables provided by the full-service CRO, the services we offered, and how we rescued this sponsor so they could meet their quality expectations and submission timeline. In addition, we summarize lessons learned during our engagement with this particular sponsor.
Management & Support
MS01 : Increase Your Bottom line and Keep More Money in Your Pocket - A Practical Guide to the Self-Employed
Margaret Hung, MLW Consulting LLC
Monday, 8:00 AM - 8:20 AM, Location: Centennial G
A large percentage of our attendees are self-employed or small business owners. With that in mind, the author explains fourteen tax-saving deductions to help lower taxes and increase profit margins, along with "points" of awareness and "pitfalls" of certain deductions. This paper is for the sole proprietor, one-person LLC, and one-person S Corp/C Corp: small business owners and do-it-yourselfers with limited resources who want to be tax savvy and tax compliant.
MS02 : Outcome of a Clinical SAS University training program in Eastern Europe: How are graduates performing in a real work environment?
Donnelle Ladouceur, Experis Clinical
Sergey Glushakov, Experis Clinical
Monday, 8:30 AM - 9:20 AM, Location: Centennial G
We introduced our Clinical SAS University training program at PharmaSUG 2015, presenting the impetus for the program as well as its structure, development, obstacles, and keys to success. Our first class of graduates now has over a year of work experience, our second class has just started client work, and our third class of students is more than halfway through the school year. The education provided by the program is valuable only if it translates into a prepared workforce, so this presentation provides updates on how our graduates are performing now that they have been assigned to client teams. As presented previously, the interns were split into two groups, one shadowing client work and the other working with US-based mentors. Regardless of where they started, all of the students accepted into the internship program are currently engaged in client work. Topics covered in this presentation include the type and complexity of work graduates are doing, and the processes, new programs, tools, and support structure we have implemented to support their success. This topic should interest those in management as well as programming and biostatistics; no particular level of skill or background is required.
MS03 : Getting Clouds Moving across the Pacific - a case study on working with a Chinese CRO using SAS® Drug Development
Chen Shi, Santen Inc.
Monday, 9:45 AM - 10:05 AM, Location: Centennial G
California has been in a long drought while some places across the Pacific have suffered from storms during the past two years. Why not move the clouds over? That may be a dream, but not only a dream if we know how to manage the clouds. In this case study, we share our experiences and lessons learned delivering study packages on the SAS® Drug Development (SDD) platform with a newly on-boarded Chinese CRO. The paper elaborates on project setup, training, scheduling, and communications, as well as debugging, quality control, and project summarization. It was a fun and challenging project, and it proved that SDD can be a useful tool for enabling cross-continent teams to work together.
MS04 : QA and Compliance insights using SCAPROC Procedure
Ben Bocchicchio, SAS Institute
Sandeep Juneja, SAS Institute Inc
Monday, 10:15 AM - 10:35 AM, Location: Centennial G
The SCAPROC procedure is a relatively new procedure that implements the SAS Code Analyzer. The code analyzer captures metadata about the SAS code that is run: the files used as input, the files created as output, and the macro variables used, all recorded while the code is running. How do QA and compliance relate to this procedure? Imagine being able to collect all the metadata about all the SAS code that was run to generate output for an FDA submission. You could programmatically prove that input data references were used consistently, that all macros called in the programs were the intended ones (no worrying about calling a generic macro when a project-specific macro was required), and that all designated output was saved to the correct location. Would this make you feel more confident about your submission? This presentation reveals these uses of SCAPROC and discusses other potential uses of this information.
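Using it is a two-call wrapper around the production code (paths are illustrative):

proc scaproc;
  record "/study/qa/ae_table_sca.txt" attr expandmacros;   /* start the analyzer */
run;

%include "/study/programs/ae_table.sas";                   /* program under review */

proc scaproc;
  write;                                                   /* flush analysis records */
run;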
MS05 : Schoveing Series 2: Self-Management: The Science of Balancing Work and Life
Priscilla Gathoni, AstraZeneca, Statistical Programming
Monday, 10:45 AM - 11:05 AM, Location: Centennial G
Ever wondered why you don't have the time to do something like reading, thinking, planning, developing (things and yourself), dreaming, or simply embracing the beauty spots of life? Have your daily habits and mental attitude led you to a life of continued compliance with an unfathomable comfort zone that is slowly eating away at your strength, intelligence, and courage to change? Do you find yourself compromising your values, ability to be great, ability to achieve optimal health, and your ability to love unconditionally? Are you living in the future and not forgiving the past? This paper will motivate you to unlock 7 significant keys in your life that will help you embrace right-mindedness and open your thought system to your positive will, thus uniting you with the rewards that come when you undo all the consequences of your wrong decisions and thoughts. Utilizing these 7 keys helps you awaken the Principle of Power within you, causing a fountain of knowledge to spring up from within. Your environment begins to reflect conditions corresponding to the predominant mental attitude that you entertain, because you now plan courageously and execute fearlessly. You begin a journey of accepting people and things as they are, and appreciate the fact that you are a wonderful being, living in a wonderful world, giving and receiving wonderful services, for wonderful pay. You then come to the realization that self-management has a lot to do with the way you think, feel, and believe, which eventually determines your destiny.
MS06 : Recruiting and Retention Strategies for 2016 in the SAS Programmer Staffing Organizations
Helen Chmiel, Experis, Inc.
Mindy Kiss, Experis, Inc.
Andrea Moralez, Experis, Inc.
Tuesday, 8:00 AM - 8:20 AM, Location: Centennial G
SAS programmers in the IT staffing sector continue to be in high demand, with projections of continued growth of 6% in 2016 (Braswell, December, 2015). Overall IT employment jumped more than 41% from 1999 to 2013, compared with 5.5% for nonfarm employment. Additionally, the median unemployment rate for "computer and mathematical occupations" through 3Q14 was 2.9%, compared to the 5% overall median unemployment rate (Braswell, January, 2015). Based on these statistics, it is critical to business strategy to understand the challenges both recruiters and managers face in order to successfully mitigate them. An additional pressure on the staffing sector is that organizations are relying more and more on a contingent workforce. The fierce competition to attract the best and brightest talent from a limited supply of skilled workers, in a market with increasing demand, results in both a difficult recruiting environment and an equally difficult retention environment. This talk will explore some of the recruitment and retention techniques that organizations can employ to stay competitive. Recruiting strategies that attract top players are explored, including individualization, use of social media, and calculating fit to the organization. Retention strategies centered on employee engagement are also explored, drawing on current research findings for contingent workers and including several unique approaches to enhance employee engagement.
MS07 : Quality, Timely and Within Budget Analysis and Reporting - Yes, you can have all three! A process and tool to achieve this goal
Wilminda Martin, Alcon
Sharon Niedecken, Alcon
Syamala Schoemperlen, Alcon
Tuesday, 8:30 AM - 9:20 AM, Location: Centennial G
As the cost of conducting clinical trials skyrockets in today's competitive climate, the pharma and biotech industries are stressing better efficiency, more precise resource forecasting, and tighter budgeting. The key to succeeding in these objectives is to allocate, monitor, and manage budgets and resources. In addition, early planning of analysis and reporting tasks, and of the resources to support those tasks, is crucial. These provide a roadmap to operate efficiently and avoid uncontrolled changes to project scope, enabling organizations to be cost-effective. In order to set realistic timelines and cost-effective budgets, the analysis and reporting requirements must be fully defined. It is important to have a complete understanding of the scope of the tasks in the preliminary stages of budgeting in order to set realistic budgets; otherwise, significant changes to requirements will require re-baselining of the initial budgets. At Alcon, Statistical Programming Therapeutic Area Leads are responsible for the budgeting and resource forecasting for data analysis and TFL reporting for studies and projects. The study/project programming team is responsible for delivering quality outputs in a timely manner within the budget. In this paper, the authors describe the process that the Alcon Statistical Programming group uses to estimate budgets, allocate resources, and establish a schedule of activities. It also covers the process of monitoring the progress of activities, which enables the team to mitigate risks and successfully complete the programming of analysis and reporting tasks with high quality, on time, and within the assigned budget.
MS08 : Change Management: The Secret to a Successful SAS® Implementation
Greg Nelson, ThotWave
Tuesday, 9:45 AM - 10:35 AM, Location: Centennial G
Whether you are deploying a new capability with SAS® or modernizing the tool set that people already use in your organization, change management is a valuable practice. Sharing the news of a change with employees can be a daunting task and is often put off until the last possible second. Organizations frequently underestimate the impact of the change, and the results of that miscalculation can be disastrous. Too often, employees find out about a change just before mandatory training and are expected to embrace it. But change management is far more than training. It is early and frequent communication; an inclusive discussion; encouraging and enabling the development of an individual; and facilitating learning before, during, and long after the change. This paper not only showcases the importance of change management but also identifies key objectives for a purposeful strategy. We outline our experiences with both successful and not-so-successful organizational changes. We present best practices for implementing change management strategies and highlight common gaps. For example, developing and engaging "Change Champions" from the beginning alleviates many headaches and avoids disruptions. Finally, we discuss how the overall company culture can either support or hinder the positive experience change management should be, and how to engender support for formal change management in your organization.
MS10 : The CDISC's are coming!!
Annapurna Ravi, inVentiv Health Clinical
Caroline Gray, inVentiv Health Clinical
Tuesday, 10:45 AM - 11:05 AM, Location: Centennial G
2016 is here, bringing with it the much-awaited NDA requirements to be CDISC-compliant for FDA submissions. For those few teams still clinging to a legacy environment, the need of the hour is to be CDISC-'trained'. What about the management team? Who will train their teams and maintain morale, all while keeping an 'eye' on the budget? We work with busy and billable projects, collecting and compiling training while adjusting to and meeting financial goals. Prioritizing and preparing your team for CDISC is the crossroads where most CROs find themselves these days. How do we hit the bullseye?
Panel Discussions
PD01 : Panel Discussion - Define.XML
Tuesday, 10:00 AM - 11:20 AM, Location: Agate
PD02 : Panel Discussion - SDTM
Tuesday, 1:30 PM - 2:50 PM, Location: Agate
PD03 : Panel Discussion - ADaM
Tuesday, 3:45 PM - 5:05 PM, Location: Agate
Posters
PO01 : Enough of Clinical... Let's talk Pre-Clinical!
Arun Raj Vidhyadharan, inVentiv Health
Sunil Jairath, inVentiv Health
In drug development, pre-clinical development, also called preclinical or nonclinical studies, is the stage of research that takes place before clinical trials (testing in humans) can begin, during which important feasibility, iterative testing, and drug safety data are collected. The main goals of pre-clinical studies are to determine the safe starting dose for a first-in-human study and to begin assessing a product's safety profile. Products may include new, iterated, or like-kind medical devices, drugs, gene therapy solutions, etc. This paper discusses preclinical trials conducted in animals prior to testing in human subjects.
PO04 : Data Issue Visualization for Quality Control in NONMEM Data Set
Linghui Zhang, Merck
The Nonlinear Mixed Effects Model (NONMEM) data set is widely used for pharmacokinetic (PK) / pharmacodynamic (PD) modeling and simulation, which studies the drug concentration in the body over time (measured in terms of absorption, distribution, metabolism, and excretion [ADME]) and the body's pharmacological response to a drug (measured in terms of adverse events [AEs] and efficacy). In a very specific pre-defined format, the NONMEM data set includes a chronological mixture of dosing records, PK/PD observations, and covariates of the dosing and observation records. Creating NONMEM data sets takes tremendous programming effort: programmers must derive dosing history, order PK/PD observations, and merge various types of covariates. The variables required for NONMEM data are often complicated and come from different source data sets, so performing data validation and cleaning is a tough challenge. Good-quality NONMEM data are critical to PK/PD analysis, and errors in a small portion of the data can redirect the conclusions of a study. To guarantee accurate and meaningful PK/PD analysis, data cleaning is essential and crucial for quality control in NONMEM data set production. Graphs are visual summaries of data and describe essential features more effectively than tables of numbers. This paper illustrates some commonly used graphs to visualize data errors and questionable records in both raw clinical data and the NONMEM data set. Scientific programmers and pharmacometricians with minimal programming skills can apply these graphs to check data issues and examine data thoroughly.
PO05 : A Data Preparation Primer: Getting Your Data Ready for Submission
Janet Stuelpner, SAS
Does your data conform to industry standards? Is it in the format necessary for submission to the regulatory authorities? Beginning next year, the FDA will require that submission data adhere to CDISC industry standards. All data will need to be in SDTM and ADaM standard format. Above and beyond that, there will be requirements to submit your data in very specific formats. What do you need to do to make sure that your data is in the correct format? The first step is to create procedures so you know what to do and where to start. This presentation will point out some of the requirements and suggest things you can do to improve the chance of having your submission accepted and not sent back.
PO06 : Mixed Effects Models
Yan Wang, Bristol-Myers Squibb
The MIXED procedure has been commonly used at Bristol-Myers Squibb for quality-of-life and pharmacokinetic/pharmacodynamic modeling. This presentation provides two examples to explore the use of the SAS PROC MIXED procedure. We first discuss the strengths of the procedure and then introduce the two case studies. One example is a phase 3 neuroscience study, which we use to demonstrate longitudinal data analysis. The other example is a phase 2, PK, HIV, cross-over study. We describe the programs used to carry out the analyses and the interpretation of the outputs. This presentation also covers the organization's data analysis procedures, such as the Statistical Analysis Plan and Data Presentation Plan.
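The paper's own case studies are not reproduced here; the following is a minimal sketch of a repeated-measures longitudinal analysis of the kind described, with hypothetical ADaM-style names (QOL, CHG, BASE, AVISIT, TRTP).

   proc mixed data=qol;
      class usubjid trtp avisit;
      model chg = trtp avisit trtp*avisit base / ddfm=kr;  /* Kenward-Roger denominator df */
      repeated avisit / subject=usubjid type=un;           /* unstructured within-subject covariance */
      lsmeans trtp*avisit / diff cl;                       /* treatment contrasts at each visit */
   run;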
PO07 : "Creating Time to Event ADaM dataset for a Complex Efficacy Endpoint in Multiple Sclerosis Therapeutic Area"
Ittai Rambach, Teva Pharmaceuticals Industries
This article gives an introduction to the creation of a Time to Event (TTE) ADaM dataset for a complex efficacy endpoint - Time to Confirmed Disease Progression (CDP) - which is commonly used in Multiple Sclerosis (MS) therapeutic area (TA) studies. The CDP definition is as follows (it should be defined specifically in the protocol and SAP): 1) an increase from baseline of X points for a neurological assessment; 2) the increase is sustained for at least 3 months; 3) progression cannot be confirmed during a relapse. In this article I introduce the MS TA and look into the MS data - neurological assessments and relapses. In the main part of the article, I explain how to define the CDP by creating an ADaM dataset for neurological assessments (ADXS) and the Time to Event (TTE) ADaM dataset, with multiple censoring values and different event and censoring descriptions. I also present the metadata display and the traceability from ADTTE to ADXS and to SDTM.
PO08 : Summarizing Adverse Events of Interest - Onset, Duration, and Resolution
Avani Kaja, Seattle Genetics Inc.
John Saida Shaik, Seattle Genetics Inc.
One of the main objectives of clinical trials is to study the safety of the drug. To understand the safety of a drug in a first-in-human clinical trial, it is important to study adverse events and their corresponding severity grades. In most clinical trials there are specific AEs of interest that need to be analyzed in detail. Summarizing the onset, duration, and resolution of AEs of interest provides the medical monitor with important information to determine the safety of the drug. In this paper, we introduce the data analysis and presentation of treatment-emergent AE onset, AE duration, and time to resolution of AEs of interest after End of Treatment using SAS®. The onset analysis summarizes time to first onset of an AE of interest. The duration analysis provides the duration of the first occurrence of grade 3 or 4 events, to improvement or resolution. The resolution analysis summarizes improvement or resolution of AEs of interest after End of Treatment.
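As a hedged illustration of the onset and duration derivations (ADAE, ADSL, and the variable names are hypothetical ADaM-style placeholders):

   data onset;
      merge adae(in=inae) adsl(keep=usubjid trtsdt);
      by usubjid;
      if inae;
      ttonset = astdt - trtsdt + 1;                 /* days from first dose to AE onset */
      if aendt > . then adurn = aendt - astdt + 1;  /* AE duration when an end date exists */
   run;

   proc means data=onset noprint nway;
      class usubjid;
      var ttonset;
      output out=first_onset(drop=_:) min=tt_first_onset;  /* time to first onset per patient */
   run;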
PO09 : Maximizing efficiency & effectiveness of SAS® Programming in Clinical Trials using Project Management Organizing Methodologies
Shefalica Chand, Seattle Genetics, Inc.
Technology plays a major role in implementing and completing a project; however, certain project management methodologies can help maximize a project's efficiency and effectiveness. These methodologies provide an organized structure to improve integration, communication, conflict management, time and cost management, risk analysis, and quality control, providing clear direction to achieve enhanced benefits and results. This paper discusses how project management organizing methodologies can benefit SAS® programming projects in the clinical trials industry. We will learn about creating a project management plan and using the Five Process Groups of Project Management for SAS® programming projects:
- Initiating - To authorize a project or the next phase of a project. Establishing objectives, scope, deliverables, resources, and responsibilities
- Planning - Core planning and facilitating. Designing the best courses of action to achieve project objectives
- Executing - Coordinating resources to carry out the project plan
- Monitoring and Controlling - To ensure project objectives are met and corrective actions are taken to handle a crisis situation
- Closing - Project scope is met and appropriately closed, to formalize project completion
PO10 : Tips and Tricks for Bar Charts using SAS/Graph® Template Language
Randall Nordfors, Seattle Genetics
Boxun Zhang, Seattle Genetics
Bar charts are frequently used in clinical reporting to visually display quantitative information and identify patterns for a variety of purposes. SAS/Graph® Template Language (GTL) provides powerful functionality to create sophisticated analytical graphics independent of statistical procedures. Programmers may face situations where a conceptual idea for a graph display style is specified by a biostatistician or medical writer, even though there may not be a natural programming technique, or available creative precedent, for generating it using SAS/Graph®. In this paper, using selected tips and tricks, we focus on ways to help you solve bar chart programming challenges efficiently. The examples included in this paper are:
- How to create a mirror bar chart for side-by-side comparison around a single zero point on the x-axis.
- How to display multiple distinct bar pairs within a single-axis graphing infrastructure. The trick is that they only appear to be independent, while in actuality the data are from a single source.
PO11 : Delivering a quality CDISC compliant accelerated submission using an outsourced model
Mei Dey, AstraZeneca
Diane Peers, AstraZeneca
Pharmaceutical companies are faced with the challenge of developing innovative treatments for patients and replacing older and expiring pipelines whilst containing research and development costs. Outsourcing models, whether functional outsourcing or full-service outsourcing, are increasingly being relied upon by pharmaceutical companies to meet these challenges. This includes outsourcing the analysis and reporting of clinical trials, and the creation of an electronic submission package, to clinical research organizations (CROs). Sponsors own the drug compounds and are ultimately accountable for the quality of submission packages delivered to regulatory agencies. How to ensure submission quality in an outsourced model is a big challenge facing all sponsors. To what extent should sponsors review the CRO's work, and what level of checking is sufficient? This paper uses a case study approach to examine a recent sponsor/CRO partnership in support of an accelerated oncology drug submission. High-quality, CDISC-compliant eCRT submission packages contributed to an accelerated approval in less than 6 months under FDA priority review. We will share our best practices and lessons learned from this submission experience and illustrate how sponsor/CRO partnerships in an outsourced model can work well to support an accelerated drug submission when the sponsor ensures proper planning, regular touchpoints with vendors, impeccable execution, and a thorough quality review process.
PO12 : CDISC Standards End-to-End: Transitional Hurdles
Christine Mcnichol, Chiltern International
Tony Cardozo, Chiltern International Ltd.
Alyssa Wittle, Chiltern International
"Plus", "-like", "-ish" - We have all heard it in some variation: SDTM& plus, ADaM&ish, CDISC&like. It is evident there are still some things preventing us from accepting pure CDISC. Many companies find the transition to CDISC difficult for a variety of reasons. They enjoy a "CDISC+" philosophy and believe it is "compliant enough" to work. The types of changes to the standards might be adding or ignoring controlled terminology, changing the definitions of CDISC specific variables, adding non-CDISC compliant variables, and only using CDISC standards for some datasets in a study, but not for all datasets. This presentation discusses common challenges encountered while a company transitions onto CDISC. The pitfalls of the "CDISC+" design will be discussed in depth. Conversely, the pros of what having a fully CDISC - and CDASH - compliant database will also be covered. By using CDISC from end-to-end, meaning from Protocol and CRFs through TLGs, many efficiencies can be gained for project team members at every level. Finally, once a decision for compliance has been made, how can pharmaceutical companies effectively learn CDISC standards so that they feel comfortable using, reviewing, and understanding CDISC compliant studies? The different options available for training through CDISC along with examples of teaching methods which have a positive impact on user knowledge will be presented.
PO13 : The Hadoop Initiative: Supporting Today's Data Access and Preparing for the Emergence of Big Data
Michael Senderak, Merck & Co. Inc.
David Tabacco, Merck & Co., Inc.
Robert Lubwama, Merck & Co., Inc.
David O'Connell, Merck & Co., Inc.
Matt Majer, Merck & Co., Inc.
Bryan Mallitz, Merck & Co., Inc.
Currently underway within Merck's Center for Observational and Real-World Evidence (CORE) is a proof-of-value evaluation of Hadoop, a cost-efficient, powerful, new-generation architecture designed to quickly process massive data sources known collectively as 'Big Data'. Hadoop replaces large, expensive commercial computers by distributing the storage and processing load across an array of low-cost computers, scalable virtually without limit by adding inexpensive machines, to greatly exceed the storage and performance capabilities of stand-alone high-end computers. The Hadoop initiative within CORE is two-fold. The first objective is to evaluate the performance benefits of Hadoop versus the current high-end Oracle platform in extracting the electronic health care and medical insurance records currently used within CORE, while synchronizing with the UNIX platform used by Statistical Programming for further processing and analysis. The second, longer-term objective is to enable the capability to both extract and analyze truly massive, unstructured 'Big Data', all within the Hadoop architecture. This latter objective is key to our future preparedness to manage the emerging Big Data sources that are expected to become increasingly relevant to the pharmaceutical industry, sources that may very well include social media and other streaming web-based data. And with this readiness for processing these novel, untested Big Data sources comes the need and opportunity for novel data mining, summarization, and statistical analysis approaches. Without these efforts to extract value from these largely untested 'data lakes', we risk our competitive edge against other players who are committed to leveraging the potential buried within the emerging world of Big Data.
PO14 : Making Greedy into Optimal! A Poor Woman's Attempt to Get Optimal Propensity Score Matching from a Greedy Matching Algorithm
Janet Grubber, VA HSR&D
Carl Pieper, Duke University Medical Center, Dept. of Biostatistics and Bioinformatics
Propensity score matching (matching on the probability of treatment assignment, treated vs. untreated, conditional on observed baseline characteristics) is a popular method used in many observational studies to approximate, as much as possible, randomized clinical trial methodology. In the medical literature, greedy matching is the form of matching most often reported, though optimal matching is often said to be a superior method. In our real-world example, our goal was to match 1 treated patient to 3 untreated controls if 3 suitable controls existed; however, if fewer (1 or 2) existed, we still wanted to retain the 1:2 or 1:1 match to increase our power to detect significant differences in analyses. Optimal matching was well suited to accomplish our goal; however, our organization lacked funds to pay for the needed SAS module. Greedy matching algorithms, which were runnable using our existing SAS 9.4 modules, typically create only fixed ratios of treated:untreated control matches (e.g., for a desired 1:3 ratio, only treated patients with a full complement of 3 untreated controls are retained; those with fewer matched controls are dropped from the final data set, and their matched controls go back into the pool of possible controls for other treated patients). Here we share our solution: we build on an existing greedy matching macro to produce matched sets (treated:untreated) of 1:3, 1:2, and 1:1 ratios within the same data set. Our solution adds one, but not all, of the capabilities of optimal matching to the statistics toolbox! (Appropriate for intermediate users.)
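The matching macro itself is not reproduced here, but the first step it depends on, estimating the propensity score, is a short PROC LOGISTIC call; the dataset and covariates below are hypothetical.

   proc logistic data=cohort;
      class sex;                                            /* categorical baseline covariate */
      model treated(event='1') = age sex bmi comorb_score;  /* hypothetical baseline covariates */
      output out=ps pred=pscore;                            /* propensity score for each patient */
   run;

Greedy macros typically match on PSCORE (or its logit) within a caliper; the paper's contribution is extending such a macro so that 1:3, 1:2, and 1:1 matches can coexist in one output data set.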
PO15 : SDTM Metadata - The Output is Only as Good as the Input
Sue Sullivan, d-Wise
Capturing all of the intricacies, exceptions to the rules, and finer points of the SDTM IG takes both attention to detail and manual effort. Many companies configure their own interpretation of SDTM metadata to meet their specific sponsor needs and ensure compliance with federal regulations. However, CDISC has created SHARE, "...an electronic repository for developing, integrating and accessing CDISC metadata standards in electronic format." As part of SHARE, eSHARE files have been created that contain SDTM metadata for SDTM models v1.2-1.5, which are available for download to CDISC Gold members. The eSHARE files may be used in a number of ways. For one, the files may be used as a starting point to compile a sponsor's metadata by IG and model version. Additionally, the files may be used for compliance checking of sponsor standards, MDR content, or study-level metadata. However, the output is only as good as the input, and the eSHARE files do not include all of the SDTM metadata that is contained as text in the IG and needed to completely describe your data. To create a comprehensive set of SDTM metadata based on these files, it is necessary to supplement them. This paper defines a process to leverage the eSHARE files and extend them to encapsulate all of the SDTM metadata needed to house or check a sponsor's SDTM metadata. Examples of why supplemental information is needed and how to develop comprehensive metadata and tools will also be provided.
PO16 : "The car is in the shop but where are the mechanics?" The future of Standard Scripts for Analysis and Reporting
Dirk Spruck, Accovion
Jeanina (Nina) Worden, Santen
The CS Standard Analyses Working Group has met its main goal: to establish a framework for standardizing analyses across the industry. Recent progress falls mainly in 3 areas: 1) The White Paper project has developed guidance on 9 data analysis topics, and has published 3. 2) The Infrastructure project has established a GitHub code repository. 3) The Content project has established implementation guidelines and a qualification process, and has published scripts that display standard measures of Central Tendency. The focus for 2016, and moving forward, is to promote adoption of these standard analyses; to coordinate review of, finalize and publish white papers; and to increase the scope, quality and usability of the corresponding R and SAS scripts. A framework for collaboration enables standard analyses, but it is just a starting point. To fully realize the vision of standard industry analyses requires expertise and resources: a commitment by stakeholders. This poster will review the vision, summarize progress to date, and outline proposals to resource further advances.
PO17 : Standards Implementation & Governance: Carrot or Stick?
Julie Smiley, Akana, Inc.
With the looming FDA mandate to submit CDISC compliant datasets for regulatory review and approval, BioPharma companies are seeking better methods to implement and govern standards. Traditional standards management practices have typically involved unstructured or semi-structured CDISC and internal standards that are read and interpreted by a group of standards experts who develop policies and procedures (governance) around how the standards are to be managed and used. These processes generally include a set of spreadsheets and other templates, a relatively manual deviation request process, and auditing or compliance checks. With tools and processes like these, many programmers think of standards and governance as a stick meant to punish them. To them it is another set of burdensome tasks that add no value to their already full plates. However, if implemented using the right tools, processes, and automation, standards and governance can become a carrot or incentive by optimizing business process efficiency. This paper will cover how a metadata repository integrated with SAS and other systems can help change the perception of your organization by facilitating more efficient standards implementation and governance. It will outline how a metadata-driven approach to standards management can be used to automate business processes, enabling not only regulatory compliance, but also better resource utilization and improved data quality.
PO18 : Importing Data Specifications from .RTF and .DOC files and Producing Reports
Sandesh Jagadeesh, PPDi
Programming reports that are used in the process of data cleansing and patient-level checks can be tedious. These reports are of great value to data management teams, providing the in-depth information needed to identify adverse events, compare variables against database standards, and reconcile data. The report specifications are typically provided in .RTF or .DOC format and require manual programming to generate the desired reports. The number of variables in these reports can be quite extensive and time-consuming to review and program. This paper will introduce a method for importing the required datasets, variables, and labels directly from the specification, reconciling them to the variables in the database, and producing reports in .PDF, .RTF, or .XLS format for review.
PO19 : SAS Macro for Summarizing Adverse Events
Julius Kirui, SCRI
Rakesh Mucha, SCRI
Adverse event (AE) summaries are very important in determining the continuity of most early-phase studies. These summaries can be reported in various ways or formats as requested by study reviewers, the FDA, or other regulatory agencies. The majority of mock shells created by biostatisticians specify that the AE dataset is best summarized at the patient level, by System Organ Class (SOC) and Preferred Term (PT), by worst grade or by subsets of relatedness or seriousness. Writing and validating SAS programs to mimic these mock shells, especially for multi-cohort or randomized studies, can be challenging, time-consuming, and tedious. This paper discusses a SAS macro program (%sumAE) that can summarize a raw or ADaM AE dataset through simple manipulation of macro parameters. The flexibility to output one summary table at a time, or as many summary tables as needed to one folder, is accounted for. This macro can be used by individuals with minimal SAS 9.3 skills. A macro logic flow chart, descriptions of the macro parameters, and the macro itself are included in the appendix.
PO20 : Using GTL Generating Customized Kaplan-Meier Survival Plots
Joanne Zhou, GSK
In cardiovascular and oncology clinical trials, time-to-event endpoints such as overall survival and progression-free survival are the main focal point of clinical interest, and graphical displays of the Kaplan-Meier (KM) curve usually play a key role in presenting the key clinical results. With the advent of the SAS Graph Template Language (GTL) and newer versions of the LIFETEST procedure, generating conventional KM curves has become much easier. However, some ad hoc requests for customized KM plots can still be quite a challenge. In this paper, we use SAS GTL, ODS, and SAS macros, along with other SAS statistical analysis procedures, to overcome some of the challenges in generating customized KM plots.
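As a hedged sketch of one common customization route (ADaM-style names assumed), the data behind PROC LIFETEST's survival plot can be captured with ODS OUTPUT and then redrawn with full control:

   ods graphics on;
   ods output survivalplot=survplot;          /* the data behind the default KM graph */
   proc lifetest data=adtte plots=survival(atrisk);
      time aval*cnsr(1);                      /* ADaM convention: CNSR=1 means censored */
      strata trtp;
   run;

   proc sgplot data=survplot;
      step x=time y=survival / group=stratum; /* redraw the KM curve, now fully customizable */
      scatter x=time y=censored / group=stratum markerattrs=(symbol=plus);
   run;

The same SURVPLOT dataset can be fed to a GTL template via PROC SGRENDER when SGPLOT's options are not enough.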
PO21 : SDTM Automation with Standard CRF Pages
Taylor Markway, SCRI Development Innovations
Much has been written about automatically creating outputs for the Study Data Tabulation Model (SDTM). This paper provides brief descriptions of common approaches in practice and details an approach to automatically create SDTM outputs when case report form (CRF) pages are uniquely identifiable. The process uses a fixed SAS® program structure that allows programs to be automatically constructed and deconstructed. This approach depends on three assumptions: 1) all CRF pages are identifiable, 2) all CRF pages that have the same set of identifiers are unique, and 3) all CRF pages are consistently represented in the electronic database. At SCRI, the statistical programming team and the data management team worked to meet these assumptions while designing the standard database. After meeting these three assumptions we can automatically create SAS SDTM programs for each domain. These programs are easy to read due to their systematic structure. Because the programming team is still working directly with SAS, minimal training is needed to implement them. Deviations from the standard CRF or sponsor-specific SDTM mapping are implemented directly in SAS programs without complicated macro calls or outside mapping tools.
PO22 : Automatic Consistency Checking of Controlled Terminology among SDTM Datasets, Define.xml, and NCI/CDISC Controlled Terminology
Min Chen, Alkermes
Xiangchen (Bob) Cui, Alkermes, Inc
In FDA electronic submissions, the most current version of the NCI/CDISC controlled terminology for SDTM variables is expected to be submitted in the define.xml. With the large amount of controlled terminology and the occasional updates to NCI/CDISC controlled terminology, the controlled terminology applied during SDTM programming is often out of date and flagged in the OpenCDISC report. It is desirable to ensure consistency among SDTM datasets, define.xml, and NCI/CDISC controlled terminology to achieve technical accuracy. In this paper we create a library of controlled terminology in the SDTM specifications spreadsheet that contains both standard NCI/CDISC controlled terminology and sponsor-defined controlled terminology. A SAS® macro tool is applied to automate consistency checking of controlled terminology between SDTM datasets and define.xml. The macro tool also checks for updates to the NCI/CDISC controlled terminology and, if needed, automatically updates the library of controlled terminology accordingly. This macro-based comprehensive approach can ensure consistency between SDTM datasets and define.xml, as well as between the controlled terminology in define.xml and the NCI/CDISC controlled terminology, for final FDA submission. High-quality submissions can thus be achieved in a cost-effective and efficient way.
PO23 : Validation Methods and Strategies for Presentation of Clinical Reports: The Programmers Road Map to Success
Vijayata Sanghvi, InVentiv Health
Validation is one of the core tasks of clinical report programming in the pharmaceutical industry. It is a very important job role, one that relies on the programmer's knowledge and experience to make sure the end products meet the study design requirements. This paper provides a list of methods and strategies a programmer can adopt to ensure quality submission of clinical reports before they are released to stakeholders within an organization or to an external agency for review. Furthermore, this paper shows how to use a QC plan checklist to describe and document the validation tasks and completion guidelines.
PO24 : Building Efficiency and Quality in SDTM Development Cycle
Kishore Pothuri, Softworld
Bhavin Busa, Softworld, Inc. (Life Sciences Division)
The typical cycle for generating compliant CDISC SDTM datasets is to develop mapping specifications, program and QC the domains, and then check the domains for compliance issues per FDA requirements. In some cases, the compliance checks of the SDTM domains take place toward the end of the study (i.e., closer to or after database lock). This seems like a logical approach to programming the datasets; however, checking for compliance and addressing issues identified by the OpenCDISC validator once programming is complete can result in re-work. A programmer will come across three possible scenarios: 1) compliance issues that require the programmer to go back and update the mapping specifications and the dataset programming, 2) compliance issues that require updates only to the dataset programming, and 3) compliance issues that require no update. In scenarios 1 and 2, the back-and-forth process results in the programmer spending more time on SDTM dataset development than originally estimated. In addition, depending on the changes made to the SDTM datasets to address compliance issues, it also affects the quality and timing of the downstream datasets and analysis deliverables (ADaM and TLFs). In this paper, we recommend that programmers understand the compliance requirements upfront and implement them during mapping specifications development. This avoids scenarios 1 and 2 and builds both efficiency and quality into the SDTM development cycle.
Quick Tips
QT01 : Log Checks Made Easy
Yogesh Pande, Merck Inc.
Tuesday, 2:30 PM - 2:40 PM, Location: Centennial F
It is Good Programming Practice (GPP) for a programmer to check the SAS® log for errors, warnings, and all other objectionable SAS NOTEs. In order to successfully create tables, listings, and figures, the programmer must ensure that the code is correct. The accuracy of the code, assuming that the program logic is correct, depends solely on the SAS log. Using the SAS macro language and Base SAS®, this paper introduces a macro that enables a programmer or statistician to check all SAS logs in a folder and tabulate the log issues by SAS log file using PROC REPORT.
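The paper's macro is not reproduced here; a minimal sketch of the core check for a single log (the path and message list are illustrative, not exhaustive) looks like:

   data logissues;
      infile "&logdir./ae_table.log" truncover;  /* &logdir and the file name are hypothetical */
      input line $char200.;
      if index(line,'ERROR:') = 1
         or index(line,'WARNING:') = 1
         or index(line,'uninitialized')
         or index(line,'more than one data set with repeats of BY values')
         then output;                             /* keep only objectionable lines */
   run;

A full implementation loops this step over every .log file in a folder and feeds the combined results to PROC REPORT, as the abstract describes.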
QT02 : Plan Your Work Using PROC CALENDAR
Suresh Kumar Kothakonda, inVentiv Health Clinical
Wednesday, 10:45 AM - 10:55 AM, Location: Centennial F
Most of the time, we rely on our mailbox calendar, notepads, etc. to get a snapshot of our schedules, and everyone loves to have a visual flow of their activities. This paper shows a simple and easy way to organize your work life using PROC CALENDAR, which can give a visual representation of the collected task information for a single project or multiple projects across a team. The CALENDAR procedure combines data from an individual's personal schedule with the corporate schedule and events, and prints calendars for individuals. While scheduling applications abound, the CALENDAR procedure does more than just print calendars: it also organizes calendar schedules for a group or a company and keeps the data for easy retrieval, making it easy and beneficial to maintain individual and team schedules. More information is detailed in this paper.
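A minimal, self-contained sketch of the basic PROC CALENDAR pattern (the tasks below are invented):

   data tasks;
      input start date9. @11 task $20.;  /* START positions each task on the calendar */
      format start date9.;
   datalines;
   02MAY2016 Draft SAP
   09MAY2016 SDTM specs due
   16MAY2016 ADaM dry run
   ;

   proc calendar data=tasks;
      start start;   /* required date variable */
      var task;      /* text printed in each day's cell */
   run;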
QT03 : Quick and Efficient Way to Check the Transferred Data
Divyaja Padamati, Eliassen Inc.
Tuesday, 3:00 PM - 3:10 PM, Location: Centennial F
Consistency, quality, and timeliness are the three milestones that every department strives to achieve. As SAS programmers, we often have to work with data generated by another team or another company. Checking the data for compliance with CDISC or company standards is a major hurdle in the latter case. In this paper we discuss how data compliance can be checked in a quick and efficient manner. The first step in this process is checking for redundant data, i.e., removing any variables where all values are missing or any observations where all variables are missing. Secondly, we need a quick check for the presence of the variables required by the applicable standards. Finally, we have to create a list of required variables that are absent from the transferred data. By letting SAS do all the work for us, the result is faster and more efficient. This paper offers an approach to achieving all these goals at the click of a button. The skill level needed is intermediate to advanced.
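As a hedged sketch of the required-variable check (the XFER library, AE dataset, and REQUIRED_VARS table are hypothetical), DICTIONARY.COLUMNS makes this a one-step query:

   proc sql;
      create table missing_required as
      select varname
      from required_vars                /* one row per variable the standard requires */
      where upcase(varname) not in
            (select upcase(name) from dictionary.columns
             where libname = 'XFER' and memname = 'AE');  /* the transferred dataset */
   quit;

The all-missing-variable check can be driven the same way, for example by running an NMISS summary over the variable names returned by DICTIONARY.COLUMNS.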
QT04 : Keeping Up with Updating Dictionaries
Divyaja Padamati, Eliassen Inc.
Tuesday, 3:30 PM - 3:40 PM, Location: Centennial F
Often in the pharmaceutical industry we work with data for poster presentations, manuscripts, or additional exploratory analyses after the CSR submission. Generally for these kinds of deliverables we are working on old databases, and as SAS programmers we need to be extra vigilant. In this paper we discuss: (1) how to check whether the data uses the current version of a dictionary, and (2) how to proceed if the data contains terms from a previous dictionary version. The skill level needed is intermediate to advanced SAS programmer.
QT06 : Scalable Vector Graphics (SVG) using SAS
Yang Wang, Seattle Genetics
Vinodita Bongarala, Seattle Genetics
Tuesday, 3:45 PM - 3:55 PM, Location: Centennial F
Scalable Vector Graphics (SVG) is an XML-based vector graphic format compatible with most modern browsers. Unlike pixel-based graphics, which lose resolution when enlarged, Scalable Vector Graphics can be magnified infinitely without loss of quality at any screen resolution and size. Starting with SAS 9.2, Scalable Vector Graphics are supported through SVG device drivers. SVG can be produced by the Graph Template Language (GTL) and by traditional SAS/GRAPH procedures like PROC GPLOT/GCHART. This paper explains the difference between the two kinds of graphs and discusses the pros and cons of using them for clinical outputs. SAS features such as GTL and SGRENDER can be used efficiently to produce high-quality SVG. Examples that use traditional SAS/GRAPH procedures such as PROC GPLOT/GCHART to produce SVG graphs are also included. This paper serves the following purposes:
- Help decide what kind of graphics format to produce to meet specific needs
- Provide a quick-start guide to creating vector graphics with GTL
- Show quick conversion of current pixel-based graphics to vector graphics
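As a minimal sketch of the two production routes mentioned (output names are arbitrary, and destination setup may vary by site):

   /* ODS Graphics / SG procedures / GTL route */
   ods graphics / reset outputfmt=svg imagename='scatter_svg';
   proc sgplot data=sashelp.class;
      scatter x=height y=weight;
   run;

   /* traditional SAS/GRAPH route */
   goptions device=svg;
   proc gplot data=sashelp.class;
      plot weight*height;
   run;
   quit;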
QT07 : Adding Statistics and Text to the Panel Graphs using INSET option in PROC SGPANEL
Ajay Gupta, PPD Inc
Tuesday, 4:00 PM - 4:10 PM, Location: Centennial F
The SGPANEL procedure creates a panel of graph cells for the values of one or more classification variables. We often get requests from business users to add text or statistics to each panel cell of the graph. Normally, this task can be accomplished using the Graph Template Language (GTL), but the GTL approach requires a good amount of programming time and can be tedious. In SAS 9.4, the INSET statement was added to PROC SGPANEL. Using the INSET statement, text and statistics can be added to the panel graph very easily. This paper introduces the INSET statement in the SGPANEL procedure and provides a brief overview, with examples, of all the options that can be used in the INSET statement to enhance the appearance of text and statistics in the panel graph.
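A hedged sketch of the pattern (the datasets and names are hypothetical): as we understand SGPANEL's INSET, it displays the per-cell value of a variable, so a per-treatment text variable is merged onto the plot data first.

   data plotds;
      merge adlb cellnote;  /* CELLNOTE: one row per TRTP with a text variable NTEXT, e.g. "N = 52" */
      by trtp;
   run;

   proc sgpanel data=plotds;
      panelby trtp / columns=2;
      histogram aval;
      inset ntext / position=topright;  /* each cell shows its own NTEXT value */
   run;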
QT08 : Trivial Date Tasks? PROC FCMP Can Help
Jueru Fan, PPD Inc
Tuesday, 4:15 PM - 4:25 PM, Location: Centennial F
Imputing partial dates, comparing dates, and creating flags are important tasks when working on CDISC ADaM datasets. If there are functions or CALL routines that can handle this repetitive work for us, we can better focus on tasks requiring more logical thinking. The SAS Function Compiler procedure (PROC FCMP) is there for us. This paper introduces two user-defined functions and a CALL routine aimed, respectively, at comparing complete/partial dates, imputing partial dates, and creating pre-concomitant, concomitant, and post-concomitant flags. SAS 9.2 and the Windows XP operating system were used. Readers are expected to have knowledge of Base SAS programming and CDISC ADaM.
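The paper's own routines are not shown here; as a minimal sketch of the FCMP pattern, a hypothetical function imputes an ISO 8601 partial start date to the first day of the known period:

   proc fcmp outlib=work.funcs.dates;
      function imp_startdt(dtc $);
         /* simplified sketch: impute missing month/day to 01 */
         if length(dtc) >= 10 then return(input(substr(dtc,1,10), yymmdd10.));
         else if length(dtc) = 7 then return(input(cats(dtc,'-01'), yymmdd10.));
         else if length(dtc) = 4 then return(input(cats(dtc,'-01-01'), yymmdd10.));
         else return(.);
      endsub;
   run;

   options cmplib=work.funcs;  /* make the compiled function visible to the DATA step */

   data test;
      astdt = imp_startdt('2016-05');  /* resolves to 01MAY2016 */
      format astdt date9.;
   run;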
QT09 : A SAS Macro Tool to Automate Generation of Customized Patient Profile in PDF Documents
Haining Li, Mass General Hospital
Hong Yu, Mass General Hospital
Tuesday, 4:30 PM - 4:40 PM, Location: Centennial F
Clinical trial data are collected from many different sources, and the data are often stored in many SAS tables, each of which includes multiple patients. Once the trial begins, all of the data need to be reviewed and cleaned in real time. To effectively monitor study progress and subject safety, patient profiles are commonly used in this review process. These profiles contain all current data for each patient, allowing reviewers to rapidly assess the patient's overall status as well as any level of detail. To reduce the effort of organizing the huge volume of tables and variables within each clinical trial, this paper provides a SAS macro to automatically drop, keep, rename, or label variables from all the tables and generate the ultimate patient profile, by subject, in PDF documents. The macro also provides options to automatically drop any column whose values are all missing or all equal to certain values. This method significantly improves the efficiency of the clinical trial data review process.
QT10 : Selecting Analysis Dates - A Macro Approach Using Raw Data for Expediting Results
Abdul Ghouse, Seattle Genetics
Tuesday, 4:45 PM - 4:55 PM, Location: Centennial F
Clinical trial data contain a variety of values, including dates, which support several key analyses such as Time to Event (TTE). The algorithms defined to conduct TTE analyses often require the selection of a particular date that can be found across a large number of datasets. Using a macro to create a single dataset with all the CRF-collected dates allows selecting the event or censoring date on a per-patient basis. This approach simplifies the validation process, improving efficiency and quality. In addition, using data from the raw SAS library allows faster output creation and review of interim efficacy and safety endpoints for decision making in a phase 1 study.
QT11 : Data Validation: Bolstering Quality and Efficiency
Anusuiya Ghanghas, Novartis Healthcare Private Limited
Rajinder Kumar, Novartis Healthcare Private Limited
Houde Zhang, Novartis Pharmaceuticals
Tuesday, 5:00 PM - 5:10 PM, Location: Centennial F
We work in the health care industry, where a small mistake can put many lives at risk. Statistical programming is a small part of that industry, but our efforts help people live better lives. All work must be done to a high standard of quality, and to maintain quality, a validation process needs to be defined. This paper outlines two validation processes for statistical programs used in clinical research. In the first process, the validation programmer does not write independent code but manually checks the results, or a small portion of the results; this is normally preferred for small data. In the second process, the validation programmer writes independent code and compares the final datasets programmatically (e.g., with PROC COMPARE). The second process is more precise and less dependent on the original programmer. The paper also describes an efficient way to run the second process, which generally takes a lot of time and is sometimes more painful than creating the original results. In this approach, the initial programmer saves the final dataset in a predefined format that is the same for all programs of a given report type. Because the structure of the final dataset is fixed, the validation programmer can use the same piece of code each time to obtain the information and perform the validation, which minimizes reprogramming and the chance of errors.
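The comparison step of the second process is a single PROC COMPARE call; a hedged sketch with hypothetical production and QC libraries:

   proc compare base=prod.adae compare=qc.adae
                criterion=1e-8 listall;  /* LISTALL reports unmatched variables and observations */
   run;

Fixing the structure of the final dataset, as the paper proposes, is what lets this exact call be reused unchanged across all programs of the same report type.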
QT12 : Breaking up (Axes) Isn't Hard to Do: A Macro for Choosing Axis Breaks
Alex Buck, Rho, Inc
Tuesday, 5:15 PM - 5:25 PM, Location: Centennial F
SAS 9.4 brought some wonderful new graphics options. One of the most exciting is the addition of the RANGES option for SGPLOT axis statements. As the name suggests, the ranges shown on a broken axis are controlled by the user. The only questions left are where to set the breaks and whether a break is actually needed; that is what this macro is designed to determine. The macro analyzes the specified input parameter to create macro variables for the overall minimum and maximum, as well as macro variables for the values prior to and following the largest gap between successive parameter values. The macro also creates variables for suggested break values to ensure graphic items such as markers are displayed in full. The user then uses these macro variables to determine whether an axis break is needed and where to set the breaks. Given its dynamic nature, the macro can easily be incorporated into larger graphics macro programs while making specific recommendations for each individual parameter. A complete and intuitive graph is produced with every macro call.
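For context, a hedged sketch of the SAS 9.4 axis-break syntax the macro feeds (the data and break values are illustrative):

   proc sgplot data=labs;             /* hypothetical data with one extreme outlier */
      scatter x=visitnum y=aval;
      yaxis ranges=(0-150 900-1000);  /* show 0-150 and 900-1000 with a visible break between */
   run;

The macro's job is then to compute those range endpoints from the largest gap in the data and return them as macro variables.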
QT13 : Lab CTCAE - the Perl Way
Marina Dolgin, Teva Pharmaceutical Industries Ltd
Wednesday, 8:00 AM - 8:10 AM, Location: Centennial F
The NCI Common Terminology Criteria for Adverse Events (CTCAE) is a descriptive terminology used for adverse event (AE) reporting. A toxicity grading scale, ranging from 1 (mild) to 5 (death), is provided for each AE term. Typically, CTCAE grading is collected directly from the site on the adverse experience case report form. However, this may not be the case for laboratory results. Usually, calculating CTCAE grades for laboratory results would require explicit coding of each criterion, for each lab test, by means of IF/ELSE statements. This paper demonstrates a macro for calculating the toxicity grade from the laboratory value and the original NCI CTCAE grading file. The macro uses Perl regular expression patterns and functions, a much more compact solution to a complicated hard-coding task that also eliminates the need for per-test programming of the toxicity grades. Lastly, the macro may be used with both ADaM and SDTM structures.
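As a hedged illustration of the idea (the criterion text and pattern are simplified), Perl regular expressions can pull the numeric bounds straight out of a CTCAE criterion string instead of hard-coding them:

   data _null_;
      crit = '>3.0 - 8.0 x ULN';  /* hypothetical grade criterion from the CTCAE file */
      rx = prxparse('/>(\d+\.?\d*)\s*-\s*(\d+\.?\d*)\s*x\s*ULN/i');
      if prxmatch(rx, crit) then do;
         low  = input(prxposn(rx, 1, crit), best12.);  /* lower multiple of ULN */
         high = input(prxposn(rx, 2, crit), best12.);  /* upper multiple of ULN */
         put low= high=;
      end;
   run;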
QT14 : SAS and R - stop choosing, start combining and get benefits
Diana Bulaienko, Experis Clinical, a Manpower Group Company
Wednesday, 8:15 AM - 8:25 AM, Location: Centennial F
The R software is powerful, but it takes a long time to learn to use it well. However, you can keep using your current software to access and manage data, then call R for just the things your current software doesn't do. Learning to use a data analysis tool well takes significant effort and quite a lot of time, so this paper will be valuable to experienced SAS users who don't want to switch entirely to R but want the advantages of both SAS and R. This presentation introduces the minimal set of R commands you need to know to work this way. It also describes how to call R routines from SAS to create beautiful graphs.
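One established route (assuming SAS/IML is licensed and SAS was started with the RLANG system option) is PROC IML's R interface; the model below is just a placeholder:

   proc iml;
      call ExportDataSetToR('sashelp.class', 'class');  /* copy a SAS dataset into the R session */
      submit / R;
         summary(lm(Weight ~ Height, data = class))     # fit and summarize a model in R
      endsubmit;
   quit;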
QT15 : Tips on Checking and Manipulating Filenames with SAS
Solomon Lee, K Solomon LLC
Wednesday, 8:30 AM - 8:40 AM, Location: Centennial F
This is a 10-minute paper with tips on:
1. Checking the filenames in a folder against Excel TOCs.
2. Checking the timestamps of two related lists of filenames for internal logic.
3. Partially replacing filenames with long descriptions in the bookmarks of a combined PDF file containing multiple tables or graphs.
QT16 : When ANY Function Will Just NOT Do
Karl Miller, inVentiv Health
Richann Watson, Experis
Wednesday, 8:45 AM - 8:55 AM, Location: Centennial F
Have you ever been working on a task and wondered if there might be a SAS® function that could save you some time, or even do the work for you? Data review and validation are time-consuming efforts in which any gain in efficiency is highly beneficial, especially if you can reach a standard level where the data itself drives parts of the process. The 'ANY' and 'NOT' families of functions can help alleviate some of the manual work in many tasks, from data review of variable values, data compliance, and formats through the derivation or validation of a variable's datatype; the list goes on. In this paper we cover the functions and summarize how they are used, along with a couple of examples of handling SDTM/ADaM data and supporting the define.xml process.
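A minimal sketch of the family in action on lab-style character results (the values are invented):

   data checks;
      input val $12.;
      first_digit = anydigit(val);         /* position of first digit, 0 if none */
      first_alpha = anyalpha(val);         /* position of first letter, 0 if none */
      nondigit    = notdigit(strip(val));  /* 0 means the value is all digits */
   datalines;
   12.5
   <4
   NEGATIVE
   ;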
QT17 : Becoming a more efficient programmer with SAS® Studio
Max Cherny, GlaxoSmithKline
Wednesday, 9:00 AM - 9:10 AM, Location: Centennial F
SAS Studio is a web-based interface to SAS. It is similar to PC SAS; however, SAS Studio provides additional features that make SAS programming easier. These features include auto-generation of SAS code and output, automatic formatting of SAS code, syntax checking, and better ways of handling SAS errors and warnings. SAS Studio is even capable of predicting and suggesting the next word a SAS user may wish to type. SAS Studio also makes it easier to run, maintain, and document SAS programs with its Process Flow feature. This paper describes how a SAS user can become more productive with SAS Studio. The paper is intended for SAS users at any skill level.
QT18 : PROC SQL : To create macro variables with multiple values and different use in Clinical programming
Anish Kuriachen, inVentiv Health Clinical
Wednesday, 9:15 AM - 9:25 AM, Location: Centennial F
The intent of this paper is to present a method for creating macro variables that hold multiple values and for using them to process or modify multiple datasets and multiple variables within a dataset. These methods can be useful in clinical programming in a variety of ways.
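A minimal sketch, with hypothetical ADaM names, of packing many values into one macro variable and counting them for later %SCAN loops:

   proc sql noprint;
      select distinct paramcd
         into :parms separated by ' '  /* one macro variable holding every PARAMCD */
         from adlb;
      select count(distinct paramcd)
         into :nparm trimmed           /* loop bound for %DO i = 1 %TO &nparm */
         from adlb;
   quit;
   %put &=parms &=nparm;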
QT19 : A Macro to Automatically Flag Baseline in SDTM
Taylor Markway, SCRI Development Innovations
Wednesday, 9:45 AM - 9:55 AM, Location: Centennial F
The derivation of the baseline flag in SDTM is a good candidate for using a standard macro because 1) SDTM allows for a generic definition of baseline and 2) baseline derivations can be broken down into a few simple steps. The macro presented in this paper translates the steps for derive baseline into a set of macro parameters. It also provides an option to use SDTM to our advantage by automatically determining the correct parameters, instead of manually determining parameters that should be passed to the macro. By leveraging SDTM, the user is only required to provide input and output data sets.
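The macro is not reproduced here; a hedged sketch of the generic derivation it parameterizes (SDTM VS names assumed; baseline taken as the last non-missing result on or before day 1):

   data pre;
      set vs;
      where vsdy <= 1 and vsstresn > .;  /* on or before first dose, non-missing result */
   run;

   proc sort data=pre; by usubjid vstestcd vsdy; run;

   data blfl;
      set pre;
      by usubjid vstestcd;
      if last.vstestcd;                  /* last qualifying record per test is baseline */
      vsblfl = 'Y';
      keep usubjid vstestcd vsseq vsblfl;
   run;

Merging BLFL back by USUBJID, VSTESTCD, and VSSEQ flags the records; the macro's parameters generalize the domain prefix, timing variable, and result variable.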
QT20 : SAS® Abbreviations: a Shortcut for Remembering Complicated Syntax
Yaorui Liu, University of Southern California
Wednesday, 10:00 AM - 10:10 AM, Location: Centennial F
One of many difficulties for a SAS® programmer is remembering how to accurately use SAS syntax, especially syntax that includes many parameters. Not knowing the basic syntax parameters by heart makes one's coding inefficient, because one has to check the SAS reference manual constantly to ensure the syntax is implemented properly. One useful tool in SAS, seldom known by novice programmers, is SAS Abbreviations. It allows users to store text strings, such as the syntax of a DATA step function, a SAS procedure, or a complete DATA step, under a user-defined and easy-to-remember abbreviated term. Once this abbreviated term is typed in the Enhanced Editor, SAS automatically brings up the corresponding stored syntax. Knowing how to use SAS Abbreviations is beneficial to programmers at all levels. In this paper, various examples utilizing SAS Abbreviations will be demonstrated.
QT21 : Enhancing the SAS® Enhanced Editor with Toolbar Customizations
Lynn Mullins, PPD
Wednesday, 10:15 AM - 10:25 AM, Location: Centennial F
One of the most important tools for SAS® programmers is the Display Manager window environment in which programs are developed. Most programmers like to use shortcuts when developing programs to save time: the less typing we have to do, the more satisfied we are, especially with repetitive tasks. The default Enhanced Editor window can help you edit SAS® files and code using built-in tools, one of which is the toolbar. This paper describes how to customize the toolbar to perform tasks with just one click of an icon.
QT22 : Using PROC GENMOD with count data
Meera G Kumar, Sanofi
Wednesday, 10:30 AM - 10:40 AM, Location: Centennial F
Use of PROC GENMOD with clinical trial data is quite common and fairly straightforward due to the availability of patient-level data. But how do you use the procedure to calculate an event rate ratio from count data? The key is to set up dummy variables for each dose level along with the 'offset' option. There are situations in epidemiology where you get only summary data for the number of events in each dose group or treatment arm. This paper demonstrates how to use such count data and set up a Poisson model for the calculation of the rate ratio along with its confidence interval and associated p-value. Skill level: Beginner
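A minimal worked sketch with invented summary counts: the log of exposure time enters as the offset, and CLASS builds the dose dummies.

   data rates;
      input dose $ events ptime;  /* events and person-years per group (invented) */
      logtime = log(ptime);       /* offset turns modeled counts into rates */
   datalines;
   PBO   14  520
   LOW    9  505
   HIGH   6  498
   ;

   proc genmod data=rates;
      class dose(ref='PBO') / param=ref;
      model events = dose / dist=poisson link=log offset=logtime;
      estimate 'HIGH vs PBO' dose 1 0 / exp;  /* EXP returns the rate ratio and its CI */
   run;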
QT23 : SAS Techniques for managing Large datasets
Rucha Landge, inVentiv Health Clinical
Tuesday, 2:45 PM - 2:55 PM, Location: Centennial F
As SAS programmers, we often have to work with large data containing millions of rows and hundreds of columns. It usually takes enormous time to process these datasets, which can have an impact on delivery timelines. This problem drives us to think about how we can reduce execution time and compress the size of the output data without losing any valuable information. This paper focuses on techniques and concepts for compressing huge datasets and working with them efficiently to reduce processing time. We look at features available in the SAS® System, such as COMPRESS, INDEX, and BUFSIZE, which provide ways of decreasing the amount of room needed to store these datasets and decreasing the time in which observations are retrieved for processing. A few other very common dataset options that we regularly use to improve execution time are the LENGTH statement, DROP, KEEP, and subsetting options like WHERE and IF-THEN.
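A hedged sketch combining several of these options (the library and variable names are hypothetical):

   options compress=binary;  /* binary compression suits wide, largely numeric datasets */

   data slim(index=(usubjid) bufsize=64k);
      set xfer.big(keep=usubjid visitnum paramcd aval
                   where=(paramcd in ('ALT','AST')));  /* read only what is needed */
   run;

The index can then speed later WHERE-based access by USUBJID without re-sorting the dataset.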
QT24 : Remember to always check your "simple" SAS function code!
Yingqiu Yvette Liu, PA
Wednesday, 11:00 AM - 11:10 AM, Location: Centennial F
In our daily programming work, we may not get expected results even when using seemingly clear logic and simple SAS functions. When we dig into the problem, we may discover the issue: either a SAS function was used incorrectly or the programming logic wasn't applied properly. In this paper, the author uses examples involving the SCAN and LAG functions to demonstrate these points, in an effort to share potential pitfalls and challenges when using SAS functions. Reviewing others' mistakes is often an excellent way to learn and improve our programming skills!
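One classic pitfall of the kind the author describes: LAG maintains its own queue, so calling it conditionally returns the value from the last time the function executed, not from the previous observation. A sketch (variable names hypothetical):

data wrong;
  set vitals;
  if visit > 1 then prev = lag(result);  * queue updated only when the condition is true;
run;

data right;
  set vitals;
  prev_all = lag(result);                * compute the lag unconditionally ...;
  if visit > 1 then prev = prev_all;     * ... then apply the condition;
run;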
Statistics & PharmacokineticsSP01 : Cox proportional hazards regression to model the risk of outcomes per double increase in a continuous explanatory variable
Seungyoung Hwang, Johns Hopkins Bloomberg School of Public Health
Monday, 8:00 AM - 8:20 AM, Location: Centennial H
The Cox proportional hazards model is by far the most popular and powerful statistical technique for exploring the effect of an explanatory variable on survival, and it is used throughout a wide variety of clinical studies. If the explanatory variable is continuous, the PHREG procedure in SAS® estimates the hazard ratio per 1-unit change in the variable by default. However, that estimate may not reflect a clinically meaningful change, especially for continuous and highly dispersed measurements. This paper introduces the hazard ratio per doubling of a continuous covariate of interest as another tool for comparing two hazards. The author is convinced that this paper will be useful to any level of statistician, SAS programmer, or data analyst with an interest in medical follow-up studies and time-to-event studies in general.
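A minimal sketch of the idea (data set and variable names are hypothetical): modeling the base-2 logarithm of the covariate makes a one-unit increase correspond to a doubling, so the reported hazard ratio is per doubling.

data surv2;
  set surv;
  log2_biom = log2(biomarker);   * requires biomarker > 0;
run;

proc phreg data=surv2;
  model time*censor(0) = log2_biom / risklimits;
run;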
SP03 : Programming Support for Exposure-Response Analysis in Oncology Drug Development
Peter Lu, Novartis Pharmaceuticals Corporation
Hong Yan, Novartis Pharmaceuticals Corporation
Monday, 9:45 AM - 10:05 AM, Location: Centennial H
Over the last decade, Exposure-Response (ER) analysis has become an integral part of clinical drug development and regulatory decision-making. It plays an increasingly important role in the identification of early evidence of drug efficacy and safety, and thus can support internal and external decision-making processes for evaluating drug benefit-risk and optimizing trial design. Unlike population pharmacokinetics analysis, however, regulatory guidance and industry recommendations for ER analysis are still lacking in terms of the details of statistical modeling approaches, including multivariate logistic regression, linear mixed effects, nonlinear regression, and non-parametric or parametric Cox proportional hazards regression. To ensure a successful ER analysis, quality SAS programming is essential in data preparation and presentation. Due to the nature of ER analysis, ER programming often faces challenges: programming may start without a formal ER (PK/PD) analysis plan; the source data may not be fully available (primary endpoints of efficacy and safety); and studies may have different data standards and dictionary versions (e.g., AE or concomitant medication coding). The purpose of this paper is to share ways in which SAS programmers can provide flexible, timely, and efficient support for ER analysis; examples are included to elaborate the relevant ER programming processes and considerations.
SP04 : Scrambled Data - A Population PK/PD Programming Solution
Sharmeen Reza, Cytel Inc.
Monday, 10:15 AM - 10:35 AM, Location: Centennial H
Population pharmacokinetics/pharmacodynamics (pop-PK/PD) modeling and simulation is a necessity in the drug discovery process. It allows PK scientists to evaluate and present the safety and potency of a drug, and regulatory agencies require population analysis results as part of the submission package. Scientists' involvement in the mainstream clinical study team is essential to aligning analysis timelines with study conduct activities. To support these analyses, pop-PK/PD programmers create NONMEM®-ready data sets at different stages of a trial. It is critical to deliver data sets to PK scientists in a timely manner, enabling them to prepare models and to optimize based on updated data at each stage. Upon receiving final data, pop-PK/PD programmers produce the NONMEM-ready data set in a short window after study database lock. Due to the sensitivity of PK data, accessibility is a major difficulty that programmers face during the development phase. Since blank concentration results are not a feasible option for data set creation, and in turn for PK analyses, a reasonable solution is to build and test code on scrambled data at intermediate stages. At present, formal data requests need to be in place and take several weeks to process; the idea is to have scrambled data available throughout a trial, with pre-planning and required approvals as necessary. Careful measures need to be taken when scrambling PK-related variables where the sample collection method is not standardized and the regular randomization process is not in effect. Suitable SAS® techniques are discussed in this paper, with the clear advantages of scrambling for research and development.
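One possible scrambling sketch, assuming the goal is simply to break the link between subjects and concentration values during code development (data set and variable names are hypothetical; the paper discusses more careful techniques):

proc sql;
  create table shuffled as
  select conc from pk
  order by ranuni(12345);          * random permutation of the results;
quit;

data pk_scrambled;
  merge pk (drop=conc) shuffled;   * positional one-to-one merge;
run;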
SP05 : Unequalslopes: Making life easier when the proportional odds assumption fails
Matthew Wiedel, inVentiv Health
Monday, 10:45 AM - 11:05 AM, Location: Centennial H
With the advent of SAS 9.3 came the PROC LOGISTIC model option UNEQUALSLOPES. This option allows the programmer to quickly produce results for cumulative logit models that fail the assumption of proportionality; models thus become either partially proportional or non-proportional. Previously, these models could be programmed using PROC NLMIXED, but this new capability of PROC LOGISTIC makes the task easier for the SAS programmer and yields analyst-friendly output. Ordinal responses use these models to capture the inherent information in their ordering. To make it easy to generalize the interpretation and programming of partial and non-proportional logit equations, the response variable, number of episodes, will be a count (1, 2, 3, or 4) modeled against two categorical independent variables, rank and level, with 5 and 3 categories, respectively. Working from simpler to more complex models, the body of the paper is split into three parts: Proportional Odds Model, Partial Proportional Odds Model, and Non-Proportional Odds Model. Each section displays the model equations, the NLMIXED code and results, and the LOGISTIC code and results. To avoid digression, the focus will be on the proportional odds assumption and the interpretation of the odds ratios. The NLMIXED code, along with the mathematical equations, should give a deeper understanding of the logistic output, making it easier to explain the results to any audience.
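A minimal sketch of the option in question (data set name hypothetical; the exact model setup is the authors'): UNEQUALSLOPES can be applied to all effects or, for a partial proportional odds model, to a list of effects.

proc logistic data=study;
  class rank level / param=ref;
  model episodes = rank level / unequalslopes=(level);  * equal slopes kept for rank;
run;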
SP06 : A Dose Escalation Method for Dual-Agent in Phase 1 Cancer Clinical Trial using the SAS MCMC Procedure
Gwénaël Le Teuff, Gustave Roussy
Mohamed Amine Bayar, Gustave Roussy
Monday, 11:15 AM - 11:35 AM, Location: Centennial H
The continual reassessment method (CRM) is a model-based dose escalation method commonly used to design phase 1 trials in oncology evaluating a single agent. Its main characteristics include a working model for the dose levels, a targeted level of toxicity, and a model defining the dose-toxicity relationship (for example, a power or logistic function). This relationship is updated after the toxicity evaluation of each patient cohort, and the next cohort is assigned the dose level closest to the target. This allows estimation of the maximum tolerated dose (MTD). With the advance of the targeted therapy era in oncology, more and more phase 1 trials aim to identify one or more MTDs from a set of available dose levels of two or more agents. Combining several agents can indeed increase the overall anti-tumor action, but at the same time it can increase toxicity. Since single-agent dose-finding methods (algorithm-based or model-based) are not appropriate for combination therapies, several authors have proposed different methods. In this paper, we illustrate the SAS MCMC procedure through two examples related to phase 1 cancer clinical trials, with more emphasis on the latter. The first example shows how to estimate the model parameters of the Bayesian CRM. The second example presents a program we developed to implement a dose escalation method for dual agents.
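A hedged sketch of a single-agent power-model CRM fit in PROC MCMC (the data set, variable names, and prior variance are illustrative assumptions, not the authors' program): each record carries the patient's skeleton probability and a binary DLT outcome.

proc mcmc data=crm nmc=20000 seed=2016 outpost=post;
  parms a 0;
  prior a ~ normal(0, var=1.34);
  p = skeleton**exp(a);        * power working model;
  model dlt ~ binary(p);
run;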
SP07 : Latent Structure Analysis Procedures in SAS®
Deanna Schreiber-Gregory, National University
Monday, 8:30 AM - 9:20 AM, Location: Centennial H
The current study looks at several ways to investigate latent variables in longitudinal surveys and their use in regression models. Several analyses for latent variable discovery will be briefly reviewed and explored; the latent analysis procedures covered in this paper are PROC LCA, PROC LTA, PROC TRAJ, and PROC CALIS. The latent variables are then included in separate regression models, and the effect of the latent variables on the fit and use of the regression model, compared to a similar model using observed data, is briefly reviewed. The data used for this study were obtained from the National Longitudinal Study of Adolescent Health (Add Health). Data were analyzed using SAS 9.4. This paper is intended for any level of SAS user and is written for an audience with a background in behavioral science and/or statistics.
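For readers unfamiliar with the tools named above: PROC LCA and PROC LTA are add-on procedures distributed by the Penn State Methodology Center, and PROC TRAJ is a similar third-party add-on, while PROC CALIS ships with SAS/STAT. A hedged sketch of a basic latent class call, assuming the add-on is installed and five binary survey items (names hypothetical):

proc lca data=survey;
  nclass 3;                    * request a three-class solution;
  items q1 q2 q3 q4 q5;
  categories 2 2 2 2 2;        * each item has two response levels;
  seed 2016;
run;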
SP08 : Everything or Nothing - Better Confidence Intervals for the Binomial Proportion in Clinical Trial Data Analysis
Sakthivel Sivam, Quartesian LLC, Princeton, NJ
Subbiah Meenakshisundaram, L.N Government College, Ponneri, Tamilnadu
Tuesday, 8:00 AM - 8:50 AM, Location: Centennial H
In pharmaceutical research, confidence intervals have become an important aspect of reporting statistical results. In particular, extensive literature is available on interval estimators for a binomial proportion (p), which usually arises from the inferential problem for the number of successes X ~ Binomial(n, p). However, occurrences of the event exactly at the boundaries (x = 0 or n) have drawn much research interest, despite a few recommendations such as continuity corrections and truncating the limits at 0 or 1. In this paper, one of the widely applied methods, the score interval, is considered with the aim of improving its performance exactly when x = 0 or n. The proposed approach is based on boundary corrections derived from the exact method and the score interval in its original form; this alleviates the issues related to continuity corrections and to adding pseudo successes or failures. Performance and comparative analyses have been carried out to study the robustness of the approach using coverage probability, expected length, and root mean square error as evaluation criteria under different n, x, and p, with special focus on proportions at the exact boundary. Results reveal that the proposed interval uniformly achieves nominal coverage and has uniformly minimum expected length as well. We illustrate the implementation of the proposed intervals in a computing environment with real clinical study data.
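For reference, the standard (uncorrected) Wilson score interval the paper builds on can be computed in a short DATA step; the authors' contribution is the adjustment at x = 0 or x = n:

data score;
  n = 25; x = 0; alpha = 0.05;
  z = probit(1 - alpha/2);
  phat = x / n;
  center = (phat + z**2/(2*n)) / (1 + z**2/n);
  halfw  = (z / (1 + z**2/n)) * sqrt(phat*(1-phat)/n + z**2/(4*n**2));
  lower = max(0, center - halfw);
  upper = min(1, center + halfw);
  put lower= upper=;
run;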
SP09 : Simulation of Data using the SAS System, Tools for Learning and Experimentation
Kevin Viel, inVentiv Health Clinical
Tuesday, 9:45 AM - 10:35 AM, Location: Centennial H
Statistical models, like many fields of mathematics, rely upon assumptions (postulates). The successful use of these tools conventionally involves examination of corresponding statistics that inform the statistician whether violations of model assumptions might have occurred. Simulation, enabled by ever-increasing computational power, is making experimentation a mainstay of statistics and mathematics. Indeed, the ability to simulate data should be required of every student of statistics, much as fluency in the matrix 'language' and the ability to code a likelihood should be requisite skills. The goal of this paper is to introduce simulation using the SAS System® and to provide the technical (programming) and statistical basis for examining the use of models for time-to-event data, with special consideration of recent and important reports of inhibitors in Hemophilia A patients.
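A minimal simulation sketch in the spirit of the paper (parameters are arbitrary): exponential event times with uniform censoring, followed by a Kaplan-Meier fit.

data sim;
  call streaminit(20160509);
  hazard = 0.1;
  do subject = 1 to 500;
    t_event  = rand('exponential') / hazard;  * rate 0.1 per time unit;
    t_censor = 60 * rand('uniform');
    time   = min(t_event, t_censor);
    censor = (t_censor < t_event);            * 1 = censored;
    output;
  end;
  keep subject time censor;
run;

proc lifetest data=sim plots=survival;
  time time*censor(1);
run;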
SP10 : "I Want the Mean, But not That One!"
David Franklin, Quintiles Real World Late Phase Research
Tuesday, 9:00 AM - 9:20 AM, Location: Centennial H
The 'mean', as most SAS programmers know it, is the arithmetic mean. However, there are situations where it may be necessary to calculate a different 'mean'. This paper looks at the most widely used means from a programmer's perspective, starting with the humble arithmetic mean, proceeding to the other Pythagorean means (the geometric mean and the harmonic mean), and ending with a quick look at the interquartile mean and the related truncated mean. Along the way, example data, code, and output are given to demonstrate how each is calculated.
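The three Pythagorean means are easy to compute side by side; a sketch (table and variable names hypothetical; the geometric and harmonic means require strictly positive values):

proc sql;
  select mean(x)             as arithmetic_mean,
         exp(mean(log(x)))   as geometric_mean,
         count(x) / sum(1/x) as harmonic_mean
  from values
  where x > 0;
quit;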
SP11 : ROC Curve: Making way for correct diagnosis
Manoj Pandey, Ephicacy Lifescience Analytics Pvt. Ltd.
Abhinav Jain, Ephicacy Consulting Group Inc.
Tuesday, 10:45 AM - 11:05 AM, Location: Centennial H
The performance of a diagnostic test is based on two factors: how accurately it detects the disease, and how accurately it rules out the disease in healthy subjects. ROC (Receiver Operating Characteristic) curves help to evaluate the predictive accuracy of a diagnostic test. They provide the ability to identify optimal cut-off points, to compare performance across multiple diagnostic tests, and to evaluate the performance of a diagnostic test across multiple population samples. The property that makes ROC curves especially desirable is that the indices of accuracy are least affected by arbitrarily chosen decision criteria. Calculations of the area under the curve (AUC) and the measure of accuracy determine the discriminating power of the test. This paper will focus on the application of ROC curves in clinical trial data analysis and on deriving insights from ROC measures such as sensitivity, specificity, AUC, the optimal cut-off point, and the Youden index.
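A hedged sketch of one common route to these measures (data set and variable names hypothetical): PROC LOGISTIC draws the ROC curve and writes its coordinates, from which the Youden index (sensitivity + specificity - 1) can be maximized to find an optimal cutoff.

proc logistic data=diag plots(only)=roc;
  model disease(event='1') = testvalue / outroc=rocpts;
run;

data youden;
  set rocpts;
  youden = _sensit_ - _1mspec_;   * sensitivity - (1 - specificity);
run;

proc sort data=youden;
  by descending youden;           * top record holds the optimal cutoff;
run;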
Submission StandardsSS01 : Creating Define-XML version 2 including Analysis Results Metadata with the SAS® Clinical Standards Toolkit
Lex Jansen, SAS Institute Inc.
Monday, 8:00 AM - 8:50 AM, Location: Centennial C
In 2015 CDISC published the Analysis Results Metadata extension to the Define-XML 2.0.0 model for the purpose of submissions to regulatory agencies such as the FDA as well as for the exchange of analysis datasets and key results between other parties. Analysis Results Metadata provide traceability for a given analysis result to the specific ADaM data that were used as input to generating the analysis result; they also provide information about the analysis method used and the reason the analysis was performed. Analysis Results Metadata will assist the reviewer by identifying the critical analyses, providing links between results, documentation, and datasets, and documenting the analyses performed. This presentation will show how Define-XML v2 including Analysis Results Metadata can be created with the SAS Clinical Standards Toolkit.
SS02 : Preparing Legacy Format Data for Submission to the FDA - When & Why Must I Do It, What Guidance Should I Follow?
David Izard, Accenture
Monday, 9:45 AM - 10:35 AM, Location: Centennial C
The U.S. Food & Drug Administration (FDA) released a number of binding guidance documents and companion materials that require clinical studies initiated on or after December 17, 2016 to utilize FDA endorsed data standards at the time the study is planned and executed if you intend to include the study as part of a future New Drug Application (NDA), Abbreviated New Drug Application (ANDA) or Biologics License Application (BLA). These guidance documents spend considerable effort documenting these new requirements but give little consideration to the body of clinical data that currently exists in legacy format. Furthermore, all previous guidance documents have now been deprecated in favor of these new, forward-looking guidances, leaving a void for how a Sponsor or Service Provider should prepare legacy data and related documentation for regulatory submission let alone when the submission of legacy format data is required or expected. This paper will examine the agency's thinking on the role legacy format clinical data should play in a submission, drawing on the limited information available in current guidance as well as feedback from questions to the FDA posed at conferences and via the eData division support. It will also examine what constitutes a legacy format data submission and how one should utilize both current and legacy format guidance documents to prepare these assets for inclusion in a filing.
SS03 : Strategic Considerations for CDISC Implementation
Amber Randall, Axio Research
Bill Coar, Axio Research
Monday, 9:00 AM - 9:20 AM, Location: Centennial C
The Prescription Drug User Fee Act (PDUFA) V Guidance mandates eCTD format for all regulatory submissions by May 2017. The implementation of CDISC data standards is not a one-size-fits-all process and can present both a substantial technical challenge and a potentially high cost to study teams. Many factors should be considered in strategizing when and how to implement, including timelines, study team expertise, and final goals. Different approaches may be more efficient for brand-new studies than for existing or completed studies. Should CDISC standards be implemented right from the beginning, or does it make sense to convert data once it is known that the study product will indeed be submitted for approval? Does a study team already have the technical expertise to implement data standards? If not, is it more cost effective to invest in in-house training or to hire contractors? How does a company identify reliable and knowledgeable contractors? Are contractors skilled in SAS programming sufficient, or will they also need in-depth CDISC expertise? How can the work of contractors be validated? Our experience as a statistical CRO has allowed us to observe and participate in many approaches to this challenging process. What has become clear is that a good, informed strategy planned from the beginning can greatly increase efficiency and cost effectiveness and reduce stress and unanticipated surprises.
SS04 : To IDB or Not to IDB: That is the Question
Kjersten Offenbecker, Spaulding Clinical Research
Beth Seremula, Chiltern International
Tuesday, 8:30 AM - 8:50 AM, Location: Centennial C
In Shakespeare's Hamlet we hear Prince Hamlet ask the now cliché "To be or not to be" question as he contemplates suicide. How does this relate to ADaM integrated databases (IDBs)? As Hamlet weighs the pros and cons of death, we too must decide whether it is better to stick with the status quo or venture into the unknown world of integrating our ADaMs. We shall examine the pros and cons of ADaM IDBs as well as some of the basic pitfalls we have come across while undertaking this daunting task. Along this journey we will show why we think IDB is the future and why it is better to be on the cutting edge.
SS05 : A Practical Approach to Re-sizing Character Variable Lengths for FDA Submission Datasets (both SDTM and ADaM)
Xiangchen (Bob) Cui, Alkermes, Inc
Min Chen, Alkermes
Monday, 10:45 AM - 11:35 AM, Location: Centennial C
FDA issued the Study Data Technical Conformance Guide in October 2015, which stipulates: "The allotted length for each column containing character (text) data should be set to the maximum length of the variable used across all datasets in the study." The FDA/PhUSE 'Data Sizing Best Practices Recommendation' suggests optimizing data set size by managing character variable lengths to save wasted space, and OpenCDISC has built checks for compliance. Re-sizing character variable lengths from the pre-determined lengths in SDTM and ADaM to the maximum length observed in the actual data values is the common solution for complying with the FDA rule. Some sponsors resize character variable lengths after database lock for FDA submission, avoiding the challenge of identifying or predicting up front the longest potential value for each character variable and the associated risk of truncation. However, keeping the resized variable lengths in the metadata (define.xml), and keeping the length of each variable inherited from SDTM domains consistent between the ADaM data sets and their define.xml, are the most difficult tasks from an operational perspective. This paper presents a SAS-based macro approach that automates the resizing of character variables in both SDTM and ADaM data sets, simultaneously updating the resized variable lengths in define.xml and keeping the lengths in the SDTM and ADaM data sets the same as those in define.xml. A strategy for handling individual studies and ISS/ISE is also proposed, sharing our vision for achieving technical accuracy and operational efficiency.
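The core of the resizing idea can be sketched in a few lines for a single variable (library, data set, and variable names hypothetical; the paper's macro generalizes this across all character variables and keeps define.xml in sync):

proc sql noprint;
  select max(length(aeterm)) into :maxlen trimmed
  from sdtm.ae;
quit;

data sdtm.ae_resized;
  length aeterm $&maxlen;   * new length takes effect because it precedes SET;
  set sdtm.ae;
run;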
SS06 : New Features in Define-XML V2.0 and Its Impact on SDTM/ADaM Specifications
Hang Pang, Vertex Pharma Inc.
Tuesday, 8:00 AM - 8:20 AM, Location: Centennial C
Define-XML is required for NDA/BLA submissions (FDA Study Data Technical Conformance Guide, V2.3, Oct. 2015). The CDISC Define-XML Specification V2.0 (2013) has some significant changes compared with CRT-DDS (Case Report Tabulation Data Definition Specification (define.xml), V1.0, 2005). This paper will discuss the new features in Define-XML V2.0 (e.g., integration with industry-standard NCI controlled terminologies and support for more complicated value-level metadata (VLM) definitions) and its impact on SDTM/ADaM data specifications for submission readiness. The SDTM/ADaM data specifications can be used as metadata for Define-XML V2.0 generation, simplifying the NDA/BLA e-submission preparation process. An example of a SAS macro and ADaM specifications for Define-XML V2.0 generation will be presented.
SS07 : Up-Versioning Existing Define.xml from 1.0 to 2.0
Jeff Xia, Merck
Lugang Larry Xie, Merck & Co.
Tuesday, 9:00 AM - 9:20 AM, Location: Centennial C
As per the latest Standards Catalog released by the FDA, both define.xml 1.0 and 2.0 are acceptable for NDA submissions. However, because of the technical insufficiency of define.xml 1.0, the agency specifically encourages the industry to submit define.xml version 2.0. It is quite possible that multiple studies are included in an NDA submission: some may have been completed in earlier years with the older version of define.xml, while other studies may have implemented the newer version. To keep the define.xml version consistent within a single NDA submission, there is a need to generate the newer version of define.xml for the older studies. Considering the significant amount of work already invested in checking the validity and compliance of the older define files, it is more efficient and cost effective to convert these define.xml files from version 1.0 to 2.0. This paper briefly discusses the differences between define.xml versions 1.0 and 2.0, then introduces a simple approach to perform the up-versioning. It includes the following four steps: 1. Convert the existing define.xml version 1.0 into a well-defined Excel spreadsheet. 2. Update the spreadsheet to meet the requirements of define.xml 2.0, i.e., value-level metadata and where clauses, and populate CDISC/NCI C-codes for each of the controlled terms; SAS macros have been developed to implement this process in an easy and automatic way. 3. Convert the updated Excel spreadsheet into define.xml version 2.0. 4. Perform the final compliance check and schema validation.
SS08 : A SAS® Macro Tool to Automate Generation of Define.xml V2.0 from SDTM Specification for FDA Submission
Min Chen, Alkermes
Xiangchen (Bob) Cui, Alkermes, Inc
Wednesday, 8:00 AM - 8:20 AM, Location: Centennial C
A define.xml file is required in FDA electronic submissions, in addition to the SDTM and ADaM datasets, and an insufficiently documented define.xml is a common complaint from FDA reviewers. Compared to define.xml Version 1.0, define.xml Version 2.0 is a more powerful and user-friendly standard. An SDTM programming specification spreadsheet provides the SDTM mapping and derivation rules for SDTM programming and QC, and can naturally serve as a define file. In this paper, standard SDTM specification spreadsheets were re-designed for the new features of define.xml V2.0, providing complete details for derived variables. A metadata-driven SAS macro tool was developed to automate the creation of define.xml V2.0 from CDISC SDTM specification spreadsheets, ensure the consistency of the two files, and achieve technical accuracy and operational efficiency. We hope the methodology and sample SAS code provided in this paper can save you resources and energy in your FDA submissions.
SS09 : Achieving Clarity through Proper Study Documentation: An Introduction to the Study Data Reviewer's Guide (SDRG)
Terek Peterson, Chiltern
Michael Stackhouse, Chiltern
Tuesday, 10:45 AM - 11:35 AM, Location: Centennial C
With the ever-growing standardization requirements for pharmaceutical submissions to the FDA, it can sometimes be difficult to understand where exactly to place nonconformant, essential information that does not have a home within other submission documentation. Now, with the development of the Study Data Reviewer's Guide (SDRG), you can find this information a home and communicate it effectively. This document provides reviewers with the key details they need to perform a thorough review of the data without having to search through all of the individual study material. Using the SDRG, FDA reviewers should be able to review the full documentation package of your clinical trial more quickly and with fewer questions. PhUSE has provided a step-by-step template that helps you understand what to report and where to report it, and makes sure nothing is lost, so that traceability is clear to both you and your FDA reviewers. This poster will help you understand why it is important to think about the SDRG from the start and how to use these documents to your advantage, and it will provide real examples of their flexibility and utilization.
SS11 : What is high quality study metadata?
Sergiy Sirichenko, Pinnacle 21
Tuesday, 9:45 AM - 10:35 AM, Location: Centennial C
High quality study metadata is an important part of a regulatory submission, since it allows reviewers to interpret and understand the submitted data, which means your submission can potentially move through the process more quickly. However, study metadata is the area reviewers most often cite as deficient. In fact, 77% of submissions in 2015 could not be loaded into the FDA Clinical Trial Repository, mostly due to issues with the define.xml and the Trial Summary dataset. In this presentation, we will share the most common issues with study metadata in our industry and provide recommendations on how to avoid or correct them to ensure a successful regulatory submission.
SS12 : Submission-Ready Define.xml Files Using SAS® Clinical Data Integration
Melissa Martinez, SAS Institute
Wednesday, 8:30 AM - 8:50 AM, Location: Centennial C
SAS Clinical Data Integration simplifies the transformation of raw data into submission-ready datasets that conform to CDISC data standards. It also has a built-in transformation that creates a define.xml file from a study's CDISC domains, with just a few simple selections required from the end user. With the appropriate metadata definitions, the built-in transformation will pick up and include computational algorithms and controlled terminology codelists in the resulting define.xml file. In SAS Clinical Data Integration 2.6, a new feature was added that simplifies the process of adding information about supplemental documents and value-level metadata to the define.xml file. This paper will provide examples and instructions for creating a submission-ready define.xml file complete with the appropriate computational algorithms, controlled terminology codelists, value-level metadata, and supplemental documents using SAS Clinical Data Integration. In addition to describing how to make use of the newest features in SAS Clinical Data Integration 2.6, this paper will also describe how to create a submission-ready define.xml file using earlier releases. Keywords: define.xml, define, define file, SAS, CDI, Clinical Data Integration, value level metadata, annotated CRF, supplemental document, annotated case report form, computational algorithm, controlled terminology, codelist
SS13 : The Standard for the Exchange of Nonclinical Data (SEND): History, Basics, and Comparisons with Clinical Data
Fred Wood, Accenture Life Sciences
Wednesday, 9:45 AM - 10:35 AM, Location: Centennial C
The CDISC Standard for the Exchange of Nonclinical Data (SEND) Implementation Guide (SENDIG) contains domains for general toxicology and pharmacology, carcinogenicity, and reproductive toxicology studies. The SEND model was first developed in 2002, utilizing domains described in the CDER 1999 guidance. In 2007, an effort began to completely align the SENDIG with the SDTM Implementation Guide (SDTMIG), with the first such version (v3.0) published in 2011. Since that time, the SEND team has been working to add more examples, clarify existing text and examples, and add new domains. Version 3.1, which underwent two public reviews (2014 and 2015), is expected to be posted in Q1 of this year. This paper will provide an overview of the history of SEND and its close ties to the development of the SDTM and the SDTMIG. It will also cover some of the basics of the SEND model and how the nonclinical implementation of the SDTM compares with the clinical implementation.
Techniques & TutorialsTT01 : Removing Duplicates Using SAS®
Kirk Paul Lafler, Software Intelligence Corporation
Monday, 1:15 PM - 2:05 PM, Location: Centennial B
We live in a world of data - small data, big data, and data in every conceivable size between small and big. In today's world data finds its way into our lives wherever we are. We talk about data, create data, read data, transmit data, receive data, and save data constantly during any given hour in a day, and we still want and need more. So, we collect even more data at work, in meetings, at home, using our smartphones, in emails, in voice messages, sifting through financial reports, analyzing profits and losses, watching streaming videos, playing computer games, comparing sports teams and favorite players, and countless other ways. Data is growing and being collected at such astounding rates all in the hopes of being able to better understand the world around us. As SAS professionals, the world of data offers many new and exciting opportunities, but also presents a frightening realization that data sources may very well contain a host of integrity issues that need to be resolved first. This presentation describes the available methods that are used to remove duplicate observations (or rows) from data sets (or tables) based on the row's values and/or keys using SAS®.
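Two of the most common approaches the presentation covers, sketched with hypothetical key variables:

proc sort data=raw out=nodups nodupkey dupout=dups;
  by usubjid paramcd adt;      * keep the first record per key, save the rest;
run;

proc sql;
  create table distinct_rows as
  select distinct *            /* removes fully identical rows */
  from raw;
quit;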
TT02 : The Dynamic Duo: ODS Layout and the ODS Destination for PowerPoint
Jane Eslinger, SAS Institute
Monday, 2:15 PM - 2:35 PM, Location: Centennial B
Like a good pitcher and catcher in baseball, ODS layout and the ODS destination for PowerPoint are a winning combination in SAS® 9.4. With this dynamic duo, you can go straight from performing data analysis to creating a quality presentation. The ODS destination for PowerPoint produces native PowerPoint files from your output. When you pair it with ODS layout, you are able to dynamically place your output on each slide. Through code examples this paper shows you how to create a custom title slide, as well as place the desired number of graphs and tables on each slide. Don't be relegated to the sidelines - increase your winning percentage by learning how ODS layout works with the ODS destination for PowerPoint.
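A minimal sketch of the pairing described above, placing a graph and a table side by side on one slide:

ods powerpoint file='demo.pptx';
ods layout gridded columns=2;

ods region;
proc sgplot data=sashelp.class;
  scatter x=height y=weight;
run;

ods region;
proc print data=sashelp.class(obs=5) noobs;
run;

ods layout end;
ods powerpoint close;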
TT03 : Controlling Colors by Name; Selecting, Ordering, and Using Colors for Your Viewing Pleasure
Art Carpenter, CA Occidental Consultants
Monday, 3:30 PM - 3:50 PM, Location: Centennial B
Within SAS®, literally millions of colors are available for use in our charts, graphs, and reports. We can name these colors using techniques that include color wheels, RGB (Red, Green, Blue) HEX codes, and HLS (Hue, Lightness, Saturation) HEX codes. But sometimes I just want to use a color by name. When I want purple, I want to be able to ask for purple, not CX703070 or H03C5066. But am I limiting myself to just one purple? What about light purple or pinkish purple? Do those colors have names, or must I use the codes? It turns out that they do have names: names that we can use, names that we can select and order, and names that we can use to build our graphs and reports. This paper will show you how to gather color names and manipulate them so that you can take advantage of your favorite purple, be it 'purple', 'grayish purple', 'vivid purple', or 'pale purplish blue'. Much of the control will be obtained through the use of user-defined formats. Learn how to build these formats from a data set containing a list of these colors.
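A hedged sketch of the CNTLIN idea the abstract ends with (the control data set contents are illustrative): a data set of names becomes a user-defined format via PROC FORMAT.

data ctrl;
  retain fmtname '$purp' type 'C';
  infile datalines dsd;
  input start : $12. label : $20.;
  datalines;
A,purple
B,grayish purple
C,vivid purple
;
run;

proc format cntlin=ctrl;
run;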
TT05 : Generalized Problem-Solving Techniques for De-bugging and Diagnosing Logic Errors
Brian Fairfield-Carter, inVentiv Health Clinical
Tracy Sherman, Chiltern
Monday, 4:00 PM - 4:20 PM, Location: Centennial B
The most troublesome code bugs are those that seem to elude rational diagnosis: the 'logic errors' in syntactically correct programs, where the SAS® log offers little or no insight. More disturbingly, these crop up in programs in which we otherwise place reasonable confidence, convinced as we are of the logic behind our initial efforts. Syntax errors are almost always easy to spot: they are the primary focus of built-in debugging features, and they enjoy a wealth of documentation to aid in diagnosis and correction. When it comes to logic errors, however, we're very much on our own. Logic errors are rarely captured by built-in debugging features, and discussion of strategy seldom extends beyond general recommendations like "fully understand your data" and "carefully review output". Common software-testing paradigms, which assume compartmentalization and the separation of 'interface' and 'implementation', also tend not to fit well in the analysis-programming world, meaning that instead of any formal or systematic debugging strategy, what we often see is an ad hoc 'brute force' approach to diagnosing bugs and logic errors. This paper offers, as an antidote to brute-force problem-solving, a generalized error-trapping strategy (co-opting ideas behind 'software regression testing' combined with sequential functionality-reduction), supported by simple technical solutions tailored to analysis programming. Application of this strategy is illustrated in a few case studies: trapping unintended artifacts of code revision and adaptation; problems in rendering RTF and XML output; and mysterious record and value loss in data derivation.
TT06 : SAS Functions You May Have Been "MISSING"
Mira Shapiro, Analytic Designers LLC
Monday, 4:30 PM - 4:50 PM, Location: Centennial B
Those of us who have been using SAS for more than a few years often rely on our tried-and-true techniques for standard operations like assessing missing values. Even though the old techniques still work, we often miss some of the "new" functionality added to SAS that would make our lives much easier. In an effort to ascertain how many people skipped questions on a survey, and what percentage of people answered each question, I searched past conference papers and came across two functions introduced in SAS 9.2: CMISS and NMISS. By using a combination of these functions and PROC TRANSPOSE, a full missingness assessment can be done in a concise program. This paper will provide examples and explore the features of the "newer" functions NMISS and CMISS and compare them with the longer-standing MISSING function.
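A short sketch of the combination described (survey variable names hypothetical): CMISS counts missing values across character and numeric arguments alike, so one call per respondent does the job.

data miss_count;
  set survey;
  n_skipped = cmiss(of q1-q20);   * missing answers per respondent;
run;

proc means data=miss_count n sum mean;
  var n_skipped;
run;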
TT07 : Array of Sunshine: Casting Light on Basic Array Processing
Nancy Brucken, inVentiv Health Clinical
Tuesday, 1:15 PM - 1:35 PM, Location: Centennial B
An array is a powerful construct in DATA step programming, allowing you to apply a single process to multiple variables simultaneously, without having to resort to macro variables or repeat code blocks. Arrays are also useful in transposing data sets when PROC TRANSPOSE does not provide a necessary degree of control. However, they can be very confusing to less-experienced programmers. This paper shows how arrays can be used to solve some common programming problems.
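A minimal example of the construct (variable names hypothetical): one loop applies the same conversion to five variables.

data convert;
  set labs;
  array wts{*} weight1-weight5;
  do i = 1 to dim(wts);
    wts{i} = wts{i} * 0.45359237;   * pounds to kilograms;
  end;
  drop i;
run;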
TT08 : Best Practices: Subset Without Getting Upset
Mary Rosenbloom, Alcon, a Novartis Company
Kirk Paul Lafler, Software Intelligence Corporation
Tuesday, 1:45 PM - 2:35 PM, Location: Centennial B
You've worked for weeks or even months to produce an analysis suite for a project, and at the last moment, someone wants a subgroup analysis and they inform you that they need it yesterday. This should be easy to do, right? So often, the programs that we write fall apart when we use them on subsets of the original data. This paper takes a look at some of the best practice techniques that can be built into a program at the beginning, so that users can subset on the fly without losing categories or creating errors in statistical tests. We review techniques for creating tables and corresponding titles with by-group processing so that minimal code needs to be modified when more groups are created, and we provide a link to sample code and sample data that can be used to get started with this process.
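One technique in this spirit (format and variable names hypothetical): PRELOADFMT with COMPLETETYPES keeps every category in the output even when a subset drops a level entirely.

proc format;
  value $trtf 'A' = 'Drug' 'P' = 'Placebo';
run;

proc means data=adsl(where=(sex='F')) completetypes nway n mean;
  class trt / preloadfmt;
  format trt $trtf.;
  var age;
run;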
TT09 : Formats and Informats - Concepts and Quick Reference
Emmy Pahmer, inVentiv Health
Tuesday, 2:45 PM - 3:05 PM, Location: Centennial B
Using formats and informats is very common in SAS® programming. They are used to read external data, to temporarily or permanently change how data are displayed, to categorize, or to look up related values as with a lookup table. This paper will look at how to create and use formats and informats in various contexts, and provide a quick-reference table with examples.
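A compact illustration of both directions (names hypothetical): an informat reads raw values in, and a format controls how stored values display.

proc format;
  value agegrp low-<18 = 'Child' 18-high = 'Adult';
  invalue yn 'Y' = 1 'N' = 0;
run;

data demo;
  input response $1. age;
  flag  = input(response, yn.);   * informat via the INPUT function;
  group = put(age, agegrp.);      * format via the PUT function;
  datalines;
Y 34
N 12
;
run;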
TT10 : May the Function Be With You: Helpful SAS Functions, Particularly When Handling CDISC Data
Angela Lamb, Chiltern
Tuesday, 3:30 PM - 4:20 PM, Location: Centennial B
SAS Functions are a basic component of the DATA step, but some lesser-known functions are often overlooked, leading to lengthy, complicated, and unnecessary code. Programmers can get stuck in a rut, habitually handling data one way, while a more efficient and more robust function could simplify the process. CDISC data presents us with unique programming challenges. One such challenge is the need to handle long, ISO 8601 formatted dates. Functions can be very helpful in processing these dates--for instance, in breaking them apart to calculate durations. This paper explores the basics of SAS functions, some that are especially useful for handling CDISC data, and functions that are lesser-known but invaluable to the clinical trials programmer.
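For instance (variable names hypothetical), the E8601 informats read ISO 8601 values directly:

data dates;
  adtc = '2016-05-08';
  adt  = input(adtc, e8601da10.);                   * yyyy-mm-dd to SAS date;
  adtm = input('2016-05-08T13:45:00', e8601dt19.);  * ISO datetime;
  format adt date9. adtm datetime20.;
run;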
TT11 : Exploring HASH tables vs. SORT/Data step vs. PROC SQL
Lynn Mullins, PPD
Richann Watson, Experis
Tuesday, 4:30 PM - 4:50 PM, Location: Centennial B
There are often times when programmers need to merge multiple SAS® data sets into one single source data set. Like many other processes, there are various techniques to accomplish this using SAS software. This paper explores which method is most efficient under varying assumptions. We will describe the differences, advantages, and disadvantages of HASH tables, PROC SORT followed by a DATA step merge, and the SQL procedure, and display benchmarks for each.
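A sketch of the hash technique among those compared (data set and variable names hypothetical): the smaller table is loaded into memory and matched without sorting either input.

data combined;
  length age 8 sex $1;
  if _n_ = 1 then do;
    declare hash dm(dataset: 'work.dm');
    dm.definekey('usubjid');
    dm.definedata('age', 'sex');
    dm.definedone();
  end;
  set work.ae;
  call missing(age, sex);          * clear lookup fields each iteration;
  if dm.find() = 0 then output;    * keep matching records;
run;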
TT12 : Let's Make Music: Using SAS Functions for Music Composition
Kim Truett, KCT Data, Inc
Zak Truett, KCT Data, Inc
Wednesday, 8:00 AM - 8:20 AM, Location: Centennial B
SAS has been experimented with for entertaining diversions before, such as the Pegboard Game, Tetris and Solitaire, so we wondered if we could get SAS to write music using improvisational rules. Improvisation in music is not just playing random notes - it follows a defined set of rules - sometimes just one or two, sometimes many. Following a basic set of these rules for note progression and duration, we use SAS random number functions and weighted probabilities to determine what the next note or chord will be (the pitch), and how long the note is played (the duration). The goal of this light-hearted talk is to illustrate the basic use of SAS functions to compose a few bars of music that can be played.
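A taste of the mechanism (the note set and weights are invented for illustration): RAND('TABLE', ...) draws an index with the stated probabilities, which selects the next pitch and duration.

data notes;
  call streaminit(42);
  array pitch{5} $2 _temporary_ ('C' 'D' 'E' 'G' 'A');
  do beat = 1 to 16;
    k    = rand('table', 0.30, 0.25, 0.20, 0.15, 0.10);
    note = pitch{k};
    dur  = choosen(rand('table', 0.5, 0.3, 0.2), 1, 2, 4);  * beats held;
    output;
  end;
  keep beat note dur;
run;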
TT14 : Setting the Percentage in PROC TABULATE
David Franklin, Quintiles Real World Late Phase Research
Wednesday, 8:30 AM - 8:50 AM, Location: Centennial B
PROC TABULATE is a very powerful procedure that computes statistics and frequency counts very efficiently, but it also has the capability of calculating percentages at many levels for a category. This paper looks at the automatic percentage calculations that are provided, and then delves into how a user can specify the denominator for a custom percentage.
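A small sketch of a denominator definition (data set and variable names hypothetical): the angle brackets after PCTN name the classification over which the percentage is taken.

proc tabulate data=adsl;
  class trt sex;
  table sex, trt*(n pctn<sex>);   * percent within each treatment column;
run;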
TT16 : Beyond IF THEN ELSE: Techniques for Conditional Execution of SAS Code
Josh Horstman, Nested Loop Consulting
Wednesday, 9:00 AM - 9:20 AM, Location: Centennial B
Nearly every SAS program includes logic that causes certain code to be executed only when specific conditions are met. This is commonly done using IF-THEN-ELSE syntax. In this paper, we will explore various ways to construct conditional SAS logic, including some that may provide advantages over the IF statement. Topics include the SELECT statement, the IFC and IFN functions, and the COALESCE function, as well as some more esoteric methods, and we'll make sure we understand the difference between a regular IF and the %IF statement in the macro language.
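A quick sketch of three of the constructs mentioned (variable names hypothetical):

data grades;
  set scores;
  select;
    when (score >= 90) grade = 'A';
    when (score >= 80) grade = 'B';
    otherwise          grade = 'F';
  end;
  passfail = ifc(score >= 60, 'Pass', 'Fail');  * inline character IF;
  bonus    = ifn(score >= 95, 100, 0);          * inline numeric IF;
run;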
TT17 : Capturing Macro Code when Debugging in the Windows Environment: The Power of MFILE and the Simplicity of Pasting
Kevin Viel, inVentiv Health Clinical
Wednesday, 9:45 AM - 10:05 AM, Location: Centennial B
Macros are ubiquitous in a programmer's toolkit. During development, especially of long or complex or even simple macros, deciphering warnings and errors can be problematic. Copying and editing the log is resource intensive and subject to errors. The goal of this paper is to demonstrate a macro that captures the SAS code produced by a macro using the MFILE system option and allows the user to simply paste that code in the Windows environment to an Enhanced editor for review and interactive submission or %INCLUDE it, thus making possible the tracking of errors and warnings to their exact line number in the resulting log.
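The option pairing at the heart of the paper looks like this (the output path and macro call are hypothetical); MFILE requires a fileref named MPRINT and the MPRINT option to be in effect:

filename mprint 'C:\temp\generated.sas';  * generated code lands here;
options mprint mfile;
%mymacro(param=1)
options nomfile;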
TT18 : Duplicate records - it may be a good time to contact your data management team
Sergiy Sirichenko, Pinnacle 21
Wednesday, 10:15 AM - 10:35 AM, Location: Centennial B
Most programmers are already familiar with the concept of duplicate records, where multiple records are identical in values across all variables. These duplicates are easy to catch and clean. However, there are also cases where clinical data has more than one expected record for the same assessment at the same time point with different results. These are much harder to manage and can complicate analysis by producing incorrect outcomes. In this presentation, we will examine the concepts of the unique observation and key variables. We'll review common causes and examples of duplicate records during the data collection and mapping process. And finally we'll demonstrate how to detect, clean, and document duplicate records.
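A typical detection query along the lines described (domain and key names hypothetical):

proc sql;
  create table dup_keys as
  select usubjid, vstestcd, visit, count(*) as n_rec
  from vs
  group by usubjid, vstestcd, visit
  having count(*) > 1;    * expected-unique keys with multiple records;
quit;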