PharmaSUG Single Day Event
Tokyo, Japan - SAS Institute offices
September 4, 2018
Analytic Evolution: Exploring the Next Phase of Drug Development
|Poster||Presenter||Download|
|The Implementation of RBM in Chugai||Juri Kato, Chugai Pharmaceutical Co., Ltd.||Poster (PDF, 0.5MB)|
|Dealing with Time-Varying Confounding Using SAS||Ryo Nakaya, Takeda Pharmaceutical Company, Ltd.||Poster (PDF, 1.0MB)|
|Data Visualization by Python: Data from Pandas to Matplotlib||Yuichi Nakajima, Novartis||Poster (PDF, 0.9MB)|
|Case Study on Central Monitoring in RBM||Yukikazu Hayashi, A2 Healthcare Corporation||Poster (PDF, 0.6MB)|
Presentation Abstracts
Electronic Data Submission and Utilization in Japan
Hiromi Sugano, Biostatistics Reviewer, Office of New Drug II / Office of Advanced Evaluation with Electronic Data, Pharmaceuticals and Medical Devices Agency (PMDA)
The Pharmaceuticals and Medical Devices Agency (PMDA) started to accept electronic clinical study data with New Drug Applications on October 1st, 2016. The study data have been successfully received, and the new drug reviewers in PMDA, mainly biostatistics and medical reviewers, use the submitted data in their new drug reviews. PMDA has issued several guidance documents and FAQs so far, and since the transitional period will end on March 31st, 2020, PMDA is now preparing for the next phase. In this presentation, practical cases of utilizing submitted data in the review process will be presented in detail, and the current status and future perspective of electronic data submission will be shown.
New CFDA Requirements in NDA and Its Implementation in Process
Eason Yang, Senior Principal Statistical Programmer, Novartis
This presentation focuses on:
- Background of the CFDA reform: why the reform has been happening, and its objectives and actions.
- New regulations, guidelines and requirements that have been released since 2015, including:
- Guidelines for MRCT, General Considerations to Clinical Trials, Biostatistics Principles, Communications for Drug Development and Technical Evaluation, Electronic Data Capture, Data Management Planning and Reporting of Statistical Analysis, eCTD Implementation, Post Approval Safety Surveillance
- On-site inspection requirements
- Priority Review & Approval Procedure
- New Chemical Drug Registration Classification
- Data Protection Regime
- Adjustment of Imported Drug Registration
- Case study of the breakthrough heart failure treatment Entresto® (21% reduction in CV mortality or HF hospitalization), and how the new regulations enabled this drug to win CFDA approval merely two years after its launch in Europe and the US.
- Summary of the trend and landscape of the future environment of new drugs development in China
Best Practice for e-Study Data Submission to PMDA
CJUG ADaM Team:
|Takashi Kitahara (Novartis Pharma K.K.)||Tomotaro Shiraishi (A2 Healthcare Corporation)|
|Ayuko Yamamura (Eli Lilly Japan K.K.)||Yoshifumi Arita (Bayer Yakuhin, Ltd.)|
|Akira Kurisu (MSD K.K.)||Ataru Nogawa (intellim Corporation)|
|Yasuhiro Iijima (Novartis Pharma K.K.)||Yohei Takanami (Takeda Pharmaceutical Company, Ltd.)|
|Ayako Noda (Janssen Pharmaceutical K.K.)|
Electronic study data submission (eData submission) to the Pharmaceuticals and Medical Devices Agency (PMDA) began in October 2016 with a 3.5-year transitional period, and will be mandatory starting in April 2020. Several sponsors have already experienced eData submissions to PMDA during the transitional period, and know-how has gradually accumulated at each sponsor.
Therefore, the CDISC Japan User Group (CJUG) ADaM team has been discussing best practices and creating a document summarizing useful tips for eData submission to PMDA based on our practical experiences. In this presentation, we will provide lessons learned, major considerations, and key factors for successful eData submission to PMDA, such as identifying the study data to be submitted, the consultation on the data format for electronic study data submission, validating conformance to the CDISC standards, and the timeline for eData submission.
Data Mapping Using Machine Learning
Toru Tsunoda, Information Platform Innovation Group, Platform Solution Division, SAS Institute Japan
Standards define the targets to which source data needs to be mapped. People can interpret these targets in different ways, which can lead to inconsistencies in the resulting standardized data. The ability to allow different teams working on similar studies (possibly located at different offsite locations or offshore sites) to re-use prior knowledge gained by the team would not only save significant time in mapping studies, but also increase the quality in the resulting standardized data.
This session discusses capturing source-to-destination data mappings as metadata in centralized libraries, and applying machine learning algorithms to streamline and predict mappings for newer studies whose metadata is similar to already-mapped studies. This process could lead to consistent destination data mappings and can significantly reduce mapping time by re-using system-suggested mappings.
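As a rough illustration of the idea, a centralized mapping library can be queried with simple string similarity before any trained model is applied. The column names, SDTM targets and threshold below are hypothetical; a production system would learn from far richer metadata than names alone.

```python
from difflib import SequenceMatcher

# Hypothetical library of previously mapped source columns -> SDTM targets.
# In practice this metadata would come from a centralized mapping repository.
MAPPING_LIBRARY = {
    "subject_id": "USUBJID",
    "visit_date": "SVSTDTC",
    "adverse_event_term": "AETERM",
    "dose_amount": "EXDOSE",
}

def suggest_mapping(source_column, library=MAPPING_LIBRARY, threshold=0.6):
    """Suggest a destination variable for a new source column by
    similarity to columns already mapped in the library."""
    best_target, best_score = None, 0.0
    for known, target in library.items():
        score = SequenceMatcher(None, source_column.lower(), known).ratio()
        if score > best_score:
            best_target, best_score = target, score
    return best_target if best_score >= threshold else None

print(suggest_mapping("subj_id"))  # close to "subject_id" -> USUBJID
```

A real system would also surface the similarity score so a mapper can accept or override the suggestion rather than trust it blindly.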
Automated Generation of PowerPoint Presentations Using R in Clinical Studies
Nobuo Funao, Biostatistics, Takeda Development Center Japan, Takeda Pharmaceutical Company, Ltd.
In the closing stage of a clinical study, we provide a statistical analysis result (SAR) with tables and figures, and then create a clinical study report (CSR) based on the result. In the meantime, we also create a PowerPoint slide deck (say, a topline report) containing a brief summary of the study to report to managers or directors. When we create the slide deck, we usually copy contents from the SAR and paste them into the slides. Due to this manual labor, however, the slides can contain several errors (e.g., transcription or copy-and-paste mistakes). I would like to introduce an efficient way to automatically generate a PowerPoint slide deck using R in order to reduce work time and avoid errors. This presentation will also include an introduction to the R packages "officer" and "flextable", and an application to a virtual clinical study with CDISC/ADaM data.
Using R-Shiny as a Data Reviewing and Validation Tool in Clinical Trials
Markus Niederstrasser, Senior Statistical Programmer, Novartis Pharma K.K.
SAS has been widely used for statistical analysis and reporting throughout the entire clinical drug development cycle. Backward compatibility, reliability, scalability and the ability to batch process were often crucial considerations. During the last decade, CDISC has become established as the common language for how data should be structured and shared in the pharmaceutical industry. Automatic validation tools like Pinnacle 21 provide the opportunity to further improve the quality and compliance of clinical data. Based on these established standards, metadata-driven reporting systems attempt to use such data directly to relieve the burden of repetitive programming for a large set of deliverables. Besides these solutions, especially during the trial execution phase, reviewing, validating and reporting of "work in progress" data may also be needed, sometimes frequently. These relatively time-consuming tasks consist mainly of examining data within a table viewer, or running short pieces of code to select, group or summarize data for further investigation. Often the outcome should be exported and shared within a team quickly.
In this presentation, I would like to illustrate how a web-based solution based on R in combination with the packages Shiny, data.table and haven could be used to support such tasks. Shiny is an open-source R package which provides an easy, small and powerful web framework for building interactive web applications using R. Haven enables R to read in datasets directly from the native sas7bdat format, and the data.table package provides an enhanced version of R data frames in terms of speed and memory consumption. The solution uses pre-generated native SAS datasets (following the CDISC SDTM and ADaM standards), and data will be processed in-memory on a capable back-end server. In contrast, the slim local web client application allows the user to set up a report from the user interface, having the flexibility to choose various parameters that affect the content of the generated report. Settings can be saved locally, which allows the user to come back in another session and quickly run a previous set-up without having to remember all of the prior parameter values. In addition, a CSV data export and a simple SAS code generator have been implemented for further usage.
Past and Future of Our AI Making Use of Data Governance: How to Make Process/Product Innovations
Ryo Kiguchi and Shogo Miyazawa, Data Science, Biostatistics Center, Global Development Division, Shionogi & Co., Ltd
We define Artificial Intelligence (AI) as a system with a series of processes of “Recognition”, “Learning” and “Action” that assists people's activities. Various types of data are used in AI, so the methods of recognition, learning and action differ depending on the data format. However, "Data Governance", which collects, manages and archives any data (including information) for the purpose of innovation, is extremely important regardless of the data format. Based on Data Governance using Python and SAS, we have used AI for "process innovation" in the past. Specifically, our "AI SAS programmer" system semi-automatically creates SAS programs to analyze clinical data. Using this system, we achieved a 33% reduction in analysis work. Currently, we are considering cross-cutting use of the data governance technology acquired through the AI system. In the future, we will apply this technology to "product innovation" in new drug development, and we will introduce part of the idea.
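Shionogi's actual "AI SAS programmer" is not public; as a minimal sketch of the general idea, a SAS program can be generated semi-automatically by filling a template from analysis metadata. The metadata fields, dataset name and template here are invented for illustration.

```python
from string import Template

# Hypothetical analysis metadata; a real system would derive this from
# standardized analysis specifications rather than a hand-written dict.
spec = {
    "dataset": "adsl",
    "treatment_var": "TRT01P",
    "analysis_var": "AGE",
}

# Template for a simple descriptive-statistics SAS program.
SAS_TEMPLATE = Template("""\
proc means data=${dataset} n mean std;
    class ${treatment_var};
    var ${analysis_var};
run;
""")

def generate_sas_program(spec):
    """Fill the SAS program template from analysis metadata."""
    return SAS_TEMPLATE.substitute(spec)

print(generate_sas_program(spec))
```

The gain comes from scale: one vetted template plus machine-readable metadata can emit many consistent programs, which is where the reported reduction in analysis work would originate.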
Technology Overview About Artificial Intelligence for Clinical Data Science
Ippei Akiya, Founder and CEO, DataDriven, Inc.
The development of artificial intelligence (AI) in recent years is expected to bring about a great social impact. Image recognition, voice recognition, chatbots, task automation, etc. are successful examples of using AI. However, the definition of AI is ambiguous and varies from person to person. My definition of AI is a complex computer system with a knowledge base, natural language processing, machine learning, and other components. I will give a technology overview of AI and introduce our technology challenges with knowledge graphs and natural language processing for creating clinical trial data with CDISC standards.
Mapping Reported Term for the Adverse Event into MedDRA Using Deep Learning
Yoshihiro Nakashima, Manager, Standardization and Management Group, Data Science, Development, Astellas Pharma Inc.
Today, artificial intelligence (AI) is said to be in its third boom, and not a day passes without hearing about AI. This boom is led by deep neural networks (DNNs) using deep learning techniques. DNNs have made tasks such as image recognition, natural language processing and speech recognition more advanced than ever. In this presentation, I will implement deep learning to map reported terms for adverse events into MedDRA. The procedure consists of two steps. First, convert each word in a reported term into a numeric vector produced by word2vec using Wikipedia data. Second, train the DNN on words represented by these numeric vectors. Through this implementation, I would like to examine the applicability of AI technology to clinical trials.
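The two steps above can be sketched in miniature. The toy two-dimensional vectors below stand in for real word2vec output (which would have hundreds of dimensions), the preferred terms are hypothetical, and nearest-neighbor cosine matching stands in for the trained DNN classifier the abstract describes.

```python
import math

# Toy word vectors standing in for word2vec output trained on Wikipedia.
WORD_VECTORS = {
    "head": [0.9, 0.1], "ache": [0.8, 0.2], "pain": [0.7, 0.3],
    "nausea": [0.1, 0.9], "vomiting": [0.2, 0.8],
}

# Hypothetical MedDRA preferred terms with reference vectors built the
# same way (mean of their constituent word vectors).
PT_VECTORS = {
    "Headache": [0.85, 0.15],
    "Nausea": [0.15, 0.85],
}

def embed(reported_term):
    """Step 1: mean of the word vectors for the words in a reported term."""
    vecs = [WORD_VECTORS[w] for w in reported_term.lower().split()
            if w in WORD_VECTORS]
    if not vecs:
        return None
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b)) / (math.hypot(*a) * math.hypot(*b))

def map_to_pt(reported_term):
    """Step 2 (stand-in for the DNN): nearest preferred term by cosine."""
    v = embed(reported_term)
    if v is None:
        return None
    return max(PT_VECTORS, key=lambda pt: cosine(v, PT_VECTORS[pt]))

print(map_to_pt("head ache"))  # -> Headache
```

A trained DNN replaces the nearest-neighbor step with a learned decision boundary, which handles spelling variants and word order far better than this sketch.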
Simulation Study to Use “Real World Data” Using RDB and Hadoop
Hiroshi Ohtsu, Clinical Epidemiology Section, Department of Data Science, Center for Clinical Sciences, and JCRAC Data Center, National Center for Global Health and Medicine
Shiro Matsuya, Clinical Epidemiology Section, Department of Data Science, Center for Clinical Sciences, National Center for Global Health and Medicine
We are now living in an era of information technology and face an information explosion with huge amounts of data. Wikipedia defines big data as "data sets that are so voluminous and complex that traditional data-processing application software are inadequate to deal with them." If you need to handle big data in a clinical research setting, our experimental report provides several lines of evidence that modern massively parallel processing SQL query engines like Apache Impala surpass traditional relational databases in most cases.
We focused on comparing the search performance of MySQL, a traditional relational database, and Impala, a native analytic database for Hadoop, using the OSIM2 data. Using MySQL and Impala separately, we tried to create a data set based on a pseudo-research question described in the report. CDH, Cloudera's open-source Apache Hadoop distribution, was used to evaluate the performance of Impala, which ran on several virtual machines on one physical machine. We will also show the tentative flow from these databases to SAS analysis, and discuss the possibility of integrated analysis.
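To make the comparison concrete, the kind of pseudo-research-question query involved can be illustrated with a tiny example. SQLite stands in here for MySQL/Impala (the same SQL runs on both with minor dialect tweaks), and the simplified drug_exposure schema only loosely mimics an OSIM2-style table; it is purely illustrative.

```python
import sqlite3

# In-memory SQLite database as a stand-in for MySQL or Impala.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE drug_exposure (
    person_id INTEGER, drug_concept_id INTEGER, exposure_start TEXT)""")
rows = [(1, 100, "2010-01-01"), (1, 200, "2010-02-01"),
        (2, 100, "2010-03-01"), (3, 100, "2010-04-01")]
conn.executemany("INSERT INTO drug_exposure VALUES (?, ?, ?)", rows)

# A pseudo-research question: how many distinct patients were exposed
# to each drug?  On OSIM2-scale data this aggregation is where an MPP
# engine like Impala and a single-node RDBMS diverge in performance.
query = """
SELECT drug_concept_id, COUNT(DISTINCT person_id) AS n_patients
FROM drug_exposure
GROUP BY drug_concept_id
ORDER BY n_patients DESC
"""
for drug, n in conn.execute(query):
    print(drug, n)  # drug 100 -> 3 patients, drug 200 -> 1 patient
```

The resulting patient-level data set would then be exported for the SAS analysis step described above.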
We Choose Intelligent Chatbot or AI Chatbot?
Riwa Tanaka, Medical Information Management Group, Medical Information Dept., Chugai Pharmaceutical Co., Ltd.
Poster Abstracts
The Implementation of RBM in Chugai
Juri Kato, Clinical Information & Intelligence Department, Chugai Pharmaceutical Co., Ltd.
At Chugai, the RBM implementation working group was launched in February 2015, and the RBM methodology has been deployed in all studies since 2017. I would like to introduce the activities carried out by the RBM implementation working group and the Chugai RBM methodology: which policies each study team has to follow and which documents each study team has to prepare in the study set-up and conduct phases (e.g., study selection criteria, RACT, SDV frequency, functional plans, central monitoring).
Dealing with Time-Varying Confounding Using SAS
Ryo Nakaya, Takeda Pharmaceutical Company, Ltd.
Time-varying confounding should be treated appropriately in observational research, since the choice of therapy in daily clinical settings is usually made based on patient status, including treatment and covariate history. Although the CAUSALTRT procedure handles inverse probability weighting (IPW) methods in the non-time-varying situation, it does not cover the time-varying situation. Examples of SAS programming for time-varying IPW will be presented using simulated data, together with background on the methodology.
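The core computation in time-varying IPW is a cumulative product of per-visit probability ratios. The following sketch shows that arithmetic in Python rather than SAS, for one subject, with entirely hypothetical fitted probabilities; the actual presentation covers the SAS implementation.

```python
from functools import reduce

# Hypothetical per-visit probabilities of the treatment actually received:
# numerator from a model with baseline covariates and treatment history only,
# denominator from a model that also includes time-varying covariates.
p_num = [0.60, 0.55, 0.50]
p_den = [0.80, 0.40, 0.70]

def stabilized_weight(p_num, p_den):
    """Stabilized IPW weight: the product over visits of
    numerator/denominator probabilities of the observed treatment."""
    ratios = (n / d for n, d in zip(p_num, p_den))
    return reduce(lambda acc, r: acc * r, ratios, 1.0)

w = stabilized_weight(p_num, p_den)
print(round(w, 4))  # 0.75 * 1.375 * 0.7143 ≈ 0.7366
```

A weighted analysis of the pseudo-population created by these weights then removes the time-varying confounding, under the usual exchangeability and positivity assumptions.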
Data Visualization by Python: Data from Pandas to Matplotlib
Yuichi Nakajima, Manager, Principal Statistical Programmer, Novartis
Python is one of the most popular programming languages in recent years. This poster will explain how to display CDISC data using the pandas and Matplotlib libraries.
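A minimal sketch of the pandas-to-Matplotlib flow the poster describes might look like the following; the tiny hand-made ADSL-style data and variable names are invented for illustration, and a real study would read the data with pd.read_sas.

```python
import matplotlib
matplotlib.use("Agg")          # render off-screen; no display needed
import matplotlib.pyplot as plt
import pandas as pd

# Tiny hand-made stand-in for an ADaM ADSL dataset.
adsl = pd.DataFrame({
    "TRT01P": ["Placebo", "Placebo", "Active", "Active", "Active"],
    "AGE":    [54, 61, 58, 49, 65],
})

# Summarize in pandas, then hand the result to Matplotlib.
mean_age = adsl.groupby("TRT01P")["AGE"].mean()
fig, ax = plt.subplots()
mean_age.plot.bar(ax=ax)
ax.set_ylabel("Mean age (years)")
ax.set_title("Mean age by treatment group")
fig.savefig("age_by_trt.png")
```

The same pattern (group/summarize in pandas, plot via an explicit Axes) scales from this toy example to real SDTM/ADaM data.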
Case Study on Central Monitoring in RBM
Yukikazu Hayashi, A2 Healthcare Corporation
With ICH E6(R2), the number of trials using RBM has been increasing. At A2 Healthcare, we already have experience with RBM trials. We introduce our method of data monitoring via central monitoring for RBM and present some results of central monitoring.
Presenter Biographies
Ippei Akiya
Ippei Akiya has over 15 years’ experience as a clinical programmer, project lead, data standards consultant, and clinical data scientist. He worked at CROs from 2000 until he founded DataDriven, Inc. in 2015. He is the author of the R4DSXML R package for importing CDISC Dataset-XML files. Currently he is working on establishing an automated process to generate SDTM, ADaM and TFLs by utilizing data, metadata, and machine-understandable knowledge.
CJUG ADaM Team
The CDISC Japan User Group (CJUG) ADaM Team consists of 68 members (as of April 2018) from pharmaceutical companies, CROs, academia and regulatory agencies. Their objectives are to discuss issues and provide recommendations on the ADaM standards and the whole eData submission process, and to provide materials that support the creation of ADaM datasets and other ADaM-related deliverables.
Mr. Nobuo Funao has worked for over 14 years at Takeda Pharmaceutical Company Limited as a biostatistician on clinical studies. Mr. Funao has made several presentations at external conferences and given lectures at universities. Mr. Funao has also published several books about R and SAS.
Mr. Yukikazu Hayashi has been working for about 20 years at A2 Healthcare Corporation. Mr. Hayashi has a background in biostatistics and serves as Director of the Data Science Division.
Ms. Juri Kato worked for 13 years as a data manager at Chugai Clinical Research Center, a subsidiary of Chugai Pharmaceutical Co., Ltd., then transferred to the Chugai head office in April 2018. Ms. Kato has been in the RBM implementation working group for 2 years.
Mr. Ryo Kiguchi has worked for over 4 years at Shionogi & Co., Ltd. as a Data Scientist. Mr. Kiguchi has conducted research on AI for big data analysis with SAS and Python.
Mr. Matsuya is a part-time researcher at the National Center for Global Health and Medicine (NCGM), Tokyo, Japan. He started his career as a software engineer at Hitachi, Ltd. Prior to joining NCGM, he gained ten years of research experience in medical informatics at the University of Tokyo Hospital. His research interest lies in the area of probabilistic databases and machine learning. He received a B.S. degree in mathematics from Hokkaido University, Hokkaido, Japan.
Mr. Shogo Miyazawa has worked for over 1 year at Shionogi & Co., Ltd. as a Data Scientist. Mr. Miyazawa has driven business innovation with SAS and Python.
Yuichi Nakajima is a manager in the statistical programming group of Novartis Pharma K.K. He joined Novartis in December 2010 as a statistical programmer after 3.5 years of experience as a statistician at a domestic CRO. He has made several presentations at domestic and global conferences and has also been organizing a global conference in Japan.
Yoshihiro Nakashima is currently manager of the Standardization & Management Group at Astellas Pharma Inc. He works on standardization using SDTM, ADaM and standard TFL templates, vendor management, and a safety data analysis working group. He has over 10 years of experience as a biostatistician in urology, gastroenterology and oncology.
Mr. Ryo Nakaya has been working for 10 years at Takeda Pharmaceutical Company Limited. Mr. Nakaya is responsible for statistical analysis of clinical trials, CDISC, and observational database studies.
Markus Niederstrasser works at Novartis Pharma K.K. as a senior statistical programmer. He worked at a research institute for 5 years and within the pharmaceutical industry for 11 years. He has lived in Japan since 2013.
Mr. Ohtsu is the manager of Clinical Epidemiology, Center for Clinical Sciences at the National Center for Global Health and Medicine (NCGM), Tokyo, Japan. At the same time, he is manager of the JCRAC Data Center at NCGM and an adjunct researcher at Waseda University. He received B.S. and M.S. degrees in mathematics from Kyushu University, Fukuoka, Japan. His research interests include biostatistics, clinical research methodology, and regulatory science. His current research project is the utilization of “real world evidence” through data cooperation with an eye to sustainability. He previously worked at Fujisawa Pharmaceutical (now Astellas Pharma), the University of Tokyo, and Juntendo University.
Hiromi Sugano is a Biostatistics Reviewer at the Pharmaceuticals and Medical Devices Agency (PMDA), Japan. She is in charge of biostatistics review and consultation in the Office of New Drug II, and has mainly reviewed cardiovascular disease-related drugs so far. Additionally, she works for the Office of Advanced Evaluation with Electronic Data, where she supports the utilization of submitted and accumulated electronic data in PMDA by offering reviewers training and practical assistance in the use of analysis software.
Riwa Tanaka is a data scientist in the Medical Information Dept. at Chugai Pharmaceutical Co., Ltd. She has been working on text data analysis, analytic support, and the implementation of new technology solutions in the call center since she joined MI two years ago.
Toru Tsunoda has been in charge of business development and sales support at SAS Institute Japan for Japanese health and life science industries for 9 years. Before joining SAS, he provided pharmaceutical & information technology companies with management consulting services at global & domestic consulting firms.
Yi (Eason) Yang is currently a Senior Principal Statistical Programmer and global lead of the programming team for Entresto® (LCZ696), a new foundation of care for heart failure with reduced ejection fraction. He joined Novartis in 2010 and has been leading multiple Phase III global trials and pooling activities in the Cardiovascular & Metabolism and Immunology & Dermatology therapeutic areas. He is also actively participating in the organization and operation of the PharmaSUG China conference as a member of the conference committee.