Friday, May 15, 2015

Big Data in Clinical Trials: The Opportunities and Challenges

By Kristina Lopienski, Product Marketing Manager, Forte Research Systems, Inc.

At Partnerships in Clinical Trials last month, several sessions hit on the theme of big data and the potential insights that lie in the vast data sets generated by clinical research. The conference hosted several discussions around the opportunities and challenges of harnessing and analyzing existing data.

One session, titled “Can Big Data Cure Cancer?” was a conversation between Atul Butte, MD, PhD, Chief of the Division of Systems Medicine and Associate Professor of Pediatrics and Medicine at Stanford University, and Professor Brendan Buckley, Chief Medical Officer of ICON plc. One interesting discussion point that came up in this talk was the role of CROs in the big data revolution.

CROs are sitting on tons of data. Yet the general belief and practice has been that once a trial is over, the data has no further value and goes straight to the attic. Most CROs spend close to nothing on R&D, yet those that see the results of tens of thousands of trials could potentially help predict the best design for the next one. This could even become a competitive advantage for them.

CROs may be one way to achieve better analysis and metrics, with the potential to provide benchmark data across sponsors on things such as the average time it takes to recruit patients, patterns in lab tests, and how long trials took. Of course, there are concerns: the data belongs to the sponsor, the CRO is bound by contract, and relationships with sponsors may be very transactional. However, overcoming these challenges can open the door to a new realm of possibilities. Whether using this example or another opportunity, Butte challenged the audience to use their data to disrupt their own organizations and reinvent their business.
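As a rough illustration only, here is a minimal Python sketch of the kind of cross-sponsor benchmarking a CRO could compute from its operational records. The field names and values are hypothetical, not any real CRO's data model:

    from statistics import mean, median

    # Hypothetical operational records a CRO might hold across sponsors.
    trials = [
        {"sponsor": "A", "enrollment_days": 210, "duration_days": 720},
        {"sponsor": "B", "enrollment_days": 150, "duration_days": 540},
        {"sponsor": "C", "enrollment_days": 300, "duration_days": 900},
    ]

    def benchmark(records, field):
        """Return simple benchmark statistics for one metric across trials."""
        values = [r[field] for r in records]
        return {"mean": mean(values), "median": median(values),
                "min": min(values), "max": max(values)}

    print("Recruitment time:", benchmark(trials, "enrollment_days"))
    print("Trial duration:", benchmark(trials, "duration_days"))

Even summary statistics this simple, aggregated over thousands of trials, would give sponsors context no single organization could produce on its own.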

In a separate session, titled “Big Data, Big Patient Impact,” the moderator polled the audience on the top big data challenges within life sciences. The top three answers were deciding what data is relevant, a lack of big data analytics and data science staff, and determining the appropriate technology.

The panel discussed that before we can curate and analyze data, the first step is to aggregate it, since it resides in disparate systems and comes from different sources. With a diversity of files and large amounts of multi-structured information, having it all in a standardized format opens it up for further analysis. After all, the data doesn’t make sense unless it all comes together.
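To make that aggregation step concrete, here is a minimal sketch, assuming two hypothetical source schemas, of mapping records from disparate systems into one standardized shape before analysis:

    # Two hypothetical source systems store the same lab result differently.
    def from_system_a(row):
        return {"subject_id": row["SUBJ"], "test": row["LBTEST"],
                "value": float(row["LBORRES"])}

    def from_system_b(row):
        return {"subject_id": row["patient"], "test": row["lab_name"],
                "value": float(row["result"])}

    raw_a = [{"SUBJ": "001", "LBTEST": "ALT", "LBORRES": "34"}]
    raw_b = [{"patient": "002", "lab_name": "ALT", "result": "41"}]

    # One uniform format, ready for further analysis.
    standardized = ([from_system_a(r) for r in raw_a] +
                    [from_system_b(r) for r in raw_b])
    print(standardized)

The mapping functions are the whole trick: once every source is translated into the same record shape, downstream analysis no longer has to care where the data came from.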

In addition, pharma needs to be able to detect small signals and subtle anomalies in these huge data sets. This calls for a new kind of role, and pharma can look to other industries that are more sophisticated in this area. For example, when it comes to fraud detection, banks go through the data from every single customer at every single location to know when those small signals and subtle anomalies are significant. As we work toward advancing medicine at a faster pace, pharma must make better use of its greatest resource, data, and continue to evolve in this area.
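As a simple illustration of the underlying idea, and not any particular bank's or pharma company's method, here is a sketch that flags values deviating sharply from the rest of a series using a z-score threshold:

    from statistics import mean, stdev

    def flag_anomalies(values, threshold=2.5):
        """Flag points more than `threshold` standard deviations from the mean."""
        mu, sigma = mean(values), stdev(values)
        return [(i, v) for i, v in enumerate(values)
                if sigma > 0 and abs(v - mu) / sigma > threshold]

    # Illustrative series: one reading is far outside the normal pattern.
    readings = [98, 101, 99, 100, 102, 97, 140, 100, 99]
    print(flag_anomalies(readings))  # -> [(6, 140)]

Real fraud-detection and safety-signal systems use far richer models than a single threshold, but the principle of separating meaningful signals from background noise is the same.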

What we can actually do with this data – from understanding why previous protocols failed and acting on already existing data sets to sharing negative trial results – ties into another well-covered topic at #PCTUS: data transparency, which I’ll cover in the next blog post. Stay tuned!


About the Author: Kristina Lopienski is the Product Marketing Manager for Overture EDC at Forte Research Systems. In her role, she works to bring educational resources to clinical research professionals. She writes on a variety of topics affecting clinical trials on the Forte Clinical Research Blog. Kristina served as the guest blogger covering Partnerships in Clinical Trials 2015. 



