by Michael Christel, Pharmaceutical Executive
Most will likely agree that the data explosion taking place in clinical research today has moved well beyond trend status. It’s very real; industry attention in this area—as a legitimate force for change—is building considerably. And the focus is wide-ranging: big data, eSource, real-world data, electronic health records, wearables, advanced analytics, remote patient monitoring—and on and on.
These are indeed exciting times to dream big on how these new data technologies and process improvements can potentially revolutionize clinical trials. But before we get too caught up in the grandeur of what could be, a prudent lens on things is also warranted. R&D experts and clinical research professionals are often quick to point out that one of the biggest issues in managing this data convergence in the life sciences is connecting reality with the conceptual promise of these “next-generation” information tools and processes.
“Like so many industries, the drug development enterprise tends to approach the use of technologies with this tremendous exuberance that is so far removed from the realities,” says Ken Getz, the longtime chairman of CISCRP and director of sponsored research at the Tufts Center for the Study of Drug Development. “This exuberance comes without really thinking through the kind of long and painful journey that’s going to be required by all of the workforce within drug development to change legacy processes and systems. Some of that will require cultural change and philosophical change in how we approach development activities.”
Here are more excerpts from Pharm Exec’s recent conversation with Getz, where he discussed the steps and mindset necessary to truly harness data to improve the research process of tomorrow.
PE: What do you feel are the most compelling clinical research data trends at the moment?
Getz: I think the biggest trend is the big data movement, and the use of really large datasets on millions of patients. And even the use of real-world data in clinical research and in pharmacovigilance, for example. There are so many sources of structured and unstructured data—data coming from large electronic health record and electronic medical record platforms, from wearable devices, and mobile devices and applications. How do you gather and integrate all that data? What gets me most excited is how do you then use that data? Can we use that data to identify targeted patient subpopulations? Can we use that data to really inform us while the study is underway or at different critical points, so that we have a new understanding of data safety or we identify efficacy or safety patterns in the data? How can that data be constantly used to enrich our insight? Not only insight into treatments and their effectiveness, but from a management standpoint—to manage our clinical research more effectively.
Then there’s a lot of interest in using this data to predict performance. To select a more effective group of investigative site partners, for example. Or to predict patient recruitment and retention rates more effectively. That whole big data movement, it sounds so cliché, but it continues to make great progress and hold tremendous promise.
PE: What about mHealth, specifically? Can we measure the impact yet on clinical care?
Getz: A lot of companies are piloting the use of select wearable devices and the use of smartphone and mobile applications. The biggest challenge is that some of these technologies are not validated. So some of the biometric data that they are collecting technically can’t be used in a submission. There are issues with some of the technologies, but there’s a growing number of organizations that are finding ways to integrate with really large datasets of patient records. I think we’re going to see a lot of progress in this area over the next 18 months.
PE: How do you view the topic of data transparency in this whole equation?
Getz: Transparency—and I would add integration, which is central to that—are critical both within the enterprise at an operating level and obviously essential to partnering with patients and healthcare providers and payers. On the drug development side, we see that so much of this data and the technologies that are being used are siloed. They’re not integrated—they have really poor transparency. That’s requiring a lot of manpower to manage; it’s highly inefficient. Transparency and integration can play a huge role in breaking down those silos from a drug development operating standpoint.
With patients and healthcare providers and payers, there are huge ethical issues around use of patient data and making sure that it’s entirely transparent. That’s very essential to building trust with the patient community and with healthcare providers and payers. It’s also essential to ultimately best serving patients’ needs by creating what the FDA has called a learning environment, where clinical practice and clinical research are communicating with each other in real time and in an integrated way so that constant feedback and insight is coming from both domains. Integration and transparency are areas where we really have to adhere to the highest principles and ensure that we’re honoring and meeting the obligation to provide unprecedented levels of transparency and disclosure.
PE: Any thoughts on efforts in the US and Europe to promote data transparency and require public disclosure of clinical trial results?
Getz: With a lot of these technologies, the patient isn’t even aware that data is being collected. Many of these applications now place a time and geographical stamp on a person while they’re carrying out their day-to-day activity. If patients were fully aware of how some of that information is being used, many would be concerned; at the very least, they should know about and consent to having a lot of that information gathered. We have a lot of work to do to improve awareness of the kind of transparency that we’re looking for to build meaningful datasets and ultimately best serve the patient community.
PE: Have you heard direct feedback from patient groups expressing these concerns?
Getz: We have. Patients generally are extremely willing to share their data when they understand how it’s going to be used, and when that use has been honestly and comprehensively explained to them. They’re quite eager to share their data and they’re quite trusting when that transparency and disclosure is presented completely openly. I think part of the issue is just building into the process the steps we always need to take to partner fully and adequately with the patient community.
PE: In coming up with a future vision for research data, what are some logical steps needed to take these new opportunities from concept to reality in advancing the whole trial process?
Getz: The reality is that the use of most technologies is remarkably fragmented, it’s poorly integrated, we use so many disparate systems, and we use them inconsistently. All of these approaches ultimately are hurting productivity. They’re increasing inefficiency, they’re actually increasing our cycle times, and there’s just so much redundant activity. For example, the adoption and use of eSource is relatively low. We see so many professionals who have to perform duplicate data entry, moving and transcribing data from one system to the next.
Often, we throw new technologies and new systems that hold great conceptual promise into this stew of operating conditions. As a result, there’s that wide gap between the reality and the conceptual promise. The Tufts Center and CenterWatch have gathered a lot of data in this area, showing just how siloed, fragmented, and inconsistent the use of various technology solutions is. Those individuals involved with executing activities and using these solutions are often quite frustrated.
PE: You mention eSource. What challenges remain in the adoption of data standards governing the collection and management of clinical trial data electronically?
Getz: The Tufts Center, in collaboration with CDISC, did a study several months ago and we showed that the adoption of data interchange standards continued to grow steadily. Not only the adoption of some of the most mature and traditional standards, but also the use of some of the newer standards, including those that are tied to the sharing of data between compatible systems. But what we also saw is that the lack of integration and the lack of interoperability are hindering adoption. So you have a lot of professionals who are really committed to embracing the CDISC standards but they face a lot of headwinds in the process.
Some integration will come when it has regulatory winds behind it or when there’s such great necessity that it forces the sort of siloed functions to interact in a more integrated way. We haven’t yet reached that point.
Author: Michael Christel is the Managing Editor of Pharmaceutical Executive, a leading provider of in-depth industry business, strategy, and regulatory analysis. Pharm Exec serves top decision-makers across the biopharma value chain—from R&D to IT, finance, legal, and marketing. Visit pharmexec.com and follow on Twitter @PharmExecutive.