Tuesday, March 11, 2014

Rethinking Patient Enrollment, in One Graphic

Today, Partnerships welcomes guest blogger Paul Ivsin, VP, Consulting Director at CAHG Clinical Trials. He specializes in study enrollment strategies and patient engagement, and writes on clinical trial issues at the companion blogs Placebo Lead-In and Placebo Control.


In the run-up to this year’s Partnerships conference, I find it fascinating that both my fellow blogger Rahlyn Gossen and I immediately homed in on the same facet of clinical trials: making study information more directly and easily accessible to patients.

On Tuesday, I wrote about clinical trial matchmaker services – organizations that are experimenting with a variety of approaches for parsing the massive amounts of data now freely available through the ClinicalTrials.gov registry.

I did not know at the time that, independently, Rahlyn was writing up a post on a potentially important collaboration between Lilly, Pfizer, and Novartis to make the data they were feeding into ClinicalTrials.gov even more standardized and accessible. If implemented, these standards will make it even easier for matchmaker services – and other interested parties – to help patients quickly and efficiently locate appropriate research studies.

What these two things have in common, of course, is the move to put information about clinical trials directly into the hands of patients. Underlying both is a belief that patients can be trusted to find, review, and act on this information on their own (and that if they need advice, they can get it from family, friends, physicians, or others in their communities).

Traditionally, this has not been the case. Instead, the research sites have been the clinical trial fulcrum – the single point through which all information is channeled, interpreted, and explained:


Now, however, the twin engines of increased transparency and pervasive data have powered patients towards occupying a more equal position:


This is a critically important transformation in the way trial information is communicated. And embracing it, rather than fighting it, can be a major key to improved trial enrollment.

When we talk about improving enrollment, most conversations turn immediately to accelerating accrual. And that’s important, especially for any individual study team: they’re trying to get this trial done now. But almost as important, from the higher-level perspective of managing an entire pipeline’s worth of trials, is predictability: we need to know when trials will complete, and minimize the risk of delays.

And the site-centered approach to enrollment is, far and away, the biggest danger to reliable predictions about trial enrollment.

When people ask me about this, I have just one graph I need to show. It’s from a trial I was involved in not all that long ago:

A pharma sponsor (a client of ours) was about to initiate a late-phase trial. They selected a CRO with significant depth of experience in the therapeutic area – one that had, in fact, just completed a trial with almost exactly the same inclusion/exclusion criteria. As is almost always the case in such situations, the CRO ended up re-using many of the same sites for the new trial. They even included a few sites that hadn’t performed especially well in the first trial – everyone agreed that sites that were “known quantities” were preferable to starting from scratch.

The sponsor team was bright, energized, and responsive. The CRO team was experienced and highly reliable. But were the sites known quantities?

In a word, no. In one graph:

Same sites, identical study, different performance

This graph shows the enrollment performance of every site that participated in both the original trial (A) and the new trial (B). Overall, these sites enrolled fewer total patients in trial B, despite their recent experience. And their individual performance was entirely unpredictable: about a third enrolled at roughly the same rate, but the rest swung heavily in both directions.

(For those who prefer a formal measure: the coefficient of determination, R², was 0.2. Roughly, that means past performance in trial A explains only about 20% of the variance in performance in trial B.)
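For anyone who wants to see the arithmetic behind that number, here is a minimal sketch of the calculation in Python, using NumPy. The per-site enrollment counts are hypothetical – invented purely for illustration, not the actual data behind the graph:

import numpy as np

# Hypothetical patients enrolled per site in trial A and trial B
trial_a = np.array([12, 8, 15, 3, 10, 7, 20, 5, 9, 11])
trial_b = np.array([4, 14, 9, 11, 2, 13, 8, 16, 6, 3])

# Fit a simple linear regression of B on A...
slope, intercept = np.polyfit(trial_a, trial_b, 1)
predicted = slope * trial_a + intercept

# ...then compute R^2: the share of trial B's variance explained by trial A
ss_res = np.sum((trial_b - predicted) ** 2)
ss_tot = np.sum((trial_b - trial_b.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"R^2 = {r_squared:.2f}")

For simple linear regression like this, R² is just the squared correlation between the two trials’ counts – so a value of 0.2 means that even the best-fitting straight line through the trial A data leaves 80% of the trial B variance unexplained.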

And that’s the crux of the problem: even when the same sites run practically the same trial, the results differ. Our ability to reliably predict what’s going to happen simply isn’t there.

I have seen similar patterns arise in many studies since this one. And our ability to rely on predictions only gets weaker when we must also account for differences in protocol, data collection, and other critical factors. At that point, predictability goes from poor to dismal – which is why so many trials run behind their planned schedules.
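To make that compounding concrete, here is a small, purely illustrative simulation – the structure and numbers are my own assumptions, not drawn from the trial above. Each site has a latent “enrollment ability”; observed performance is that ability plus trial-to-trial noise, and a changed protocol adds a further, independent source of noise:

import numpy as np

rng = np.random.default_rng(42)
n_sites = 200

# Latent "enrollment ability" of each site (hypothetical)
ability = rng.normal(0.0, 1.0, n_sites)

def noise(sd):
    # A fresh, independent noise draw for each trial or factor
    return rng.normal(0.0, sd, n_sites)

trial_a = ability + noise(1.0)                    # original trial
trial_b_same = ability + noise(1.0)               # near-identical re-run
trial_b_diff = ability + noise(1.0) + noise(1.5)  # re-run plus protocol changes

def r2(x, y):
    # Squared correlation: share of y's variance explained by a linear fit on x
    return np.corrcoef(x, y)[0, 1] ** 2

print(f"R^2 vs. near-identical trial: {r2(trial_a, trial_b_same):.2f}")
print(f"R^2 vs. modified trial:       {r2(trial_a, trial_b_diff):.2f}")

The second R² comes out markedly lower than the first; the specific numbers are arbitrary, but the erosion is the point – every additional difference between trials dilutes whatever predictive signal past performance carried.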

In a follow-up post, I will discuss why bringing information directly to patients actually helps solve the problem of enrollment predictability.





2 comments:

sergio sanchez gambetta said...

Can you infer that the different times at which trials A and B were run is the principal variable causing this variation? Many competing trials had probably started around the same time study B started. The opposite could be said for the sites that had better enrollment in study B.
Sergio Sánchez Gambetta, MD, MSC

Paul Ivsin said...

Hi Sergio,

That's a fair question. I can say that the team was not aware of any serious competing trials – it is a rather niche condition. And, as you note, a good number of sites improved in the second trial.

Based on the monitors' feedback from the sites, my best guess is that there were a variety of reasons for the changes, and "every unhappy site was unhappy in its own unique way."