Tuesday, March 18, 2014

The Crystal Ball is on the Fritz

What to do with a broken feasibility process
“If you don’t know, then you’d better ask.” Could there be a more supremely sensible piece of advice? Don’t rest in your ignorance, but also don’t make assumptions – the answer is out there for those who look for it.
However, buried within this pragmatic advice are a few rather critical assumptions: that we know who to ask, and that we know what question to ask them.
For the most part, those assumptions hold. But there are some very specific ways in which the act of asking can go wrong, and it is helpful to list them. We can:
  • Ask a person who doesn't really know
  • Ask a biased person
  • Ask without specificity

When it comes to enrollment planning, sponsors tend to do a lot of asking - especially small and mid-sized sponsors that do not have a wealth of historical data to query, although I have certainly encountered big pharma managers doing a lot of asking as well. And when they do ask, I often see these exact same issues.


Asking people who don’t really know: academic KOLs are a perennially popular source of wisdom about enrollment rates, despite the fact that many of them are simply in no position to provide anything even close to an accurate prediction of how a multicenter trial will enroll. Usually their estimates are wildly optimistic, as many of them have very little sense of the operational and regulatory frictions that trials are run under. Sometimes, however, they can be just as wildly pessimistic (I still remember one respected researcher who advised me confidently at an Investigator Meeting that we’d all be getting together in a year’s time to figure out why enrollment was stagnating. The study closed 4 months later, exactly on schedule.)


Asking biased people: this is an extremely common, almost embarrassingly common, problem. Sponsors solicit enrollment predictions from CROs or sites before a contract is awarded. Not only do sites not really know much about the final protocol at this point, but they simply have no incentive to provide a realistic estimate - they are trying to be awarded the job. (Sites of course do have some incentive to avoid taking on studies they won’t be able to enroll, as those will likely be frustrating and money-losing endeavors. However, that won’t necessarily stop them from exaggerating to the sponsor: better to get the study and risk realizing you don’t want it than to never get the study at all.)


Asking without specificity: even when feasibility is run after site selection, we often encounter generic questions like “how many patients will your site be able to randomize into this study?” The question omits a large number of relevant facts that will influence the answer: when enrollment will start and end, what the screen fail rate will be, what the major inclusion/exclusion criteria are, and how big the burden of visits and procedures will be. The expectation is that the study coordinator will go back and locate the protocol synopsis (which hopefully hasn't changed much since the last version they received), somehow extract all of these relevant details, and come back with an accurate prediction.
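To see how much those unstated details matter, here is a minimal back-of-envelope sketch (all parameter names and values are illustrative assumptions, not drawn from any real study) of the arithmetic a well-specified question implicitly asks the coordinator to perform:

```python
# Hypothetical back-of-envelope enrollment model. Every name and
# number here is an illustrative assumption, not real study data.

def expected_randomized(eligible_per_month: float,
                        enrollment_months: float,
                        consent_rate: float,
                        screen_fail_rate: float) -> float:
    """Expected randomizations: eligible patients seen during the
    enrollment window, discounted by consent and screen failures."""
    screened = eligible_per_month * enrollment_months * consent_rate
    return screened * (1.0 - screen_fail_rate)

# Example: 6 eligible patients/month, a 10-month enrollment window,
# half of eligible patients consenting, and a 40% screen fail rate.
print(expected_randomized(6, 10, 0.5, 0.4))  # -> 18.0
```

Halve the consent rate or double the screen fail rate and the answer changes dramatically - which is exactly why a question that omits these inputs cannot yield a reliable number.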


Over the past decade of watching sponsors and CROs conduct enrollment feasibility with their sites, I would say one or more of the above mistakes have been made in over 95% of the studies I’ve worked on. The end result: highly unreliable data. Often, senior leadership within the sponsor organization will actually refuse to use this kind of feasibility data to alter official study enrollment benchmarks.


And here’s the real underlying problem: even if you avoid all of these pitfalls, feasibility data just doesn’t work that well.


In a recent article on the Partnerships in Clinical Trials blog, I shared some data showing that prior trial performance was an inaccurate predictor of subsequent enrollment. An astute commenter on LinkedIn wondered whether this lack of reliability could have been at least somewhat avoided if the sponsor had done a better feasibility assessment.


So I went back and checked. In this case, the feasibility was done as well as can possibly be expected. The sites were queried after they were under contract, the question was unambiguous, and major entry criteria for the trial were actually listed right on the questionnaire above the enrollment question.


It is amazing how few sponsors actually circle back to their feasibility data once the trial is over. It’s a relatively simple exercise to compare what the sites said to what they did.


Here are the results for that trial:


[Figure: scatter plot of each site's feasibility estimate (x-axis) versus its actual enrollment (y-axis). Each dot is a site; the blue line marks a perfectly accurate feasibility prediction.]
The sites are generally all over the place, and their predictions were not at all well correlated with what happened.

In aggregate, the sites were about 50% more optimistic than reality proved to be. What’s worse, though, is that their individual predictions bore no real resemblance to reality. The only reason there is even a modest association (R² = 0.37) between expected and actual enrollment is a handful of high enrollers who apparently knew they would be high enrollers. (Of the 5 sites that predicted they’d be top enrollers, 4 actually were, though you can see that their individual predictions were still heavily over-optimistic.)
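Checking this for yourself takes only a few lines of code. Here is a sketch of how one might compute the aggregate optimism and the R² reported above - the arrays below are placeholder values, not the actual trial data:

```python
# Compare site feasibility predictions to actual enrollment.
# The arrays below are placeholders, not the trial's real numbers.
import numpy as np

predicted = np.array([20, 15, 12, 10, 8, 6, 5, 4])  # feasibility estimates
actual    = np.array([14,  9, 10,  3, 6, 2, 4, 1])  # what the sites enrolled

# Aggregate optimism: total predicted versus total enrolled
optimism = predicted.sum() / actual.sum() - 1.0
print(f"Sites were {optimism:.0%} more optimistic than reality")

# R^2 of predicted versus actual (square of the Pearson correlation)
r = np.corrcoef(predicted, actual)[0, 1]
print(f"R^2 = {r ** 2:.2f}")
```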


So, it is fair to say that in this case, even a well-executed feasibility did not help to identify any subsequent enrollment issues. Individual sites were not particularly accurate, and neither was the total.


Based on this and subsequent, remarkably similar, experiences, we have adopted a new slogan. Instead of “if you don’t know, ask”, I prefer this alternate formulation:


Don’t ask. Measure.


Fortunately, these days we have more opportunities to rigorously measure enrollment feasibility than ever before. The primary means of accurate measurement is the site database. Sometimes this is an Electronic Medical Records (EMR) database, but just as often - especially if the site also engages in regular clinical practice - the billing database proves even more accurate and better organized.


We encourage all our clients to engage their sites in more in-depth queries of their databases before study start-up. In fact, we go so far as to encourage database queries to be written into site contracts, with appropriate compensation for the site’s extra work, and reporting of anonymized results back to the sponsor. This can be done with all sites, or just a big enough sample to obtain an accurate read on the true patient population.
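The query itself need not be elaborate. Below is a hedged sketch using an entirely invented schema - the table, columns, diagnosis codes, and toy rows are assumptions for illustration, and any real billing or EMR system will differ - of the kind of count a site might report back:

```python
# Sketch of a pre-study count query against a site database.
# Schema, codes, and rows are invented; real systems will differ.
import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect(":memory:")  # stand-in for the site's billing DB
conn.execute("""CREATE TABLE encounters (
    patient_id INTEGER, diagnosis_code TEXT,
    encounter_date TEXT, age_at_encounter INTEGER)""")

recent = (date.today() - timedelta(days=90)).isoformat()
stale = (date.today() - timedelta(days=500)).isoformat()
conn.executemany("INSERT INTO encounters VALUES (?, ?, ?, ?)", [
    (1, "I50.9", recent, 64),  # matches all criteria
    (2, "I50.1", recent, 85),  # excluded: outside the age window
    (3, "E11.9", recent, 55),  # excluded: different diagnosis
    (4, "I50.9", stale, 60),   # excluded: encounter too long ago
])

# Count distinct patients matching the major entry criteria
count, = conn.execute("""
    SELECT COUNT(DISTINCT patient_id) FROM encounters
    WHERE diagnosis_code LIKE 'I50%'      -- indication of interest
      AND encounter_date >= date('now', '-12 months')
      AND age_at_encounter BETWEEN 18 AND 80
""").fetchone()
print(f"Patients meeting the major entry criteria: {count}")  # -> 1
```

A count like this, run against real records rather than a coordinator’s recollection, is the “measure” in “don’t ask, measure”.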


In parallel, I strongly encourage sponsors to eliminate any feasibility data collection that commits the mistakes listed above. And all feasibility data should be stored electronically, in a format that is easy to access and query once the study is done. Paper feasibility questionnaires are a sad waste of time and effort, and the data they collect is simply not there to make us wiser when it comes time to start planning the next study.
