Ronan Fitzpatrick, Statsols Head of Statistics and nQuery Lead Researcher, had the opportunity to sit down with Professor Stephen Senn to talk about adaptive design and early-phase clinical trials.
This video is an excerpt from a feature-length video titled "nQuery Interviews Professor Stephen Senn". The full video interview is available to watch on-demand.
Ronan Fitzpatrick is Head of Statistics at Statsols and the Lead Researcher for nQuery Sample Size Software. He is a guest lecturer for many institutions including the FDA.
Stephen Senn is a statistical consultant for the Pharmaceutical Industry. Stephen has worked as an academic in a statistical capacity at the Luxembourg Institute of Health, University of Glasgow and University College London. A leader of one of the work packages on the EU FP7 IDEAL project for developing treatments in rare diseases, his expertise is in statistical methods for drug development and statistical inference.
Ronan: So one other area which has gotten a lot of interest recently, and on which the FDA published updated guidance late last year, is adaptive design.
Adaptive design is obviously seen as one of these mechanisms to hopefully make clinical trial practice better, but also to ameliorate some of the well-known issues: the understanding now is that clinical trials are too expensive and too many of them are failing.
How much promise do you see in adaptive design, and are there any types of adaptive design (adaptive clinical trials) that you would see as more or less valuable, or as being over-hyped or under-hyped right now?
Stephen: Well, first of all, I think it was a very interesting theoretical development. It's one of those developments, I'm thinking in particular of Bauer and Köhne and others back in the 1990s, which in retrospect seems absolutely obvious, and that's the beauty of it. It's actually a very clever idea which in many ways is very, very simple, and that's extremely nice. Secondly, of course, in theory it's always good to be flexible; you have options. One of the things I worked on when I did work in drug development was how to decide which projects to develop, and one of the things you soon come to realize is that options to escape from a project in particular are extremely valuable, so flexibility in itself is good, and there I'm quite happy with this particular idea being exploited. I'm much less happy with the idea that somehow this is going to save a phenomenal amount of resources, except to the extent that it is being used for stopping for futility. That is always valuable: getting out of projects that are not on the success path early, cutting your losses, is a very fine and sensible thing to do. But where it's being used to make successful projects smaller in some way, then we should be a little bit careful, because what we're doing is providing the regulator with less information than they would otherwise have got, and we need to stop and think whether we are actually optimizing the correct thing there. There I'd be a little bit nervous about being flexible just to guarantee a pre-specified power and a pre-specified type I error rate and not thinking about the other things that go on.
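Stopping for futility is usually formalized through conditional power: the probability, given the interim data, of still rejecting the null hypothesis at the end of the trial. The following is a minimal sketch under the standard Brownian-motion model of the test statistic, with the drift estimated from the interim trend itself (the "current-trend" assumption); it is an illustration of the general technique, not anything specific to Stephen's remarks.

```python
from math import sqrt
from statistics import NormalDist

def conditional_power(z_interim, info_frac, z_alpha=1.96, drift=None):
    """Conditional power at an interim look (Brownian-motion model).

    z_interim : observed z-statistic at the interim analysis
    info_frac : fraction t of the planned information accrued (0 < t < 1)
    drift     : assumed drift of the B-value process; if None, it is
                estimated from the interim trend itself
    """
    t = info_frac
    if drift is None:
        drift = z_interim / sqrt(t)             # current-trend assumption
    # Given B(t) = z_interim * sqrt(t), the final statistic Z(1) is normal
    # with this mean and standard deviation:
    mean_final = z_interim * sqrt(t) + drift * (1 - t)
    sd_final = sqrt(1 - t)
    return 1 - NormalDist().cdf((z_alpha - mean_final) / sd_final)

# A weak trend halfway through the trial gives very low conditional power,
# which is the kind of signal that triggers a futility stop.
cp = conditional_power(z_interim=0.5, info_frac=0.5)
# cp is about 0.04
```

A sponsor would typically pre-specify a threshold (say, stop if conditional power falls below 10-20%) so that the futility rule is itself algorithmic rather than ad hoc.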
Also, as regards the effect on sample size for successful projects, even if you're flexible, it's nowhere near as important as choosing the right analysis. So not doing responder dichotomies but sticking with the original covariates and the original continuous values and doing analysis of covariance would have a far more dramatic effect on the sample size than being flexible using some of the more modern approaches. These simple steps are unfortunately not being taken, so for me flexible designs are nice, interesting, and I'm glad we're getting the ability to do them, but they're not the priority; there are other, simpler things that we're not doing that we should sort out first.
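The sample-size penalty from dichotomizing a continuous outcome can be illustrated with standard normal-approximation formulas. This is a minimal sketch assuming a normally distributed outcome cut at the control median to define "responders"; for small effects the inflation approaches π/2, i.e. roughly 57% more patients, which is the kind of saving Stephen is pointing at.

```python
from math import ceil
from statistics import NormalDist

norm = NormalDist()
z_a = norm.inv_cdf(0.975)   # two-sided alpha = 0.05
z_b = norm.inv_cdf(0.90)    # power = 90%

def n_continuous(delta):
    """Per-arm n for a two-sample z-test on the continuous outcome,
    with standardized effect size delta."""
    return ceil(2 * (z_a + z_b) ** 2 / delta ** 2)

def n_dichotomized(delta):
    """Per-arm n after dichotomizing the same outcome at the control
    median ('responders' vs 'non-responders')."""
    p1, p2 = 0.5, norm.cdf(delta)               # responder rate per arm
    var_sum = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * var_sum / (p2 - p1) ** 2)

delta = 0.3
nc, nd = n_continuous(delta), n_dichotomized(delta)
# nc == 234, nd == 368: dichotomizing needs roughly pi/2 times the patients
```

Nothing adaptive is happening here; the saving comes purely from analyzing the outcome on its original scale, which is the comparison being drawn with flexible designs.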
Ronan: I suppose to a certain extent there's no getting around the fact that before we approve drugs we want very, very strong evidence that they're safe, that they work, and that they're worth putting out there and selling in addition to whatever is already available for that particular condition, and that requires a certain amount of cost that comes with gathering good evidence. So I suppose one other area where adaptive clinical trial design has come on a lot is in early-phase trials. I know you probably have more experience in phase 3 clinical trials, but it has made a fairly big contribution in phase 1 trials for finding the MTD and in phase 2 for dose finding. I don't know if you have any familiarity with those?
Stephen: So I have quite a lot of experience of phase 1 and 2 in non-cancer; what I don't have experience of is phase 1 trials in cancer, where things like the CRM are used. However, I was privileged to hear John O'Quigley explain, I think it was at a conference in Nîmes in 1990, how the CRM design worked, and I was immediately convinced that it was a great idea.
Ronan: I think the alternative at the time was the 3+3 rule.
Stephen: In fact, in the first edition of Statistical Issues in Drug Development I covered it and suggested that one could use it, and I think even in the first edition I pointed out that it has a particular feature: essentially, if you operate it algorithmically, as one might well do, then there are a number of possible dose paths, but they're determined exactly by whether or not there was a toxicity at the previous dose. So in theory they're codable, not in terms of the parameters, but simply in terms of the results: if there was a toxicity, go down; if not, go up. But the rules are much more complex than the standard rules used in 3+3. Now, just the other day I was mentioning that I was revising Statistical Issues in Drug Development for a third edition. I had been talking about this to Christina Yap of Birmingham at the ISCB meeting in Melbourne last year, and I then sent her a copy of the chapter, and she said, "Wow, did you realize we've just had a paper on this?" They now have a paper in which they describe essentially how you can try to produce simpler algorithms and simpler procedures from this, so maybe we are on the threshold of the CRM being implemented much more than it has been hitherto. I think that would probably be a good thing. But as regards non-cancer, I have quite a bit of experience of using crossover trials, for example in asthma, where using what we might call pharmacodynamic outcomes rather than therapeutic ones is a very useful way to do dose finding, because dose finding is so fiendishly hard; it's one of the hardest things in drug development. You really are always looking for ways to leverage it if you want to get a good result later on. Get the wrong dose and you don't have a product.
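The standard 3+3 rules that Stephen contrasts with the CRM make the "outcome-driven dose path" idea concrete: the entire trajectory is a function of the observed toxicity counts. Here is a minimal sketch of the classic 3+3 rule (one common formulation; real protocols vary in details such as cohort expansion at the top dose), with a deterministic toxicity sequence standing in for the trial.

```python
def three_plus_three(doses, toxic):
    """Classic 3+3 dose-escalation rule, driven purely by observed outcomes.

    doses : ordered list of dose labels, lowest first
    toxic : callable (dose, n) -> number of dose-limiting toxicities (DLTs)
            seen in a cohort of n patients at that dose
    Returns the declared MTD, or None if even the lowest dose is too toxic.
    """
    level = 0
    while True:
        dlts = toxic(doses[level], 3)           # first cohort of three
        if dlts == 0:                           # 0/3: escalate
            if level == len(doses) - 1:
                return doses[level]             # no higher dose to try
            level += 1
        elif dlts == 1:                         # 1/3: expand cohort to six
            if toxic(doses[level], 3) == 0:     # 1/6 overall: escalate
                if level == len(doses) - 1:
                    return doses[level]
                level += 1
            else:                               # >=2/6: MTD is the dose below
                return doses[level - 1] if level > 0 else None
        else:                                   # >=2/3: MTD is the dose below
            return doses[level - 1] if level > 0 else None

# Deterministic outcomes for illustration: 0 DLTs at the first two levels,
# then 2 DLTs at the third, so the second dose is declared the MTD.
outcomes = iter([0, 0, 2])
mtd = three_plus_three(["10mg", "20mg", "40mg", "80mg"],
                       lambda dose, n: next(outcomes))
# mtd == "20mg"
```

Because every branch depends only on the toxicity counts, all possible dose paths can be enumerated in advance and tabulated, which is exactly the property that makes model-based designs like the CRM presentable as simple decision tables for clinical teams.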