Worked example begins at 6:24.
We’ll cover flexible design only briefly. Note that this will mostly deal with adaptive design, which I've covered in full webinars multiple times in the past, so today we're really just looking over that same topic of unblinded sample size re-estimation I’ve done previously, but also showing an example of the recently added sample size re-estimation from blinded data, applied to a survival model.
In the wider context of flexible design, as I said, there's a lot of interest in approaches to clinical trials which allow greater flexibility to make changes while the trial is still ongoing. Costs are increasing and failure rates are increasing, so is there a way we could make more trials feasible by having the opportunity to rescue trials that are failing, not because there isn't a treatment worth seeing underneath the hood, but simply because we made some wrong assumptions upfront? Perhaps we didn't account for something, or the effect size is smaller than we wanted but still clinically relevant; those kinds of issues.
There have always been ad-hoc changes to protocols to deal with recruitment issues and other similar problems, but you need significant justification to get those approved, and because they are ad-hoc they introduced bias and errors. Sponsors have a major incentive to run more successful trials, but there is also interest from regulators and from a legislative point of view, for example in legislation such as the 21st Century Cures Act.
If we're going to have flexible design, the idea is that it's done on a per-protocol basis; it's effectively pre-specified. We know what’s going to change beforehand and when we're going to change it, and that makes it more tractable for regulators and makes them more comfortable with the concept.
Flexible design on a per-protocol basis basically means you're moving into adaptive design, which is any design where a decision or choice about the trial is made while it’s ongoing - whether that's stopping early, sample size re-estimation, enrichment, seamless designs, etc. The idea is that certain decisions can be made earlier and certain costs can be reduced, including of course the cost of a failed trial.
Just to mention the regulatory context: the FDA published its updated adaptive design guidance in September/October last year as a PDUFA VI requirement, and the EU's Adaptive Pathways initiative is similar. One small note is that ICH E20, from the International Council for Harmonisation (the international body for harmonisation in clinical trials), will be on adaptive design, and that effort is starting up in 2019. It's a very long path from there to agreed-upon guidance, but it's a very important step regardless.
The FDA are basically saying “our doors are open, come to us early, let's talk about the issues you face and the opportunities you believe can be opened by using an adaptive design versus a standard design”. There are still issues: making sure the adaptations are pre-specified; making sure you retain blinding if you're using a blinded sample size re-estimation; and even in the unblinded case, making sure blinding is kept for everyone who should be blinded, which is why it's usually an independent data monitoring committee that makes those decisions. Simulation is a vital tool to evaluate the design beforehand.
We're going to focus on sample size re-estimation, which is basically where we increase the sample size if some assumption was mistaken at the planning stage of the sample size calculation. In the case of unblinded sample size re-estimation that will tend to be the effect size, and in blinded sample size re-estimation, which we won't be covering today, it would usually be a nuisance parameter.
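As a toy illustration of the blinded case, the sketch below recomputes a two-sample z-test sample size from a blinded (pooled) estimate of the standard deviation, keeping the planned effect size fixed. The function name, fixed z-values, and inputs are illustrative assumptions, not nQuery's method.

```python
from math import ceil

def reestimated_n_per_group(blinded_sd, delta, z_alpha=1.959964, z_beta=0.841621):
    """Recompute the per-group sample size for a two-sample z-test.

    blinded_sd : pooled SD estimated from blinded interim data (nuisance parameter)
    delta      : effect size assumed at the planning stage (unchanged)
    z_alpha    : normal quantile for one-sided alpha = 0.025
    z_beta     : normal quantile for 80% power
    """
    # Standard two-sample formula: n = 2 * (z_alpha + z_beta)^2 * sd^2 / delta^2
    return ceil(2 * (z_alpha + z_beta) ** 2 * blinded_sd ** 2 / delta ** 2)

# If the blinded SD comes in higher than the planned value (say 1.3 vs 1.0),
# the required per-group n grows accordingly, without unblinding the effect.
planned = reestimated_n_per_group(1.0, 0.5)
revised = reestimated_n_per_group(1.3, 0.5)
```

Because only the pooled (treatment-agnostic) variability is used, this kind of update is generally considered to have little or no impact on the type I error rate.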
For sample size re-estimation we generally have a set of designs often called ‘promising zone’ designs. Many of these papers emerged from work on adaptive group sequential design, but also from the work by Chen, DeMets & Lan, who defined the idea of promising results: results which indicate, based on the interim data, that your power is not at the target (so not 80% or 90%) but is high enough that we believe the interim effect size would still be clinically relevant, and so we want to increase the sample size to keep a reasonable chance of finding a statistically significant result at the end of the study.
You can consider it an extension of group sequential design which just adds an extra choice. Instead of only being able to continue the trial, or stop early for efficacy or futility, you now have the option to increase the sample size for a promising result. The idea is that you could design a study which is initially powered for your expected result but still has the option to increase the sample size if you find a smaller, but still clinically relevant, result.
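To make the extra choice concrete, here is a minimal sketch of an interim decision rule with a promising zone. The zone boundaries (10%, 30%, 90%) are illustrative placeholders, not recommendations from Chen, DeMets & Lan or from any specific design; in practice they are chosen, often via simulation, to control type I error and cost.

```python
def interim_decision(conditional_power):
    """Map interim conditional power to one of the available actions.

    Boundaries below are hypothetical; a real promising-zone design
    pre-specifies them in the protocol.
    """
    if conditional_power >= 0.90:
        return "favorable: continue with planned sample size"
    if conditional_power >= 0.30:
        return "promising: increase sample size"
    if conditional_power >= 0.10:
        return "unfavorable: continue with planned sample size"
    return "futility: consider stopping early"
```

Only results in the middle ("promising") band trigger an increase; results already on track, or too weak to rescue, leave the planned sample size alone.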
Conditional power is the criterion usually put forward for defining a promising result, conditional power being the probability, given the interim data, that you will get a statistically significant result at the end of the trial.
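As a sketch, conditional power under the "current trend" assumption (the observed interim effect is assumed to continue) can be computed from the interim z-statistic and the information fraction using the standard B-value formulation. The function and its defaults are illustrative, not nQuery's implementation.

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def conditional_power(z_interim, info_frac, z_alpha=1.959964):
    """Conditional power under the current-trend assumption.

    z_interim : interim z-statistic Z(t)
    info_frac : information fraction t = n_interim / n_planned (0 < t < 1)
    z_alpha   : final critical value (default: one-sided alpha = 0.025)
    """
    b = z_interim * sqrt(info_frac)   # B-value: B(t) = Z(t) * sqrt(t)
    drift = b / info_frac             # drift estimated from the interim trend
    # Remaining increment B(1) - B(t) ~ N(drift * (1 - t), 1 - t), so:
    return 1.0 - norm_cdf(
        (z_alpha - b - drift * (1.0 - info_frac)) / sqrt(1.0 - info_frac)
    )
```

For example, an interim z of 2.0 at half the information gives conditional power of roughly 0.89 under the current trend, which many promising-zone rules would treat as favorable rather than promising.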
Like any trial design, careful planning can mitigate or eliminate certain risks. If you are exploring adaptive designs, one important factor is to select validated and trusted software designed for adaptive trials. nQuery has dedicated adaptive trial design functionality containing a selection of sample size tables designed specifically for areas of adaptive design.
We recently hosted a webinar examining Advantages & Disadvantages of Adaptive Sample Size Re-Estimation. You can watch this webinar on demand by clicking the image below.
In this webinar you’ll learn about: