Dealing with Uncertainty in Clinical Trials - with Ronan Fitzpatrick & Prof. Stephen Senn

March 21, 2019

Statsols Head of Statistics and nQuery Lead Researcher Ronan Fitzpatrick sat down with Professor Stephen Senn to discuss dealing with uncertainty in clinical trials.


This video is an excerpt from a feature-length video titled "nQuery Interviews Professor Stephen Senn". The full interview is available to watch on demand.

About The Speakers

Ronan Fitzpatrick is Head of Statistics at Statsols and the Lead Researcher for nQuery Sample Size Software. He is a guest lecturer for many institutions, including the FDA.

Stephen Senn is a statistical consultant for the pharmaceutical industry. He has worked as an academic in a statistical capacity at the Luxembourg Institute of Health, the University of Glasgow and University College London, and he led one of the work packages on the EU FP7 IDEAL project for developing treatments in rare diseases. His expertise is in statistical methods for drug development and statistical inference.



Interview Transcript

Ronan: So, returning to the question of having different objectives in your study, there is a school of thought that one underutilized approach to sample size calculation is, rather than focusing on testing and power, that people in certain areas (if they're very uncertain, or in sets of clinical trials where decision making is perhaps not as big a deal) should base it on precision: that you should be targeting a certain precision in your actual estimate rather than tying your sample size to this decision approach, the frequentist decision approach. Do you see any merit in that?

Stephen: I think there is a lot of merit in that, and I think also maybe frequentists have not done enough to explain how the particular calculations they do could be interpreted in this way, in which case they might become somewhat more palatable to Bayesians. Because if you think about it, if what you do is target 80 percent power and you have a five percent type I error rate two-sided, which is a very, very common combination, or 2.5 percent one-sided, which is perhaps a more honest way of describing it, then in that case you're targeting a signal-to-noise ratio, if we put it in engineering terms, of about 2.8, so let's say three.
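As an aside on the arithmetic (not part of the interview itself), the 2.8 figure is simply the sum of the two standard normal quantiles involved: for a 2.5 percent one-sided type I error rate and 80 percent power, z(0.975) + z(0.80) ≈ 1.96 + 0.84 ≈ 2.80. A minimal sketch of that calculation:

```python
from scipy.stats import norm

# Signal-to-noise ratio implied by 80% power at a 2.5% one-sided
# (5% two-sided) type I error rate: the sum of the two normal quantiles.
alpha_one_sided = 0.025
power = 0.80

snr = norm.ppf(1 - alpha_one_sided) + norm.ppf(power)
print(round(snr, 2))  # 2.8  (1.96 + 0.84)
```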

So you can say a lot of typical power calculations are, in a sense, asking: how many patients would we need to study in order to get a signal-to-noise ratio of 3 in the data? So, ignoring any other prior information we have, with just the data we would have this, and from that point of view one can see that, well, maybe that's not such a difficult thing. The purpose of Delta, then, is as a sort of scaling device; it's a way of interpreting the standard error.
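To make that precision reading concrete, here is a sketch for a two-arm comparison of means (delta and sigma below are illustrative values, not figures from the interview): choosing n so that delta divided by its standard error reaches roughly 2.8 reproduces the familiar per-group sample size from the power calculation.

```python
import math
from scipy.stats import norm

# Illustrative inputs: delta is the practically relevant difference,
# sigma the common standard deviation in each group.
delta, sigma = 5.0, 10.0

# Target signal-to-noise ratio from 2.5% one-sided alpha and 80% power.
snr = norm.ppf(0.975) + norm.ppf(0.80)  # about 2.80

# Precision framing: pick n so the standard error of the difference,
# sqrt(2) * sigma / sqrt(n), is small enough that delta / SE hits the target.
n_per_group = math.ceil(2 * (sigma * snr / delta) ** 2)
se = math.sqrt(2) * sigma / math.sqrt(n_per_group)

print(n_per_group)           # 63 per group
print(round(delta / se, 2))  # about 2.81, the achieved signal-to-noise ratio
```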

Where the standard error will come in particularly useful is in asking what these units mean in practical terms to the patient, and that's what the function of Delta is there. So you're targeting this value of three, and that's very close, then, to the precision view: although you've gone around the houses (you've talked about a type I error rate, you've talked about a type II error rate and so forth), basically what you're talking about ultimately is some sort of precision that you're targeting in the data, and I think that this probably has quite a reasonable Bayesian justification as well. This doesn't mean that the values we're targeting are necessarily correct; maybe three is not a good target to be looking at, and that's something we could have more of a debate about among statisticians, but it's another way of looking at it.

