In this webinar, we explore the characteristics of vaccine trials and review recent Covid-19 trials while evaluating the impact of group sequential design on the early assessment of these vaccines.
Vaccine trials are among the most difficult clinical trials due to their larger target populations and the resultant study sizes in Phase III trials.
The unique considerations of vaccine trials have led to the development of domain-specific terminology, methods, and tools.
However, the race for Covid-19 vaccines has led to a number of interesting evolutions and adaptations that successfully accelerated the development and evaluation of these vital vaccines.
So, hello and welcome to today's webinar, The Design and Sample Size of Vaccine Trials, using the case study of Covid-19 vaccine trials, specifically looking at the Phase III vaccine trials, which have got a lot of attention over the last three or four months.
Hopefully, this webinar will give you a nice introduction to the general design considerations of vaccine trials, and then show some of the relatively unique aspects that were used in the Covid-19 trials, reflecting, I suppose, the differing priorities in the context of pandemic use compared to your more standard vaccine trial. We'll talk about that trade-off between speed, safety, and accuracy, and how Covid-19 and the pandemic led to some acceleration, due to some additional speed being required. Before we get started, just some frequently asked questions: is this webinar being recorded? Yes, and the recording, alongside the slides, will be sent to you very shortly after this webinar is complete.
If you have any questions or any technical issues, there should be a chat or question function on the right-hand side of your webinar screen.
So, feel free to send any feedback, both technical (audio issues or artifacts) and otherwise. Also, if you have any questions about anything covered in this webinar, or about nQuery in general, feel free to put them in there and I'll try to answer as many of those as possible. Any of those that I don't get to, I will answer via email afterwards, probably by tomorrow at the very latest.
So, obviously, this webinar is demonstrated in nQuery. nQuery is a leading sample size and design platform, and we'll be using nQuery today for some worked examples of sample size determination for vaccine trials. But obviously, the hope is that you'll get a general understanding and education on this topic; this is more of a primer. And I'll be mentioning some more detailed resources: the actual protocols that were used for the Covid-19 vaccine trials, some blogs on the topic, which are very interesting, and a couple of other webinars that go into some of these aspects, and the results of the trials, in even more detail. So, without further ado, I suppose I should introduce myself. My name is Ronan Fitzpatrick. I'm the Head of Statistics here at nQuery, and I've been the nQuery lead researcher for about five or six years at this point. I've given workshops at places like the FDA and JSM.
Obviously, the hope is that in the near future we'll be able to get out and meet people like yourself in the field, at places like JSM and other conferences. But for now, we're mostly limited to these online spaces. And I do apologize for any background noise you may hear.
I live near a fairly large thoroughfare, so you may hear some traffic noise; I apologize for that in advance.
So, in terms of what we'll be covering today, as mentioned: firstly, we're going to cover the basics of vaccine trial design, basically the issues that differentiate vaccine trials from more standard therapeutics in the context of confirmatory Phase III trials. Then we'll look at the Covid-19 trials and show the similarities, but also some of the differences, in how these trials were designed and how they actually operated. The most obvious, and most important, will be the usage of interim analyses, something which, based on the spectacular results that we saw for most of the candidate vaccines now being put into people's arms, has had a huge effect on our world right now. Then we'll give a worked example of a Covid-19 trial design, basically some of the sample size calculations that were used for those designs, and then some brief discussion and conclusions at the end.
Before we get started, obviously, as I mentioned, this webinar is presented and demonstrated in nQuery. nQuery is a complete trial design platform to make clinical trials faster, less costly, and more successful. Around 90% of organizations who have had Phase III clinical trials approved by the FDA have a license of nQuery, and it's been around for over 25 years. It's very widely used and respected for both fixed-term sample size and adaptive design. And hopefully, if you see this and you're interested, there'll be information at the end of this webinar about how you can get a free demonstration, and you can play around with it online if there's anything here that you might be interested in trying yourself.
So, let's get into the actual educational component of the webinar and kind of talk firstly about vaccine trial design in a general sense.
So, the first thing to say is that the development of vaccines is traditionally a very long and uniquely challenging type of process, generally going from Phase I, to Phase III, to post-marketing surveillance. We're going to mostly focus today on the challenges in designing the Phase III confirmatory trials. But it goes without saying, particularly in terms of post-marketing surveillance, that vaccine trials tend to have a higher degree of scrutiny.
The reasons for that are fairly obvious, as we'll see in a moment. Vaccine trials tend to be large, even by Phase III standards: going into the tens of thousands, as in the Covid-19 trials that we're talking about here, is not very unusual. And this reflects the fact that most of these trials are event-driven, involving some kind of yes/no or survival-type endpoint.
And the reality of a vaccination program is that typically very few of the vaccinations, relatively speaking, will be "activated", which is to say that the majority of people will be unlikely to be exposed to the disease that you're vaccinating against.
So, that's a really important point about vaccine trials: unlike most therapeutics that you're familiar with that are subject to Phase III trials, in a vaccine trial you're really dealing with a therapeutic which is given to effectively everyone, or at least a very large population of people, the majority of whom will never "use" the vaccine, hopefully. Because, if it's a disease that already has some vaccination, therapeutics, or other controls on it, then you're hoping that it's not so endemic that everyone is getting it already. These are diseases that are endemic at a low level and have probably been around for a while: things like measles, or HPV in the case of the cervical cancer vaccines. They are around, but either they are not spreading severely enough to be an acute crisis, or they have only spread slowly over time, effectively.
So, that's one of the primary reasons why these vaccine trials tend to be so large: we're dealing with something where the majority of people will never have the event, and therefore never really contribute to the power of the hypothesis that we're testing. The other major reason why they tend to be so large, which is really what we're focused on today in terms of sample size and design aspects, is safety concerns. If we're going to give a therapeutic to people who are healthy, who have no current condition, then the balancing of harms and benefits that we do with every therapeutic is very much weighted against the therapeutic in the vaccine case, because even relatively minor side effects, on a large enough scale, are likely to cause significant issues. Therefore, we tend to have a higher burden of proof for the safety aspects of vaccine trials, which requires a higher sample size to feel confident those are met. And not just sample size, but also time.
So, vaccine trials will tend to be quite long, because ideally we want long-term safety data before approving a vaccine.
Now, there's a bit of an argument, by some people, that this abundance of caution has perhaps led to vaccines being an underfunded area of research, because the returns are so uncertain and the investment upfront is so high. The suggestion would be either that maybe these requirements have gone too far in that direction, or perhaps that states need to invest to obviate or ameliorate those issues. But that's not really the focus of today; it's a much more interesting, wider societal debate.
One other thing worth noting about vaccine trials is that, because of the unique issues they have to deal with, they have developed what I would consider to be an almost distinctive vocabulary and a somewhat distinct set of methods that are more commonly used there than in many other areas. For example, kinetic models are very common in vaccine trials; while they're growing in popularity in other areas, traditionally they may have been less commonly used for other types of therapeutics.
But I think what we'll show, hopefully, in this webinar is that most of these are very small differences of vocabulary, with easy translation between them. One of the most interesting aspects of statistics and data analysis in general is that, in dealing with all these different areas, you find two different words being used for the same methods: repeated measures versus panel data, stuff like that. And that holds, to some extent, for vaccine trials.
And I suppose, just a small preview: we're going to focus on general vaccine trials and then extend to the Covid-19 trials. We'll see that some of the unique aspects and challenges of dealing with a pandemic, and the short-term need to remove the danger of the pandemic, have led to some things in the Covid-19 trials that you wouldn't necessarily expect in traditional vaccine trials.
So, what are the design issues that I'm referring to? Not unique, exactly, but more prevalent in vaccine trials than in other therapeutic areas, in terms of the design of Phase III trials in particular.
I suppose the first, which I've already alluded to very strongly, is that they are event-driven. People who get vaccinated but don't get the disease, I wouldn't say are irrelevant, of course, but they obviously aren't really that impactful. And of course, in the trial, you don't know who that cohort of people is, or who the cohort is of people in, say, the vaccine arm who were exposed and protected by the vaccine. So we're just comparing the rates of what happened in the unvaccinated versus the vaccinated cohort, and basically a certain proportion of those two cohorts is "wasted", because they will never encounter the disease they've been vaccinated against.
And because of that, and the resulting higher sample size that's already been referred to, that can affect the designs that make sense.
If you're dealing with very large sample sizes, you may want to randomize at a region, district, or even city level. That may mean you're more likely to do something like cluster randomization, where you take one district and give everyone the vaccine, then take another district and give everyone the placebo, or maybe the standard vaccine if one already exists. That's known as cluster randomization, and it's quite common in vaccine trials, in particular trials in developing countries. I'll be using the example in a moment of a vaccine trial for influenza in India.
Also, because of those size issues, and wanting to try to get this information, you may see the usage of more epidemiological-type designs, like cohort designs and case-control designs, rather than your traditional RCT. Now, obviously, for Phase III approval, and obviously in the case of the Covid-19 trials, where enormous resources were thrown at them and money was no object, the randomized controlled trial is still the gold standard for vaccine trial design. But it's worth noting that, especially in developing countries and the development of vaccines for those areas, these kinds of compromises due to size are sometimes introduced and required.
As mentioned, vaccine trials place a much higher standard on safety concerns. As it says on the slide, and as I said previously: you're giving a vaccine to a hugely wider population than more traditional therapeutics. You're not just giving it to cancer patients, or people who already have the disease; you're giving it to everyone, to protect them against the hypothetical of being exposed to the disease. The trade-off is that, if lots of people are never going to be exposed to the disease, you'll want to make sure the vaccine is safe enough that even minor side effects don't outweigh the counterfactual of them never actually having needed the vaccine in the first place.
And basically, all of that feeds into some of the decisions that are required on the choice of endpoint and statistical methods in a vaccine trial.
There's been some debate in the literature, particularly recently in the context of the Covid-19 trials, so I won't go into it too much today. There are two levels at which you can think of vaccine endpoints traditionally: one is incidence, and one is severity. Are you trying to prevent someone getting the disease, full stop? In an ideal world, that's probably what you want: you want to kill the disease in the cradle, effectively, and remove its capacity to spread completely. But that's not always realistic. Sometimes a vaccine doesn't stop the disease in its tracks, doesn't destroy the disease, whatever analogy you want to use; it simply prevents the disease from manifesting in ways that lead to severe outcomes. And this is something that has been debated over the last year in terms of the Covid-19 trials.
Some people felt that the endpoint that was chosen, which was mostly symptomatic Covid-19 (so basically cough, fever, these common symptoms that we're now all familiar with for Covid-19), was chosen as the endpoint in these trials because it's the easiest one to count, effectively. But some argued, and you could see this from both directions, that it should really have been about hospitalizations or deaths, which is the thing people are most concerned about, and the reason the huge societal changes of the last year were implemented. But also, of course, Covid-19 is notoriously difficult to detect and has notoriously variable symptoms, with asymptomatic manifestation being hugely problematic. Some could argue, though I think this is much less common, that perhaps these cohorts should be tested in full, to find out the whole percentage of people who even had the disease.
Because, obviously, you could argue that asymptomatic people could still spread the virus.
That's one of the most dangerous things about the Covid-19 virus: it can spread before you show symptoms, or even if you never have any symptoms. That's why it's become so endemic and been so difficult to deal with.
I suppose there's also the option to use a composite endpoint, where you basically have some hypothesis for both of these, or some combination of these. We're not going to cover that today, but if you're interested in composite endpoints for vaccines, I'd be happy to cover that in a future webinar.
Just one small note: the endpoint that you choose for a vaccine trial can be quite varied. You could choose to treat it like a proportion: X percent of people in the unvaccinated cohort get the disease, Y percent in the vaccinated cohort get the disease, and hopefully, in the vaccinated cohort, that percentage is lower. There's also an event-driven approach, which is what we'll talk about in the context of the Covid-19 trials, so I won't cover it here. But you can also treat it like a count.
So you use something like a Poisson model or a negative binomial model to treat it as a count or rate: say, the expected incidence of disease per 100,000 people. But you can also treat it like a time-to-event or survival-type process, where you compare the times to disease for each person. In real terms, all of these can be shown to map onto each other fairly easily; they are basically different ways of thinking about the same problem. But the traditional rule for the sample size calculation specifically is that whatever modeling choice you made should then be reflected in the sample size calculation you do. That's basically the rule of thumb: if you're treating it like a proportion, use a proportion sample size calculation; if you're treating it like a count, use a count-model sample size calculation.
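To give a flavor of that rule of thumb on the count side, here is a minimal sketch of a generic normal-approximation sample size for comparing two Poisson rates. This is a textbook-style formula with illustrative numbers of my own choosing, not nQuery's implementation or any specific trial's calculation:

```python
import math
from statistics import NormalDist

def n_per_group_poisson(lam_ctrl, lam_trt, t=1.0, alpha=0.05, power=0.9):
    """Approximate sample size per group for comparing two Poisson rates
    observed over t time units (simple normal-approximation formula)."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    # Variance of the rate difference under a Poisson model is (lam1 + lam2) / (n * t)
    return math.ceil(z**2 * (lam_ctrl + lam_trt) / (t * (lam_ctrl - lam_trt)**2))

# Illustrative: 0.05 events/person-year in control vs 0.025 on treatment
n = n_per_group_poisson(0.05, 0.025)
```

The point is simply that a count-model calculation consumes rates rather than proportions; the equivalent proportion-based calculation for the same scenario would give a somewhat different number, which is why the calculation should match the planned analysis model.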
Now, a final point here: the large sample sizes, the long duration, and the issues around safety mean that adaptive design in vaccine trials is relatively rare. And I suppose this is another tease, since a characteristic point of the Covid-19 trials was that a relatively simple adaptive design was applied, due to differing priorities. Accuracy, especially around safety, is the priority in most vaccine trials; speed is a secondary concern to some extent. As I say, there is a debate about whether that has gone a little too far and led to under-investment in vaccines in general, but we'll leave that aside for now. As we've gone over, there are very good reasons to have that bias in favor of accuracy, and especially safety, over speed and economic concerns.
So, in terms of the sample size determination methods that you might use for a vaccine trial: one way, if you're doing a cohort design or a case-control design, is that you don't even use power. Instead, you just want a certain precision for your estimate of vaccine efficacy. This is very well explained in a canonical, very important paper by O'Neill (1988), which goes over the distribution of vaccine efficacy and the confidence intervals for it from case-control and cohort-type designs. There are power derivations based on the standard errors there, but for all intents and purposes you don't need to worry too much about them, because if you choose to power for vaccine efficacy, you're effectively powering for a two-proportion test, which is the third row here.
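To give a flavor of that precision-based view, here is a sketch of a Wald-type confidence interval for vaccine efficacy on the log relative-risk scale. This is a generic delta-method interval with hypothetical case counts, not necessarily O'Neill's exact cohort or case-control formula:

```python
import math
from statistics import NormalDist

def ve_confidence_interval(cases_vax, n_vax, cases_ctrl, n_ctrl, alpha=0.05):
    """Wald-type CI for vaccine efficacy VE = 1 - RR, built on the
    delta-method standard error of log(RR) for a cohort design."""
    rr = (cases_vax / n_vax) / (cases_ctrl / n_ctrl)
    se = math.sqrt(1 / cases_vax - 1 / n_vax + 1 / cases_ctrl - 1 / n_ctrl)
    z = NormalDist().inv_cdf(1 - alpha / 2)
    rr_lo = math.exp(math.log(rr) - z * se)
    rr_hi = math.exp(math.log(rr) + z * se)
    return 1 - rr_hi, 1 - rr_lo   # note the flip: higher RR means lower VE

# Hypothetical numbers: 8 cases among 15,000 vaccinated, 80 among 15,000 controls
ve_lo, ve_hi = ve_confidence_interval(8, 15000, 80, 15000)
```

Under a precision-driven design, you would size the trial so that the width of this interval, rather than the power of a test, meets some target.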
So, vaccine efficacy is simply one minus the risk ratio, which is just p2 divided by p1: the proportion in the vaccinated cohort divided by the proportion in the unvaccinated cohort, the proportion here being the proportion of people who get the disease endpoint, whatever that happens to be. So, in effect, any test that you can do for a risk ratio, you can do for vaccine efficacy; those are effectively exchangeable.
To that point, if you see a vaccine efficacy power calculation, if you look under the hood, it's probably just a rescaling of a traditional two-proportion sample size or power calculation. As mentioned, there are also count models for incidence, and endpoints for severity. There's a quite recent paper that goes over this issue; there are older papers too, so this isn't a brand-new area, but I think it's come up a bit more recently, and people are going back to count models like the Poisson or the negative binomial (NB here). Those models are quite common, especially, once again, in these very large countrywide or region-wide vaccine trials, where it makes sense to think in terms of rates, like X outcomes per 100,000 people, units like that. And that naturally brings you to count or incidence-rate models, which are effectively more or less the same thing. A Cox regression or other survival-type parameterization is also possible.
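As a rough illustration of that rescaling (my own sketch with hypothetical inputs, not any specific trial's calculation), powering for a target vaccine efficacy can be rephrased directly as a two-proportion sample size problem:

```python
import math
from statistics import NormalDist

def n_per_arm_for_ve(p_ctrl, ve, alpha=0.05, power=0.9):
    """Per-arm sample size to detect vaccine efficacy ve, rephrased as a
    two-proportion test of attack rates p_ctrl vs p_ctrl * (1 - ve)."""
    p_vax = p_ctrl * (1 - ve)          # VE = 1 - RR, so RR = p_vax / p_ctrl
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    var = p_ctrl * (1 - p_ctrl) + p_vax * (1 - p_vax)
    return math.ceil(z**2 * var / (p_ctrl - p_vax)**2)

# Hypothetical: 2% control attack rate, target VE of 50%, 90% power
n = n_per_arm_for_ve(0.02, 0.5)
```

Notice how small attack rates drive the per-arm sample size into the thousands, which is exactly the "event-driven" size problem described above.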
And then there are the cluster randomized designs, which can be for either counts or proportions. Really, cluster randomized trials basically build on top of these other approaches: the two-proportion test, or the two-group count-model tests using Poisson or negative binomial regression. In the most obvious formulation, they're just hierarchical extensions of those generalized linear models; a hierarchy of generalized linear models, effectively.
From a sample size calculation point of view, there are various papers; the Hayes and Bennett approach is the most commonly used in the context of vaccine trials, as far as I can tell.
And then, to circle back to the one I skipped, the second line here: the one-proportion test, the event-driven approach, which, as we'll see in the Covid-19 power calculations, is basically just a standard one-proportion sample size calculation. This is what we'll be using in the Covid-19 example, so I won't fully cover it here. But in very short terms: you condition on the people who have had an event, and then compare the proportion within that cohort who were vaccinated versus the proportion who were unvaccinated. We'll talk about that more when we get to Covid-19.
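In symbols, the general idea (my sketch of the standard conditioning argument, not any specific protocol's wording) is: with 1:1 randomization and equal follow-up, any given case falls in the vaccinated arm with probability RR / (1 + RR), so a hypothesis about vaccine efficacy becomes a one-proportion test on that split among the observed cases:

```python
def case_split_probability(ve):
    """P(a case is in the vaccine arm | a case occurred), assuming 1:1
    randomization and equal follow-up. Since VE = 1 - RR, this is
    RR / (1 + RR) = (1 - ve) / (2 - ve)."""
    rr = 1.0 - ve
    return rr / (1.0 + rr)

# A true VE of 60% means only ~28.6% of observed cases should be vaccinated;
# the null of VE = 0 corresponds to an even 50/50 split of cases.
pi = case_split_probability(0.6)
```

This is why the number of events, not the number of participants, is what drives the power of these event-driven designs.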
So, what I'm going to do now is a very simple introductory vaccine trial example: an influenza vaccine given to children in India. This was a very large trial, with 785 households. And I believe there were, on average, two children per household given the vaccine; so that's the sample size per cluster of two here, which is relatively small.
But in the case of cluster randomized trials, I suppose it's worth saying that the unit of randomization is the cluster, and that therefore the hypothesis is really about the cluster, and it's the cluster sample size that's the true, important sample size. If you read material by people like Stevenson, the degrees of freedom at the cluster level are the important degrees of freedom; in an ANOVA-type analysis, it would be at the cluster, in this case the household, level that the meat of the inference is made.
Effectively, they assumed a 5% rate of confirmed influenza attack in the control group, based on historical data and other data they had available, and that this would be halved by the usage of this particular vaccine. Obviously, they probably hoped it was better, but 50% was the lower threshold that they wanted to achieve, and they wanted 95% power to find that, given a coefficient of variation of 0.25, which is just a measure of how similar each cluster was.
So: how much would we expect being within the same household to correlate outcomes, compared to other households? How self-similar are children within a family, compared to all the other children that exist in the population? 0.25 is not that huge. I think the idea here is that, for this vaccine and this endpoint of influenza outcome, the effect of between-cluster variation was not likely to be that significant.
And the definition of the coefficient of variation is literally the standard deviation divided by the mean; so, not a very difficult parameter to get your head around.
OK, so if we go into nQuery, we can replicate this example very briefly. Now, this isn't the main example we're interested in today, so I won't spend a huge amount of time on it, but this is the table for a cluster randomized trial for two incidence rates, two counts, where you have completely randomized clusters. So, basically, half the families get the vaccine and half the families do not (they probably get the standard treatment). They wanted to power at the 5% significance level, two-sided, with a treatment group rate of 0.025 and a control group rate of 0.05.
Those are the rates converted onto the per-person scale. There are two ways you could do this calculation: I could use the rates per hundred directly and then set the time unit, which is this row here, to 100; or I could set the time unit to one and divide the rates by 100. So I'm just dividing 5 by 100 to get down to the expected rate for a single person. So, for any given person, we'd expect around 0.05 events.
A coefficient of variation of 0.25, and an observation time of one, because we've standardized and scaled, and time isn't really the issue here.
We set the number of clusters as the row to solve for, and the sample size per cluster to two, so we're only looking at two children per family. Then, with a power of 95%, we get 784, which is actually one less than the example. That could be due to various rounding issues; you'll often find with published sample size calculations that people have rounded to the nearest 5, 10, etcetera.
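For those curious, the calculation above can be sketched with a Hayes and Bennett style formula for comparing two incidence rates in a cluster randomized trial. This is my own reading of that style of formula; nQuery's exact implementation may differ slightly (for example, in whether it adds an extra cluster per arm to account for estimating the between-cluster variance):

```python
from statistics import NormalDist

def clusters_per_arm(lam_ctrl, lam_trt, person_time, cv, alpha=0.05, power=0.95):
    """Clusters per arm for comparing two incidence rates, Hayes & Bennett
    style: person_time is the total follow-up per cluster, and cv is the
    between-cluster coefficient of variation of the true rates."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    # Within-cluster Poisson variation plus between-cluster variation
    num = (lam_ctrl + lam_trt) / person_time + cv**2 * (lam_ctrl**2 + lam_trt**2)
    return z**2 * num / (lam_ctrl - lam_trt)**2

# 0.05 vs 0.025 events per person, 2 children (person-units) per household, cv = 0.25
c = clusters_per_arm(0.05, 0.025, person_time=2, cv=0.25)   # roughly 784 per arm
```

Rounding this up lands on the same ballpark as the nQuery result, and adding one cluster per arm for variance estimation would give the trial's 785.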
But you can also see here there's a little statement, in plain terms, of what we find here, and we can add a note in nQuery using this button here. Let's just say this is a vaccine trial. We can then copy and paste that out using the copy and paste buttons, or we can transfer it into the Notes panel by clicking Add Notes. You'll see that the statement is now there in the notes, and you can do various word-processor things there, with options available at the top. And if we wanted to see the effect of varying the coefficient of variation, we could do that with this plot tool here: just select the rows option, going from very low up to a relatively high value of one, in increments of 0.05.
So this shows, say, the effect that would have on the rate difference we'd be able to find; but let's see this for the number of clusters instead. So from 0.05 up to one, in increments of 0.05, plotted against the number of clusters per group. Depending on the coefficient of variation, the required number of clusters goes up to around 840 per group, versus this scenario, if the coefficient of variation went all the way up to one, which is the range of interest we considered.
But as I said, this isn't really the primary example we're going to do today. Hopefully it gives anyone who isn't familiar with nQuery a decent idea of how nQuery works: the general principle is that each column is a calculation, the yellow rows are the rows you can solve for, and the definition of each row is given on the left-hand side here.
You can also see that, as we select rows, this help panel gives information on what that row is, and some suggestions on usable values. OK, so we've covered an introduction to vaccine trials. Let's get into Covid-19 vaccine trials and some of the unique or different issues that came up in the context of these Phase III trials, which are obviously so famous now because of the great success that we've seen for them and their very quick approval. For those who are interested, I've put in the canonical paper for each of the four trials we're going to briefly cover or at least summarize today, as well as the Sputnik V trial, the Russian candidate vaccine.
There are various papers in the New England Journal of Medicine and The Lancet that cover the interim analysis results for each of those trials. I'm not going to cover them today, because we're focusing on the design and sample size aspects, but if you're interested, those are in the references at the end of this slide deck.
So, in terms of Covid-19 vaccine trials, it goes without saying that the Covid-19 pandemic represented a number of unique challenges. The number one thing is that this pandemic was quickly becoming a huge issue all around the globe, and within the Western world in particular, Europe and the United States and its neighbours, it was, for all intents and purposes, effectively endemic and never got rid of. So, effectively, vaccines presented the solution to the fact that it proved, for various reasons, to be impossible to stop it using non-therapeutic, non-pharmaceutical interventions.
So vaccine speed was a huge priority, and that had a pretty significant knock-on effect on the design choices made.
So, in effect, we wanted vaccines to save us, to a certain extent, over here in places like Ireland, where I live, and the rest of Europe and the United States. Obviously, we do know there were some countries that were very successful in controlling the virus using border controls and test-and-trace: countries like New Zealand and Australia, South Korea, Taiwan, and latterly China, after its initial failures. So those things do work. But in lots of parts of the world, ironically including much of what would generally be considered the developed world, those measures mostly fell apart, and obviously in some of the poorest regions of the world those approaches, for various reasons, have not been successful.
And vaccines basically became the default way we needed to get out of the Covid-19 pandemic, and are currently, hopefully, succeeding in doing so.
So I suppose, just to say, as I mentioned and alluded to earlier, symptomatic disease was the primary endpoint.
Probably because it was the easiest one to measure: it's what hospitals were seeing and what people can self-report, so it's relatively easy to find out about. Did they get a cough, did they get a fever, did they get some other symptom? And then, if they got a symptom, give them a PCR gold-standard-type test and verify that they have Covid-19 and not some other disease. That was basically the endpoint of interest in these trials.
And, up front, the FDA, and I think effectively the other regulatory agencies agreed with this, wanted to prove that the vaccine efficacy was greater than 30%, so the null hypothesis was that it was less than or equal to 30% vaccine efficacy. Remember, vaccine efficacy is one minus the risk ratio, so this says the risk ratio was no worse than 0.7, where the risk ratio is the ratio of the proportion who get the disease in the vaccine group compared to the unvaccinated group.
One small note: that's actually not an inequality hypothesis. We're not testing for vaccine efficacy equal to zero; we're not testing risk ratio equal to one.
Which means that this is what we'd call super-superiority, or superiority by a margin, which is basically, for all intents and purposes, a non-inferiority-type hypothesis.
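To make that hypothesis concrete, here's a minimal sketch in Python (my own illustration, with made-up attack rates, not figures from any of the trials):

```python
# Vaccine efficacy (VE) is one minus the risk ratio.
# Illustrative (made-up) six-month attack rates:
p_vaccine = 0.003   # proportion infected in the vaccine group
p_placebo = 0.0075  # proportion infected in the placebo group

risk_ratio = p_vaccine / p_placebo  # 0.4
ve = 1 - risk_ratio                 # 0.6

# Super-superiority (superiority-by-a-margin) hypothesis:
#   H0: VE <= 0.30   vs   H1: VE > 0.30
# i.e. we must show the risk ratio is below 0.70, not just below 1.
margin = 0.30
print(f"VE = {ve:.2f}, exceeds 30% margin: {ve > margin}")
```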
So if you're thinking in terms of sample size calculations and you want to model the sample size directly, as we'll see, there's kind of a hack, or not really a hack, but a way of thinking about this as an inequality-type hypothesis, your traditional superiority hypothesis. You can do that, but if you wanted to model the sample size directly as either a two-proportion problem or a two-rate problem, then it would be a super-superiority or superiority-by-a-margin problem.
But there was also one other condition, which was that the vaccine efficacy point estimate needed to be greater than 50%, which, depending on who you talk to, some people would consider to be arbitrary.
Some people think point estimates are very much overrated, and that even the usage of point estimates is what leads to a lot of confusion in statistical inference and understanding. But there was a relatively ad hoc, back-of-the-envelope, heuristic way of thinking about why this was chosen, which is basically that
we kind of knew, or believed, that the R0, the reproduction rate, of Covid-19 was around two or greater than two. So, with a vaccine efficacy of 50%, that gets it back down to around one, assuming everyone was vaccinated. Obviously, that's an incredibly huge simplification of the epidemiological reality of vaccination, with people not taking the vaccine, and the way that the R0 number really hides that it's actually certain people who spread it a huge amount while some people barely spread it at all.
If you think of almost a network-type graph approach to how Covid spreads, there are some nodes that have a huge number of edges coming out of them,
and some where basically nothing happens, depending on their lifestyle and some intrinsic medical facts about that person. But this threshold was chosen as something that kind of makes sense and is very easy to communicate to non-statisticians and to other people: why you'd want it over 50%.
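As a toy illustration of that back-of-the-envelope heuristic (my own sketch of the homogeneous-mixing simplification just described, not anything from the trials):

```python
# Crude homogeneous-mixing heuristic (a big simplification, as noted above):
#   R_effective ≈ R0 * (1 - VE * coverage)
def r_effective(r0: float, ve: float, coverage: float) -> float:
    """Effective reproduction number under an everyone-mixes, leaky-vaccine model."""
    return r0 * (1 - ve * coverage)

print(r_effective(2.0, 0.5, 1.0))  # 1.0: 50% VE with full coverage halves an R0 of 2
print(r_effective(2.0, 0.5, 0.7))  # ~1.3: with only 70% uptake, still above 1
```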
Thankfully, this ended up being fairly irrelevant, because the vaccines that gained approval were well above that threshold.
We're talking around 95% for the mRNA vaccines from Pfizer and Moderna, and I think around 66% for the Johnson &amp; Johnson vaccine.
And then AstraZeneca, well, AstraZeneca is a bit more complicated, because there were multiple trials combined and lots of debate over that, which we won't go into today.
But if you're interested in that, Stephen Senn has a wonderful set of blogs covering the AstraZeneca results, treating it like a stratified-type trial or design.
He's not an expert in vaccines; he puts that out at the front of every single blog. But I found it very helpful in terms of understanding the design issues that happened there. And that's all to say: these vaccine trials, all of them, including AstraZeneca, were very well designed. But because there was so much scrutiny and so many eyes on this particular issue, even the most minor of deviations were likely to generate talk, and so it proved with AZ. And of course, with the recent discussion of some of the rare side effects, the blood clotting issue, that's another example of the focus on safety issues we talked about earlier and why they're so important for vaccines: these issues will tend to generate a lot of interest, just due to the nature of how much importance is placed on these trials and on these vaccines.
But the issue I want to focus on today is more this interim analysis aspect, this adaptive design aspect. Unlike a traditional vaccine trial, which takes a "you're done when you're done" type of attitude, taking 5 to 10 years with thousands to tens of thousands of people, in this case, because accelerated approval was a prime objective, both for us, the public, and for the sponsors and companies designing these vaccines, they moved to interim analyses, which are well understood and well accepted by the FDA and EMA, using various different approaches. So this was brought to an area where even these relatively simple types of adaptive design are traditionally relatively innovative.
And there were various different approaches available within the family of sequential designs. Group sequential designs are kind of the classic. There's also sequential probability ratio testing, which goes all the way back to Wald in the forties, which is basically what you use if you want to do lots of sequential testing.
In this case, I think they did it every two weeks. And then we'll talk briefly about the Bayesian sequential analysis on the next slide, because that was used by Pfizer and it's kind of interesting in itself.
Sample size determination could be based directly on the models in some of these cases, but the calculation we're going to focus on today is the event-driven sample size determination, because I think it's one that's kind of interesting and will hopefully give you an idea of where the sample size calculation came from.
It also allows us to talk about some of the parameters that ended up being important in some of these trials: the parameterization of the effect sizes, the estimates involved, and the idea of endpoints and how those were thought of by the vaccine trials.
And it obviously goes without saying that this is presented with the benefit of hindsight, which is that this interim analysis approach worked. We now have multiple vaccines approved and going into arms, leading to significant reductions in the severity and spread of Covid-19 in many countries already, such as Israel, and now, more latterly, the US and the UK. We're seeing spectacular effects in the cohorts that have been covered, like medical personnel and the very elderly, in most of the developed countries. So, given that this worked, we can take a lot less jaundiced view of all of this, I suppose.
Obviously, that's with the caveat that, hopefully, we don't end up looking wrong in the ensuing few years.
But based on everything that we know, these trials and these vaccines are truly a huge success for medical science.
So what I've given on this slide is a summary of the design aspects of the four, I suppose, most widely cited trials. These four were chosen primarily because all four of them published their protocols before their results were published; these protocols all appeared back last autumn. Access to this information is hugely valuable; historically, protocols for trials like these were often not published. It's been a huge boon, for me personally, in trying to find examples, for example, ..., but also, for the general scientific community, greater access to the protocols for these trials is just a huge thing that could not be more valuable, and in this case it helped create confidence in this process and in getting vaccines approved on a truly unprecedented scale.
And timeline: the shortest vaccine development previously was, like, three years or five years, something like that. And obviously these have been approved in less than a year. Truly, truly spectacular.
You can see here that the sample sizes are all in the tens of thousands. It's worth noting that all of the sample sizes were less about "we need exactly this many people"; basically, similar to a survival-type sample size calculation, most of these probably calculated the number of events first,
and then said, OK, how many people do we need in the time we have available to get that number of events, based on our assumed attack rates, which are around 0.8%, 0.75%, or 0.6% on a six-month basis. Because obviously this was a case where no resource was spared; kitchen sink, et cetera.
Obviously, big sample sizes: let's get this done quickly, no faffing around in this case. And you can see that the target number of events for each of these trials was all around 150 to 160 or so. Really, the higher number that Pfizer had reflects their more aggressive sequential testing; they boosted it a bit due to the fact that they had five looks versus these other ones, which had one or two in the case of AstraZeneca and Moderna. And then for Janssen, the Johnson &amp; Johnson trial, what they did was sequential probability ratio testing on a weekly basis after 20 events. Obviously, waiting for that could take a while to happen, but the SPRT has certain characteristics that make it a bit more efficient, I suppose.
You can see in this case that there was a variety of different methods proposed despite the similar sample sizes. AstraZeneca treated it as a Poisson regression, I believe with age as a covariate. The Janssen and Pfizer trials treated it like a binomial, single-proportion problem, which we'll talk about on the next slide in terms of the sample size calculations. Moderna treated it like a survival problem using a ... model.
And then, as mentioned, group sequential analysis was used by .... And then you had this Bayesian sequential, for lack of a better term, approach from Pfizer. But in reality, the operating characteristics of this Bayesian design were shown to be effectively very similar to an O'Brien-Fleming-type design in terms of when the trial would stop early, with the stopping boundaries effectively being more or less O'Brien-Fleming-type stopping boundaries, as if you were talking about a traditional group sequential design. And it was proven via simulation to have the appropriate type I error rate. So, even though there are some interesting Bayesian aspects there, which I'll talk about a little in a moment, there's a very good blog on this, known as "Credible Confidence", from that Stephen Senn series; I provide the link to the full series of blogs on the very last slide of the slide deck.
That will give you some discussion of how the design was constructed. But in basic terms, it was effectively just a similar design with a Bayesian flavour, in the sense that, instead of spending alpha, they talked about the operating characteristics in terms of the posterior probability that the vaccine efficacy was above the threshold value of 30%, or 0.3, and then trying to show that being below that was very, very unlikely in their trial: you just want a really high chance of being able to reject that 0.3 value.
The others used the more traditional approach, and you can see the alpha that was spent at each of those looks there. You can see that AstraZeneca had a single interim analysis, but Moderna had two and Pfizer had five looks. And as far as I know, based on a presentation by Kert Viele from Berry Consultants on this same topic, a very, very good webinar, there's a link to it at the end of this slide deck as well, definitely recommended, there had basically been no vaccine trials with more than one interim analysis based on what he found. So this was unprecedented: to have this level of interim scrutiny, being this aggressive in trying to find signals early. And obviously we know that, in the case of Pfizer and Moderna in particular, there was a very strong signal at those interim analyses. You can see here the assumed placebo rate was, on a six-month basis, less than 1%.
And you can see there was an exclusion at baseline of anyone who had antibodies for Covid-19, so anyone who'd had Covid before, because we assume they already had a greater degree of protection. The assumptions for that varied; they had different assumptions they used to adjust, somewhat ad hoc, their sample size calculation.
So, in effect, if you take this 150 and try to scale it up using these two assumptions, that's how they got the sample size, more or less. You can do a back-of-the-envelope calculation; Kert Viele, in that presentation for Berry Consultants, shows this on one of the slides. It's very, very nice; I'm not going to cover it today, because we're going to focus on the power aspect.
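As a rough sketch of that kind of scaling (my own illustrative numbers, not Kert Viele's actual slide): an event-driven design needing around 150 cases, with a six-month placebo attack rate of 0.75%, lands in the right ballpark for these roughly 30,000-person trials:

```python
# Rough back-of-the-envelope enrolment from an event-driven design (illustrative).
import math

target_events = 150   # total confirmed cases needed (see the calculation below)
pi_placebo = 0.0075   # assumed 6-month placebo attack rate (0.75%)
ve_alt = 0.6          # vaccine efficacy under the alternative

# With 1:1 randomization, the blended attack rate across both arms:
pi_blended = (pi_placebo + (1 - ve_alt) * pi_placebo) / 2  # = 0.7 * pi_placebo

n_total = math.ceil(target_events / pi_blended)
print(n_total)  # 28572: the right order of magnitude for these ~30,000-person trials
```

Note this ignores the dropouts and baseline-seropositive exclusions that the trials padded for, which is part of why the real enrolments were a bit larger.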
So, in terms of the worked example for sample size determination, as I've talked about, it's an event-driven, proportion-based approach to calculating where they got the around-150 sample size from.
On this slide, basically, we do a conversion from the initial vaccine efficacy scale, where the null hypothesis is that it's less than or equal to 0.3, or 30%, and where we're going to power for a vaccine efficacy of 0.6, or 60%,
depending on how you want to parameterize your vaccine efficacy.
And basically you have these definitions here: what's the probability of having symptomatic Covid-19, given you had placebo?
What's the probability, or what proportion of people have Covid-19, given that they were in the vaccine group? And then you can show, for each of the hypotheses, H0 or H1, what the probability is that you are a vaccine case, given that you are a case; that's what this equation here is.
In real terms, what this means is the proportion of people who have a case in the vaccine group, divided by the proportion of people who have a case in the vaccine and placebo groups in total. So this basically turns into the proportion in the vaccine group divided by the proportion in the vaccine group plus the proportion in the placebo group.
And when you get to the sample size calculation, we can convert this into a proportion that you'd expect. So, basically, assume a fixed number of events, call it N, and then V is your number of vaccine cases. Under the null hypothesis, we would expect a certain percentage of that cohort of cases to be vaccine cases, and under the alternative hypothesis we would expect a different proportion of the cases to be vaccinated. So imagine you have N people who have been found to have Covid-19: what proportion of that cohort is vaccinated, and what proportion is unvaccinated? Obviously, under the null hypothesis we'd expect that cohort to have more vaccinated people in it, and under the alternative hypothesis, H1, we'd expect fewer vaccinated people in the cohort of people who happen to have Covid-19.
Basically, by conditioning on the number of cases, being event-driven, we can increase the number of events required until the two proportions we'd expect under the alternative and the null are powered for sufficiently.
In basic terms, what that boils down to is that, under the null hypothesis, with 30% vaccine efficacy, we'd expect around 41%, or 0.4118, of the people in this cohort of people who have had the disease to be in the vaccinated group. But under the alternative hypothesis, where the vaccine efficacy has gone up all the way to 0.6, we'd expect a much lower proportion of that cohort of people who have had the disease to be in the vaccine group: in this case less than 30%, around 28.6%.
So that's basically reducing this problem, assuming 1:1 randomization, to a one-proportion problem. Under the null we expect around 41% of the people in the cohort of people who've had the disease to be vaccinated, under a vaccine efficacy of 30%; under the alternative of 60%, we'd expect that to be reduced all the way down to around 28.6%. And then we basically just treat that as if it were a one-proportion problem, and do a power analysis as if it was a one-proportion problem from the off.
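That reduction can be written in a couple of lines; a minimal sketch of the conversion from vaccine efficacy to the one-proportion scale, using the (1 − VE)/(2 − VE) relationship described above:

```python
# Conditional probability that a confirmed case is in the vaccine group,
# given 1:1 randomization:
#   p = pi_vaccine / (pi_vaccine + pi_placebo) = (1 - VE) / (2 - VE)
def case_proportion_vaccinated(ve: float) -> float:
    """Expected proportion of confirmed cases that fall in the vaccine arm."""
    return (1 - ve) / (2 - ve)

p_null = case_proportion_vaccinated(0.3)  # H0: VE = 30%  ->  0.7/1.7
p_alt = case_proportion_vaccinated(0.6)   # H1: VE = 60%  ->  0.4/1.4
print(round(p_null, 4), round(p_alt, 4))  # 0.4118 0.2857
```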
I think one thing worth noting about this calculation is that, for the purposes of the power calculation, this underlying attack rate, which for this example I set to 0.08, actually isn't relevant.
Because if you imagine this equation here, if I divide the numerator and the denominator by pi placebo, the pi placebo terms cancel out, and it becomes (1 minus VE) over ((1 minus VE) plus 1). Dividing the top and the bottom by pi placebo is obviously something we're trivially allowed to do, because those terms become one, and this becomes (1 minus VE) over (2 minus VE).
It's really the vaccine efficacy that's the effect size of interest for our calculations. So we can actually remove the attack rate and just have (1 minus 0.3) divided by (2 minus 0.3), and that would give the exact same thing.
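Here's a quick numerical check of that cancellation (my own sketch): whatever placebo attack rate you plug in, the conditional proportion depends only on the vaccine efficacy:

```python
# The placebo attack rate cancels: pi_v / (pi_v + pi_pl), with pi_v = (1 - VE) * pi_pl,
# equals (1 - VE) / (2 - VE) for ANY attack rate, so the rate drops out of the power calc.
ve = 0.3
for pi_placebo in (0.001, 0.008, 0.05):
    pi_vaccine = (1 - ve) * pi_placebo
    p = pi_vaccine / (pi_vaccine + pi_placebo)
    assert abs(p - (1 - ve) / (2 - ve)) < 1e-12  # same value every time, ~0.4118
print("conditional proportion is independent of the attack rate")
```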
And it's similar under the alternative: 0.4 divided by 1.4. The reason this is shown at all is basically for two reasons. One is that you can do the, I suppose, almost Bayesian-type thing of deriving this from first principles; but secondly,
in effect, because this reduces down to the proportion of people in the vaccine group divided by the proportion in the vaccine group plus the proportion in the placebo group, that's actually the parameter that is used in many of the protocols. So, for example, probably most importantly, in the ... protocol, this parameter, basically pi vaccine divided by the sum of pi vaccine plus pi placebo, is known as theta, and that is the parameter that the prior they use for their Bayesian analysis is based upon.
And that is basically a beta prior with the alpha parameter equal to 0.700102 and the beta parameter equal to 1.
Which gives you a distribution that kind of looks like this, which basically says there's a high enough chance that your vaccine is not useful, but the mean of it is equivalent to a vaccine efficacy of 30%.
"Credible Confidence" by Stephen Senn goes into a lot more detail.
But effectively, the prior for this parameter, which maps back to a more real-world interpretation via the vaccine efficacy rather than just the raw theta, is such that it's being quite pessimistic, in the sense that a lot of the probability mass of the prior is on the vaccine not being effective. And the mean corresponds to a vaccine efficacy of 0.3, but not because it's a peaked distribution; it's a monotonically decreasing distribution.
You can ask whether the mean is even a meaningful summary here, insofar as any mean means anything for a shape like that, but, in effect, that's how it was chosen: to have the mean be equivalent to the vaccine efficacy of 0.3 here.
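As a quick check of that (using the Beta(0.700102, 1) prior parameters given in the Pfizer protocol), the prior mean of theta maps back to a vaccine efficacy of almost exactly 30%:

```python
# Pfizer's prior on theta = (1 - VE) / (2 - VE) was Beta(a, b) with:
a, b = 0.700102, 1.0

theta_mean = a / (a + b)  # mean of a Beta(a, b) distribution
# Invert theta = (1 - VE) / (2 - VE)  =>  VE = (1 - 2*theta) / (1 - theta)
ve_implied = (1 - 2 * theta_mean) / (1 - theta_mean)

print(round(theta_mean, 4))  # 0.4118
print(round(ve_implied, 3))  # 0.3: the pessimistic prior is centred on VE = 30%
```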
So hopefully you get the idea; this is discussed in the various protocols.
It's also discussed in the Stephen Senn blogs. And there was one additional webinar I mentioned, from PSI, the Statisticians in the Pharmaceutical Industry group, based primarily in the UK, which did a great webinar on vaccine trials, with a presentation from the NIH, or ..., on this topic, from which a lot of this is derived.
So, if you're interested in hearing more about that and the other design aspects of vaccine trials, I'd definitely recommend that; it's also on the very last slide of the slide deck. So there are lots of additional things to watch based on this webinar. Hopefully, at the very least, this is a useful starting point leading to lots of other people who know far more on this topic than I do; obviously, I'm not a specialist in vaccine trials, I just know a lot about sample size in general. That's kind of my specialty from working on nQuery.
And, as mentioned here, I'll talk about this in a moment, but first let's just illustrate the point we were making about the sample size calculations. We're going to take the example there; these were all powered at the 0.025 one-sided significance level, because it was a one-sided hypothesis.
We're not really interested in showing that our vaccine was worse in the other direction, unsurprisingly. And as you saw there, we calculate around 0.4118 under the null hypothesis: in the cohort of people who end up having the disease in our trial, under the null hypothesis, we expect around 41% of those people to be from the vaccine group. But under the alternative hypothesis, where our vaccine works at a vaccine efficacy of 60%, we expect 28.57% of the cases to be people who'd been in the vaccine group. We power for 90%, and we get around 150.
So this calculation is why they're all nearly around 150, and of course you can extend this to group sequential design and so on. But assuming you're not doing very aggressive efficacy spending, which they weren't, this is kind of the canonical sample size, from which all the other ones just vary to some extent.
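The normal-approximation version of that one-proportion calculation can be sketched in a few lines (a textbook formula rather than nQuery's exact method, so small differences from the table of tests discussed next are expected):

```python
# Required number of events for a one-sample test of a proportion
# (normal approximation, one-sided alpha = 0.025, power = 90%).
import math

p0 = 0.7 / 1.7      # ~0.4118: proportion of cases vaccinated under H0 (VE = 30%)
p1 = 0.4 / 1.4      # ~0.2857: under H1 (VE = 60%)
z_alpha = 1.959964  # Phi^-1(0.975)
z_beta = 1.281552   # Phi^-1(0.90)

n = ((z_alpha * math.sqrt(p0 * (1 - p0)) + z_beta * math.sqrt(p1 * (1 - p1))) ** 2
     / (p0 - p1) ** 2)
print(math.ceil(n))  # 150 confirmed cases, matching the ~150 events in the protocols
```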
It's worth noting that, of course, you have lots of options when doing one-proportion tests, from exact tests to various Z tests. This table here is a newer table, which has more options, ranging from exact tests to Z tests using various definitions of the standard error, and whether or not to include a continuity correction. I prepared this earlier using what's called Specify Multiple Factors, so we basically just set these options so that we get all the different tests available in one go.
We use both exact binomial enumeration, which is basically just counting every single combination that could happen, saying whether each is significant, and then saying how probable each is under the assumptions of the null and alternative, so you can just sum those probabilities; and the traditional normal approximation approach, where you just have an algebraic equation to get this. You can see that the sample sizes range from around 157 down to around 143, depending on the assumptions you make and which particular equation you happen to use.
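A simplified sketch of the exact-enumeration idea (my own illustration, not nQuery's implementation): find the rejection region, the small counts of vaccinated cases, under the null, then sum binomial probabilities under the alternative:

```python
# Exact power for the one-proportion test by binomial enumeration (illustrative).
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    return comb(n, k) * p**k * (1 - p)**(n - k)

def exact_power(n: int, p0: float, p1: float, alpha: float = 0.025) -> float:
    # Rejection region: few vaccinated cases means high efficacy, so reject for
    # small counts. Find the largest c with P(X <= c | p0) <= alpha:
    c, cum = -1, 0.0
    while cum + binom_pmf(c + 1, n, p0) <= alpha:
        c += 1
        cum += binom_pmf(c, n, p0)
    # Power: probability of landing in the rejection region under the alternative.
    return sum(binom_pmf(k, n, p1) for k in range(c + 1))

power = exact_power(150, 0.7 / 1.7, 0.4 / 1.4)
print(round(power, 3))  # roughly 0.88: the exact test is a bit conservative
```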
It's not really important today, but just to note that, if you see some variation in these numbers, this could be another source of difference beyond just the sequential testing.
I suppose the other slide I wanted to show here is just the efficacy bounds that were used, on the vaccine efficacy scale, for each of these different trials.
You can see that each of them has bounds such that at the final analysis, at an information fraction equal to one, having been powered sufficiently, the vaccine efficacy would be above 50%. But you can see that, for the various trials, you would have required a vaccine efficacy above 80% for the Pfizer trial after around a fifth of the information, a fifth of the required events, had happened. Of course, it ended up being above 90%, so that's why it passed. And you can see that the other trials are all very similar; they're all very conservative.
They're all basically around an O'Brien-Fleming-type design. As I've said a lot in the group sequential webinars, efficacy bounds tend to be conservative because we want to minimize the area of controversy. Basically, extraordinary claims, like stopping your trial a fifth of the way through its total potential sample size, require extraordinary evidence, and vaccine efficacy above 80% would be considered pretty extraordinary evidence. Thankfully, that is the evidence we got.
So, thank God we live in this particular world now. And I said this was based on a group sequential design, but the Pfizer trial actually wasn't a group sequential design; it was a Bayesian design that happened to be created, probably deliberately, to replicate the types of boundaries and operating characteristics of a group sequential design. The actual efficacy criterion they had was a Bayesian one: effectively, showing a high posterior probability of being above 30% efficacy.
And if we go into nQuery and we do a group sequential design with the same inputs,
we'll take the Pfizer example, which was a five-look design; they had four interim analyses. I'm going to cheat a bit here by setting the maximum number of events to 164, and then setting the information times by scaling against 164: just divide each event count by 164 to get the 0-to-1 scale from the screenshot I showed there. I'm cheating a bit because I know 164 is the actual maximum information they had, so I can set the information times here to the actual number of events at each look from the original design, which were as follows, if you want to see where that came from.
Just see this final column here, with the number of events at which they did their interim analyses; that's what I'm entering into this table here. By default we have an O'Brien-Fleming spending function. There were actually futility boundaries as well in their trial, but we're going to ignore those for now.
That has a relatively minor effect, because the effect of futility bounds on power is relatively minor when those bounds are relatively conservative. And we power for 90%.
We see that we get a sample size of 163 events, which is obviously within one of what they had. There are various reasons for the difference: we're not deriving this from the Bayesian predictive approach, we're using the analogous group sequential design, and there may be other issues at play here, like what the actual variance assumption was, the variance parameterization, various other things. But you can see that, even taking a very simple approximation of their design using a traditional group sequential design, you end up with something which is, for sample size purposes, very similar; and if you convert these boundaries to vaccine efficacy, very, very similar as well.
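To give a feel for why those early bounds are so demanding, here's a sketch of the Lan-DeMets O'Brien-Fleming-type alpha-spending function (a standard formula; the 164-event look schedule is Pfizer's, but the code itself is just my illustration):

```python
# O'Brien-Fleming-type alpha-spending (Lan-DeMets):
#   alpha(t) = 2 * (1 - Phi(z / sqrt(t))), with z = Phi^-1(1 - alpha/2)
from math import erf, sqrt

Z = 2.241403  # Phi^-1(1 - 0.025/2), for one-sided alpha = 0.025

def phi(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def obf_spend(t: float) -> float:
    """Cumulative one-sided alpha spent at information fraction t."""
    return 2 * (1 - phi(Z / sqrt(t)))

for events in (32, 62, 92, 120, 164):  # Pfizer's look schedule
    t = events / 164
    print(f"t = {t:.3f}: cumulative alpha spent = {obf_spend(t):.6f}")
# Almost no alpha is spent at the early looks; nearly all of the 0.025 is
# saved for the final analysis, which is why the early efficacy bounds are so high.
```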
So I think, when it comes to phase three trials, Bayesian analysis in general is slowly coming to the fore. It's so prevalent in phase one, phase two, and in the wider community that inevitably a lot of it is happening. But the 2.5% one-sided type I error is still a totemic element of phase three trial design, and a consequence is that what you often get with these Bayesian designs is something that ends up feeling a lot like your traditional group sequential or other types of frequentist design. You could have a big debate about whether that's good or not. But personally speaking, I think having innovation such as this, in the context of these pivotal trials that everyone's going to read and everyone's going to care about,
is certainly useful. The incentives in clinical trials are generally conservative, for obvious reasons, and there's good reason for that kind of Burkean conservatism of sticking with what worked before. But there is a need to bring in innovation, keep people interested, and make sure we're not throwing too much away by failing to consider the other approaches that exist out there.
So, yeah, hopefully now you've got an introduction to vaccine trials and to the Covid-19 vaccine trials in general. As I say, at the end of this slide deck there are various webinars — the PSI webinar, the Berry Consultants webinar — and various blogs from Stephen Senn, as well as the four protocols for these four trials. I would highly recommend all of these resources; the hyperlinks are available in the slide deck, and, to be honest, many people do a far better job than me at explaining this. But hopefully you've had a taste of what's in these webinars, as well as a general overview of this area. If you want to know more, these will provide the next step in exploring it even deeper.
So, in terms of a conclusion: vaccine development is a complex and unique area, and the wider target population affects both the size of the trial and the concerns, such as safety, that are emphasized in trial design. There are some distinct ideas about the types of models and parameterizations that make sense, but the important point to remember is that vaccine efficacy equals one minus your risk ratio — that is, 1 minus p2 divided by p1 (or pi2 divided by pi1) — or one minus your hazard ratio if you're looking at a count-type model. Count and proportion models are probably the most common, but survival-type models can be used as well. Covid-19 presented unique challenges, both practical, which I've already gone through, and in the statistical and design choices — but also opportunities to display the value of things like group sequential design and adaptive design, even a relatively simple adaptive design such as a group sequential design. And obviously, with the focus on speed due to the huge costs of Covid-19, those interim analyses ended up being incredibly important.
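As a quick worked example of the VE = 1 − RR identity above, here is a minimal sketch. The case counts (8 vaccine versus 162 placebo) are roughly the figures reported for the Pfizer/BioNTech primary analysis, used here purely for illustration, and equal follow-up in the two arms is assumed.

```python
def vaccine_efficacy(cases_vax, n_vax, cases_placebo, n_placebo):
    """VE = 1 - risk ratio = 1 - (p2 / p1), where p2 is the attack
    rate in the vaccine arm and p1 the attack rate in the placebo arm."""
    p2 = cases_vax / n_vax
    p1 = cases_placebo / n_placebo
    return 1.0 - p2 / p1

# Roughly the Pfizer/BioNTech primary-analysis case split (8 vs 162),
# with illustrative, approximately equal arm sizes.
ve = vaccine_efficacy(8, 18_198, 162, 18_325)
print(f"VE = {ve:.1%}")  # about 95%
```

With a near 1:1 case split across nearly equal arms, the risk ratio is driven almost entirely by the ratio of case counts, which is why the case-split parameterization used in several of these trials works so neatly.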
I don't think any interim analyses have ever been quite so anticipated and quite so celebrated. As someone who deals a lot with interim analyses, suddenly finding the topic so central to the entire world was an interesting thing to happen. Of course, the success of those trial designs — and these were very well designed trials overall — is a great credit to the people who designed them, operated them, and made them happen so quickly. It was helped, unfortunately, to some extent, by failures at the more political level: these trials finished so quickly because, we should sadly remember, they relied on a lot of people getting the disease who perhaps didn't need to, given some of the failures we saw at the policy level.
But anyway, I won't go into the politics of that. The success of the vaccines hopefully shows the value of this type of approach, so hopefully next time we'll be even better prepared to get these types of solutions out there as quickly as possible.
So, I think we're pretty much out of time — apologies if I've run slightly over; I think I'm mostly on time here. But as I mentioned at the very start, if you either don't have nQuery, or have a license that doesn't have the tier you want — adaptive design, for example — feel free to try nQuery in its totality at ... dot com forward slash trial. If you just give us your email alone, you can get access to a 14-day free trial: no credit card, no commitment. It runs in a VM within your browser, and you can try everything for 14 days and see if the features of nQuery interest you, and whether you find it to be good software for sample size determination and trial design.
Finally, I just want to say thank you so much for attending today. There are a few questions, which I'll take a couple of moments for — I certainly don't think I have enough time to go through them all — but if you have any further questions afterwards, feel free to email info at ... dot com and I'll make sure to get back to you. And if you want access to previous webinars, tutorials, and worked examples, you can find them at ...
dot com. Just to mention here that the references for general vaccine trials and Covid-19 vaccine trials are available as well, alongside the resources: these webinars, these blogs, and these protocols.
Let's take a moment to look at the questions, and I'll be back in a moment.
OK, there were some questions around explaining the Pfizer Bayesian approach. As I said, you could do a whole webinar on that choice alone, and there's some very interesting material — I would very much recommend this series of blogs by Stephen Senn, and, in particular, this one here (you can see how many tabs I tend to keep open at any time).
I apologize to anyone who is triggered by that.
This is a really in-depth explanation from Stephen Senn — he says he's not an expert in vaccine trials specifically, but he has over 40 years of very successful experience in clinical trials. He talks in detail about this prior: you can see here on this line that the first parameter is 0.7 and the other parameter is equal to 1, which gives this distribution here.
And then you can see the prior for this parameter, theta, which is defined in terms of the proportion at risk under the vaccine and the proportion at risk under the placebo.
Which, if we go back to our slide deck, maps onto this. Obviously, if we rearrange before multiplying by vaccine efficacy, we get out the relevant vaccine proportion. So the blog goes into a lot more detail on it — and he's always quite skeptical about everything.
But some of it is about how he thinks this prior came about: basically, it was chosen so that the mean corresponds to a vaccine efficacy of 0.3, even though the mean isn't necessarily that meaningful for a beta distribution. It's a very interesting topic.
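Senn's point about the prior can be checked numerically. A minimal stdlib sketch, assuming the Beta(0.700102, 1) prior on theta from the Pfizer protocol, the mapping VE = 1 − theta/(1 − theta) that holds under 1:1 randomization with equal follow-up, the published case split of 8 versus 162 as illustrative data, and the protocol's 98.6% posterior-probability success threshold:

```python
import random

# Pfizer protocol prior on theta, the probability that a confirmed
# case came from the vaccine arm: theta ~ Beta(0.700102, 1).
a, b = 0.700102, 1.0

# Under 1:1 randomization with equal follow-up, VE = 1 - theta/(1 - theta),
# so the prior mean of theta pins down an implied "central" VE.
prior_mean = a / (a + b)
ve_at_mean = 1.0 - prior_mean / (1.0 - prior_mean)
print(f"prior mean theta = {prior_mean:.4f} -> VE = {ve_at_mean:.3f}")

# Beta-binomial conjugacy: with 8 vaccine cases and 162 placebo cases
# (illustrative split), the posterior is Beta(a + 8, b + 162).
post_a, post_b = a + 8, b + 162

# Success criterion: P(VE > 30% | data) > 0.986, i.e. P(theta < 0.7/1.7)
# under the posterior.  Monte Carlo with the stdlib beta sampler:
random.seed(1)
draws = [random.betavariate(post_a, post_b) for _ in range(100_000)]
p_success = sum(d < 0.7 / 1.7 for d in draws) / len(draws)
print(f"P(VE > 30% | data) = {p_success:.4f}")
```

The first print confirms Senn's observation: the prior's mean sits exactly at a vaccine efficacy of 30%, the null value, whatever one thinks of the mean as a summary of a Beta(0.7, 1). With a case split as lopsided as 8 versus 162, the posterior probability of exceeding 30% efficacy is effectively 1.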
And obviously, if you want to get into the full details of it, the best place to go is the actual trial protocol from Pfizer itself — ah, it looks like something happened there.
So I'm just going to exit out of that. I wasn't able to find in time whether ... published the protocol fully again, but there's a Wayback Machine link here to where they originally posted it when they published it back in the autumn.
There are a few other questions here, but I don't really have time to get to them today — I think I've kept you all long enough. I just want to thank you once again for attending today's webinar. I hope there was something interesting here, and at the very least there should be some useful links to go on and learn even more about this topic. For anyone whose question I haven't answered, I'll reply, probably by tomorrow, to whatever queries you've had. And as I say, if you have any other questions, feel free to email info at ... dot com, or go to our website, ... dot com, to learn more.
So once again, thank you so much for attending. I hope you learned something. Thank you very, very much, and goodbye.