Non-inferiority and Equivalence Study Design

About the webinar

In this webinar, we examined the role of non-inferiority and equivalence in study design.

Non-inferiority and Equivalence
Study design considerations and sample size


In this free webinar you will learn about:

  • Regulatory information on this type of study design
  • Considerations for study design and your sample size
  • Practical worked examples of:
    • Non-inferiority Testing
    • Three Armed Trials
    • Equivalence Testing

Duration: 60 minutes



What is non-inferiority hypothesis testing?
Non-inferiority testing uses a similar significance testing approach to an inequality test of a one-sided null hypothesis of no difference, except that in non-inferiority testing the null hypothesis (of inferiority) specifies that the difference between interventions is less than a specified inferiority limit, rather than the no-difference value used in inequality testing. The alternative hypothesis of non-inferiority is that the difference between interventions is greater than the non-inferiority margin.
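The shifted-null idea above can be sketched in a few lines. This is a minimal illustration using a normal approximation and hypothetical summary statistics (the difference, standard error and margin below are made up, not taken from any study):

```python
import math

def norm_sf(z):
    """Upper-tail probability of the standard normal distribution."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def ni_test(diff, se, margin, alpha=0.05):
    """One-sided non-inferiority test on a mean difference (normal approximation).

    H0: diff <= -margin  (new treatment inferior by more than the margin)
    H1: diff >  -margin  (new treatment non-inferior)
    Assumes higher outcome values are better.
    """
    z = (diff + margin) / se        # same test statistic, null shifted by the margin
    p_value = norm_sf(z)            # one-sided upper-tail p-value
    return p_value, p_value < alpha

# Hypothetical numbers: observed difference 0.0, SE 0.2, NI margin 0.5
p, non_inferior = ni_test(0.0, 0.2, 0.5)
```

With these numbers z = (0 + 0.5)/0.2 = 2.5, so the one-sided p-value is well under 0.05 and non-inferiority would be concluded, even though the observed difference itself is zero.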

Note that the inverse of the non-inferiority hypothesis, known as superiority by a margin or super-superiority, evaluates whether a new treatment is greater than the standard treatment by a specified margin. The null hypothesis is rejected if the difference is sufficiently above the superiority margin. This should not be confused with the common usage of “superiority testing” for the case of testing a no-difference (inequality) hypothesis.

What is a three armed trial?
Non-inferiority testing is a common hypothesis test in the development of generic medicines and medical devices. The most common design compares the proposed non-inferior treatment to the standard treatment alone, but this leaves it uncertain whether the treatment effect is the same as in previous studies. This “assay sensitivity” problem can be resolved by using a three-arm trial, which includes a placebo alongside the new and reference treatments for direct comparison.

In this webinar we show a complete testing approach for this gold-standard design and how to find the appropriate allocation and sample size for such a study.

What is equivalence hypothesis testing?
In equivalence testing, the test is a composite one: it tests whether the difference between treatment and control is within a specified margin of zero in either direction. The alternative hypothesis is that the interventions are equivalent (the true difference lies within the equivalence limits), tested against the null hypothesis of inequivalence in either direction.

As this is a composite hypothesis, it requires the simultaneous testing of two hypotheses. The most common methods for equivalence testing are the two one-sided tests (TOST) approach, where each one-sided test is conducted independently and the null hypothesis is rejected only if both are significant, and the confidence interval approach, where a confidence interval is constructed and equivalence is concluded if it is fully contained within the lower and upper equivalence limits. Note that for a given significance level, the two-sided confidence interval should be constructed at two times the significance level (e.g. a 0.05 significance level corresponds to a 90% confidence interval).
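The TOST procedure just described can be sketched as follows; a minimal normal-approximation version working from summary statistics (the observed difference, standard error and equivalence limits below are hypothetical):

```python
import math

def norm_sf(z):
    """Upper-tail probability of the standard normal distribution."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def tost(diff, se, lower, upper, alpha=0.05):
    """Two one-sided tests (TOST) for equivalence, normal approximation.

    H0: diff <= lower OR diff >= upper  (inequivalence)
    H1: lower < diff < upper            (equivalence)
    """
    p_lower = norm_sf((diff - lower) / se)   # test H0: diff <= lower limit
    p_upper = norm_sf((upper - diff) / se)   # test H0: diff >= upper limit
    p = max(p_lower, p_upper)                # both tests must be significant
    return p, p < alpha

# Hypothetical numbers: observed difference 0.1, SE 0.2, limits -0.5 and 0.5
p, equivalent = tost(0.1, 0.2, -0.5, 0.5)
```

Note that each one-sided test is run at the full alpha, which is exactly why the matching two-sided confidence interval uses twice the significance level (90% for alpha = 0.05).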

Transcript of webinar
*Please note, this transcript is auto-generated; some spelling and grammatical differences may occur*

So hello and welcome to today's webinar, Study Design for Non-inferiority and Equivalence: Design Considerations and Sample Size, demonstrated in nQuery.
In today's webinar we'll be exploring some of the design considerations when looking at non-inferiority and equivalence type hypotheses, and the types of common study designs that end up showing up when you're doing this type of study, stuff like crossover trials, etc.
Before we get started, just a few frequently asked questions, Is this webinar being recorded? Yes, it is. And that recording will be sent to you automatically afterwards, later today, alongside the slides.
So if you miss parts or need to leave early, make sure to get those later on, and they'll be sent on to you.
In terms of questions, feel free to use the Questions tab on the right-hand side of your webinar window, and let me know if you have any questions about anything covered here. If you have any audio issues or technical issues, do use that as well to let us know, and I'll make sure we can fix those as we're going along.
So that covers most of what we're interested in today. So please let me know any feedback that you have.
OK, so let's get back into today's webinar, Study Design for Non-inferiority and Equivalence. As I said, this webinar will hopefully give you a background on some of the common design considerations for these types of hypotheses, which generally deal with the case where you want to evaluate whether a new treatment or drug is equivalent to, or not worse than, some preexisting treatment. And hopefully it will give you a background on the sample size calculations available in nQuery for this particular area.
Before we go any further, let me introduce myself. My name is Ronan Fitzpatrick. I'm the Head of Statistics here at nQuery, and I've been the nQuery lead researcher since around nQuery 3.0, which is about 5 or 6 years ago at this point. I have given workshops at places like the FDA, the JSM and similar, and obviously I do these webinars every month. And in the current situation with the COVID-19 pandemic, these types of online engagement are even more important than ever.
So make sure to take advantage of whenever you can.
So in terms of what we'll be covering today, I'll briefly cover a background of non-inferiority and equivalence testing as a whole, just to make sure that everyone here is on the same page. I'm not expecting you to have any pre-existing knowledge of this area.
So this isn't a case of expecting you to know the difference between a mean ratio and a geometric mean ratio or something like that. This is trying to build from the ground up.
And then we'll split the main part of the webinar into two parts: one looking at non-inferiority testing and some of the design considerations that are more common there, and then equivalence testing and some of the situations there. In the past there has been some ambiguity between non-inferiority and equivalence testing; equivalence testing has sometimes been used interchangeably with non-inferiority testing. But I think at this point in time that delineation is much more clear, so hopefully everyone will know where these terms are appropriate in the current-day situation. And of course, we'll have some conclusions and discussion, and we'll look over some of your questions at the end.
So as mentioned previously, this webinar is presented by nQuery, Your Solution for Optimizing Clinical Trials. It's a complete trial design platform to help make your clinical trials faster, less costly and more successful, with the primary focus being on sample size determination for your clinical trial. nQuery has been around for over 20 years now, and 90% of organizations with clinical trials approved by the FDA have a license of some type for nQuery. You can see some of the reviews and companies on this slide here.
So let's get straight into the background of this issue. I suppose the important thing to note about non-inferiority and equivalence testing is that they're really both about whether a new treatment is similar to a preexisting treatment.
So in a standard clinical trial, like a Phase III confirmatory trial in particular, we're usually looking to evaluate whether our new treatment is superior to either placebo or some preexisting treatment. So we think that our treatment is better than doing nothing, or better than whatever the standard treatment of care is for a particular case. But in the case of non-inferiority and equivalence testing, we're looking at a very different hypothesis: instead of evaluating whether a drug is better, we simply want to evaluate either that it is no worse by a certain degree than the preexisting standard of care or treatment, or that it is actually equal or equivalent to some preexisting treatment.
And of course, this is a very common objective in the development of generic drugs, biosimilar drugs and medical devices. For generics and biosimilars, that's pretty obvious: if we create a generic version of an off-patent drug, we just want to show that it's basically the same as, or certainly no worse than, the preexisting drug. For medical devices, once again it's pretty obvious why: if you have a pacemaker, you're probably not looking for it to give different results than the preexisting pacemakers on the market. Now, of course, I won't go into it today, but there has obviously been some discussion and debate over the level of regulatory scrutiny applied to these kinds of equivalency decisions. But today we're going to just focus on the statistical side: if you have a hypothesis of wanting to prove that your treatment or drug or medical device is either non-inferior to or equivalent to something, what does that look like in terms of design decisions?
Non-inferiority, in short, is trying to prove that you're not inferior to control, where control here mostly stands for the standard treatment. And usually, if you're looking at a non-inferiority type hypothesis, you're looking at a direct measure of the efficacy of a treatment. So something like mortality would be an obvious candidate. Something like blood pressure would probably be a relatively direct measure, in that higher blood pressure is probably bad in most cases where you're doing a medical intervention. At least that's the most common situation. So basically you have a direction of movement for your treatment that is good.
Like, increases are good, or increases are bad. So you have a certain direction.
And so, if it happens that your drug ends up being superior to the current standard of care, that will still be found to be significant. And there is a lot of interest right now in these kinds of multiple-hypothesis, adaptive-type designs, where you could start off with a superiority hypothesis and drop down to non-inferiority if required, or maybe, in theory, in the future you could go the other way: start with non-inferiority and, if things are looking promising, jump up to superiority. Of course, you can't do that willy-nilly without affecting the Type I error, so that can be a consideration there. And really the only information that you need, beyond your standard testing scenario for superiority, is the addition of this non-inferiority margin, which is basically an effect size below which you would find your drug to be inferior. Basically, you need to exclude the non-inferiority margin from being plausible, based on the hypothesis test or confidence interval approach you're using.
And that's a pretty standard idea; it's not too difficult in reality for many situations. So, for example, if you assume two groups of normally distributed data, then effectively non-inferiority testing is really just your standard t-test analysis with a shifted null hypothesis; it doesn't really affect things. But obviously, if you go into cases where there isn't variance-location independence, that isn't true, and shifting the null hypothesis can have a significant effect.
And then we get into the idea of equivalence, and equivalence is probably a more controversial approach to doing things. I say controversial even though it's obviously very widely accepted and is a hugely important part of bioequivalence studies and similar. But I suppose it's more controversial because the statistical characteristics of equivalence testing are usually very distinct from those of non-inferiority or superiority testing.
In this case, you're interested in some kind of endpoint which is an indirect measure of efficacy, with no, in inverted commas, “good” direction.
So, for example, with bioequivalence, which we'll talk about later on, we're mostly looking at the concentration of a drug in your blood. And in theory, maybe increasing the concentration might be good, but we certainly can't take that as an assumption. So for creating a generic medicine, we're much more likely to want the concentration profile to be the same as, or as similar as possible to, the standard treatment or drug of care, rather than wanting it to increase versus the concentration seen for that drug. So for bioequivalence and similar situations, we're now trying to prove that these two treatments are giving the same or equivalent results. Typically, the way this is talked about is that the confidence interval should fall between lower and upper limits; we'll talk about the tests for equivalence in the relevant section.
So I think the main takeaway here is this nice graph of what these kinds of hypotheses look like. Effectively, you can see here that case A is non-inferior, in the sense that it is completely above this non-inferiority margin; the confidence interval is fully above that.
Whereas for the second one, you can see that the confidence interval falls somewhat below the non-inferiority margin, this dotted line here on the left, and therefore we could not reject the null hypothesis of inferiority here. And then we see these three confidence intervals here that fall between our lower and our upper limits.
And in that case we would find for equivalence, but for this first case we would not find for equivalence, if that were the hypothesis of interest, because you can see that the upper confidence limit is above the upper equivalence limit, and so on and so forth. You can see that as the CIs go in this direction, they become obviously non-equivalent, as the upper confidence limit, or indeed the entire CI, is above the upper equivalence limit.
So hopefully that gives you an idea of what's going on here. One small note that's common across all of these: these are all two-sided confidence intervals, even the non-inferiority case, which is really more of a one-sided hypothesis. I would note that the confidence interval is constructed at twice the alpha level of the test. So if I'm doing a non-inferiority test at 0.05 alpha, or a 5% significance level, then I would need to use a 90% confidence interval if I'm using a two-sided interval; if I'm using a one-sided interval, they're obviously equivalent.
But if I'm using a one-sided hypothesis, which both non-inferiority and equivalence are, then I need to use double the alpha. So a 5% significance level equals a 90% two-sided confidence level, basically.
That's much more important to note for the equivalence case.
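That two-sided-interval rule can be made concrete with a tiny sketch: a non-inferiority decision via a 90% two-sided confidence interval matches a one-sided test at the 5% level (normal approximation; the numbers are hypothetical):

```python
Z_90 = 1.6449  # 95th percentile of the standard normal = 90% two-sided CI quantile

def ni_by_ci(diff, se, margin):
    """Non-inferiority via a 90% two-sided CI: conclude non-inferiority if the
    lower confidence limit is above -margin. With a normal approximation this is
    equivalent to a one-sided test at the 5% significance level."""
    lower_limit = diff - Z_90 * se
    return lower_limit > -margin

# Hypothetical numbers: difference 0.0, SE 0.2, NI margin 0.5
# lower 90% limit = 0.0 - 1.6449 * 0.2 = -0.329, which is above -0.5
decision = ni_by_ci(0.0, 0.2, 0.5)
```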
So let's get into the section about non-inferiority testing. As mentioned, non-inferiority testing is where the hypothesis test is that the treatment is no worse than the standard treatment or control by a specified margin.
And obviously we need to define that “no worse than by a specified margin”. So what is the margin? This is a huge issue of debate; lots of people have been discussing it and there are various different ways that you can do this, but effectively what you need to do is select a non-inferiority margin based on a mix of expertise and the prior data that exists.
So usually you want your non-inferiority margin to be a fixed fraction of the original effect. Since we know the original effect a priori, and we're obviously assuming that the control is a useful agent, that it actually does something, because we're trying to replicate it, the FDA therefore talks about wanting a fixed fraction of that effect as the non-inferiority margin.
So like 80% of the original effect, for example. But how large you need that fraction to be will depend on a number of factors, ranging from how much variance there is in the actual effect from the original study, or studies or meta-analysis, or whatever other evidence is available, to other issues that are outside the effect, which I'll talk about after our example. It's very common that this would be used for generics or medical devices. Usually non-inferiority testing involves comparing a treatment to a control, so it's typically just a two-arm trial, with the control being a reference listed drug or similar. Usually a placebo isn't included, but we'll note that that can cause issues, and we'll talk about those in detail when we move on to three-arm trials later in this section.
The important thing, though, is that non-inferiority testing is mostly just a slightly more complicated version of one-sided testing. It's relatively easy to extend to different types of designs, like parallel and crossover, but also to different types of data such as proportions, survival and counts. For continuous outcome data the extension is pretty trivial if you're making the assumption of normally distributed data, or log-normal data for the geometric mean ratio. But it becomes more complicated for things like the actual ratio of means, which obviously doesn't have a normal distribution, or for non-normal data where there isn't variance-location independence. Which is to say that choosing a null that isn't the nice trivial case, where there's no difference or the hazard ratio equals one, does make it more complicated to derive what's happening under the null as well as the alternative, because now your null isn't a simple case like that. It's like, we think the null ratio is 0.8 instead of one, for example.
Just a small note there.
But in terms of sample size calculations, in most cases you're looking at continuous outcomes, and in most cases this is a very simple calculation. So this is just a non-inferiority calculation taken from the New England Journal of Medicine, comparing two stents, where they were looking at late luminal loss in diabetic patients.
And obviously late luminal loss is something you want to avoid, fairly unambiguously, certainly in this case. So what they wanted to do was to prove that their proposed stent was no worse than the standard-of-care stent used for the treatment. You can see here on the right-hand side a tabular summary of those results.
You'll also notice a little dropout adjustment here: the original sample size without dropout is 99, but they require 250 after accounting for this dropout rate of 20%. So I'll briefly do the dropout calculation for convenience's sake here as well.
So this is nQuery, for anyone who's not familiar with it. The calculations mostly occur in the top left-hand panel. When you select each row, you can see there's some help card information down below, and there'll be a verbal summary of the results when we're finished. I'm not going to cover this in detail today; I think most people here are familiar with the software, and the rest will catch on very quickly from what we're doing here. But just in case anyone's unaware: you have your row definitions here on the far left-hand side, each column is an individual calculation, and the yellow rows are those that we can calculate once all the other inputs are provided.
So in this case, they had a 5% significance level, so if you're using a confidence interval approach you'd be using a 90% two-sided interval; if using a one-sided interval, obviously 5% is fine. And they had a non-inferiority margin of minus 0.16.
So if you could not reject the idea that you have greater luminal loss compared to the standard treatment, then obviously that's a case where we would have to say the stent isn't non-inferior, or is effectively inferior, to the preexisting stent. And they expected a difference of zero, so the hope that they had was that you'd get basically the exact same results on average between these two types of stents. This is just a small convenience row showing the difference between these two, which feeds into your test statistic.
So then they have a standard deviation here of 0.45, giving an effect size of about 0.356. This would be considered a medium-ish effect size on the Cohen standardized effect scale. And then they wanted a power of 80%.
And that gives a sample size of 99 per group. You'll note here there's a small verbal summary, and you can copy and paste or edit that as needed. You can see that that's relatively easy to calculate. So for most non-inferiority calculations there's not really a huge amount of complication involved.
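For reference, the standard normal-approximation formula behind this kind of calculation is n per group = 2·σ²·(z₁₋α + z_power)² / (margin − difference)². A stdlib-only sketch is below; note this is an approximation, not nQuery's exact t-distribution-based method, so with the worked example's inputs (margin 0.16, SD 0.45, one-sided 5% alpha, 80% power, expected difference 0) it gives 98 per group, slightly below the 99 reported:

```python
import math

def norm_ppf(p):
    """Inverse standard normal CDF via bisection (stdlib only)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        # standard normal CDF at mid
        if 0.5 * math.erfc(-mid / math.sqrt(2)) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def ni_n_per_group(margin, diff, sd, alpha=0.05, power=0.80):
    """Per-group n for a one-sided non-inferiority test of two means
    (normal approximation, equal allocation)."""
    z = norm_ppf(1 - alpha) + norm_ppf(power)
    effect = (margin - diff) / sd      # standardized distance from the shifted null
    return math.ceil(2 * (z / effect) ** 2)

n = ni_n_per_group(0.16, 0.0, 0.45)    # 98 per group under this approximation
```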
In nQuery, you have a large number of options available under non-inferiority, ranging from cluster randomized trials (we'll talk about three-arm trials in a moment) to one-group designs, crossovers of various complexity, Williams designs and so on, and likewise for count data like negative binomial, for proportion-type data and for survival-type data. Obviously, in those cases things can maybe get a bit more complicated, for example with the log-rank test, but the principles are generally the same across these different types of tables. So if we visit the survival case, you can see that most of this is similar: you have your dropout rates and your expected exponential mean rates in each group, and then you have a non-inferiority hazard ratio versus the actual hazard ratio, and most of these work pretty much the same.
So there's not a huge amount of difference in the inputs needed here. But if you are interested in non-inferiority and equivalence testing in the context of non-continuous outcomes like survival, I'd be happy to discuss those in a future webinar or based on any questions we get today.
And as I mentioned, they had that slight sample size adjustment for dropout.
If we just go to the Windows calculator from the Assistants menu, we take 99 and multiply that by two: 99 multiplied by two is equal to 198, so that's the total sample size. And then we divide that by 0.8. Remember, with dropout we take one minus the proportion dropping out, in this case 20%, so 1 − 0.2 = 0.8. And so we divide to get 247.5, which they presumably rounded up to 250 in their example to make a nice round number.
In their case, 248 would have been adequate, but they seem to have rounded up.
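The dropout adjustment just walked through is simply the total sample size divided by one minus the dropout rate, rounded up:

```python
import math

def adjust_for_dropout(n_total, dropout_rate):
    """Inflate a total sample size to allow for expected dropout:
    divide by (1 - dropout rate) and round up to a whole subject."""
    return math.ceil(n_total / (1 - dropout_rate))

# The webinar's numbers: 99 per group -> 198 total, 20% dropout
n = adjust_for_dropout(2 * 99, 0.20)   # 198 / 0.8 = 247.5, rounded up to 248
```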
OK, so that's a nice introduction to non-inferiority that hopefully gets everyone on the same page. But there are a number of things we could discuss about non-inferiority tests that are quite interesting, and I think the big debate is about the non-inferiority margin. I've been at a number of conferences where people have talked about dynamic non-inferiority margins based on, like, the variance that you see in the actual study and whatnot.
I think in general there will usually be discussion involved with the regulator when setting the non-inferiority margin, and you need to have a scientific case for why going so far below the other treatment would be an acceptable loss in efficacy, as well as that idea I talked about from the FDA, primarily, about a fixed fraction of the preexisting or expected standard treatment effect size.
There are many other considerations that would affect how much inferiority you'd be willing to tolerate, effectively, versus the standard treatment.
And obviously a major one might be the safety profile: if you have a much safer version of a preexisting treatment, then you may make the argument that, OK, maybe this might be slightly worse, but not by a very large amount, and it's very much safer. There could be other secondary endpoints that might show improvements over the other treatment, though that's probably more rare. And then there's something related to safety but really just easier administration: if the standard treatment is currently given via injection, but you have a new formulation that can be orally administered, that's a significant saving in terms of the amount of effort required to give the treatment, and in terms of patients' willingness to take it, but also in being able to have patients self-medicate, with obviously significant cost and time savings for medical personnel.
But I would say, in general, from a regulatory point of view at the FDA, and certainly the EMA, more conservative NI margins are generally encouraged, so you shouldn't want to take a very aggressive NI margin in the case of a generic or a medical device if you can statistically avoid it. I think the best-case scenario they're looking for, typically, is that your treatment does the same thing as the preexisting treatment, certainly for a generic medicine. And obviously, in that case, that implies that a fairly conservative non-inferiority margin would make sense.
There are other considerations; say you have a highly variable drug, which is probably more of an issue with equivalence, as we'll talk about later on. But even in the case of non-inferiority, if you have a high-variance type of treatment, that may mean you need a more liberal NI margin to account for that. But I think if you are confident in your treatment, you probably don't want to be too liberal with your NI margin if you can avoid it.
One other issue that we're going to get into when we talk about three-arm trials is that, in the standard two-treatment design, where you're just looking at your proposed non-inferior treatment versus standard treatment, you are making a strong assumption that the control effect that you are using, or assuming from the original study, has been retained. In other words, you're assuming that the control or standard treatment actually works, and not just that it works, but that it works by a specified amount. And if your NI margin is based on a fraction of that effect, then obviously if the effect, or M1, is actually a different size, that changes what M2 would have been if you had known that a priori. That means that, per the FDA guidance included in the references at the end of this slide deck, you may be required to replicate the conditions used to generate that original effect very closely: the administration profile, the number of doses given, pretty much all of those things, if you want to ensure that your standard arm, your control arm, is being administered in a very, very similar way. And obviously, if you're doing that to ensure that blinding and control are kept, you probably need to do the same for the treatment arm, or for your generic arm.
So that can be a significant burden because, obviously, for non-inferiority trials for generic medicines, lower cost is a major reason why you're doing this, and you may not have the resources or the time or the personnel to administer the type of programs that exist in the protocols of Phase III confirmatory trials, where a lot of money and effort is expended to ensure that everything is watertight from an ethical, bias and statistical point of view. So that can be an issue that you need to be very careful of, and you may need additional evidence and data for regulatory approval to ensure that your control effect is lining up with what they were expecting a priori. If you end up with a control effect that seems different, obviously that's going to create questions, and you may need additional data to ensure that regulatory approval goes through.
And I think both of those are big issues: picking the NI margin, and this assay sensitivity effect, that basically you're still getting the same effect as you were expecting.
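The fixed-margin approach from the FDA guidance mentioned above can be sketched like this: M1 is a conservative estimate of the whole control-versus-placebo effect (commonly the lower 95% confidence limit from the historical data), and M2 is the margin that preserves a chosen fraction of it. The historical effect, standard error and 50% retention below are hypothetical illustration values:

```python
Z_975 = 1.95996  # 97.5th standard normal percentile (two-sided 95% CI)

def fixed_margin(control_effect, se, retention=0.5):
    """Fixed-margin (M1/M2) sketch: M1 = lower 95% confidence limit of the
    historical control-vs-placebo effect; M2 allows losing at most
    (1 - retention) of M1, e.g. 50% retention gives M2 = 0.5 * M1."""
    m1 = control_effect - Z_975 * se   # conservative estimate of the whole effect
    m2 = (1 - retention) * m1          # clinically acceptable NI margin
    return m1, m2

# Hypothetical historical effect of 4.0 units with SE 1.0:
# M1 = 4.0 - 1.96*1.0 = 2.04, and M2 = 1.02 with 50% retention
m1, m2 = fixed_margin(4.0, 1.0)
```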
I did want to briefly mention the existence of a hypothesis very closely related to the non-inferiority hypothesis: the superiority by a margin, or super-superiority, hypothesis. This is basically a case where, instead of trying to prove that you're no worse than a treatment, you want to prove that you're better than a treatment by a certain margin. So imagine, effectively, if we go back to the very first slide, that this upper limit here is the super-superiority, or superiority by a margin, margin. Perhaps “superiority by a margin” rolls a bit easier off the tongue.
So to prove the superiority by a margin hypothesis, you would need the entire confidence interval to be above this upper limit. I just wanted to mention that briefly because most of the statistical issues that exist for non-inferiority tests are basically the same for that hypothesis type. And it is different from, I suppose, the classic case: when you hear “superiority” you're usually thinking of the inequality hypothesis, which is basically that the confidence interval does not include the zero value here.
Obviously super-superiority is a different idea, and there's been some growing interest in it over the last decade or so. But just note that while it's a different thing, in practical design terms, and certainly in terms of sample size calculations, it's very, very similar to non-inferiority.
So I've mentioned that you have that assay sensitivity problem, where you don't know, or you need to assume, that the control effect has been retained over placebo, or maybe over the old preexisting treatment.
If you're just assuming that, you need to put extra effort in, in terms of replicating previous studies and providing extra data.
So one solution might be, rather than just assuming it, to add a placebo group.
And then, because we've added a placebo group, we can directly compare, reference versus placebo, proved a hypothesis that daddy's references superior to placebo, and then compare reference to experimental or experiment, and then it goes to our non inferior, or equivalent. If you're doing equivalence hypothesis, then we have proven that I say we removed the assay sensitivity problem because we now know, not both. They're both better than placebo A, given, whatever, You know, if we, even if even if we've done a different conditions, the conditions have changed, since we're comparing both placebo, we have that confidence, that actually, there is a real effect. And now, we also have proved that equivalent is equal to reference. And for that, reason, this might be considered the gold standard type, try to avoid that offset sensitivity problem. But, of course, that's only allowable, if it's ethical to do something like if we do it.
If the standard treatment is clearly superior to placebo, then it would be unethical to assign subjects to a placebo group unless the condition isn't considered very serious. So maybe in the case of chronic diseases with low risk profiles, or in areas where there's less certainty over the efficacy of the reference (for example, in depression studies, where perhaps both of those considerations come into play), this type of trial might be preferable.
But basically, just note that if you can do this, it removes that assay sensitivity problem, though obviously only if you can ethically do it. If you're comparing to a very successful drug or treatment, you can't run a placebo group in that case. So this is a better way of doing things if you can do it, but it doesn't apply to all situations. And in this case, instead of testing a single non-inferiority hypothesis, you're really testing two hypotheses. The first is that you want to prove the reference arm (usually it's the reference arm that's compared to placebo, though it can be the experimental arm too) is superior to placebo.
And only if that is significant would you then test whether the experimental arm is no worse than the reference, that is, no worse than the non-inferiority margin, or NIM, in this case.
One nice result here, from some of the work done in this area, is that you can simplify that, really, to a ratio-of-differences test, where you compare the effect of experimental versus placebo, E minus P, over reference minus placebo, R minus P. You want that ratio of differences to be no worse than some specified value, and you get a nice Wald-type test for the retention of effect using this framework, and you can use a very similar framework for means, proportions, survival, count rates, et cetera. There's also some nice math you can do to get the optimal allocation for a given alternative.
So, you know, this is kind of a two-step hypothesis, but it can be simplified down to a single ratio-of-differences hypothesis if you want to.
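The ratio-of-differences test described above can be sketched as a single Wald-type statistic (a minimal illustration under a common-variance normal model; the function name and the example numbers below are my own assumptions, not taken from any particular paper):

```python
from math import sqrt

def retention_z(e_mean, r_mean, p_mean, sd, n_e, n_r, n_p, theta):
    """Wald z statistic for the retention-of-effect hypothesis
    H0: (E - P) <= theta * (R - P), under a common-variance normal model.
    Rejecting H0 supports retaining at least fraction `theta` of the
    reference-over-placebo effect."""
    # Linear contrast: E - theta*R - (1 - theta)*P
    effect = (e_mean - p_mean) - theta * (r_mean - p_mean)
    se = sd * sqrt(1 / n_e + theta**2 / n_r + (1 - theta)**2 / n_p)
    return effect / se

# Hypothetical three-arm trial: both active arms 1.5 above placebo,
# SD 2.5, 100 per arm, requiring retention of 50% of the effect
print(round(retention_z(1.5, 1.5, 0.0, 2.5, 100, 100, 100, 0.5), 2))  # 2.45, above 1.96
```

Because the statistic is just a linear contrast of the three arm means, the same template carries over to proportions, rates, and survival by swapping in the appropriate variance term.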
In terms of extending this, there are a number of different papers for a number of different data situations. And obviously this is assuming a parallel-group trial, where we have three independent groups. It doesn't really make sense as described for a crossover-type trial, though crossover trials are common for non-inferiority, and it's easy enough to use something like a Williams-type design for three arms. That's perfectly fine as well.
So this is just an example: a phase III trial of oral calcitonin for osteoporosis.
So, in this case, we're looking at a fairly simple setup, where they've used the normal approximation for a placebo-adjusted percentage effect.
So this is the effect over placebo. You'll see here that the mean for placebo is obviously zero, because everything is compared to placebo, and they're assuming the experimental and reference arm means are going to be the same. But they had a pretty generous non-inferiority ratio of 50%: if the experimental arm retained at least 50% of the effect of the reference arm, they would find oral calcitonin to be non-inferior. And that's the case here because the oral route is a much easier approach, one that maybe they expect patients to be more willing to adhere to, versus the standard nasal spray arm.
The nasal spray was the standard treatment in this case, for postmenopausal osteoporosis, I should say. And in this case you can see the n ended up at 133 patients in each of the experimental and reference arms, but only 84 patients in the placebo group.
So obviously, in this case, they were pretty confident that not getting the active treatment wouldn't cause any major harm over the course of the study. You can see the effect is about 1.56, which probably isn't a huge effect versus placebo, and osteoporosis is a chronic disease: it takes time for treatment to have an effect. So, within the time constraints of a clinical trial, it probably wasn't considered too harmful to have a placebo group, and obviously it removes the assay sensitivity problem. And the fact that it's a small effect maybe makes that a more important thing to keep an eye out for.
So in terms of what that looks like for study design and sample size in nQuery, it's not a huge leap. I think the major difference here is that we're no longer talking about a non-inferiority margin on the scale of the original effect, that is, a directly specified mean difference. Instead, the non-inferiority margin is on the scale of this ratio of differences: the ratio of E minus P over R minus P, or the reciprocal is fine as well. So in this case, they have a significance level of 0.025, one-sided, equivalent to using a 95% two-sided confidence interval, if you want to think of it that way.
The experimental arm and reference arm means are 1.56% each. Obviously you could argue that using a normal approximation for a percentage is not the appropriate way to do it, but this is what the authors chose to do. As mentioned, the placebo mean is equal to zero because both of these effects are placebo-adjusted. And then, as we mentioned, the non-inferiority margin is equal to 0.5. You can see here very clearly that this is about the ratio of the differences, but you could also think of it as the experimental effect versus placebo, 1.56 minus 0, being greater than this margin multiplied by the reference mean minus the placebo effect. And in this case, the common variance is 6.25; note that they give the standard deviation in their calculation, but the variance is just the standard deviation squared, so 2.5 squared equals 6.25.
And then we get into the allocation ratio. They don't actually provide it directly in the paper, but we can infer it from the numbers that they give.
We know what the calculations gave them at the end, and basically they had allocation ratios of 0.38 and 0.38 in the experimental and reference arms, so equal sample sizes in those two arms, and then 0.24 in the placebo arm, which is smaller than the other arms, about two thirds of their size, effectively.
They wanted power of 80%, and that gave a total sample size of 352. Note that they actually reported 133 per group for these two arms; I verified that against the paper and the original methods paper for the sample size calculation, and it seems they may have used slightly different values, or rounded down instead of up. In nQuery we always round up, because that's the safer thing to do to ensure the power is always above the original target power. But for practical purposes this is effectively the same. One small thing to note here is the optimal allocation I mentioned, which in this example is actually pretty simple.
So we can do it very quickly here. When we select this row, you can see in the suggestions that it gives the optimal allocation for the normal case with common variance. It suggests that 50% of people should be assigned to the experimental group, and then something related to the non-inferiority margin determines the other two groups. In this case it's very trivial, because the non-inferiority margin is exactly 0.5.
And that actually means we have equal allocation for the reference and placebo groups, at 0.25 each. You can see that the total sample size in this case is about 26 lower. But we now have fewer people in the reference arm, so if the reference comparison is important to you, that may not be appropriate. In terms of getting the highest power for the lowest sample size, though, this is technically the best thing to do in this situation.
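To sanity-check the published numbers, the power of the retention-of-effect test under this design can be sketched with a normal approximation (this is my own back-of-the-envelope reconstruction of the calculation, not the nQuery algorithm itself):

```python
from math import sqrt
from statistics import NormalDist

def retention_power(e_mean, r_mean, p_mean, sd, n_e, n_r, n_p,
                    theta, alpha=0.025):
    """Approximate power of the one-sided retention-of-effect test
    H0: (E - P) <= theta * (R - P), normal approximation with common SD."""
    nd = NormalDist()
    effect = (e_mean - p_mean) - theta * (r_mean - p_mean)
    se = sd * sqrt(1 / n_e + theta**2 / n_r + (1 - theta)**2 / n_p)
    return nd.cdf(effect / se - nd.inv_cdf(1 - alpha))

# Design from the worked example: placebo-adjusted means 1.56 in both
# active arms, SD 2.5, allocation 133/133/84, retention ratio 0.5
print(round(retention_power(1.56, 1.56, 0.0, 2.5, 133, 133, 84, 0.5), 2))  # 0.8
```

Reassuringly, this lands right at the 80% power the authors targeted, which supports the inferred 0.38/0.38/0.24 allocation.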
The help cards for other situations, like survival, proportions, and the other endpoints such as count endpoints, have different definitions of the optimal allocation, of course. Those are given in the respective help cards there.
OK, so hopefully that's given you a decent background on non-inferiority testing. Non-inferiority is usually the easier one to get your head around because, as I said, it's more or less an extension of a one-sided hypothesis test, and usually you're looking at direct effects, where the treatment reduces or increases something that we objectively want to reduce or increase. In this case, we don't want to lose bone mineral density; we don't want a worse outcome for osteoporosis.
So these are cases where we think it's pretty obvious what's happening, whereas for equivalence testing it's not always so clear. Now, you can do equivalence-type testing, and certainly more people should consider doing equivalence-type testing, on actual direct effects as well. So we're moving into equivalence testing here now.
And there are certainly many cases, particularly outside the clinical trial area, where people have inferred that a non-significant p-value on a superiority hypothesis is equivalent to an equivalence finding. A classic mistake researchers have made is to say: OK, I'm going to test my two treatments, I'll do my standard two-sample t-test for inequality, or "superiority", and if I get a non-significant p-value, that's the same thing as saying these two treatments are the same.
And that's simply not true.
All you've shown in that case is that you can't reject the null hypothesis of no difference, and that is not the same thing as accepting that they are the same.
If you want to say that these two things are the same, then you need to set up a null hypothesis that's basically the opposite of that, where the alternative is that these are equivalent treatments. And you can kind of see that in this first image, this chart here: if what we're trying to prove is that our two treatments are the same, or equivalent, we need the rejection region to be around zero, not the acceptance region. You cannot just invert a null hypothesis like that and use it to prove things within the hypothesis-testing framework; that's just not valid. And of course, you can see here that this is very different from what you're used to.
Because usually, I suppose, we expect the acceptance region of the null hypothesis to be bounded.
But in this case, the acceptance region is unbounded and the rejection region of the null hypothesis is bounded, and that created all kinds of interesting debates and uncertainty going back in time. If you read the original papers, there was a lot of debate about how one would accurately do this type of testing. And of course it became a big issue when we talk about bioequivalence, because, as I said, in the early days non-inferiority and equivalence were kind of used interchangeably, but it was bioequivalence testing that really brought the problem forward.
Bioequivalence is a situation where obviously we can't just assume that a higher concentration is better: a higher concentration could be more toxic, or it could be less effective. That's why we do dose-finding studies at phase I and phase II, for toxicity and efficacy respectively. And because we're looking at an indirect measure in bioequivalence studies, the concentration of the drug, or the active part of the drug, in the blood, we're not looking at a direct effect of the treatment. We're not asking: does it decrease mortality, does it decrease blood pressure, does it improve the other measures we want to improve.
So bioequivalence is the standard for the approval of generic medicines in a regulatory context, and this is very important.
So equivalence testing can be used for direct measures too, but we're going to focus mostly today on the bioequivalence case, where we're looking at indirect measures.
And as I mentioned, there was a big debate about how to do this, but basically one approach that caught on early was the two one-sided tests approach.
Most of the sample size calculations that you will do in this area use a two one-sided tests framework, effectively the TOST framework, from Schuirmann. It does what it says on the tin, basically: you test two one-sided null hypotheses. First, can you reject that the difference is below the lower equivalence limit, and then, can you reject that it's above the upper equivalence limit. So let's go back to our very first slide to get a better idea. First, let's test:
is this confidence interval basically above the lower limit? In hypothesis-test terms, can we reject the idea that the difference is below that limit, with a one-sided test? Then we test the other side (you can do it in the other order, of course), but both have to be significant in this case.
So then we test whether it's below the other, upper limit.
If it's above the lower limit, and also below the upper limit, then we conclude that it is within the two limits.
And you can see that H1, the alternative, is therefore that if we can reject both of these null hypotheses, then we find for the alternative hypothesis: delta, here just standing for the difference, our effect size, is between the lower and upper limits. As I say, these are both tested one-sided at our level alpha. But it's important to note that the overall type I error, the overall significance level, is equal to the one-sided level of each test. This is the general point: if you have what we call conjoint hypotheses, which is to say that both null hypotheses have to be rejected to find for your alternative, then the alpha for those two tests is retained.
In other words, we don't have to apply a multiplicity adjustment. If both have to be rejected, then the alpha used for each is the overall alpha for our test.
So if I'm doing TOST at the 5% significance level, then the overall type I error is equal to 5%. And it's relatively easy to show that the two one-sided tests approach is equivalent to the confidence interval approach proposed by Kirkwood. Back in the day, there was actually a debate between the Kirkwood approach and the Westlake approach: in the Westlake approach you calculate a confidence interval that is symmetric, centred on the null effect of zero, that is, no difference; whereas today basically everyone uses the ordinary approach, where you calculate your confidence interval as you always would and then see whether it falls between your two limits. One important thing to note (I mentioned it earlier, but I'll emphasize it here because it's a bit more confusing for equivalence testing) is that because we now have these two limits and you're between two things, it looks like a two-sided test, but it's not: it's a one-sided test overall.
Therefore, you need to take two times your one-sided alpha to get the equivalent confidence interval. So the classic case is that if you want a type I error of 5%, a significance level of 5%, then you need to use a 90% confidence interval.
So, that's very important.
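The TOST procedure and its confidence-interval counterpart can be sketched as follows (a minimal normal-approximation version for a difference in means; the numbers are hypothetical, and a real analysis would use t-based tests on the study data):

```python
from statistics import NormalDist

def tost_equivalent(diff, se, lower, upper, alpha=0.05):
    """Two one-sided tests: conclude equivalence only if the difference
    is shown to be both above `lower` and below `upper`, each one-sided
    null tested at level alpha."""
    z = NormalDist().inv_cdf(1 - alpha)  # 1.645 for alpha = 0.05
    above_lower = (diff - lower) / se > z   # reject H0: diff <= lower
    below_upper = (upper - diff) / se > z   # reject H0: diff >= upper
    return above_lower and below_upper

def ci_within_limits(diff, se, lower, upper, alpha=0.05):
    """Equivalent check: the (1 - 2*alpha) CI, here 90%, lies inside the limits."""
    z = NormalDist().inv_cdf(1 - alpha)
    return lower < diff - z * se and diff + z * se < upper

# Hypothetical numbers: the two formulations always agree
print(tost_equivalent(0.02, 0.05, -0.2, 0.2))   # True
print(ci_within_limits(0.02, 0.05, -0.2, 0.2))  # True
```

Note the `1 - 2*alpha` in the second function: that is exactly the "90% confidence interval for 5% alpha" point made above.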
Now, obviously, I'm presenting these as if they're the only way to do this type of testing. They aren't. There was a lot of debate, and there continues to be some debate, about different approaches to establishing equivalence, because this closed rejection region, this idea of proving things are the same, is not very intuitive and creates a lot of issues.
And I think the TOST solution is mostly presented here because it's the approach that creates the fewest problems, effectively. The other approaches aren't really widely used; the TOST/confidence interval approach is basically the standard in this case. In terms of bioequivalence in general, I'm not going to spend much time on the pharmacology, but we have these terms like AUC. Basically, what we're looking at here is the concentration of drug over time in the bloodstream, comparing your two treatments. The AUC is just the area under that curve, the area of this curve here.
Sometimes it's measured as the area after the maximum, and Cmax is just the maximum concentration, so that would be here. At nQuery we have a legacy product called ... Test; it's not under active development right now, so I won't talk about it too much. But this is an example of what this would look like in real life, where we select our maximum concentration here and calculate the expected drug concentration profile over time, using the parametric modelling approach that's used for this type of hypothesis testing.
That's the little green line here. Apologies if you can't see it well; this is fairly old software, so it's not very visually appealing. But you can see you get your AUC-type measurements here, and so on and so forth.
There are other measurements, such as Tmax, the time it takes to get to Cmax, but AUC and Cmax are the two major ones, and if you're doing a sample size calculation, those are basically what you're going to be powering for.
So, in terms of other equivalence issues, I won't cover this in too much detail, because really it could be a whole webinar by itself. But just to make sure everyone's on the same page like, there are different definitions of equivalence. 
Average equivalence is mostly what you're interested in if you're looking to get regulatory approval for generic medicines and so on, but there are also individual and population equivalence, which might matter in cases where you care about how the drug will behave when administered across the general population. Then, of course, there are options for different measures of equivalence, like AUC and Cmax. Crossover trials are probably the standard way of doing bioequivalence studies. Crossover trials are the cases where the same person is given multiple treatments, and obviously you have to have a washout period to ensure there's no carryover, because you want to ensure people are back down to the baseline blood concentration before giving them the next drug. The 2x2 is probably the classic version, but replicate designs such as the 2x3 or 2x4 are also quite common.
Those are designs where, rather than having just AB and BA sequences, you might have something like ABA or similar; and a Williams design would be required if you're looking at more than two treatments at a time. Replicates are mostly of interest if we're dealing with this last issue here: highly variable and narrow therapeutic index (NTI) drugs. These are cases where either there is a very large amount of uncertainty, a high amount of variance, in the outcome, or, for NTI drugs, where very small changes in concentrations or profiles cause disproportionate effects on real-world safety or outcomes.
Just note that highly variable drugs and NTI drugs have different regulatory requirements in terms of what types of designs are preferred and what type of equivalence bounds are used, and the FDA and EMA have different perspectives on this. It's quite confusing, but if you're interested, there are references, and I'm happy to talk about it as well. For most cases, though, the standard for a bioequivalence study is that we'll be looking at the geometric mean ratio for the AUC and/or the Cmax (certainly at least one of these for the sample size calculation), and we want that to fall between a lower limit for the geometric mean ratio of 0.8 and an upper limit of 1.25.
If we're looking at ratios, then if we want balanced bounds, we want them to be reciprocals of each other, one over the other: 1/0.8 = 1.25.
And, as mentioned, the geometric mean ratio really just means the difference on the log scale: effectively, we're logging our AUC or Cmax and then comparing the difference of the logs. If you're working with the logs, you're effectively looking at a normal distribution, because these quantities are assumed to be log-normally distributed.
So when it comes to sample size calculations, it's not hugely different from what we've looked at previously with non-inferiority, except that what you'll typically be given is the coefficient of variation, which is basically how much variation you have versus the mean value; that's the classic definition. It gets a bit more confusing because we're looking at the log scale, not the original scale, but nQuery does provide a nice little table to get around the confusion. Let's replicate the bits that are relatively easy, which is that the significance level is 0.05.
So we're doing both one-sided tests at the 5% level, which is equivalent to saying we want a 90% confidence interval to fall between the lower equivalence limit of 0.8 and the upper limit of 1.25 for the mean ratio. In this case, they assumed an expected ratio of one. Obviously, the ideal situation is that the ratios are the same: the AUC in both drugs is the same, the Cmax is the same in both drugs. But in these calculations it's quite typical that you might pick something like 0.95, or its reciprocal, instead. So just note that we're using one here, which is the ideal situation where you have the maximum power, but oftentimes you might be asked to consider what would happen if it were 0.95 instead.
And then we get into the crossover ANOVA. Just to note, the standard approach for this type of trial is a crossover trial analysed with either a mixed model or an ANOVA-type analysis; the ANOVA is probably the more classic version. From the ANOVA you have your square root MSE for the error effect.
And you can calculate that; you can see there are some notes here about where to derive it from, and there's a direct relationship between it and the coefficient of variation. But instead of having to calculate it by hand, we can go to the Assistants menu and the standard deviation option here.
We can go to "from coefficient of variation", click OK, and it will open a little side table where we can translate a coefficient of variation into the estimated standard deviation on the log scale. Which, once again, may be a little bit confusing, but that's basically the standard deviation of the within-subject error.
Which is different again from what we could call the more standard, t-test-equivalent standard deviation of the differences.
So in this case, the coefficient of variation was 0.383, I believe, giving an estimated standard deviation of 0.37.
And if you're wondering, a highly variable drug (HVD) is, I believe, around 0.5 on the proportion scale. You'll often see this given on the percentage scale, like a 38.3% coefficient of variation, but in nQuery we're nearly always looking at the proportion scale, not the percentage scale.
So just note there's a translation: you need to use the proportion scale in nQuery, not the percentage scale. So we input the data there, and you can also see we automatically calculate the standard deviation of the differences. Then we enter our power of 80%, and in this case you can see that we get the same sample size as required by these authors. You can see here, this is the coefficient of variation and so on; in this case they gave it on the proportion scale.
Remember, this could have been on the percentage scale, like 38.3%. And you can see that they require 25 per group, or 25 per sequence, I should say, since there are two sequences in a 2x2 trial. So that's equal to 50 subjects: 25 times 2 is 50, because we have two sequences in a 2x2 crossover trial.
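The CV-to-log-scale conversion and the resulting power can be sketched as follows (a normal-approximation reconstruction of the 2x2 crossover TOST calculation; the exact t-based calculation, as used in nQuery, gives slightly lower power and hence the 25 per sequence):

```python
from math import log, sqrt
from statistics import NormalDist

def cv_to_log_sd(cv):
    """Within-subject SD on the log scale from a proportion-scale CV,
    assuming log-normal data: sd_w = sqrt(ln(1 + CV^2))."""
    return sqrt(log(1 + cv**2))

def crossover_tost_power(cv, n_per_seq, gmr=1.0, lower=0.8, upper=1.25,
                         alpha=0.05):
    """Approximate TOST power in a 2x2 crossover with true ratio `gmr`.
    With equal sequences, the SE of the log-ratio estimate is
    sd_w / sqrt(n_per_seq)."""
    nd = NormalDist()
    se = cv_to_log_sd(cv) / sqrt(n_per_seq)
    z = nd.inv_cdf(1 - alpha)
    # Probability of rejecting each one-sided null, combined conservatively
    lo = nd.cdf((log(gmr) - log(lower)) / se - z)
    hi = nd.cdf((log(upper) - log(gmr)) / se - z)
    return max(0.0, lo + hi - 1)

print(round(cv_to_log_sd(0.383), 2))              # 0.37, as in the example
print(round(crossover_tost_power(0.383, 25), 2))  # 0.83 under the normal approximation
```

Note the conversion takes the CV on the proportion scale (0.383, not 38.3), echoing the nQuery point above.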
So hopefully that's clear: this bioequivalence study, compared to ******, seems to have given equivalent results.
So in this case, this is what they were trying to prove, and obviously if the geometric mean ratios of both the AUC and Cmax fall within the limits, then you would find equivalence. Now, technically, if you're doing both,
really you should be doing two tests, and if you need both of them to be significant, then the power would technically be lower. But I think for most sample size calculations, because sample sizes for equivalence and bioequivalence studies are usually pretty low, most people just power for one of them. In theory, if there's correlation between the AUC and Cmax mean ratios for treatment versus control (or control versus generic), the power will be somewhat higher than the independent case; if they're independent, powering each at 80% gives a joint power of around 64%, so it's not the worst situation. But just a small note: if you have a conjoint hypothesis, where both of these have to be significant, then the power will be lower. As I mentioned, the alpha is not affected, but the power is lower. And that's basically true here as well: the fact that we're doing two hypotheses means that the power will be lower
than if we just needed one of them to be correct.
That's kind of trivial, it seems obvious, but the power is lower in that situation: our chances of getting significance are lower when we have to do two things rather than one. It kind of intuitively makes sense.
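The arithmetic behind that point is simple enough to write down (my own worked illustration of the independence case mentioned above):

```python
def joint_power(powers):
    """Power to win ALL tests simultaneously, assuming independence.
    Positive correlation between endpoints (e.g. AUC and Cmax) would
    raise this toward the smallest individual power."""
    result = 1.0
    for p in powers:
        result *= p
    return result

# Two endpoints each powered at 80%: joint power drops to about 64%
print(round(joint_power([0.8, 0.8]), 2))  # 0.64
```

So powering a single endpoint at 80% leaves the conjoint claim noticeably under-powered in the worst (independent) case, which is the trade-off being described.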
So I think that's pretty much our time today, and I just want to thank you all for attending. In terms of some broad conclusions: non-inferiority and equivalence testing are all about having a treatment that we want to show is similar to the standard treatment. With non-inferiority we want to prove it's no worse, and with equivalence we want to prove it's equal or equivalent. For non-inferiority, we're usually looking at direct, monotonic effects, where if the measure keeps improving we're generally going to be happy. But we do have the constraint that we usually need to run something that looks quite similar to the standard trial that was used to justify the standard treatment, due to the assay sensitivity problem, and the NIM requires careful consideration and, obviously, discussion with the regulator or any other body you're dealing with. There's a cost-benefit analysis there
in terms of, you know, administration, safety, and so on. Three-arm trials, if they are allowable, if the ethical considerations allow, let you add a direct comparison with placebo, which removes the assay sensitivity problem, and there's a flexible framework available in that case. Just to note, this is a new feature in nQuery from the last release cycle, so if you're interested in looking at this, it's available now. And equivalence is all about testing whether your treatment is equivalent. You can certainly do it on direct effects, and there's no problem with that, but usually we're looking at indirect effects, such as in bioequivalence studies, where we're looking at AUC and Cmax, these measures of the blood concentration profile: how long does the drug last in the blood, what is the concentration, and is the generic giving a similar profile to the original drug, with those issues around the 90% confidence interval and so on.
And if you're not doing bioequivalence, and you're not using the standard regulatory margins, then all of the issues about using science and data to justify what the equivalence margin should be re-enter the frame as well, of course.
So, that concludes the webinar. I want to thank you so much for attending, and if you have any questions after this webinar, you can e-mail them to ... dot com and I'll answer as many of those as possible, one-on-one. One final thing to mention is that the nQuery summer release came out a few months ago, including 26 new tables across these areas: multi-stage designs, additional features for phase II trials, multi-arm multi-stage trials as mentioned here, cluster-randomized and stepped-wedge designs, confidence interval approaches for proportions, and then similar survival-type tables, including, in an interesting area we're working on in survival,
non-proportional hazards models. If you don't have nQuery, or you have a license that doesn't include some of the features like adaptive design, you can start a trial at ... dot com forward slash trial. We just need your e-mail to sign up and get started: you'll be sent a link and be able to evaluate nQuery within your browser. You don't need to download anything, no credit card, anything like that; just fill in your e-mail here, we'll send you a link, and then you can try nQuery for 14 days within your browser, and it pretty much works as if it were downloaded on your laptop or computer. If you want any additional information, such as our previous webinars, training, or tutorials in nQuery, you can go to ... dot com forward slash star, and here's just some information on the kind of stuff that we cover in nQuery.
As I mentioned, the references are at the end of this presentation, so if there's anything missing here that you'd like more information on, those will also be quite useful.
So, let's take a couple of moments here to look at any questions that came in today, and then we'll finish up officially; if you want to leave now, feel free to do so. And if you don't get your question answered in this section, that's fine, I'll be getting back to you via e-mail at a later point, either today or tomorrow, hopefully.
So there were a couple of questions about, basically, the non-inferiority problem for other types of data, like, as I mentioned, survival. OK, so as I said, it's not that difficult, because really, when you're doing sample size derivations for survival or proportions or similar, you already have the problem that you need to derive what the test statistic looks like under the alternative hypothesis as you move away from, say, a proportion difference of zero, or an odds ratio of one, or a hazard ratio of one. So you've already kind of done the work of what the test statistic looks like under that; it's just that now you're doing it both ways, and you don't get the nice, simple null-hypothesis statistic anymore, which is usually, you know, a nicely centered Z or something like that.
But one thing I thought was important to note, and I actually showed this: if we take this non-inferiority case for the continuous outcome, I can compare that to a two-sample t-test.
Obviously, we are assuming a t-test for both of these cases. If we were to do a one-sided test for superiority where the actual difference in means is equal to 0.16 (remember, the number we're really interested in is the difference between our non-inferiority margin and our expected difference), and we plug in our standard deviation of 0.45, the effect size is the same.
And if we put in a power of 80%, we'll get a sample size of 99 per group again. So for the means case, this is really just a shifted version of the superiority or inequality-type t-test, where basically the numerator is the expected difference minus the non-inferiority margin. For the means case you can set the standard deviation independently of the difference, but for the proportions case that's not true, for the survival case that's not true, and for the count cases, like negative binomial or Poisson, that's not true. That means that under the null hypothesis the variance is different from what it would be in the nice case where the difference is equal to zero or the ratio is equal to one, and that suddenly comes into effect, because for those other types of endpoint the null case, like a ratio equal to one or a mean difference equal to zero, often has really nice, simple distributional qualities compared to any non-null value. You've lost that now, so you need to account for it.
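As a sanity check on that worked example, here is a minimal Python sketch (not nQuery's internal code) that searches for the smallest per-group n for a one-sided two-sample t-test using the noncentral t distribution for power. I'm assuming a one-sided alpha of 0.05, which is consistent with the 99 per group quoted above for a shifted difference of 0.16 and SD 0.45.

```python
import math
from scipy import stats

def ni_n_per_group(shifted_diff, sd, alpha=0.05, power=0.80):
    """Per-group sample size for a one-sided two-sample non-inferiority t-test.

    shifted_diff : expected difference minus the non-inferiority margin
                   (the 0.16 from the worked example)
    sd           : common standard deviation (0.45 in the example)
    Searches upward for the smallest n whose noncentral-t power hits the target.
    """
    d = shifted_diff / sd                        # effect size after shifting by the margin
    n = 2
    while True:
        df = 2 * n - 2
        ncp = d * math.sqrt(n / 2)               # noncentrality parameter
        t_crit = stats.t.ppf(1 - alpha, df)      # one-sided critical value
        if stats.nct.sf(t_crit, df, ncp) >= power:
            return n
        n += 1

n = ni_n_per_group(0.16, 0.45)
```

This is exactly the "shifted superiority test" point made above: the only place the margin enters is through the shifted difference in the numerator of the effect size.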
And, Yeah.
And then there's just one other question here on the equivalence setting, just asking what was going on here. So basically, this is the concentration of the drug over time; you can see time on the X axis here. This is on the original scale, and this is on the log-transformed scale. And basically, you can see that, obviously, if you administer a drug, the concentration increases.
Then it reaches a peak, and then it comes back down, and this is a very classic profile to see. Within this software, you're just selecting the maximum, the Cmax, manually, which is pretty easy to see; it's not hard to see that the maximum is at time nine.
Then you can see that it automatically calculates things like the AUC, and you can see the fitted relationship here between these: the relationship of log concentration and time is given by this linear relationship here, and you get all these other derived quantities, like AUC and Tmax and stuff like that there.
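For anyone who wants the arithmetic behind those quantities, here is a short Python sketch with made-up concentration data (the numbers are hypothetical, not those from the slide): Cmax and Tmax are just the observed peak, and the AUC up to the last time point comes from the linear trapezoidal rule.

```python
# Hypothetical concentration-time data: times in hours, concentration in ng/mL.
times = [0, 1, 2, 4, 6, 9, 12, 24]
conc  = [0.0, 2.1, 3.8, 6.5, 8.2, 9.0, 7.1, 1.4]

# Cmax and Tmax: the peak of the observed profile.
cmax = max(conc)
tmax = times[conc.index(cmax)]

# AUC(0-tlast) by the linear trapezoidal rule: sum the area of each panel
# between consecutive sampling times.
auc = sum((t2 - t1) * (c1 + c2) / 2
          for t1, t2, c1, c2 in zip(times, times[1:], conc, conc[1:]))
```

Real pharmacokinetic software also extrapolates the AUC beyond the last sample using the fitted terminal (log-linear) slope, which is the linear relationship on the log scale mentioned above.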
So apologies, I know visually it's not great, as this is older software.
So it's harder to kind of get that across there. There was just one final question about the ..., and the ..., asking about these other regulatory situations. It's probably far too difficult to go into in the very brief time I have here, but basically the big difference is that rather than using the standard regulatory equivalence bounds, you use equivalence bounds that are scaled to the coefficient of variation, certainly for the ... case. And also, you wouldn't be using a 2x2 crossover; you would be using replicate-type designs, like a 2x3 or 2x4. And as I say, the approach used is different for the FDA in the US versus the EMA in the EU, so once again that adds an extra level of complication.
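To illustrate what "bounds scaled to the coefficient of variation" means, here is a rough Python sketch of the EMA-style expanded limits (average bioequivalence with expanding limits, ABEL) as I understand the guideline; the regulatory constant 0.76 and the 69.84-143.19% cap are my reading of the EMA guidance and should be checked against the current text, and the FDA's reference-scaled criterion is different again.

```python
import math

def abel_limits(cv_wr):
    """EMA-style expanded bioequivalence limits (sketch, not regulatory advice).

    cv_wr : within-subject coefficient of variation of the reference product.
    Below 30% CV the standard 80.00-125.00% limits apply; above that, the
    limits widen as exp(+/- 0.76 * s_wR), capped at 69.84-143.19%.
    """
    if cv_wr <= 0.30:
        return (0.80, 1.25)                        # standard limits
    s_wr = math.sqrt(math.log(1.0 + cv_wr ** 2))   # CV -> log-scale SD
    upper = min(math.exp(0.76 * s_wr), 1.4319)     # cap at 143.19%
    return (round(1 / upper, 4), round(upper, 4))

limits = abel_limits(0.40)   # a 40% CV widens the limits somewhat
```

Under the EMA approach, as I understand it, the widening applies to Cmax only, with AUC still held to the standard 80-125% limits.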
If you're interested in this topic, nQuery covers most of the common situations, like Williams designs and some of the replicate designs. But if you're interested in the topic in general and really want to get into the weeds of equivalence testing, I do recommend a nice R package called PowerTOST (that's "power" plus TOST in capitals), which is probably right now the leader in terms of actively working on dealing with the, what seems to me, quite often-changing advice on the best way to power equivalence-type designs. It is not trivial in any way.
OK, I think that's probably enough for now. So, once again, for anyone who's still here, thank you so much for attending, and if you have any suggestions for future webinars, let me know in the comments before we finish up. But without further ado, I want to thank you so much for attending, and goodbye.
