50 min read

Written by nQuery Team

August 26, 2021

In this free webinar, we discuss the challenges of projecting events and how simulation can help provide the best estimate of the key event milestones using either unblinded or blinded interim data.

Using Simulation to Project Event Targets

Predicting event milestones is needed to ensure that survival trials remain on schedule for both interim and final analyses.

Most clinical trials involving survival analysis rely on key event targets being reached before analyses, whether interim or final, can be performed.

While pre-trial estimates can be used to estimate when event milestones will be reached, it is also useful to consider the value of using interim data to make more informed projections while the trial is on-going.

However, event prediction requires the consideration of a variety of possible influences alongside the events process itself such as accrual, the follow-up length per patient and dropout.

Events prediction is further complicated by the high likelihood of only having access to blinded events data during a confirmatory clinical trial.

In this webinar, we discuss the challenges of projecting events and how simulation can help provide the best estimate of the key event milestones using either unblinded or blinded interim data.

This webinar will also use an early preview of the new nQuery Predict feature which will be released later this year for the prediction of enrollment and event milestones.

- Event Milestone Prediction Complications
- Using Simulation to Predict Events using Unblinded Data
- Using Simulation to Predict Events using Blinded Data


Webinar Transcript

*Please note, this is auto-generated, some spelling and grammatical differences may occur*

0:06

Hello, and welcome to today's webinar, How to Predict Key Events for Survival Analysis Trials. We'll be discussing how you can use simulation to give you accurate projections and predictions of when key event milestones will occur, whether that be the interim analysis or the end of the study.

0:25

Today's webinar will be demonstrated on nQuery using our new nQuery Predict feature, which will be released later this year. So this is a special preview of a feature that isn't in the software right now, but which will be available later this year. So, if you have any questions or feature requests, we'll be happy to take those on board. Some of that will likely be included in the initial release, but we'll obviously continue to improve on what you see today.

0:53

Before we get started, just a few frequently asked questions, firstly, is this webinar being recorded? Yes, it is being recorded, and this recording alongside the slides will be sent to you after this webinar is complete later today.

1:07

In terms of questions and feedback, please feel free to use the questions tab on the right-hand side of your webinar software, and I will try to get to a few of those at the end. However, any that I don't get to, I will reply via e-mail to give you a more detailed response after that.

1:26

OK, so I think that's the frequently asked questions covered. So once again, today's webinar is How to Predict Key Events for Survival Analysis Trials: using simulation to project event targets.

1:38

In terms of myself, my name is Ronan Fitzpatrick. I'm the Head of Statistics here at nQuery, and I've been the nQuery lead researcher for a number of years. From that point, I've given talks and workshops like this at the FDA and JSM, and of course I'm hoping to get out and do more in-person activity next year.

1:57

But for now, I'll continue to do these webinars and continue to host and engage with material online.

2:03

So, in terms of what we're going to cover today, firstly, I'll give you a brief introduction to survival event prediction.

2:09

Secondly, I'll give you an idea of what kind of methods are available for doing this type of prediction for a survival analysis trial. And then we'll use the remaining time to go through a worked example and see some of the complications and issues that might come up in the context, specifically, of trying to predict event milestones for a survival trial. Then some brief time for conclusions and discussion.

2:32

Just to briefly mention, of course, this webinar is being presented on nQuery. nQuery is your complete solution for optimizing your clinical trial design from early stage to post-marketing, and obviously with a major focus on phase three confirmatory trials. A large number of the organizations who get clinical trials approved by the FDA have a license for nQuery, and you can see some of those companies here.

2:58

So, going into the meat of the webinar, let's talk about survival event prediction. One thing to note here is that my last webinar covered milestone prediction in a broader sense, and did touch on the issue of survival event prediction, but also dedicated a lot of time to enrollment prediction and enrollment milestones. So this month's webinar is mostly focused on the survival-specific issues that might come up, and the processes that might be of interest there. However, as we'll see, enrollment will quite often be a part of making survival predictions, so that will be briefly covered at the relevant parts here. But if you have more interest in the enrollment prediction and milestones part of the problem, please feel free to get in touch, and we'll be happy to share the recording of last month's webinar.

3:49

So focusing on today's webinar and survival event predictions, we obviously know that survival or time-to-event analyses are focused on inferences about the time to important clinical events.

4:02

Things like death, things like disease progression, things like some kind of critical outcome, like, let's say, a heart attack or a cardiac arrest or something like that. Something which is a definitive moment at which your status has changed. Now, you know, survival models are often used for some things which wouldn't quite fall into that category, some kind of repeating events, like, say, COPD exacerbations; there's sometimes debate about whether a survival-type approach versus a count or incidence-rate-type approach makes sense for those. But for now, we'll focus on the simpler cases, where survival is the obvious and widely used methodology for that particular type of clinical trial.

4:40

And, obviously, in oncology the usage of overall survival or progression-free survival is incredibly common, widely used, and very well understood. But in a survival trial, I suppose the key thing to take away, similar to what I often talk about in my survival sample size webinars, is that the sample size isn't really the target when you're doing a sample size calculation here. It's actually the events that matter: there's a fixed number of events that need to occur before you can make an inference or have sufficient power for your trial. And therefore key trial milestones, such as interim analyses and the end of the study, typically require you to reach a fixed number of events. So it's not how many people are in the study; it's how many people in the study have had the event that matters in terms of when to do an interim analysis or end the study.

5:36

And so, from that perspective, when we're doing prediction modeling and milestone modeling for a survival study, we're not just thinking about enrollment, and often, if enrollment is complete, we're not thinking about enrollment at all. We're thinking about that survival process: modeling it, and then projecting outwards from what we know so far to how long it's going to take for that number of events to occur.

5:58

And, of course, this is unlike enrollment, which, you know, is probably easy enough to work out.

6:02

You could come up with some fairly simple analytic formula for how long you think the enrollment is going to take, based on how it's gone so far, or based on some future enrollment rates. For survival, you're taking in a lot of other considerations, such as enrollment, dropout, censoring, a cure process, competing risks. All of these are obviously influencing how likely you are to have the event. People have to be in the study before they can have the event, so enrollment matters. But all these other things, like a dropout or cure process, these are all things that prevent you from having the event, and so all need to be taken account of if you want to get an accurate inference about when you reach the actual number of events required to do the analysis that you actually want to do.
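
As a small illustration of why those competing processes matter (a sketch of my own with assumed rates, not figures from the webinar): with independent exponential event and dropout times, the probability that a subject's event is observed before they drop out is the event hazard divided by the sum of the two hazards, which caps the number of events you can ever observe.

```python
# Probability a subject's event is observed before dropout, assuming
# independent exponential event and dropout processes (rates assumed).
event_hazard = 0.04     # events per month (assumed for illustration)
dropout_hazard = 0.01   # dropouts per month (assumed)

p_event_first = event_hazard / (event_hazard + dropout_hazard)   # 0.8

n_enrolled = 400
expected_observable_events = n_enrolled * p_event_first          # 320.0
print(p_event_first, expected_observable_events)
```

If that cap is close to, or below, the event target, no amount of waiting will reach the milestone, which is why these competing processes have to feed into any projection.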

6:49

And it's true of all trials. We talked in the last webinar about how enrollment milestones are often reached later than planned, but event milestones also often take longer than expected.

6:59

Not just because of enrollment delays, but also because the effect size, or the survival curves, may end up being less aggressive than you expected, albeit sometimes that might be considered to be a positive thing overall.

7:14

Now, of course, the opposite can happen. For example, although not survival studies per se, in the COVID-19 vaccine trials,

7:22

the number of people who ended up contracting the disease was higher than expected, which meant those trials actually ended up having their interim analyses quicker than expected as well. So, I think it's important to note that obviously the opposite can happen: you could end up going faster than expected.

7:36

You know, from our perspective, it's probably the problem of delay which is of more interest, and why predicting, projecting, and having an idea of how your trial is going while it's ongoing is important. And obviously pre-trial, you want to make these assumptions as well.

7:52

So, you put all this together, and you have something which, for a survival trial, makes as much sense as enrollment modeling, but with a lot of additional factors that need to be taken into consideration.

8:03

And so, here's just a small selection of some of the key survival issues that might come up. You'll note that a lot of these are actually quite similar to both the issues that come up in sample size for survival analysis and the issues that exist for enrollment prediction, though some of these are unique to the survival prediction case.

8:23

I suppose, firstly, just to mention that you need to have an idea of what the primary survival endpoints of interest are. Or perhaps you have some non-survival-type endpoints that you might also want to consider at the same time. And of course, the biggest example of this would be overall survival versus progression-free survival. So, for example, we know there's a lot of work on interim analyses, or co-primary analyses, where progression-free survival is used early on, because that endpoint tends to give results earlier than overall survival. And that's used to make decisions, for example, to increase sample size, or to stop the trial early, or other considerations. And then the final analysis is based on overall survival, given that progression-free survival in theory should be highly correlated with overall survival; obviously, there's a lot of literature about that particular issue.

9:15

So that's a classic example in oncology, where the consideration of what your primary endpoint of interest is may be important to you. And you may want to have two models effectively: one for progression-free survival and one for overall survival, depending on which milestone you're talking about. And, of course, there are other considerations, such as adverse events and other endpoints, maybe non-survival endpoints, that may be of interest.

9:40

I suppose one other important consideration is when you're actually doing your prediction. So, are you doing this pre-trial, based on some meta-analysis or external data, or maybe the same assumptions that you used for your sample size calculation? Or are you doing this on an ongoing basis? And even if you're doing it on an ongoing basis, there's a question of whether this is something that you're doing on a continual basis, like, you know, every week, or something that you do at pre-specified times, like, for example, an interim analysis. Now, some people, especially from the more practical side, people on the ground, are particularly interested in the idea of doing this continually: having something that shows you in real time, for lack of a better term, what's going on in your trial. Whereas, you know, there is intrinsic uncertainty that will exist in these forecasts, and this will be reflected somewhat by the prediction intervals.

10:33

And similar to what we'll show in the software later on, you know, the statistician might be erring towards: well, because it is a forecast, it can go up and down, it can change depending on how things are going. It's probably better to let a certain amount of information accrue first, and also probably better to tie it in with considerations such as interim analyses and the like, to ensure that there's no temptation to try and steer the trial and maybe introduce operational bias that way. I think, you know, the other question, of what information is available, is equally important. Do you have summary data, like, we know that 100 events have occurred so far, or do you have access to individual subject-level data? And if you're dealing with enrollment, perhaps also site-level data.

11:18

And obviously, if you do have that subject-level data, is it on an unblinded or a blinded basis? Do you know what treatment group that person is in? Because, I suppose, from a modeling perspective, in terms of accuracy, we want to have the maximum amount of information possible. So it would be great to have unblinded data, where we know what treatment group you're in, along with subject-level data, perhaps even other covariate-type data that we could use to improve our model. But, of course, the more information you have available, the more likely you are to run into issues with operational bias and statistical bias. The big thing is that if you have unblinded data, then, from an operational bias point of view, the standards required will be much higher, as in open-label trials, for example. Or you may need to bring in someone like an independent data monitoring committee to do this type of modeling for you, or to be the only ones who see the results of the modeling if you do end up using the unblinded data. This is very similar to the types of debates

12:16

you may have internally if you're looking at the contrast between a blinded or unblinded adaptive design, or a non-comparative adaptive design, taking into account the FDA's language on adaptive designs.

12:32

So, obviously, there's a major consideration there. I suppose, you know, the people on the ground would probably want to have access to the information, to see what's going on. Which means that even if a blinded prediction may not be the best prediction, because it would have less strict regulatory consequences, you would perhaps be incentivized to use it, even if it is not quite as good as the comparative or unblinded type of analysis.

12:59

So, once you've made all those decisions and dealt with the contextual questions, then you have the problem from a more statistical point of view: well, what survival models do you want to consider? There are obviously many different models that you could use to fit your current data, if you have current data available, and then also to project going forward.

13:19

And, of course, you know, you could have a pre-specified model, let's say exponential or Weibull, or you could have some kind of algorithm that searches through a wide variety of different models to see which one might be best.

13:33

And of course, you know, you might ask, which ones are the best for inference? Like, making prediction intervals and stuff like that.

13:40

And of course, even if we have a model, we do need to think about the key assumptions underlying any of those models, you know, and any other competing processes that might affect our ability to model the events.

13:50

So obviously there's a lot of talk right now about non-proportional hazards and behavior such as the delayed effect seen for immunotherapies.

14:01

Of course, there's some debate about whether that delayed effect is a true delayed effect, or whether it's due to some responder-type effect. But we'll put that aside; that's a different webinar, and I have covered non-proportional hazards for sample size recently.

14:13

So you can check that out if you're interested. But there are also the competing processes that we've mentioned previously, like dropout, cure, and other considerations such as competing risks models.

14:24

And of course, just a small consideration about the censoring process that you use. I think, in the vast majority of trials, and certainly the examples that I'll be doing today, we would assume that there is a fixed length of study, and basically that would be defined by when we reach our events target. So let us say we want to reach 400 events: we wait till we reach 400 events, and then whoever has not had the event, or who is still available to have the event, is right-censored.

14:51

But there are trials where people prefer, depending on the context and the disease profile, to have a fixed follow-up. So every subject is followed up for, say, 12 months and then right-censored, irrespective of whether the study is still ongoing or not. And that can be included in the software, the nQuery version I'll be demonstrating later. But I suppose the typical case is that we have some fixed target events; once we reach that, then whoever has not had the event, we would censor them. So that's the most common situation, and we will focus on that today.
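
As a small sketch of that most common rule (my own illustration with simulated, assumed data): the analysis cutoff is the calendar date at which the target event occurs, and everyone still event-free at that date is right-censored there.

```python
import numpy as np

rng = np.random.default_rng(7)
n, target_events = 500, 400

entry = rng.uniform(0, 12, n)            # calendar entry times in months (assumed)
time_to_event = rng.exponential(20, n)   # latent event times from entry (assumed)
event_calendar = entry + time_to_event   # calendar date of each latent event

# analysis cutoff = calendar time of the 400th event
cutoff = np.sort(event_calendar)[target_events - 1]

# observed follow-up and event indicator under right-censoring at the cutoff
status = event_calendar <= cutoff
follow_up = np.minimum(event_calendar, cutoff) - entry
print(int(status.sum()))   # exactly 400 events at the cutoff
```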

15:26

So the second part here is focused on OK, We've thought about all these.

15:31

We kind of know the first set of questions: we know what our primary outcome is, we know what type of data we have, and when we're doing the prediction. Let's now focus on those two second questions, namely, what survival model do we want to use, and what other processes do we want to account for when modeling survival.

15:52

And, I think, you know, event prediction models are surprisingly similar to sample size calculations. Again, there are kind of two primary approaches: there's analytic, and there's simulation.

16:00

So, analytic is basically any process, such as an equation or a Markov model or similar, which will give you the same result, assuming that you put in the fixed, you know, the same fixed parameters into the algorithm.
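
To make the analytic idea concrete, here is a sketch of my own (assumed parameter values, not nQuery's method): under uniform accrual over A months and exponential event times with hazard lambda, the expected number of events by calendar time T >= A has a closed form, and the time at which a target is expected to be reached can be found by inverting it numerically.

```python
import numpy as np
from scipy.optimize import brentq

n, hazard, accrual_len = 460, 0.04, 23.0   # all values assumed for illustration
target = 374

def expected_events(T):
    # E[D(T)] under uniform accrual on [0, A] and exponential(hazard) events,
    # valid for T >= A (i.e. all subjects enrolled by time T)
    A, lam = accrual_len, hazard
    return n * (1 - np.exp(-lam * T) * (np.exp(lam * A) - 1) / (lam * A))

# solve expected_events(T) = target for the projected calendar time
T_star = brentq(lambda T: expected_events(T) - target, accrual_len, 500.0)
print(f"target of {target} events expected at ~{T_star:.1f} months")
```

Note that this gives a single fixed answer for fixed inputs, which is exactly the analytic behavior described above: same parameters in, same projection out.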

16:16

Then, simulation is obviously where you use Monte Carlo simulation: you simulate what would happen, then you simulate again, and you kind of model what would happen. You do that, you know, 10,000 or 100,000 times, and then you see what happened on average. And if you re-ran that simulation, you would expect slight variation each time that you do the simulation.
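
As a minimal sketch of that simulation idea (my own illustration, not the nQuery implementation; all rates are assumed): simulate accrual, event, and dropout times, record the calendar time at which the event target is hit in each run, and summarize the runs with a median and a rough prediction interval.

```python
import numpy as np

def simulate_time_to_target(n_subjects=460, target_events=374,
                            accrual_rate=20.0, event_hazard=0.04,
                            dropout_hazard=0.005, n_sims=1000, seed=42):
    """Monte Carlo projection of the calendar time at which the target
    number of events is reached. Rates are per month and assumed for
    illustration (Poisson accrual, exponential event/dropout times)."""
    rng = np.random.default_rng(seed)
    hit_times = np.empty(n_sims)
    for i in range(n_sims):
        # staggered entry: cumulative exponential inter-arrival gaps
        entry = np.cumsum(rng.exponential(1.0 / accrual_rate, n_subjects))
        event = rng.exponential(1.0 / event_hazard, n_subjects)
        drop = rng.exponential(1.0 / dropout_hazard, n_subjects)
        observed = event < drop          # event counts only if before dropout
        cal = np.sort(entry[observed] + event[observed])
        hit_times[i] = cal[target_events - 1] if cal.size >= target_events else np.inf
    return np.median(hit_times), np.percentile(hit_times, [2.5, 97.5])

median_t, (lo, hi) = simulate_time_to_target()
print(f"median time to target: {median_t:.1f} months ({lo:.1f} to {hi:.1f})")
```

Re-running with a different seed, or tweaking the assumed hazards, shifts the projection, which is exactly the kind of sensitivity analysis discussed later in the webinar.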

16:38

So, it's important to note that for many problems, you could probably use both of these approaches for the same type of model. If you're using an exponential model, or a Weibull model, or even a piecewise exponential model, it's probably true that you could use both simulation and an analytic approach for doing this type of projection and prediction modeling.

17:01

However, we will focus today on simulation, primarily because it gives you a greater degree of flexibility, so you can make small changes, or start playing around with underlying assumptions.

17:12

Generally, simulation doesn't have too big of a problem with that.

17:15

If you can include it in the data-generating model, then it's basically not that big of a deal. Whereas if you're using an analytic approach, oftentimes that won't be a problem, but sometimes you will come up against a wall of what is possible within that model, or adding some of these additional assumptions will cause the model to become much more complex, perhaps much less computationally efficient, or to just break completely.

17:39

So, from that perspective, analytic is generally better if you want quick and dirty results and you're not planning to move too much beyond your initial model.

17:50

But if you want to tinker and try different things and include things that maybe are a bit more obscure, then simulation is probably the way to go.

18:02

But even given that, there are so many models that you can choose from: exponential, Weibull, piecewise exponential, including cure rates, different censoring assumptions, et cetera, et cetera. So how do you find, fit, and compare these models?

18:14

Well, no, there is no real best way to do that.

18:18

There are ways you could obviously fit the current data and compare models. But of course, unlike, say, a typical supervised machine learning process, where eventually we will get the correct result, we won't know the true result until the actual study ends. Of course, if you look at the papers on which the various models and methods are based, they will do simulations to show how the methods perform, but within the context of your specific study, the proof unfortunately will be in the pudding. So you just need to trust the methods that you select, which implies that you should probably try a few different things, see which fits are reasonable, and, you know, play around,

18:57

and do sensitivity analysis to see what might happen.

19:01

Basically, you probably want to scope out scenarios rather than necessarily just going off the first one that you can get to work.
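
As one concrete way to try a few models and see which fits look reasonable, here is a sketch of my own (simulated data, assumed values, not nQuery's algorithm) that fits exponential and Weibull models to right-censored times by maximum likelihood and compares them on AIC:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
# hypothetical interim data: latent exponential event times, administratively
# censored at a recruitment-driven cutoff (all values assumed)
latent = rng.exponential(25.0, 200)
cutoff = rng.uniform(1.0, 30.0, 200)
time = np.minimum(latent, cutoff)
event = (latent <= cutoff).astype(float)

# exponential MLE with right censoring has a closed form
lam = event.sum() / time.sum()
ll_exp = event.sum() * np.log(lam) - lam * time.sum()
aic_exp = 2 * 1 - 2 * ll_exp

# Weibull log-likelihood (shape k, scale s), maximized numerically
def negll_weibull(p):
    k, s = np.exp(p)          # log-parameterization keeps both positive
    log_hazard = np.log(k / s) + (k - 1) * np.log(time / s)
    cum_hazard = (time / s) ** k
    return -(np.sum(event * log_hazard) - np.sum(cum_hazard))

# start from the fitted exponential, i.e. a Weibull with shape 1
res = minimize(negll_weibull, x0=[0.0, np.log(1.0 / lam)])
aic_wei = 2 * 2 + 2 * res.fun
print(f"AIC exponential={aic_exp:.1f}, Weibull={aic_wei:.1f}")
```

Because the exponential is a Weibull with shape 1, starting the Weibull fit from the exponential estimate guarantees its log-likelihood is at least as good; AIC then penalizes the extra parameter.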

19:09

And, of course, you know, there's a big question then about the type of models that are compatible with the type of data that you have. And the big question there is, for example, blinded versus unblinded data.

19:20

If you're using blinded data, then I suppose some may prefer to have some kind of mechanism to try and estimate what the data for the groups would have been, using, say, something like an expectation-maximization algorithm or similar.

19:35

How accurate, how useful are these, particularly at maybe smaller sample sizes slash event counts? There are questions about that.

19:43

So, I think, you know, it's not just that you pick the best model; you want to pick the best model for what you actually have available. So, suppose you only have summary data available, where, let's say, you only know that 100 events have occurred after 10 months, when 200 people have been recruited at that point.

20:00

Then you may want to focus on simpler models, rather than trying to move into complex models to extract, I suppose, hypothetically what happened. So, if you have summary data, you're probably using simpler models; if you have subject-level data, you probably have the capacity to do more complex models. And if you have unblinded data, that's an additional level of complexity that you can probably introduce to be more accurate.

20:23

And as I mentioned several times already, but just to reiterate: if you want to model survival, you have to model what else is going on. You need, if accrual is ongoing, to model enrollment; you need to think about competition to events, be it dropout, or censoring, or competing risks, et cetera.

20:39

I think just a note on that.

20:42

The "competition to events" I'm using here is a broad category.

20:45

And when we go through the examples later on, I'll be talking about a dropout process.

20:49

But really, it's important to note that what we're talking about here in terms of event prediction is that the target number of events is what we care about.

20:59

We're really only interested in the target events from a practical point of view: we need a certain number of events to occur for us to do our analysis and finish this trial, or get this trial on the road to the analysis stage and approval, et cetera.

21:15

So from an inferential point of view, it doesn't really matter whether the competition to our event is a dropout, or a cure, or censoring, or a competing risk.

21:27

All that matters, from our perspective, is that if one of these competing processes occurs, that person can no longer have the event, and therefore can no longer contribute to the study ending, or to the interim analysis occurring.

21:40

Based on the, you know, study protocol, based on the target that we set for doing that particular action.

21:49

So just to mention that we will be talking about dropout primarily in the example.

21:53

But in reality, for this particular type of problem, the specific competing process isn't really the main point.

22:01

It's really just about if a process excludes you from being able to have the event going forward.

22:06

That means that, from our point of view, that person is not someone who has had the event, but someone who is unavailable, and therefore will slow down how long it takes until we have the events.

22:17

That is, until we reach the total required events.

22:22

This slide is from the last webinar, actually, but

22:24

it's basically just, like, a quick survey of some of the potential ways that you could do this, from parametric modeling (that's the kind of analytic models I'm talking about there) to semi-parametric piecewise modeling.

22:35

And in fact, you know, many of the sample size calculations that you're familiar with can basically be re-used and rejigged to become event projection models.

22:52

But, as I said earlier, we're going to focus on simulation using survival models, such as exponential, piecewise exponential, or Weibull.

23:00

And that's what we'll focus on today, simply because it's kind of the easiest one to understand and it gives you the greatest degree of flexibility over what you do. So you're picking the model that suits you, rather than having some algorithm or some machine learning process select it for you.

23:18

So, you know, the step four here, the model selection algorithm, is really taking the potential models from step three, and then just picking from a bunch of those, perhaps taking from the first and second category as well. But for now, let's focus on the idea that you have a bunch of different models that can be used. Let's say piecewise exponential: there are multiple different change points you could choose, and then these algorithms work by fitting and finding the one that best fits what's happened thus far.

23:48

Just to mention that non-proportional hazards is a slightly different problem and would require slightly different models. But that's very much a developing area; there are a couple of papers out very recently on that if you're interested.

23:59

But for now, we're probably mostly focused on the kind of classic proportional hazards study, where we're assuming some kind of constant hazard ratio. That said, there are other considerations. For example, if you have access to external data, you could use some kind of Bayesian borrowing to try and improve your projection model, to improve your estimates of, say, the exponential or Weibull parameters. And for the case of blinded data, in theory one could use expectation-maximization or similar algorithms that try to create, effectively, your best guess of what the unblinded groups would be, based on the blinded data and based on some assumptions, for example a fixed hazard ratio. So, you know, that's actually quite similar to the blinded sample size re-estimation calculations that have been proposed for survival models, and there's some way that you can basically use that to create pseudo parameter estimates for the blinded and unblinded type models.
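
To illustrate that blinded-data idea, here is a toy expectation-maximization sketch of my own (uncensored exponential times, 1:1 allocation, hazard ratio treated as known; a real implementation would also handle censoring): it produces a pseudo-estimate of the control-arm hazard from pooled blinded times.

```python
import numpy as np

def em_blinded_hazard(times, hr, n_iter=200):
    """Toy EM estimate of the control-arm hazard from pooled blinded
    exponential event times, assuming 1:1 allocation and a known hazard
    ratio. Illustrative sketch only: uncensored times, no dropout."""
    lam = 1.0 / times.mean()                       # crude pooled starting value
    for _ in range(n_iter):
        # E-step: posterior probability each time came from the treatment arm
        f0 = lam * np.exp(-lam * times)            # control density
        f1 = hr * lam * np.exp(-hr * lam * times)  # treatment density
        p = f1 / (f0 + f1)
        # M-step: closed-form weighted update for the control hazard
        lam = len(times) / np.sum(((1 - p) + p * hr) * times)
    return lam

# simulated blinded data: control hazard 0.10/month, true HR 0.7 (assumed)
rng = np.random.default_rng(0)
n = 2000
arm = rng.random(n) < 0.5
t = np.where(arm, rng.exponential(1 / 0.07, n), rng.exponential(1 / 0.10, n))
lam_hat = em_blinded_hazard(t, hr=0.7)
```

With the hazard ratio fixed, the M-step has a closed form; how reliable this is at small event counts is exactly the open question raised earlier about blinded estimation.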

25:04

OK, so, for the rest of the webinar, we're going to focus on a worked example.

25:11

And so, what we're going to do here is look at a study where we have 374 events as the target.

25:18

And we have a target sample size of 460.

25:25

So, in this case, 50% of the events have occurred so far, 187 of the 374. But enrollment has not finished, so we need to model the remaining enrollment to get an accurate estimate of survival. Otherwise,

25:41

how would we know when someone arrives into the study, to know where in calendar time, or relative to the start of the study, they have had the event?

25:53

And then, obviously, how much time they spent on the study before they had the event.

25:56

And in this case, we'll look at two cases: the case where site-level data is available, and the case where it's not. When we get to the site-level data, just note that 118 of the 127 sites that could be used in this study have been opened at the time of this particular analysis. So remember, our current calendar time is around 24 to 25 months.

26:22

Sorry, there's a small mistake there: it's about 24 months, rather than 20.7. And in this case, this is basically what we're trying to achieve here.

26:34

So, as I said, the main things that we want to look at today, compared to last month where we focused mostly on the enrollment process (which I'll reiterate briefly but not really focus on today), are: firstly, unblinded versus blinded survival models for prediction; secondly, the effect of different types of survival models; and thirdly, briefly, the effect of enrollment.

27:04

So, as mentioned, nQuery Predict will be the new module that will focus on these types of milestone prediction problems. What we will see here is very close to what the initial release will be, but note that this is a beta version; there's still a lot of bug fixing going on, and of course it's not available in the software right now. If you're interested in knowing about the wider features in the software and rough timelines in terms of cost and release dates, you can get in contact at ...

27:37

dot com, and we'll be happy to keep you in the loop in terms of what's happening there.

27:42

But just to say that this is an experimental, beta-level implementation of the software that you're getting a preview of today. And if there's anything that you see that's incorrect, or that you would want to see in a future release, please let us know. We're really interested in any feedback that you might have.

27:59

I think for the first case, let's focus on

28:03

what the best-case version of this prediction would be. So in this case, we have subject-level data, and we have site-level data.

28:14

So if we just scroll this across a bit, you can see that we have subject level data.

28:18

So in the subject level data, we know where each subject is from, and we have certain information about each subject. We have region, which we're not going to talk about or use today, but which obviously could be something like EU versus US, or a US state, or whatever. And then we have SITE, which is just the site ID. So each person came from a site, and each site is assigned an ID; we're just using this to link our subject level data to our site level data, which I'll talk about briefly in a moment.

28:50

But the important thing to note here is that we're doing a survival analysis type problem.

28:55

We need three pieces of information, and then one additional piece of information if we're doing an unblinded survival prediction.

29:03

So, the first of the three pieces of information we need for a blinded or unblinded survival prediction is when you arrived in the study.

29:11

Because, of course, if we don't know when you arrived in the study, then we don't know how long you've been on the study until you had the event.

29:19

And we also need to know it if we're going to make some inferences about the enrollment process itself.

29:24

So that's useful both for modeling enrollment and for ensuring that we have an accurate idea of how long you've been in the study in terms of calendar time, basically relative to the time zero of your study.

29:38

So, for example, you can see here that the first subject arrived around three months into the study, and they have been followed up for around 21 months at this point, so they're still in the study. The current time is around 24 to 25 months into the study; they arrived very early, and they've been around a long time. This is relatively unusual, as we'll see later on; this is one of the outliers here.

30:02

And then, we need to know: when did you arrive in the study?

30:08

How long you've been in the study?

30:10

But to contextualize how long you've been in the study, we also need to know what your current status is.

30:17

The status can be split into three broad categories. One, you have not had the event or any other competing outcome; you're basically still available to have the event.

30:28

Or, to put it a different way: if the study was to end today, these are the people who would be right-censored.

30:35

So in that case, we would usually assign something like a zero, something neutral.

30:39

But this could literally just say something like 'available' or 'censored', or something like that.

30:48

But for now, we'll assume it's assigned a zero.

30:52

Secondly, we need to know if you've had the event; in this case, that's denoted by a one.

30:57

So if you had the event, then the follow-up time is not the time since you arrived in the study until the current time, the time at which this data was taken; it's how long you were in the study until you had the event. And then, of course, we don't really have much interest in what happened to you after the event, because from the perspective of doing a survival analysis, a time-to-event analysis,

31:24

once you've had the event, from an inference point of view, that's all we need to know about the individual subject.

31:31

And then we have one other category, which is basically the 'other/dropout' category.

31:39

And that category is basically for people who have had something happen to them that's not the event, which means that they will no longer be available to have the event.

31:52

So a classic example would be if someone dropped out of the study: they're just no longer in the study; they decided that they can't continue.

32:00

But, of course, it could be a competing risk, or maybe they're cured of their disease and no longer need to be in the study, or other considerations such as that. Or it could be due to just being censored for some reason, maybe some kind of fixed-level censoring.

32:13

But the important thing is really just that, when we're doing our prediction modeling, we want to ensure that anyone who has had something happen to them that means they are no longer available to have the event is not someone we're going to create an event prediction for. We have to assume that's fixed in time, and that we can no longer do anything with that person, effectively.

32:34

And in that case, of course, the follow-up time is similarly not the time since they arrived until the current time; it's how long they were in the study until they had that dropout, for example.

32:47

And we'll see that, if you want to (you don't have to), you can treat that dropout-type process similarly to the event process.

33:03

So you don't have to, but if you wanted to do that, that's perfectly viable.
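Putting the above together, here is a minimal sketch of what subject-level interim data like this might look like, with the status coding described (0 for censored/available, 1 for event, -1 for dropout/other). The values and column layout are illustrative, not the webinar's actual dataset:

```python
# Sketch of subject-level interim data for event prediction.
# Status coding (assumed, per the webinar's description):
#   0 = still available to have the event (right-censored if study ended today)
#   1 = had the event
#  -1 = dropout / other competing process
subjects = [
    # (arrival_month, followup_months, status)
    (3.1, 21.0, 0),   # arrived early, still at risk
    (10.4, 8.2, 1),   # had the event 8.2 months after arrival
    (15.0, 2.5, -1),  # dropped out after 2.5 months on study
]

# For events and dropouts, follow-up stops at the event/dropout time,
# not at the current calendar time; only status-0 subjects keep accruing time.
at_risk = sum(1 for _, _, s in subjects if s == 0)
events = sum(1 for _, _, s in subjects if s == 1)
dropouts = sum(1 for _, _, s in subjects if s == -1)
print(at_risk, events, dropouts)
```

Counting statuses this way is exactly how the summary figures quoted later (people still at risk, events so far, dropouts so far) are derived from the interim dataset.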

33:09

Just to mention briefly the site level data. We're not really going to focus on this today; I talked about it a fair bit more last time. But for the site data, we obviously need a site ID to link the sites referenced in the subject level data to our site level data. We need an enrollment cap, which is just the maximum number of people allowed; a site open time, just to let you know when that site opened; and then the rate at which enrollment is going to happen in that site. That rate may be based on the observed enrollment rate, basically counting all the subjects from site 101, for example, and dividing that by the current time. But it could also come from some pre-study specification, or from external data, or anything like that.

33:53

Just to note that the rate given here for site 101 does happen to correspond to the rate implied by the number of site 101 subjects in the subject level dataset, but it does not have to.

34:06

We'll see that briefly later on.

34:08

And there are also, in this dataset, a handful of unopened sites, which is optional; we don't need to have unopened sites. But if we do have unopened sites, you can see that we just need a window of time during which we'll allow them to open.

34:22

In this case, usually around our current time.

34:26

OK, so that's the data. Hopefully that gives you an idea of what our data looks like and what we're going to be putting into our model. Today we'll be mostly focusing on the problem where we have access to interim data, data gathered while a trial is ongoing, and we're going to make predictions using that data: basically building the best model, or one of the better models possible, to make better, more informed inferences about future predictions. Just to note that on any screen there'll be this help card on the right-hand side, similar to standard nQuery tables, so if you need additional context or information, that can be useful.

35:05

So, returning back to the very first thing we came in with, let's focus on the best case scenario. So, we have site level data.

35:14

We have subject level data.

35:16

The subject level data includes the required survival information, but also includes the treatment indicator. So we do have: are you in the treatment group or the control group? One for the treatment group, zero for the control group.

35:29

So we can actually, if we want to in this case, do an unblinded events prediction, which on average we would expect to be more accurate than the blinded events prediction, because we're going to be modeling each group's survival process individually, rather than having to treat them as coming from a single global process.

35:48

Since we have that data, why wouldn't we use it, basically?

35:53

So, the first thing we need to do in this case is select our subject level dataset.

36:03

And then we just need to select the correct values for each of these.

36:10

So as you remember, the arrival time in this case is equal to the 'arrival' column.

36:17

The time on study is equal to 'follow up', 'current' is the current status, and 'treatment' is our treatment group. So you can see that we've got the subject level data; in step two, we have now assigned the arrival time to the correct column here, and the treatment ID is equal to 'treatment'.

36:32

The status indicator is equal to 'current', the time on study is equal to 'follow up', and the site ID is equal to the site column.

36:40

So each of those columns then gives us the correct value for each subject based on that row's value. And also just note here that there is obviously the ability to control which value denotes the control group and which the treatment group; by default, zero and one make sense for control versus treatment. And for the status indicator, we have 1, -1 and 0 as the defaults.

36:59

But note that this is entirely flexible, and if only two of the indicators are used (let's say, for example, no one has dropped out yet, a not uncommon scenario), then you can fill this in manually.

37:13

So that when you do predictions later on, you could manually introduce a dropout process, even if it hasn't happened yet. By default, of course, the dropout process will be assumed to be zero, so basically no one's going to drop out, but we could add that in if, for example, we think some dropout is going to happen after this point. Of course, this study has probably been going on quite a while in this hypothetical, so that's usually not a problem.

37:39

For the site data, it's pretty much the same thing. Just to say that we have the site ID to link our two datasets together; that's really important. We need the rates that we expect in each site, so that's the enrollment rate per site; the site enrollment cap, just the maximum number of people allowed from each site; and an open time for our open sites. Remember that we're doing this on the basis of interim data.

38:01

So at least one site must be open; otherwise, where are these people coming from? And just to mention that if we happen to have unopened sites, then optionally we can include start and end times for those sites. In this case, we do have some unopened sites, so we can do that here.

38:21

So we get to this screen here, and you can see that the current sample size is 402, and the current number of people who have not had the event or dropped out is 212. We'll see what the specific split is; it's 187 and 3, just to be a spoiler. But we can see that there are 212 people still available to have the event at the current calendar time, which is around 25 months into the study. So this study is nearer to the end than the beginning. And you can see that by default the target sample size is set to double what we were able to recruit, 804, but we obviously know that in this particular case, if we go back to our original slide, we actually want 460 people to be the maximum number of people in our study.

39:07

So we're only going to be recruiting, or modeling, an additional 58 new enrollments using simulation. All the rest are fixed, because we know them from the data itself.

39:19

Based on the data, we have the arrival times for the first 402 people; we just need to simulate an additional 58.

39:26

Just to mention here that, in the case of having site level data, we can create a very complex assignment where each individual site has its own rate and enrollment cap, and we can fix and change those as much as we want. And new sites can be controlled to an even greater degree, because we can decide when they're actually going to open. I talked about this a lot more in the last webinar, but just to say that we have a lot of flexibility here in how to model the remaining enrollment process.

39:52

But this is really not the focus today. Just to mention that, because enrollment is still ongoing in this trial, we do need to model enrollment to be able to model survival on top of it, basically. Because if we don't know when someone arrived, then we can't really model how long they've been in the study relative to all the other people in the study, and therefore decide when the study actually ends, after the fixed number of events occurs.

40:21

So then we get to the event and dropout information.

40:25

The default here, because the current number of events is 187, is twice that again: 374.

40:31

That's actually, in this case, the actual amount required. So this was an interim analysis after 50% of the required events had occurred.

40:38

And you can see there, once again, that the current censored count is 212, and if we click on the Dropout Model here, we can see that three people have dropped out in this study.

40:46

So there are 190 people who, at this point in our subject level data, are no longer available to have the event. They can no longer have the event because they've already had it by the current time, or they have dropped out, or something else has happened to them such that they're no longer available to have the event. Dropout here, as I say, is kind of a broad category for all the other things that could cause you not to be available to have the event; dropping out is just the obvious process.

41:17

And you can see here that the target sample size is also given. You can have this on screen as well if you're particularly interested, but that's just one small thing to mention; I don't think it will affect this particular study.

41:27

But if the target number of events is reached before we reach the target sample size... So let's say that we increased the target sample size tenfold; the likelihood then, with 374 as the target events, is that

41:41

enrollment will still be ongoing at the point that we reach 374 events.

41:45

In that case, what would happen, basically, is that

41:49

we would stop the trial after 374 events occur, and whatever that time happens to be is when the accrual end and the study end will basically occur.

42:01

That's just a working assumption we're going to have, basically:

42:03

the events target takes priority over the sample size target. And you can see here that, by default, we've picked the exponential model, and it's actually already done the best-fit part: what the best fit for the exponential model would be, with an exponential rate in the control group of around 0.078 and an exponential rate in the treatment group of 0.0545. Unsurprisingly, we're expecting (or hoping for) a lower event rate in the treatment group compared to the control group, and we can see that we have a hazard ratio of around 0.7.

42:37

So this has been fitted automatically, basically. It does some fairly basic approximations: what the best fit for two exponential models, for two exponential processes, would be for these two respective groups, and it gives you the hazard ratio, which you can quickly change if you want. And then there's the dropout process, which we won't focus on too much today; just to say that you can individually set those dropout rates as well. Of course, a hazard ratio doesn't really make sense to worry about there.
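For context, the exponential "best fit" the software reports can be reproduced by hand: the maximum-likelihood hazard rate for exponentially distributed event times is just the number of events divided by the total follow-up time at risk, computed per arm when the data are unblinded. The person-month totals below are hypothetical, chosen only to land near the rates quoted in the webinar:

```python
def exp_rate(n_events: int, total_followup: float) -> float:
    """MLE of the exponential hazard rate from censored event data:
    events divided by total person-time at risk."""
    return n_events / total_followup

# Hypothetical per-arm summaries (events, person-months at risk)
control_rate = exp_rate(100, 1282.0)    # ~0.078 events per person-month
treatment_rate = exp_rate(87, 1596.0)   # ~0.0545 events per person-month

hazard_ratio = treatment_rate / control_rate  # ~0.70, treatment vs control
print(round(control_rate, 3), round(treatment_rate, 4), round(hazard_ratio, 2))
```

Censored subjects contribute follow-up time to the denominator but no events to the numerator, which is why accumulating interim data steadily sharpens these estimates.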

43:07

So we'll talk about some of the other models available in a moment, but for now let's just assume that the default exponential model, where we're assuming a constant event rate, makes sense. And then we can also add in some additional outputs, if we're interested, here.

43:23

Then we'll have 10,000 simulations and a random seed; the random seed field here just picks a random seed for you. You'll also see that there's a percentile table option here; we'll see that in the report.

43:36

So you will see that this just takes a little bit of time to run. You can see that the average sample size and average events are whole integers.

43:44

That usually indicates that you have reached that target in every single simulation. If you happen to get decimal results, that indicates that one or both of them aren't reaching the target in at least one of the simulations. And, as I said, one example of that would be if we reached the target sample size

44:04

after we reached the target events; then the target events would define when the study ended, because we're assuming here that once the target events occur, that's when we do the right censoring and end the study.
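To make the simulation logic concrete, here is a minimal sketch of an event-milestone prediction of this shape: residual event times for at-risk subjects are drawn fresh from a (memoryless) exponential model, new enrollments arrive as a Poisson process, and the study end is the calendar time of the final required event. All inputs are hypothetical stand-ins loosely echoing the webinar's example (374-event target, ~212 at risk, ~58 left to enroll), not nQuery's actual algorithm; dropout and per-site accrual are omitted for brevity:

```python
import random

def simulate_end_time(t_now, events_now, target_events, n_at_risk,
                      n_to_enroll, accrual_rate, hazard, rng):
    """One simulated calendar time at which the event target is reached."""
    event_times = []
    # Memorylessness: each at-risk subject's residual time-to-event is
    # exponential from "now", regardless of time already spent on study.
    for _ in range(n_at_risk):
        event_times.append(t_now + rng.expovariate(hazard))
    # New enrollments arrive as a Poisson process (exponential gaps),
    # then wait their own exponential time until the event.
    t = t_now
    for _ in range(n_to_enroll):
        t += rng.expovariate(accrual_rate)
        event_times.append(t + rng.expovariate(hazard))
    event_times.sort()
    needed = target_events - events_now       # events still required
    return event_times[needed - 1]            # time of the final needed event

rng = random.Random(42)
ends = [simulate_end_time(25.0, 187, 374, 212, 58, 20.0, 0.065, rng)
        for _ in range(2000)]
print(round(sum(ends) / len(ends), 1))        # mean predicted study end, months
```

Repeating this thousands of times yields the distribution behind the averages and percentile tables shown in the report.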

44:17

You can see here that this probably isn't a very realistic enrollment model.

44:22

Enrollment was kind of tailing off, so perhaps it would make more sense to have the rates in all of those sites be significantly lower. But we'll move past that for now and focus on the events prediction, where you can see there's probably a similar type of process going on. But you can see that, overall, if we were to ignore this slight logistic shape here, if recruitment were to continue at the average rate, we would expect this overall model to be roughly a straight line. This is something that would obviously give you questions or pause (maybe there should be a lower event rate by default), and that's where trying different models and so on will come in. In addition, for the dropout process, you can see that there.

45:05

Obviously, we provide a report which summarizes both:

45:11

on the left-hand side, the inputs into this particular simulation. So you can see here it's just an input summary: what the data was, the various inputs that we used for the sites, the sample sizes and the events, and what seed was used. But we're probably more interested, typically, in the summary on the right-hand side here, which is effectively the results.

45:37

So in this case, we have our average sample size, our accrual duration, our study duration, etcetera. We can see here that to get to 460 people required about two extra months, so we were very near the end here, at 27 months, but the study continued on to 43 months to actually reach 374 events. And you can see that 374 events was reached in every single trial.

46:01

But, of course, if, say, the dropout process was much more aggressive, or the event process was a lot slower, there may be cases where the dropout process basically leads to too few people being available to actually get to the target events. In which case, the simulation target reached may be below 100%. But usually, for reasonable assumptions, we would expect about 100%.

46:21

You can see here that the average follow-up was around 12.5 months.

46:25

And of course, just to mention that that follow-up includes the people who were censored. So some people who didn't have the event were followed up for however long they were in the study, and so it's the average over both those censored times and the amount of time it took for someone to have the event or to drop out.

46:40

You can see here that we have the percentile summary. So you get the 5th percentile, the 25th percentile, the median or 50th percentile, and the 75th and 95th percentiles. The events and sample size were the same for all of these, but dropouts ranged from 4 to 9, the accrual duration from 26.6 to 27.5, and the study duration from 41.4 to 46.2. And the site count was 127 in every case.

47:13

And, of course, those plots that we saw are based on tables, which you can see here if you're interested.

47:21

And there are also those additional reports that we selected from the Simulation Controls menu. Namely, we can find out what happened in each individual simulation, like when this particular simulation ended.

47:38

And you can see that this kind of varied from one simulation to another. You can see what happened in an individual simulation, or in multiple individual simulations. So this is, for example, what happened in simulation one: these were the times that were assigned for the survival and dropout processes, both of which were greater than the end of the study, so this subject ended up being censored in this case, for example.

48:04

And then, on a per-site basis, you can see the average accrual rate and the average number of people recruited in each site, averaged across the simulations. And then you can also see what happened in each site individually, in each of the simulations individually.

48:20

So, for example, in simulation one we can see that this particular site accrued 11 people, and in other simulations it recruited 12, 13, 9, etcetera. But in this case, it recruited 11 into our site 101.

48:36

And that covers the broad way this works. What I'm going to do now, basically, is very quickly go through some of the other scenarios, focusing mostly on the second scenario as the comparison, which is the blinded data case.

48:51

But, in terms of the setup, this is basically the exact same thing, except that instead of having a treatment indicator, we're now going to imagine that we don't have access to the treatment indicator: that, in fact, this is blinded data, and we have to make predictions without the useful information of knowing which group each subject is from.

49:12

Note that the site specific and the accrual options are not affected in this case.

49:16

So remember, the treatment indicator is only being used for the purpose of helping us create two individual survival processes. That's it.

49:27

We're not really using it for the accrual process, because, of course, we are hoping, or assuming, that if we're using randomization, which treatment you receive doesn't really have an effect on when you were added into the study, broadly speaking.

49:41

There might be some constrained randomization and such on top, but broadly it should be representative of, or close to, what true randomization would be like.

49:50

So you can see the big difference here, if we look at it, is that we no longer have access to two individual rates.

50:00

If we go back to the same step in the unblinded process, you can see that we had two individual hazard rates and a hazard ratio, so we could quickly go between one and the other. So if I changed this to 0.8, it would automatically update and give you the updated hazard rate.

50:16

But you can see here that we now only have a single hazard rate available.

50:23

Let's just change it back to what it was, roughly 0.7. So, rather than having 0.078 and 0.0545, we now have a single number that, unsurprisingly, lies between those two groups' rates.

50:38

And unsurprisingly, it's kind of biased towards the control group, because more of the events are coming from the control group.

50:47

So, unsurprisingly, we see a little bit of bias towards the group that has more events when we model a single global event process.
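A small sketch of where that pooled number comes from under a blinded exponential fit: total events over total follow-up time, with no arm labels. The arm-level numbers are hypothetical; the point is that the pooled rate always lands between the two arm rates, weighted by how much follow-up time each arm contributes:

```python
# Hypothetical unblinded arm summaries (events, person-months at risk),
# chosen to match rates of ~0.078 (control) and ~0.0545 (treatment).
control_events, control_time = 100, 1282.0
treatment_events, treatment_time = 87, 1596.0

# Blinded fit: we cannot split by arm, so we pool everything.
pooled = (control_events + treatment_events) / (control_time + treatment_time)

# The pooled rate is a follow-up-time-weighted average of the arm rates,
# so it sits strictly between them; exactly where depends on how events
# and follow-up time split across the arms.
print(round(pooled, 4))
```

This is why a blinded prediction can drift relative to the unblinded one: the single pooled rate quietly inherits whichever arm dominates the observed events and follow-up.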

50:55

So, what's important to note here is that we're assuming we don't know what group you're in, so we just treat everyone as if they came from the same global event process, which, on average, we hope should be roughly equivalent to what we would get for the unblinded process. We'll see that the dropout process has a similar thing,

51:16

where we no longer have individual hazard rates for each group's dropout process; we now have a single global dropout process.

51:26

The Simulation Controls option is effectively identical to what we had previously.

51:35

So we run our simulations, and you can already see things are not too different. Obviously, the target events and the sample size end up the same, but there has been an effect on the average study duration in particular.

51:52

And there's a very small effect on the average dropouts. The main thing to take away from this is that, in an ideal world, we would probably have done prediction one here, the unblinded prediction. But in reality, the blinded situation is probably going to be far more available and far less problematic from a regulatory point of view, in terms of dealing with operational bias: having access to which group people are in is obviously something that you probably don't want trials to have available, from the perspective of the regulator.

52:30

So you can see here, if we go to the various results,

52:36

we see that the average study duration there was around 43.73 months, and right here it's 43.33.

52:42

So this is probably, being honest, slightly underestimating how long this study would take. That's probably because it's biased towards the more aggressive control event process, because there are more control events.

52:58

Obviously, if we're assuming a slightly more aggressive process on average, then we're going to end up finishing a little bit earlier.

53:06

So, I think the big takeaway from that is: if your study is going as expected, or at least as you hoped, which is that the treatment effect exists and treatment events occur more slowly than control events, then we would expect that a blinded process might slightly underestimate your study duration compared to the unblinded events process. But, because it's an average, it's not off by too much in this particular case.

53:38

You can see there's an effect here.

53:39

And then, in terms of the dropout process, really, no difference.

53:43

We're talking about a very minute change, and really this is probably mostly just because the study ended slightly quicker.

53:52

So, from that perspective, most of the additional information is basically the same, except that we don't have anything related to which group you're in. Whereas if we go to the summary table for the unblinded case, for example, we would see that there's some material related to control sample size, control events, and so on. Obviously, we don't have that in the case of the blinded analysis.

54:16

So, I think with the small amount of remaining time we have, we'll focus on the other issue that might occur, which is: what would the effect of using different survival models be?

54:27

And what we'll do here is just replicate the blinded, subject-plus-site-data case that we just did there.

54:35

We'll keep everything the same, except for one thing.

54:40

We'll deal with the type of model that you might want, and we'll basically look at what a Weibull model would do versus the classic exponential. We'll just skip the accrual stage for that.

54:53

That's the same, except, of course, we'll set the target sample size to 460.

55:00

We'll go to the Weibull model, and the big thing about the Weibull model compared to the exponential model is that it allows the event rate to vary over time. And importantly, if you have pre-existing data, if you're using subject level data like we're doing here,

55:17

then if you have been in the trial already, the time that you've been in the trial affects how likely you are to have the event going forward. So, remember, the exponential is a memoryless type of survival model: basically, your chance of having the event tomorrow is the same regardless of how long you've been in the study, and the day after that it'll be the same, and the day after that the same. Whereas with the Weibull, the chance that you will have the event depends on how long you have survived up to the current time.
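Because the Weibull is not memoryless, simulating a subject who has already survived to the interim cut means sampling from the left-truncated (conditional) distribution rather than restarting the clock. A minimal sketch via inverse-transform sampling, assuming the S(t) = exp(-(scale*t)^shape) parameterization for the quoted defaults (the software's internal parameterization may differ):

```python
import math
import random

def conditional_weibull(t0: float, scale: float, shape: float,
                        rng: random.Random) -> float:
    """Total event time T for a subject known to have survived past t0.
    Solves S(T) = U * S(t0) for T, i.e. samples T | T > t0."""
    u = rng.random()
    # (scale*T)^shape = (scale*t0)^shape - ln(U), then invert.
    return ((scale * t0) ** shape - math.log(u)) ** (1.0 / shape) / scale

rng = random.Random(1)
# A subject 21 months on study, using the webinar's default-like parameters.
draws = [conditional_weibull(21.0, 0.06, 0.83, rng) for _ in range(10000)]
print(min(draws) > 21.0)   # every draw exceeds the time already survived
# prints True
```

With an exponential model this conditioning step would be unnecessary, which is exactly the memorylessness contrast being described.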

55:46

And you can see here that we get a default scale and shape parameter of 0.06 and 0.83, respectively.

55:55

And we'll just keep the dropout model the same for now.

56:00

If we click next and include this stuff here: the hope is that the Weibull model doesn't usually make a huge difference, as the exponential model is often a decent stand-in for what happens in real clinical trials. But you can see that,

56:13

in this case, it's not an insubstantial effect, going to what might be considered the more flexible and reflective Weibull model. We might expect that if you've been in the trial a long time, maybe we should expect a slight increase in your chance of having the event, while also still keeping to the proportional hazards assumption; you can still do that within the Weibull framework.

56:39

So in this case, you can see that the average study duration has increased to nearly 50 months compared to the previous cases. So we're not talking about something insubstantial.

56:49

In fact, here we are talking about something that could be considered somewhat significant, effectively.

56:55

So if we compare with the blinded case with the exponential model, we were expecting to finish up at around 43.3 months; now we're nearly at 51. So we've added an extra half a year here, and that could be something that makes a big difference, something we might need to consider. What are the practical implications for our trial?

57:21

Yeah, so, we need to consider that.

57:23

Is it a more realistic assumption about how our study is going to go? And then, what are the practical implications of that particular decision for our study?

57:32

And if you wanted even more flexibility, we could go one step further and look at doing a piecewise-type model.

57:43

So.

57:47

We just set up everything as before.

57:57

If we go here, we can see that for the exponential model there was this option to increase the number of hazard pieces. So if you want to have a piecewise exponential model, you can actually do that here. Say you want it such that after, let's say, the first 12 months after the current time, the rate was to change (and let's just say 24 months would probably be used here too); then we could say that the hazard rate is decreasing over time, or increasing over time.

58:24

Let's say the event is approximately twice as likely to happen after 12 months from the current time (just to note that the pieces are referenced against the current time, basically where we are at the point that the interim data was created), and let's just say it's twice that again after 24 months. And then, just to speed things up, we'll remove this here. We'll then see what effect this would have on our model, and, unsurprisingly, it's a fairly substantial effect here again, where nearly three and a half months are cut off our time, because we've now assumed that the event rate is actually accelerating over time.

59:02

Of course, you could keep the hazard ratio constant in that case, so that the proportional hazards assumption isn't broken. But that kind of increasing or changing rate is perhaps not something we would typically expect.

59:18

And just to finally finish up, there are just two small options which I'm not going to cover in detail today. If you have summary data — which is to say you didn't have access to site data or subject data; either you were doing this before the trial occurred, or you just knew, say, that 100 events had occurred after 20 months when there were 300 people in the study — it's very easy here to just select summary data and then enter what your current sample size is. So let's just set this up similar to what we had before: 402 people at present, 187 events at present, three dropouts at present, and we will take the exact time — we'll just say it's 25 months, so approximately equal to our previous case. And we could easily set it up to be more or less the same thing. Now, of course, because we don't have access to site-level data, never mind subject-level data, we're assuming here a kind of global process.

1:00:16

So a global Poisson rate will define our recruitment going forward.

1:00:21

Then we could say we have the hazard rate here, defined automatically using the summary data, which is not too far away from what we had previously.
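As a rough illustration of how a hazard rate might be backed out of summary data alone — a crude back-of-envelope sketch under an assumed uniform-accrual approximation, not the software's actual estimator; the figures echo the example's 402/187 numbers:

```python
import numpy as np

# Hypothetical summary data, echoing the example's rough figures:
n_enrolled, n_events, n_dropouts, t_now = 402, 187, 3, 25.0

# With only summary data we cannot sum individual follow-up times, so assume
# (crudely) uniform accrual over [0, t_now]: a still-at-risk subject has been
# followed for about t_now/2 on average, and a subject who already had the
# event or dropped out for about half of that again.
at_risk = n_enrolled - n_events - n_dropouts
exposure = at_risk * t_now / 2 + (n_events + n_dropouts) * t_now / 4
hazard_hat = n_events / exposure      # events per person-month
median_hat = np.log(2) / hazard_hat   # implied median survival, months
print(hazard_hat, median_hat)
```

Subject-level data would replace the exposure approximation with the actual sum of follow-up times, which is why summary-data estimates are rougher.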

1:00:29

And then we could do something like this.

1:00:32

By default.

1:00:32

This was, by default, double the sample size. And you can see here — this is actually a very good quick example — that the average sample size is lower than the 804 that was given by default.

1:00:41

You can see the 804 here, but the average is lower because the 374 events were reached before we got the entire 804 people, so we would stop the trial early.

1:00:51

Just to mention here that there's also the option that, if enrollment is complete, then you can still do a prediction on the same basis. But of course, the only difference here is that.

1:01:06

We don't really need to worry about the accrual process. The accrual process doesn't matter if enrollment is complete.

1:01:11

So then we're just going to run the events process, the events modeling, over the people who are currently censored, i.e. currently available. Censored here is really just another word for available, which is to say that these people are still available to have the event. We create a survival process for each of these people.
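A minimal sketch of that idea, assuming an exponential model (where, by memorylessness, each still-available subject's residual time to event is again exponential, regardless of time already on study); the subject count, hazard, and events target below are hypothetical:

```python
import numpy as np

def predict_remaining_event_times(n_at_risk, hazard, events_needed,
                                  n_sims=2000, seed=1):
    """For each still-available (censored) subject, draw a residual time to
    event, and record how long until the `events_needed`-th future event.

    Under an exponential model the residual time is again Exp(hazard),
    so time already spent on study can be ignored (memorylessness)."""
    rng = np.random.default_rng(seed)
    milestones = np.empty(n_sims)
    for i in range(n_sims):
        residual = rng.exponential(1.0 / hazard, size=n_at_risk)
        milestones[i] = np.sort(residual)[events_needed - 1]  # k-th future event
    return milestones

# Hypothetical: 215 subjects still at risk, 187 more events needed, hazard 0.05/month
times = predict_remaining_event_times(215, hazard=0.05, events_needed=187)
print(np.median(times))  # median predicted months from now to the milestone
```

A Weibull or piecewise model would instead need residual times drawn conditionally on each subject's current time on study, which is where subject-level data starts to matter.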

1:01:33

OK, so I think we're pretty much out of time there.

1:01:35

I apologize if I'm over-running anyway. So I think, just in terms of discussion or conclusions: survival analysis designs will usually target event milestones, based on endpoints such as overall survival or progression-free survival — where, to invert, we are targeting the event itself, which is someone dying or someone progressing in their disease. And of course, event milestone prediction is valuable to ensure that we know our trial is still on schedule, but it requires significantly more modeling compared to a simple enrollment-type process. Now, enrollment modeling can be quite complex too if you bring in stuff like screening or regional issues, and lots of other stuff like that. But I suppose, from a more statistical point of view, survival will tend to be more complex.

1:02:23

There are many models and methods available, and there are questions about how to pick the best one. It's usually about flexibility versus tractability. The Weibull model there is probably the most flexible model in terms of taking some stuff out of your hands, but if you really wanted to dig deep, you could go into the piecewise exponential. That's probably, practically, where people stop in terms of adding flexibility.

1:02:49

Like, if I can put together my own piecewise exponential curve, then really most standard models, like the Weibull, can mostly be modeled that way anyway.

1:03:01

On analytic versus simulation approaches: I think both of them have a lot of options within them, and of course they also have their advantages and disadvantages. But we've kind of favored simulation here just because it has that additional flexibility, and you kind of get some stuff out of it for free. Like, those prediction intervals are just based on the number of predictions that were above and below a certain value — the percentiles — and that makes sense: if you're individually doing each of these simulations, then you can treat those as if they came from a prediction interval, as if they're coming from any kind of sampling-type approach, like a jackknife-type approach, for example.
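The percentile idea is literally this simple — a sketch with fabricated simulation output standing in for real Monte Carlo runs:

```python
import numpy as np

rng = np.random.default_rng(7)

# Suppose each Monte Carlo run produced a predicted completion time (months).
# Here we just fabricate 10,000 such predictions for illustration.
predicted_times = rng.gamma(shape=50, scale=1.0, size=10_000)

# The prediction interval falls straight out of the simulations: take the
# empirical percentiles of the simulated completion times.
lo, mid, hi = np.percentile(predicted_times, [2.5, 50, 97.5])
print(f"median {mid:.1f} months, 95% PI ({lo:.1f}, {hi:.1f})")
```

No extra distributional theory is needed: the 2.5th and 97.5th percentiles of the simulated times are the interval, which is the "for free" part mentioned above.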

1:03:38

And then, in terms of what our choices here are: they should reflect your understanding of what the likely event process is. If something like non-proportional hazards is likely to come up,

1:03:49

then you should probably build some models that consider that. But also the available information matters — blinded versus unblinded data being a very important example, but even summary data versus subject-level data, and maybe even including site-level data if you want to model enrollment.

1:04:04

All of those will define what level of analysis is available to you.

1:04:10

So, just to mention, this is a new feature that will be coming soon — hopefully in this coming quarter.

1:04:19

But if you have any questions about anything that you saw here, or any features you think we should have, feel free to get in touch with us at info at ... dot com. If you want further information, go to ... dot com. I believe the marketing material for this will be up online in the near future.

1:04:36

I don't think it's quite up just yet, But it will be up before release in the near future.

1:04:41

But finally, I just want to say thank you so much for attending today's webinar. If you need to leave now, feel free to leave. Just to mention that when this releases, it'll be part of a new module. But if you want to get a free trial of it, you can go to ... dot com forward slash trial. And if you don't have nQuery, or don't have access to, say, adaptive design, that trial gives you an opportunity to try those features for free, using just your e-mail.

1:05:07

Note also that if there are any tutorials or other information you would like on any topics that have been covered in previous webinars, or just in general on how to use nQuery, you can go to ... dot com forward slash start. There are also references for this webinar at the end of the slide deck that will be sent to you later on. So what I'm going to do now is just take a couple of moments here to look at some questions that might have come in. Any that I don't get to, just to reiterate, I will e-mail you afterwards.

1:05:51

So, there's a few questions around availability and stuff like that — I will e-mail answers to those, as that's basically stuff that we're still working on.

1:05:59

And then there's just one question about the blinded versus unblinded case, asking about the accuracy of each.

1:06:08

Like, as explained in the webinar, a blinded model is probably going to be less accurate than an unblinded model, and that just kind of comes with the territory to a certain extent. But I think the blinded model probably better reflects what you, as a trialist, will have available to you. If you want to have access to unblinded data, that probably means you're going to get people like the data monitoring committee involved, or some kind of independent entity to do the modeling for you. Whereas if you want to check and control this stuff in real time at your own fingertips, as it were, then unfortunately that probably means you will need to do a blinded one, even if it is slightly less accurate.

1:06:46

And of course, there is maybe a case there for basically using an EM algorithm to extract the implied best-guess rates for the unblinded case using the blinded data — kind of treating them as if they're coming from a mixture model, effectively. Although, if that works too well, there might be operational bias problems.

1:07:10

Secondly, if that does occur, the hope would be that it would slightly ameliorate the issue where, obviously, if the trial is going as you hope — which is that the control group is doing worse than the treatment group — then it would get rid of the slight, I suppose, optimism bias that we talked about earlier on.

1:07:31

Oh, there's a few other questions, but I think we're running over time, so I apologize for that — I won't get to the other ones here, but I will get back to you by e-mail very soon, probably later today.

1:07:41

So once again, I just want to thank you so much for attending. I hope you have a very good day and I look forward to talking to you next time at the next webinar.

1:07:49

Thank you so much, and goodbye.

Hello, and welcome to today's webinar, How to Predict Key Events for Survival Analysis Trials. We'll be discussing how you can use simulation to give you accurate projections and predictions of when key event milestones will occur, whether that be the interim analysis or the end of the study.

0:25

Today's webinar will be demonstrated on nQuery using our new nQuery Predict feature, which will be released later this year. So this is a special preview of a feature that isn't in the software right now, but which will be available later this year. So if you have any questions or feature requests, we'll be happy to take that feedback on. Some of it will likely be included in the initial release, and we will obviously continue to improve on what you see today.

0:53

Before we get started, just a few frequently asked questions, firstly, is this webinar being recorded? Yes, it is being recorded, and this recording alongside the slides will be sent to you after this webinar is complete later today.

1:07

In terms of questions and feedback, please feel free to use the questions tab on the right-hand side of your webinar software, and I will try to get to a few of those at the end. However, any that I don't get to, I will reply via e-mail to give you a more detailed response after that.

1:26

OK, so I think those are the frequently asked questions. So once again, today's webinar is How to Predict Key Events for Survival Analysis Trials: using simulation to project event targets.

1:38

In terms of myself, my name is Ronan Fitzpatrick. I'm the head of statistics here at nQuery, and I've been the nQuery lead researcher for some time now. I've given talks and workshops like this at the FDA and JSM, and of course I'm hoping to get out and do more in-person activity next year.

1:57

But for now, I continue to do these webinars and continue to host and engage with material online.

2:03

So, in terms of what we're going to cover today: firstly, I'll give you a brief introduction to survival event prediction.

2:09

Secondly, I'll give you an idea of what kind of methods are available for doing this type of prediction for a survival analysis trial. And then we'll use the remaining time to go through a worked example and see some of the complications and issues that might come up, specifically in the context of trying to predict event milestones for a survival trial. Then some brief time for conclusions and discussion.

2:32

Just to briefly mention, of course, this webinar is being presented on nQuery. nQuery is your complete solution for optimizing your clinical trial design from early stage to post-marketing, obviously with a major focus on phase three confirmatory trials. A large number of the organizations who get clinical trials approved by the FDA have a license for nQuery, and you can see some of those companies here.

2:58

So, going into the meat of the webinar, let's talk about survival event prediction. One thing to note here is that my last webinar covered milestone prediction in a broader sense, and did touch on the issue of survival event prediction, but it also dedicated a lot of time to enrollment prediction and enrollment milestones. So this month's webinar is mostly focused on the survival-specific issues that might come up, and the processes that might be of interest there. However, as we'll see, enrollment will quite often be a part of making a survival prediction, so that will be briefly covered at the relevant parts here. But if you have more interest in the enrollment prediction and milestones part of the problem, please feel free to get in touch, and we'll be happy to share the recording of last month's webinar.

3:49

So, focusing on today's webinar and survival event predictions: we obviously know that survival, or time-to-event, analyses are focused on inferences about the time to important clinical events.

4:02

Things like death, or disease progression, or some kind of critical outcome like, let's say, a heart attack or a cardiac arrest — something which is a definitive moment at which your status has changed. Now, survival models are often used for some things which wouldn't quite fall into that category — some kinds of repeating events, like, say, COPD exacerbations — and there's sometimes debate about whether a survival-type approach versus a count or incidence-rate type approach makes sense for those. But for now, we'll focus on the simpler cases, where survival is the obvious and widely used methodology for that particular type of clinical trial.

4:40

And obviously in oncology, the usage of overall survival or progression-free survival is incredibly common, widely used, and very well understood. But in a survival trial, I suppose the key thing to take away — similar to what I often say in my survival sample size webinars — is that the sample size isn't really the target. When you're doing a sample size calculation here, it's actually the events that matter: there's a fixed number of events that need to occur before you can make an inference or have sufficient power for your trial, and therefore key trial milestones, such as interim analyses and the end of the study, typically require you to reach a fixed number of events. So it's not how many people are in the study — it's how many people in the study have had the event that matters in terms of when to make these decisions about when to do an interim analysis or end the study.

5:36

And so, from that perspective, when we're doing prediction modeling and milestone modeling for a survival study, we're not just thinking about enrollment — and often, if enrollment is complete, we're not thinking about enrollment at all. We're thinking about that survival process, modeling it, and then projecting outwards from what we know so far to how long it's going to take for that number of events to occur.

5:58

And of course, unlike enrollment, which is probably easy enough to work out —

6:02

you could come up with some fairly simple analytic formula for how long you think the enrollment is going to take, based on how it's gone so far or based on some future enrollment rates — for survival you're taking in a lot of other considerations, such as enrollment, dropout, censoring, a cure process, competing risks. All of these obviously influence how likely you are to have the event. People have to be in the study before they can have the event, so enrollment matters; but all these other things, like dropout and cure processes, are things that prevent you from having the event, and so all need to be taken account of if you want an accurate inference about when you'll reach the actual number of events required to do the analysis that you actually want to do.

6:49

And as is true of all trials — we talked last webinar about enrollment milestones often being reached late — event milestones also often take longer than expected.

6:59

Not just because of enrollment delays, but also because the effect size, or the survival curves, may end up being less aggressive than you expected — albeit sometimes that might be considered a positive thing overall.

7:14

Now, of course, the opposite can happen. For example — not a survival study, but in the COVID-19 vaccine trials —

7:22

the amount of people who ended up getting the disease was higher than expected, which meant those trials actually ended up having their interim analyses quicker than expected as well. So I think it's important to note that obviously the opposite can happen — you could end up going faster than expected.

7:36

But from our perspective, it's probably the former problem which is of more interest, and it's why predicting and projecting and having an idea of how your trial is going while it's ongoing is important — and obviously pre-trial you want to make these assumptions as well.

7:52

So, you put all this together, and you have something which for a survival trial makes as much sense as enrollment modeling, but with a lot of additional factors that need to be taken into consideration.

8:03

And so here's just a small selection of some of the key survival issues that you might come up against. You'll notice that these are actually quite similar to both the issues that come up in sample size for survival analysis and the issues that exist for enrollment prediction, though some of these are unique to the survival prediction case.

8:23

I suppose, firstly, just to mention that you need to have an idea of what the primary survival endpoints of interest are — or perhaps you have some non-survival endpoints that you might also want to consider at the same time. And of course, the biggest example of this would be overall survival versus progression-free survival. So, for example, we know there's a lot of work on interim analyses, or co-primary analyses, where progression-free survival is used early on, because that endpoint tends to give results earlier than overall survival. And that's then used to make decisions, for example, to increase the sample size, or to stop the trial early, or other considerations such as that. And then the final analysis is then based on overall survival, due to the fact that progression-free survival in theory should be highly correlated with overall survival — obviously, there's a lot of literature about that particular issue.

9:15

So that's a classic example in oncology, where the consideration of what your primary endpoint of interest is may be important to you. And you may want to have two models effectively: one for progression-free survival, one for overall survival, depending on which milestone you're talking about. And of course, there are other considerations, such as adverse events and other endpoints — maybe non-survival endpoints — that may be of interest.

9:40

I suppose one other important consideration is when you're actually doing your prediction. So, are you doing this pre-trial, based on some external data, or maybe the same assumptions that you used for your sample size calculation? Or are you doing this on an ongoing basis? And even if you're doing it on an ongoing basis, there's a question of whether this is something that you're doing on a continual basis — like, you know, after every week — or something that you're doing at pre-specified times, like, for example, an interim analysis. Now, some people, especially from the more practical side — people on the ground — are particularly interested in the side of doing this continually: having something that shows you in real time, for lack of a better term, what's going on in your trial. Whereas, due to the intrinsic uncertainty that will exist in these predictions — and this will be reflected somewhat by the prediction intervals,

10:33

as we'll show in the software later on — the statistician might be erring towards: well, because it is a forecast, it can go up and down, it can change depending on how things are going. It's probably better to have a certain amount of information accrue first, and also probably better to tie it in with considerations such as interim analyses and stuff like that, to ensure that there's no temptation to try and move the needle too much on the trial and maybe introduce operational bias that way. I think the other question — what information is available — is also equally important. Do you have summary data, like we only know that 100 events have occurred so far, or do you have access to individual subject-level data? And if you're dealing with enrollment, perhaps also site-level data.

11:18

And obviously, if you do have that subject-level data, is it on an unblinded or a blinded basis — do you know what treatment group each person is in? Because, I suppose, from a modeling perspective, in terms of accuracy, we want to have the maximum amount of information possible. So it would be great to have unblinded data, where we know what treatment group you're in, and also subject-level data, perhaps even other covariate-type data that we could use to improve our model. But of course, the more information you have available, the more likely you are to run into issues with operational bias and statistical bias. The big thing is that if you have unblinded data, then obviously, from an operational bias point of view, the standards required will be much higher — as in open-label trials, for example. Or you may need to bring in someone like an independent data monitoring committee to do this type of modeling for you, or be the only ones who see the results of the modeling, if you end up using the unblinded data. This is very similar to the types of debates

12:16

you may have internally if you're looking at the contrast between a blinded or unblinded adaptive design, or a non-comparative adaptive design, taking into account the FDA's language on adaptive designs.

12:32

So, obviously, a major consideration there. I suppose the ... on the ground probably want to have access to the information, to see what's going on. Which means that even if a blinded prediction may not be the best prediction, because it has less strict regulatory consequences you would perhaps be incentivized to use it, even if it is not quite as good as a comparative or unblinded type of analysis.

12:59

So, once you've made all those decisions and you know the contextual problems you have, then you get to the problem from a more statistical point of view: what survival models do you want to consider? There are obviously many different models that you could use to fit your current data, if you have current data available, and then also to project going forward.

13:19

And of course, you could have a pre-specified model — say exponential, or Weibull — or you could have some kind of algorithm that tries to search through a wide variety of different models to see which one might be best.

13:33

And of course, you might ask which ones are the best for inference — for making prediction intervals and stuff like that.

13:40

And of course, even if we have a model, we do need to think about the key assumptions underlying any of those models, and any other competing processes that might affect our ability to model the events.

13:50

So obviously there's a lot of talk right now about non-proportional hazards and behavior such as the delayed effect seen for immunotherapies.

14:01

Of course, there's some debate about whether that delayed effect is a true delayed effect, or whether it's due to some responder-type effect. But we'll put that aside — that's a different webinar, and I have covered non-proportional hazards for sample size recently.

14:13

So you can check that out if you're interested. But there are also the competing processes that we've mentioned previously, like dropout, cure, and other considerations such as competing risks models.

14:24

And of course, just a small consideration about the censoring process that you use. In the vast majority of trials — and certainly the examples that I'll be doing today — we would assume that there is a fixed length of study, and basically that would be defined by when we reach our events target. So let's say we want to reach 400 events: we wait until we reach 400 events, and then whoever has not had the event, or who is still available to have the event, is right-censored.

14:51

But there are trials where people prefer — depending on the context, depending on the disease profile — to have a fixed follow-up. So every subject is followed up for, say, 12 months and then right-censored, irrespective of whether the study is still ongoing or not. And that can be included in the software, the nQuery version I'll be demonstrating later. But I suppose the typical case is that we have some fixed target of events; once we've reached that, then whoever has not had the event, we censor them. So that's the most common situation, and we will focus on that today.
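The two censoring schemes can be contrasted in a few lines; everything here (accrual window, hazard, 400-event target) is illustrative rather than taken from a real trial:

```python
import numpy as np

rng = np.random.default_rng(11)
n, target_events = 500, 400

entry = rng.uniform(0, 12, n)             # accrual spread over the first 12 months
event = entry + rng.exponential(20.0, n)  # calendar time of each subject's event

# Event-driven cutoff: analyse when the 400th event occurs; everyone who
# has not yet had the event at that calendar time is right-censored.
cutoff = np.sort(event)[target_events - 1]
observed = event <= cutoff

# Fixed follow-up alternative: censor each subject 12 months after entry,
# irrespective of how the study as a whole is progressing.
observed_fixed = (event - entry) <= 12.0

print(cutoff, int(observed.sum()), int(observed_fixed.sum()))
```

Under the event-driven rule the number of observed events is fixed by design and the calendar cutoff is random, whereas under fixed follow-up the cutoff per subject is fixed and the event count is random — which is exactly why the event-driven case is the one that needs milestone prediction.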

15:26

So the second part here is focused on: OK, we've thought about all these things.

15:31

We kind of know the first set of questions — we know what our primary outcome is, what type of data we have, and when we're doing the prediction. So let's just focus on the two remaining questions, namely what survival model we want to use, and what other processes we want to account for when modeling survival.

15:52

And I think event prediction models are surprisingly similar to sample size calculations: again, there are two primary approaches, analytic and simulation.

16:00

So, analytic is basically any process, such as an equation or a Markov model or similar, which will give you the same result, assuming that you put the same fixed parameters into the algorithm.

16:16

Then, simulation is obviously where you use Monte Carlo simulation: you simulate what would happen, then you do another simulation of what would happen, and so on — you do that, say, 10,000 or 100,000 times, and then you see what happened on average. And if you re-ran that whole simulation, you would expect slight variation each time that you do it.
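A bare-bones sketch of that Monte Carlo loop — Poisson accrual plus exponential event times, with made-up rates — showing both the average prediction and the run-to-run variation:

```python
import numpy as np

def simulate_milestone_time(n_subjects, accrual_rate, hazard, target_events, rng):
    """One Monte Carlo replicate: simulate Poisson accrual and exponential
    event times, and return the calendar time of the target-th event."""
    entry = np.cumsum(rng.exponential(1.0 / accrual_rate, n_subjects))
    event = entry + rng.exponential(1.0 / hazard, n_subjects)
    return np.sort(event)[target_events - 1]

rng = np.random.default_rng(2021)
runs = np.array([simulate_milestone_time(400, 20.0, 0.05, 300, rng)
                 for _ in range(1000)])
# The average over replicates is the point prediction; re-running the whole
# loop with a different seed would give a slightly different answer.
print(runs.mean(), runs.std())
```

The spread across replicates is what feeds the prediction intervals discussed later, and extra processes (dropout, cure, piecewise hazards) just slot into the data-generating step.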

16:38

So, it's important to note that for many problems you could probably use both of these approaches. For the same types of models — an exponential model, a Weibull model, or even a piecewise exponential model — it's probably true that you could use both simulation and an analytic approach for doing this type of projection and prediction modeling.

17:01

However, we will focus today on simulation, primarily because it gives you a greater degree of flexibility. With simulation, if you make small changes or start playing around with underlying assumptions,

17:12

generally simulation doesn't have too big a problem with that.

17:15

If you can include it in the data-generating model, then it's basically not that big of a deal. Whereas if you're using an analytic approach, oftentimes that won't be a problem — but sometimes you will come up against a wall of what is possible within that model, or adding some of these additional assumptions will cause the model to become much more complex, perhaps much less computationally efficient, or just break completely.

17:39

So, from that perspective, analytic is generally better if you want quick and dirty results and you're not planning to move too much beyond your initial model.

17:50

But if you want to tinker and try different things and include things that maybe are a bit more obscure, then simulation is probably the way to go.

18:02

But even given that, there are so many models that you can choose from — exponential, Weibull, piecewise exponential, including cure rates, different censoring assumptions, et cetera. So how do you fit and compare these models?
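One common way to compare candidate models on interim data is by maximized likelihood or AIC. This sketch fits exponential and Weibull models to simulated right-censored data by maximum likelihood (hand-rolled, not any particular package's fitter); the data are generated with an increasing hazard, so the Weibull should come out ahead:

```python
import numpy as np
from scipy.optimize import minimize

def exp_loglik(lam, t, event):
    # Right-censored exponential: events add log(hazard); all add -cum. hazard
    return event.sum() * np.log(lam) - lam * t.sum()

def weibull_negloglik(params, t, event):
    k, s = np.exp(params)                            # shape, scale (log-params)
    log_h = np.log(k / s) + (k - 1) * np.log(t / s)  # log hazard
    cum_h = (t / s) ** k                             # cumulative hazard
    return -(event * log_h - cum_h).sum()

rng = np.random.default_rng(5)
true_t = rng.weibull(1.5, 600) * 20.0     # shape 1.5: increasing hazard
cens_t = rng.uniform(5, 40, 600)          # independent random censoring
t = np.minimum(true_t, cens_t)
event = (true_t <= cens_t).astype(float)

lam_hat = event.sum() / t.sum()           # exponential MLE, closed form
ll_exp = exp_loglik(lam_hat, t, event)
fit = minimize(weibull_negloglik, x0=[0.0, np.log(t.mean())],
               args=(t, event), method="Nelder-Mead")
ll_wei = -fit.fun

aic_exp, aic_wei = 2 * 1 - 2 * ll_exp, 2 * 2 - 2 * ll_wei
print(aic_exp, aic_wei)  # lower AIC should favour the Weibull here
```

As the next paragraph notes, a good in-sample fit still doesn't guarantee a good projection — the true answer only arrives when the study ends.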

18:14

Well, there is no real best way to do that.

18:18

There are ways you could obviously fit models to the current data and compare them. But of course, unlike in a typical supervised machine learning process, where eventually we get the correct result to compare against, we won't know the true result until the actual study ends. Of course, if you look at the papers on which the various models and methods are based, they will do simulations to show how they perform — but unfortunately, within the context of your specific study, the proof will be in the pudding, and so you need to trust the methods that you select. That implies that you should probably try a few different things, see what feels reasonable, and play around,

18:57

and do sensitivity analyses to see what might happen.

19:01

Basically, you probably want to scope out scenarios rather than necessarily just going with the first one that you can get to work.

19:09

And of course, there's a big question then about the type of models that are compatible with the type of data that you have — the big example there being blinded versus unblinded data.

19:20

If you're using blinded data, then I suppose some may prefer to have some kind of mechanism to try and extract what the rates in each group would have been, using something like an expectation-maximization algorithm or similar. But

19:35

how accurate and how useful are these, particularly at maybe smaller sample sizes or event counts? There are questions about that.

19:43

So it's not just that you pick the best model — you want to pick the best model for what you actually have available. So, if you only have summary data available — say you only know that 100 events have occurred after 10 months, when 200 people had been recruited at that point —

20:00

then you may want to focus on simpler models, rather than trying to move into complex models to try to extract, I suppose, hypothetically what happened. So if you have summary data, you're probably using simpler models; if you have subject-level data, you probably have the capacity to do more complex models; and if you have unblinded data, that's an additional level of complexity that you can probably introduce to be more accurate.

20:23

And as I mentioned several times already, but just to reiterate: if you want to model survival, you have to model what else is going on. If accrual is ongoing, you need to model enrollment, and you need to think about competition to events, be it dropout or censoring or competing risks, et cetera.

20:39

I think, just a note on that.

20:42

The "competition to events" I'm using here is a broad category.

20:45

And when we go through the examples later on, I'll be talking about the dropout process.

20:49

But really, it's important to note that what we're talking about here, in terms of event prediction, is that we're assuming the target number of events is what we care about.

20:59

We're really only interested in the target events from a practical point of view: we need a certain number of events to occur for us to do our analysis and finish this trial, or get this trial on the road to the analysis stage and approval, et cetera.

21:15

So from an inferential point of view, it doesn't really matter if the competition to our event is a dropout, or a cure, or censoring, or a competing risk.

21:27

All that matters, from our perspective, is that if one of these competing processes occurs, that person can no longer have the event, and therefore can no longer contribute to the study ending, or to the interim analysis occurring.

21:40

Based on the study protocol, based on the target that we set for taking that particular action.

21:49

So just to mention that we will be talking about dropout, primarily, in the example.

21:53

But in reality, for this particular type of problem, the nature of the competing process isn't really the main point.

22:01

It's really just about whether a process excludes you from being able to have the event going forward.

22:06

That means that, from our point of view, that person is not a problem per se, but someone who is unavailable, and who will therefore slow down how quickly we accumulate events.

22:17

So they slow down how quickly we reach the total required events.

22:22

On this slide, which is from the last webinar actually,

22:24

it's basically just a quick survey of some of the potential ways that you could do this, from parametric modeling, which is the analytic models I'm talking about there, to semi-parametric and piecewise modeling.

22:35

And in fact, many of the sample size calculations that you're familiar with can basically be re-used and rejigged to become event projection models.

22:52

But, as I said earlier, we're going to focus on simulation using survival models, such as exponential, piecewise exponential, or Weibull.

23:00

And that's what we'll focus on today, simply because it's kind of the easiest one to understand, and it gives you the greatest degree of flexibility over what you do. You're picking the model that suits you, rather than having some algorithm select it for you, or some machine learning process select it for you.

23:18

So, the step four here, model selection algorithms, is really taking the potential models from the third category, and then picking from a bunch of them; perhaps taking from the first and second category as well. But for now, let's focus on the idea that you have a bunch of different models that could be used. Let's say piecewise exponential: there are multiple different change points you could choose, and these algorithms work by fitting the candidates and finding the one that, I suppose, fits best to what's happened thus far.

23:48

Just to mention that non-proportional hazards is a slightly different problem and would require slightly different models. That's a very actively developing area; there are a couple of papers out very recently on that if you're interested.

23:59

But for now, we're focusing on, and you're probably mostly focused on, the classic proportional hazards kind of study, where we're assuming some kind of constant hazard ratio. That said, there are other considerations. So, for example, if you have access to external data, you could use some kind of Bayesian borrowing to try and improve your projection model, that is, to improve your estimates of, say, the exponential or Weibull parameters. And in the case of blinded data, in theory one could use expectation-maximization (EM) algorithms that try to create, effectively, your best guess of what the unblinded groups would be, based on the blinded data and on some assumptions, for example a fixed hazard ratio. That's actually quite similar to the blinded sample size re-estimation calculations that have been proposed for survival models, and there's a sense in which you can use that to create pseudo-parameter estimates for the unblinded-type models.
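For illustration, here is a much cruder moment-matching sketch of that idea, not a full EM algorithm: given a blinded pooled event count and total exposure, plus an assumed fixed hazard ratio and 1:1 randomization with roughly equal exposure per arm, it backs out pseudo per-group rates. The event count, exposure, and hazard ratio values below are made up.

```python
def split_blinded_rate(total_events, total_exposure, assumed_hr):
    """Back out pseudo per-group exponential rates from blinded data.
    Assumes 1:1 randomization, roughly equal exposure per arm, and a
    fixed assumed hazard ratio: a crude stand-in for EM-type methods."""
    pooled = total_events / total_exposure        # blinded pooled MLE
    # pooled ~= (control + hr * control) / 2  =>  solve for control
    control = 2.0 * pooled / (1.0 + assumed_hr)
    treatment = assumed_hr * control
    return control, treatment

control, treatment = split_blinded_rate(187, 2800.0, 0.7)
```

Under these assumptions the treatment pseudo-rate sits below the pooled rate and the control pseudo-rate above it.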

25:04

OK, so, for the rest of the webinar, we're going to focus on a worked example.

25:11

And so, what we're going to do here is look at a study where we have 374 events as the target.

25:18

And we have a target sample size of 460.

25:25

So, in this case, 50% of the events have occurred so far: 187 of 374. But enrollment has not finished, so we need to model the remaining enrollment to get an accurate estimate of survival.

25:41

We need to know when someone arrives into the study, to know where in calendar time, relative to the study start date, they have had the event.

25:53

And then, obviously, how much time they spent on the study before they had the event.

25:56

And in this case, we'll look at two cases: where site-level data is available, and where it's not. When we get to the site-level data, just note that 118 of the 127 sites that could be used in this study have been opened at the time of this particular analysis. So remember, our current calendar time is around 24 to 25 months.

26:22

Sorry, there's a small mistake there: it's about 24 months, rather than 20.7. And in this case, this is basically what we're trying to achieve here.

26:34

So, as I said, the main things we want to look at today, compared to last month where we focused mostly on the enrollment process (which I'll re-iterate briefly but not really focus on today), are: first, unblinded versus blinded survival models for prediction; secondly, the effect of different types of survival models; and thirdly, briefly, the effect of enrollment.

27:04

So, as mentioned, nQuery Predict will be the new module that focuses on these types of milestone prediction problems. What we will see here is very close to what the initial release will be, but note that this is a beta version. There's still a lot of bug fixing going on, and of course, it's not available in the software right now. If you're interested in knowing about the wider features in the software and rough timelines in terms of cost and release dates, you can get in contact at ...

27:37

dot com, and we'll be happy to keep you in the loop in terms of what's happening there.

27:42

But just to say that this is an experimental, beta-level implementation of the software that you're getting a preview of today. And if there's anything that you see that's incorrect, or that you would want to see in a future release, please let us know. We're really, really interested in any feedback that you might have.

27:59

I think for the first case, let's focus on

28:03

what the best-case version of this prediction would be. So in this case, we have subject-level data, and we have site-level data.

28:14

So if we just scroll across a bit, you can see that we have subject-level data.

28:18

So, in the subject-level data, we know where each subject is from, and we have certain information about each subject. We have region, which we're not going to use today, but which could obviously be something like EU or US, or a US state, or whatever. And then we have SIT, which is just the site ID. So, each person came from a site, and we assigned each site an ID. We're just using this to link our subject-level data to our site-level data, which I'll talk about briefly in a moment.

28:50

But the important thing to note here is that we're doing a survival analysis type problem.

28:55

We need three pieces of information, and then one additional piece of information if we're doing an unblinded survival prediction.

29:03

So, the three pieces of information we need for both blinded and unblinded survival prediction are: one, we need to know when you arrived into the study.

29:11

Because, of course, if we don't know when you arrived into the study, then we don't know how long you've been in the study until you had the event.

29:19

And we also need to know that if we're going to make some inferences about the enrollment process itself.

29:24

So, that's just useful both for modeling enrollment and for ensuring that we have an accurate idea of how long you've been in the study in terms of calendar time, basically referenced to the time zero of your study.

29:38

So, for example, you can see here that the first subject is someone who arrived around three months into the study, and they have been followed up for around 21 months at this point, so they're still in the study. The current time is around 24 to 25 months into the study; they're still around, they arrived very early, and they've been around a long time. This is relatively unusual, as we'll see later on; this is one of the outliers here.

30:02

And so we need to know when you arrived into the study,

30:08

and how long you've been in the study.

30:10

But to contextualize how long you've been in the study, we also need to know what your current status is.

30:17

The status can be split into three broad categories. One: you have not had the event, or any other competing process; you're basically still available to have the event.

30:28

Or, to put it a different way: if the study were to end today, these are the people who would be right-censored.

30:35

So in that case, we would usually assign something like a zero, something neutral.

30:39

But this could literally just be a label, something like "available" or "censored", or something like that.

30:48

But for now, we'll assume it's been assigned a zero.

30:52

Secondly, we need to know if you've had the event; in this case, that's denoted by a one.

30:57

So if you had the event, then the follow-up time is not the time since you arrived in the study until the current time, the time at which this data was taken; it's how long you were in the study until you had the event. And then, of course, we don't really have much interest in what happened to you after the event, because from the perspective of doing a survival analysis, a time-to-event analysis,

31:24

once you've had the event, from an inference point of view, that's all we need to know about the individual subject.

31:31

And then we have one other category, which is basically the "other/dropout" category.

31:39

And that category is basically for people who have had something happen to them that's not the event, which means that they will no longer be available to have the event.

31:52

So a classic example would be if someone dropped out of the study: they're just no longer in the study; they decided that they can't continue in the study.

32:00

But, of course, it could be a competing risk, or maybe they're cured of their disease so they no longer need to be in the study, or other kinds of considerations such as that, right down to just being censored for some reason, maybe some kind of fixed-level censoring.

32:13

But the important thing is really just that, when we're doing our prediction modeling, we want to ensure that anyone who has had something happen to them that means they are no longer available to have the event is not given a predicted event. We have to assume their follow-up is fixed in time, and that we can no longer do anything with that person, effectively.

32:34

And in that case, of course, the follow-up time is similarly not the time since they arrived until the current time; it's how long they were in the study until they had that dropout, for example.

32:47

And we'll see that, if you want to (you don't have to), you can treat that dropout-type process as being similar to the event process.

33:03

So you don't have to, but if you wanted to do that, that's perfectly viable.
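The status coding just described can be sketched as follows; the rows are invented examples, using 0 for still at risk, 1 for event, and -1 for dropout or another competing process.

```python
from collections import Counter

# Illustrative subject-level rows: (arrival_month, follow_up_months, status).
subjects = [
    (3.1, 21.0, 0),   # arrived early, still on study (right-censored today)
    (10.4, 6.2, 1),   # had the event 6.2 months after arriving
    (15.0, 2.5, -1),  # dropped out; can no longer have the event
    (20.2, 4.1, 0),
]

counts = Counter(status for _, _, status in subjects)
at_risk = counts[0]                    # still able to contribute an event
events = counts[1]
unavailable = counts[1] + counts[-1]   # fixed in time; excluded from prediction
```

Only the at-risk subjects (plus future enrollments) get simulated event times in the prediction step.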

33:09

Just to mention the site-level data briefly. We're not really going to focus on this today; I talked about it a fair bit more last time. For site data, we obviously need a site ID, to link the site IDs in the subject-level data to our site-level data. We need an enrollment cap, which is just the maximum number of people allowed; the site open time, just to know when that site opened; and then the rate at which enrollment is going to happen at that site. That may be based on the observed enrollment rate, basically summing all the subjects at site 101, for example, and dividing by the current time. But it could also come from a pre-study specification, or from external data, or anything like that.

33:53

Just to note that the rate here for site 101 does happen to correspond to the rate implied by the number of site-101 subjects in this subject-level dataset, but it does not have to.

34:06

We'll see that briefly later on.

34:08

And there are also, in this dataset, a handful of unopened sites, which is optional; we don't need to have unopened sites. But if we do have unopened sites, you can see that we just need a window of time during which we'll allow them to open.

34:22

In this case, usually around our current time.
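A sketch of how site-level enrollment might be simulated with those ingredients: each site enrolls as a Poisson process from its open time, truncated at its cap. The site IDs, rates, caps, and open times below are invented for illustration.

```python
import random

def simulate_site_enrollment(sites, horizon, rng):
    """Simulate arrivals per site as a Poisson process starting at the
    site's open time, truncated at its enrollment cap. `sites` maps a
    site id to (rate_per_month, cap, open_time)."""
    arrivals = []
    for site_id, (rate, cap, open_time) in sites.items():
        t, enrolled = open_time, 0
        while enrolled < cap:
            t += rng.expovariate(rate)   # exponential gap between arrivals
            if t > horizon:
                break                    # site never fills before the horizon
            arrivals.append((t, site_id))
            enrolled += 1
    arrivals.sort()                      # merge sites into one calendar stream
    return arrivals

rng = random.Random(1)
sites = {101: (0.8, 10, 0.0), 102: (0.5, 8, 2.0), 103: (0.6, 12, 24.0)}
arrivals = simulate_site_enrollment(sites, horizon=36.0, rng=rng)
```

Site 103 here plays the role of an unopened site that only starts contributing at month 24.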

34:26

OK, so that's the data. Hopefully that gives you an idea of what our data looks like, and what we'll be putting into our model, because today we'll mostly be focusing on the problem where we have access to interim data, data while the trial is ongoing, and we're going to make predictions using that data: basically building the best model, or one of the better models possible, to make better, more informed inferences about future predictions. Just to note that on every screen, there'll be this help card on the right-hand side, similar to standard nQuery tables, so if you need additional context or information, that can be useful.

35:05

So, returning to the very first way we came at the problem: let's focus on the best-case scenario. So, we have site-level data.

35:14

We have subject level data.

35:16

The subject-level data includes the required survival information, but also includes the treatment indicator. So we do have: are you in the treatment group, are you in the control group; one for the treatment group, zero for the control group.

35:29

So we can actually, if we want to, in this case do an unblinded event prediction, which on average we would expect to be more accurate than the blinded event prediction, because we're going to be modeling each group's survival process individually, rather than having to treat them as coming from a single global process.

35:48

Since we have the data, why wouldn't we use it, basically?

35:53

So, the first thing we need to do in this case is select our subject-level dataset.

36:03

And then we just need to select the correct column for each of these.

36:10

So, as you remember, the arrival time in this case is equal to the Arrival column; the follow-up time,

36:17

the time on study, is equal to Follow-up; Current is the current status; and Treatment is our treatment group. So you can see there we've got our subject-level data. In step two, we have now assigned the arrival time to the correct column here; the treatment ID is equal to Treatment;

36:32

the status indicator is equal to Current; the time on study is equal to Follow-up; and the site ID is equal to the SIT column.

36:40

So each of those columns then gives us the correct value for each subject based on that row. And just note here that there is obviously the ability to control which value indicates the control group and which the treatment group; by default, zero and one make sense for control versus treatment. And for the status indicator, we have 1, -1 and 0 as the defaults.

36:59

But note that this is entirely flexible, and that if only two of the indicators are in use, let's say, for example, no one has dropped out yet, a not uncommon scenario, then you can fill this in manually.

37:13

So that when you do predictions later on, you could manually introduce a dropout process, even if it hasn't happened yet. By default, of course, the dropout process would be assumed to be zero, so basically no one is going to drop out. But we could add that in if, for example, we think some dropout is going to happen after this point. Of course, this study has probably been going on quite a while in this hypothetical, so that's usually not a problem.

37:39

For site data, it's pretty much the same thing. Just to say that we have the site ID to link our two datasets together; that's really important. We need the rates that we expect at each site, so that's the enrollment rate; each site's enrollment cap, just the maximum number of people allowed from each site; and an open time for our open sites. Remember that we're doing this on the basis of interim data.

38:01

So at least one site must be open; otherwise, where are these people coming from? And just to mention that if we happen to have unopened sites, then optionally we can include start and end times for those. In this case, we do have some unopened sites, so we can do that here.

38:21

So, we get into this case here, and you can see that the current sample size is 402. The current number of people who have not had the event and not dropped out is 212; we'll see what the specific split is in a moment, it's 187 events and 3 dropouts, just to be a spoiler. But we can see that there are 212 people still available to have the event at the current calendar time, which is around 25 months into the study. So this study is coming nearer to the end than the beginning. And you can see that, by default, it just sets the target sample size to 804, where we obviously know that in this particular case, if we go back to our original slide, we actually want 460 people to be the maximum number of people in our study.

39:07

So we're only going to be recruiting, or modeling, an additional 58 new enrollments using simulation. All the rest are fixed, because we know them from the data itself.

39:19

Given that the data has the arrival times for the first 402 people, we just need to simulate an additional 58.

39:26

Just to mention here that, in the case of having site-level data, we can create a very complex assignment, where each individual site has its own rate and enrollment cap, and we can fix and change those as much as we want. And new sites can be controlled to an even greater degree, because we can decide when they're actually going to open. I talked about this a lot more in the last webinar, but just to say that we have a lot of flexibility here on top of the model for the remaining enrollment process.

39:52

But this is really not the focus today. Just to mention that, because enrollment is still ongoing in this trial, we do need to model enrollment to be able to model survival on top of it, basically. Because if we don't know when someone arrived, then we can't really model how long they've been in the study relative to all the other people in the study, and therefore decide when the study actually ends, after the fixed number of events occurs.

40:21

So then we get to the event and dropout information.

40:25

The default here, because the current number of events is 187, is twice that again: 374.

40:31

That's actually, in this case, the actual amount required; so this is an interim analysis after 50% of the required events have occurred.

40:38

And you can see there, once again, that the current number censored is 212, and if we click on Dropout Model here, we can see that three people have dropped out in this study.

40:46

So there are 190 people who, at this point in our subject-level data, are no longer available to have the event. They can no longer have the event, because they've already had it at the current time, or they have dropped out, or something else has happened to them so that they're no longer available to have the event. "Dropout" here, as I say, is kind of a broad category for all of the other things that could cause you not to be available to have the event; dropout is just the obvious process here.

41:17

And you can see here that the target sample size is also given on this screen, if you're particularly interested. There's just one small thing to mention, which I don't think will affect this particular study:

41:27

what if the target number of events is reached before we reach the target sample size? So let's say that we increased the target sample size ten times; the likelihood then is that, with 374 as the target events,

41:41

enrollment will still be ongoing at the point that we reach 374 events.

41:45

In that case, what would happen, basically, is that

41:49

we would stop the trial after 374 events occur, and whatever time that happens to be is when accrual stops and the study end time occurs.

42:01

That's just a working assumption we're going to have, basically:

42:03

the events target is primary over the sample size target. And you can see here that, by default, it has picked the exponential model, and it's actually already done the best fit, well, the best fit for what the exponential model would be for this data, with an exponential rate in the control group of around 0.078, and an exponential rate in the treatment group of around 0.0545. Unsurprisingly, we're expecting a lower event rate in the treatment group compared to the control group, which is what we were hoping for, and we can see that we have a hazard ratio of around 0.7.

42:37

So this has automatically been fitted. You can do some fairly basic approximations here: what the best fit of two exponential models to these two respective processes would be, and it gives you the hazard ratio, which you can quickly change if you want. And then there's the dropout process, which we won't focus on too much today, but just to say that you can individually set those dropout rates as well; of course, a hazard ratio doesn't really make sense to worry about there.
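That best-fitting step for an exponential model is simple enough to sketch: the maximum-likelihood rate per arm is events divided by total follow-up time, and the hazard ratio is the ratio of the two rates. The six data rows below are invented for illustration, not the webinar's dataset.

```python
def fit_exponential_by_group(subjects):
    """MLE of the exponential rate per arm from interim data:
    rate = events / total follow-up time. Each row is
    (follow_up, had_event, treatment_group) with group 0 = control."""
    totals = {0: [0.0, 0], 1: [0.0, 0]}   # group -> [exposure, events]
    for follow_up, had_event, group in subjects:
        totals[group][0] += follow_up
        totals[group][1] += had_event
    rates = {g: ev / exp for g, (exp, ev) in totals.items()}
    return rates[0], rates[1], rates[1] / rates[0]  # control, treatment, HR

subjects = [
    (12.8, 1, 0), (6.4, 1, 0), (19.2, 0, 0),   # control arm
    (20.0, 1, 1), (15.0, 0, 1), (25.0, 1, 1),  # treatment arm
]
control_rate, treatment_rate, hazard_ratio = fit_exponential_by_group(subjects)
```

Censored subjects contribute follow-up time but no events, which is exactly why the rate estimate shrinks as censoring grows.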

43:07

So we'll talk about some of the other models available in a moment, but for now let's just assume that the default exponential model, where we're assuming a constant event rate, makes sense. And then we can also request some additional output datasets, if we're interested.

43:23

Then, we'll have 10,000 simulations and a random seed; if you leave the random seed field empty, it will just pick a random seed for you. You'll also see that there's a percentiles table option here; we'll see that in the report.

43:36

So you will see that this just takes a little bit of time to run. You can see that the average sample size and average events are fixed integers.

43:44

That usually indicates that you have reached that target in every single simulation. If you happen to get decimal results, that indicates that one, or both, of the targets isn't being reached in at least one of the simulations. And, as I said, one example of that would be if we reached the target sample size

44:04

after we reached the target events; then the target events would define when the study ended, because we're assuming here that once the target events occur, that's when we do the right-censoring and end the study.

44:17

You can see here that this probably isn't a very realistic enrollment model.

44:22

The sites' rates were kind of tailing off, so perhaps it would make more sense to have the rates in all of those sites be significantly lower. But we'll move past that for now and focus on the event prediction, where you can see there's probably a similar type of process going on here. You can see that, overall, if we were to ignore this slight logistic shape here, and if recruitment were to continue at the average rate, we would expect this overall model to be roughly a straight line. But this is something that should give you questions, or pause; maybe there should be a lower event rate by default, and that's where trying different models and so on will come in. In addition, there's the dropout process, which you can see there.

45:05

Obviously, we provide a report, which summarizes both:

45:11

on the left-hand side, what the inputs into this particular simulation were. You can see here it's just an input summary: what the data was, and then the various inputs that we used for the sites, the sample sizes, the events, and what seed was used. But we're probably more interested, typically, in the summary on the right-hand side here, which is effectively the results.

45:37

So in this case, we have our average sample size, our accrual duration, our study duration, et cetera. We can see here that to get to 460 people required about two extra months, so we're very near the end here, at around 27 months, but that the study continued on to 43 months to actually reach 374 events. You can see that the 374 events target was reached in every single trial.

46:01

But, of course, if, say, the dropout process were much more aggressive, or the event process a lot slower, then there may be cases where the dropout process basically leaves too few people available to actually get to the target events. In which case, the simulation "target reached" percentage may be below 100%, but usually, for reasonable assumptions, we would expect about 100%.

46:21

You can see here that the average follow-up was around 12.5 months.

46:25

And of course, just to mention that that follow-up includes the people who were censored. So some people didn't have the event and were followed up for a fixed amount of time in the study; it's the average across everyone of the time until they had the event, dropped out, or were censored.

46:40

You can see here that we have the percentile summary. So you get the 5th percentile, the 25th percentile, the median or 50th percentile, and the 75th and 95th percentiles. The events and sample size are the same for all of these, but dropouts ranged from 4 to 9, the accrual duration from around 26.6 to 27.5 months, and the study duration from around 41.4 to 46.2 months. And the number of sites was 127 in every case.
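A percentile table like that can be reproduced from per-simulation results with a simple nearest-rank percentile; the duration draws below are invented to roughly echo the range quoted above.

```python
def percentile_summary(draws, pcts=(5, 25, 50, 75, 95)):
    """Summarise simulated study durations by percentile, roughly as
    the report's percentile table does, using a simple nearest-rank
    definition (other interpolation rules would differ slightly)."""
    ordered = sorted(draws)
    n = len(ordered)
    out = {}
    for p in pcts:
        k = max(0, min(n - 1, round(p / 100 * n) - 1))  # nearest rank, 0-based
        out[p] = ordered[k]
    return out

draws = [41.4, 42.0, 42.8, 43.3, 43.7, 44.1, 44.9, 45.5, 46.0, 46.2]
summary = percentile_summary(draws)
```

With 10,000 simulated trials rather than ten, the same function gives stable percentile estimates for any of the reported quantities.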

47:13

And, of course, those plots that we saw are based on tables, which you can see here if you're interested.

47:21

And there are also those additional reports that we selected from the Simulation Controls menu. Namely, we can find out what happened in each individual simulation: this is what the set of endpoints was, but you can also see when each particular simulation ended.

47:38

And you can see that this varied from one simulation to another, for all of these. You can see what happened in any individual simulation, or in multiple individual simulations. So this is, for example, what happened in simulation one: this was the time that was assigned for the survival and dropout processes, both of which for this subject were greater than the end of the study, so they ended up being censored in this case, for example.

48:04

And then, on a per-site basis, you can see what the average accrual rate and the average number of people recruited were, averaged across the simulations for each site. And then you can see what happened in each site individually, in any one of the simulations individually.

48:20

So, for example, in simulation one we can see that this particular site accrued 11 people, and in other simulations it recruited 12, 13, 9, et cetera. But in this case, it recruited 11 into our site 101.

48:36

And that covers the broad way this works. What I'm going to do now, basically, is very quickly go through some of the other scenarios, focusing mostly on the second scenario as the comparison, which is the blinded-data case.

48:51

In terms of the setup, this is basically the exact same thing, except that instead of having a treatment indicator, we're now going to imagine that we don't have access to the treatment indicator; that, in fact, this is blinded data, and that we have to make predictions without the useful information of knowing which group each subject is from.

49:12

Note that the site-specific and accrual options are not affected in this case.

49:16

So remember, the treatment indicator is only being used for the purpose of helping us create two individual survival processes.

49:27

We're not really using that for the accrual process, because, of course, we are hoping, or assuming, that if we're using randomization, which treatment you're in doesn't really have an effect on when you were added into the study, broadly speaking.

49:41

There might be some constrained randomization and suchlike on top, but broadly it should be representative of, or close to, what true randomization would look like.

49:50

So you can see the big difference here, if we look at it, is that we no longer have access to two individual rates.

50:00

If we go back to the same step in the process, you can see that previously we had two individual hazard rates and a hazard ratio, so we could quickly go between one and the other. If I changed the hazard ratio to 0.8, it would automatically update the hazard rates.

50:16

But you can see here that we now only have a single hazard rate available.

50:23

Let's just change it back to what it was, approximately 0.7. So, rather than having 0.078 and 0.0545, we now have a single rate that, unsurprisingly, lies between those two group rates.

50:38

And unsurprisingly, it's kind of biased towards the control group, because more of the events are coming from the control group.

50:47

So, unsurprisingly, we see a little bit of bias towards the group that has more events when modeling a global event process.
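As a quick check on that intuition, here's a sketch: the blinded exponential MLE pools all events over all exposure, which makes it the exposure-weighted average of the two arm rates, so it always lands between them. The event counts and exposures below are invented to roughly echo the rates quoted above.

```python
def pooled_rate(ctrl_events, ctrl_exposure, trt_events, trt_exposure):
    """The blinded (pooled) exponential MLE: all events over all
    follow-up time, equivalent to an exposure-weighted average of
    the per-arm rates."""
    return (ctrl_events + trt_events) / (ctrl_exposure + trt_exposure)

ctrl = 110 / 1410.0   # ~0.078, illustrative control arm rate
trt = 77 / 1413.0     # ~0.0545, illustrative treatment arm rate
pooled = pooled_rate(110, 1410.0, 77, 1413.0)
```

Which arm the pooled rate leans toward depends on how much exposure each arm contributes, so a blinded rate will drift with the event and follow-up balance between the arms.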

50:55

So, what's important to note here is that we're assuming we don't know what group you're in, so we just treat everyone as if they came from the same global event process, which, on average, we hope should be roughly equivalent to what we would get for the unblinded process. We'll see that the dropout process has a similar thing,

51:16

Where we no longer have individual hazard rates for each group's dropout process; we now have a single global dropout process.

51:26

The simulation controls option is effectively identical to what we had previously.

51:35

So if I run our simulations, you can already see things are not too different. Obviously, the target events and the sample size end up the same, but there has been an effect on the average study duration in particular.

51:52

And there's a very small effect on the average dropouts. So, you know, the main thing to take away from this is that, in an ideal world, we would probably have done the first prediction here, the unblinded prediction. But in reality, the blinded situation is probably going to be far more available and far less problematic from a regulatory point of view, in terms of dealing with operational bias. Having access to which group people are in is obviously something that, from the perspective of the regulator, you probably don't want to be available to the trialists.

52:30

So you can see here, that if we go to the various results.

52:36

We see that the average study duration here is around 43.73 months, and right here, it's 43.33.

52:42

So this is probably, being honest, slightly underestimating how long the study would take. That's probably because it's biased towards the more aggressive control events process, because there are more control events.

52:58

Therefore, obviously, if we're assuming a slightly more aggressive event process on average, then we're going to end up finishing a little bit earlier.

53:06

So, I think the big takeaway from that is that if your study is going as expected, or at least as you hope — which is that the treatment effect exists, and that treatment events occur more slowly than control events — then we would expect that a blinded process might slightly underestimate your study duration compared to the unblinded events process. But because it's an average, it's not off by too much in this particular case.

53:38

You can see there's an effect here.

53:39

And then, you know, in terms of the dropout process:

53:43

We're talking about a very minute change, and really this is probably mostly just because the study ended slightly quicker.

53:52

So from that perspective, most of the additional information is basically the same, except that we don't have anything related to which group you're in. Whereas if we go to the summary table, for example, we would see that there's some stuff related to control sample size, control events, and so on. Obviously, we don't have that in the case of the blinded analysis.

54:16

So, I think with the small amount of remaining time we have, we'll focus on the other issue that might come up, which is: what would the effect of using different survival models be?

54:27

And what we'll do here is we'll just replicate the blinded, subject-level data case that we just did there.

54:35

We'll keep everything the same, except for one thing.

54:40

We will change the type of model, and we'll basically look at what a Weibull model would do compared to the classic exponential. We'll just skip through the accrual stage since that's unchanged.

54:53

That's the same, except, of course, we'll set the target sample size to 460.

55:00

We'll go to the Weibull model, and the big thing about the Weibull model compared to the exponential model is that it allows, basically, the event rates to vary over time. And importantly, that matters if you have pre-existing data — if you're using subject-level data like we're doing here.

55:17

If you have been in the trial already, the time that you've been in the trial affects how likely you are to have the event going forward. Remember, the exponential is a memoryless type of survival model: basically, your chance of having the event tomorrow is the same regardless of how long you've been in the study, and the day after that it'll be the same, and the day after that the same. Whereas with the Weibull, the chance that you will have the event depends on how long you have survived up to the current time.
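That memoryless-versus-Weibull distinction can be illustrated with the conditional probability of an event in the next month, given survival so far. The parameter values below are hypothetical, and the Weibull is written in the standard shape/scale parametrisation, which may differ from the software's:

```python
import math

def surv_exp(t, lam):
    """Exponential survival function S(t) = exp(-lam * t)."""
    return math.exp(-lam * t)

def surv_weibull(t, k, scale):
    """Weibull survival function S(t) = exp(-(t/scale)**k)."""
    return math.exp(-((t / scale) ** k))

def next_month_risk(surv, t):
    # P(event in (t, t+1] | event-free at t) = 1 - S(t+1)/S(t)
    return 1 - surv(t + 1) / surv(t)

lam_exp = 0.06          # hypothetical monthly hazard
k, scale = 1.5, 15.0    # hypothetical Weibull with a rising hazard (k > 1)

for t in (0.0, 6.0, 24.0):
    p_exp = next_month_risk(lambda u: surv_exp(u, lam_exp), t)
    p_wei = next_month_risk(lambda u: surv_weibull(u, k, scale), t)
    print(f"t={t:4.0f}  exponential: {p_exp:.4f}   weibull: {p_wei:.4f}")
```

The exponential column is flat regardless of time already survived, while the Weibull column rises with time on study — which is exactly why refitting with a Weibull can move the projected duration.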

55:46

And you can see here that we get a default scale and shape parameter of 0.06 and 0.83, respectively.

55:55

And we'll just keep the dropout model the same for now.

56:00

If we click Next and run with this, you know, usually the Weibull model doesn't make a huge difference, because the exponential model is often a decent fit for what happens in real clinical trials. But you can see that,

56:13

in this case, it's not an insubstantial effect, going to what might be considered the more flexible and reflective Weibull model. Like, we might expect that if you've been in the trial a long time, maybe we should see a slight increase in your chance of having the event, while also still keeping to the proportional hazards assumption. You can still do that within the Weibull framework.

56:39

So in this case, you can see that the average study duration has increased to around nearly 50 months compared to the previous cases. So we're not talking about something insubstantial.

56:49

In fact, here we are talking about something that could be considered somewhat significant, effectively.

56:55

So if we compare the blinded case with the exponential model, we were expecting to finish up at around 43.7 months. Now we're nearly at 51, so we've added an extra half a year here. So that could be something that makes a big difference that we might need to consider: what are the practical implications of that for our trial?

57:21

Yeah, so, we need to consider that.

57:23

You know, is it a more realistic assumption about how our study is going to go? And then what are the practical implications for our study of that particular decision?

57:32

And if you wanted even more flexibility, we could go one step further and look at doing a piecewise-type model.

57:43

So.

57:47

We just set up everything as before.

57:57

If we go here, we can see that for the exponential model there's this option to increase the number of hazard pieces. So if you want a piecewise exponential model, you can actually do that here. So if you want it such that after, let's say, the first 12 months after the current time the rate was to change, and let's say 24 months is also used here, then we could say that the hazard rate is decreasing over time, or increasing over time.

58:24

Like, let's say the event is approximately twice as likely to happen after 12 months from the current time — just to note that it's referencing the current time, basically where we were at the point that the interim data was created — and let's just say it's twice that again after 24 months. And then we could just remove this here to speed things up. We will then see what effect this would have on our model. And unsurprisingly, it's a fairly substantial effect here again, where we have nearly three months cut off our time, because we've now assumed that the event rate is actually accelerating over time.

59:02

Of course, you could keep the hazard ratio constant in that case, so the proportional hazards assumption isn't broken. But, of course, that kind of increasing or changing rate is perhaps not something we would typically expect.
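An accelerating piecewise-exponential process like the one just described — a base rate that doubles 12 months after the interim and doubles again after 24 — can be simulated by inverting the cumulative hazard. The rates here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical piecewise hazard measured from the interim time:
# a base monthly rate, doubled after 12 months, doubled again after 24.
breaks = np.array([12.0, 24.0])
rates = np.array([0.05, 0.10, 0.20])

def sample_event_times(n):
    """Invert the piecewise-linear cumulative hazard at Exp(1) draws."""
    e = rng.exponential(1.0, n)
    edges = np.concatenate([[0.0], breaks])          # piece start times
    cumhaz = np.concatenate([[0.0], np.cumsum(rates[:-1] * np.diff(edges))])
    idx = np.searchsorted(cumhaz, e, side="right") - 1
    return edges[idx] + (e - cumhaz[idx]) / rates[idx]

t_piecewise = sample_event_times(100_000)
t_constant = rng.exponential(1 / rates[0], 100_000)  # flat base rate, for contrast

print(f"mean event time, accelerating hazard: {t_piecewise.mean():5.1f} months")
print(f"mean event time, constant hazard    : {t_constant.mean():5.1f} months")
```

With the hazard accelerating, events arrive sooner on average than under a constant rate, which is why the projected study duration shortens in this scenario.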

59:18

And just to finally finish up, there are two small options which I'm not going to cover in detail today. If you only had summary data — which is to say, you didn't have access to site data or subject data; either you were doing this before the trial occurred, or you just had access to something like "100 events have occurred after 20 months, when there are 300 people in the study" — it's very easy here to just select summary data and then enter what your current sample size is. So let's just set this up to be similar to what we had: 402 people at present, 187 events at present, three dropouts at present, and we'll take the current time to be 25 months, so approximately equal to our previous case. And we could easily set it up to be more or less the same thing. Now, of course, because we don't have access to site-level data, never mind subject-level data, we're assuming here a kind of global process. That's what that looks like here.

1:00:16

So, a global Poisson rate will define our recruitment going forward.

1:00:21

Then, we could say, you know, we have a hazard rate here, defined automatically using the summary data, which is not too far away from what we had previously.

1:00:29

And then we could do something like this.

1:00:32

By default,

1:00:32

the sample size was doubled. And you can see here — this is actually a very quick example — that the average sample size is lower than the 804 that was given by default.

1:00:41

So you can see it's under 804 here, but that's because the 374 events were reached before we got to the 804 people, so we would stop the trial early.

1:00:51

Just to mention here that there's also the option that, if enrollment is complete, you can still do a prediction on the same basis. But of course, the only difference here is that

1:01:06

we don't really need to worry about the accrual process. The accrual process doesn't matter if enrollment is complete.

1:01:11

So then we're just going to run the events process — the events modeling — over the people who are currently censored, currently available. "Censored" here is really just another word for available, which is to say that these people are still available to have the event. So we create a survival process for each of these people.
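As a sketch of that idea — enrollment complete, so only the censored subjects' residual event times need simulating — here's a minimal version under an assumed exponential hazard, where memorylessness means each residual time is simply a fresh exponential draw from "now". All counts and rates below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical interim snapshot: enrollment complete, 187 of a 374-event
# target observed, and the rest of the cohort censored (still event-free).
events_so_far, target = 187, 374
n_censored = 215
lam = 0.06                                  # assumed monthly hazard rate

def predict_time_to_target(n_sims=2000):
    needed = target - events_so_far
    hits = np.empty(n_sims)
    for i in range(n_sims):
        # Exponential residual time-to-event for each censored subject.
        residual = rng.exponential(1 / lam, n_censored)
        # Calendar time (from now) at which the target-completing event occurs.
        hits[i] = np.sort(residual)[needed - 1]
    return hits

extra = predict_time_to_target()
print(f"median extra time to {target} events: {np.median(extra):.1f} months")
```

A fuller version would also simulate dropout and, for a Weibull model, condition each subject's residual time on their follow-up so far instead of using a memoryless draw.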

1:01:33

OK, so I think we're pretty much out of time there.

1:01:35

I apologize if I'm over-running at all. So I think, just in terms of discussion and conclusions: survival analysis designs will usually target events such as overall survival or progression-free survival — or, to invert that, we are really targeting the opposite of those, which is someone dying or someone's disease progressing. And of course, event milestone prediction is valuable to ensure that we know our trial is still on schedule, but it requires significantly more modeling compared to a simple enrollment-type process. Now, enrollment modeling can also be quite complex if you bring in things like screening, regional issues, and lots of other stuff like that. But from a more statistical point of view, survival will tend to be more complex.

1:02:23

There are many models and methods available, and there are questions about how to pick the best one. It's really about flexibility versus tractability, usually. The Weibull model is probably the most flexible model in terms of taking some of the work out of your hands, but if you really wanted to dig deep, you could go into the piecewise exponential. That's probably, practically, where people stop in terms of adding flexibility.

1:02:49

Like, if I can put together my own piecewise exponential curve, then really most standard models, like the Weibull, can mostly be modelled that way anyway.

1:03:01

Now, analytic versus simulation approaches: I think both of them have a lot of options within them, and of course they each have advantages and disadvantages. But we've kind of favoured simulation here just because it has that additional flexibility, and you get some stuff out of it for free. Those prediction intervals are just based on the number of predictions that fell above and below a certain value — the percentiles — and that makes sense: if you're individually storing each of these simulations, then you can treat them as if they form a prediction interval, the same as you would for any kind of sampling-type approach, like a jackknife-type approach, for example.
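Here's a minimal sketch of that percentile-based prediction interval, assuming a simple exponential event process with illustrative counts and rates:

```python
import numpy as np

rng = np.random.default_rng(11)

# Illustrative setup: 300 subjects at risk, we want the time of the 100th
# future event, under an assumed monthly hazard of 0.05.
lam, n_at_risk, needed, n_sims = 0.05, 300, 100, 5000

durations = np.empty(n_sims)
for i in range(n_sims):
    t = rng.exponential(1 / lam, n_at_risk)   # one simulated future
    durations[i] = np.sort(t)[needed - 1]     # time of the 100th event

# The prediction interval falls straight out of the simulation percentiles.
lo, mid, hi = np.percentile(durations, [10, 50, 90])
print(f"median projection      : {mid:.1f} months")
print(f"80% prediction interval: [{lo:.1f}, {hi:.1f}] months")
```

No distributional formula for the milestone time is needed: each simulation is one draw from its predictive distribution, so the empirical percentiles are the interval.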

1:03:38

And then, in terms of what our choices are here: they should reflect your understanding of what the likely event process is. If something like non-proportional hazards is likely to come up,

1:03:49

then you should probably build some models that consider that. But the available information also matters — blinded versus unblinded data being a very important example, but even summary data versus subject-level data, and maybe even including site-level data if you want to model enrollment.

1:04:04

All of those will define what level of analysis is available to you.

1:04:10

So, just to mention, this is a new feature that will be coming soon — hopefully in this coming quarter.

1:04:19

But if you have any questions about anything that you saw here, or any features you think we should have, feel free to get in touch with us at info at ... dot com. If you want further information, go to ... dot com. I believe the marketing material for this will be up online in the near future.

1:04:36

I don't think it's quite up just yet, but it will be up before release in the near future.

1:04:41

But finally, I just want to say thank you so much for attending today's webinar. If you need to leave now, feel free to leave. Just to mention that when this releases, it'll be part of a new module. But if you want to get a free trial of it, you can go to ... dot com forward slash trial. And if you don't have nQuery, or don't have access to, say, adaptive design, that trial gives you an opportunity to try those features for free, using just your e-mail.

1:05:07

Note also that if there are any tutorials or other information you would like on any topics that have been covered in previous webinars, or just in general on how to use nQuery, you can go to ... dot com forward slash star. There are also references for this webinar at the end of the slide deck that will be sent to you later on. So what I'm going to do now is just take a couple of moments to look at some questions that might have come in. For any that I don't get to, just to reiterate, I will e-mail you afterwards.

1:05:51

So, there's a few questions around availability and things like that — I will e-mail about those, as that's basically stuff that we're still working on.

1:05:59

And then there's just one question about the blinded versus unblinded case, asking about the accuracy of each.

1:06:08

Like, as explained in the webinar, the blinded model is probably going to be less accurate than the unblinded model, and that just kind of comes with the territory to a certain extent. But I think the blinded model probably better reflects what you, as a trialist, will have available to you. If you want to have access to unblinded data, that probably means you're going to get people like the data monitoring committee involved, or some other kind of independent entity, to do the modeling for you. Whereas if you want to check and control this stuff at the time, at your own fingertips as it were, then unfortunately that probably means that you will need to do a blinded one, even if it is slightly less accurate.

1:06:46

And, of course, there is maybe a case there for, say, using an EM algorithm to extract the implied best-guess per-group rates for the unblinded case from the blinded data — treating the observations as if they come from a mixture model, effectively. Although, if that works too well, there might be operational bias problems.
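A minimal sketch of that EM idea, assuming blinded event times drawn from a 1:1 mixture of two exponentials with fixed, equal component weights. The rates and sample sizes are invented for illustration, and real interim data would also involve censoring, which this ignores:

```python
import numpy as np

rng = np.random.default_rng(5)

# Blinded event times: a 50:50 mix of two exponentials whose rates we
# pretend not to know (hypothetical truths: 0.03 and 0.12 per month).
t = np.concatenate([rng.exponential(1 / 0.03, 400),
                    rng.exponential(1 / 0.12, 400)])
rng.shuffle(t)

# EM for a two-component exponential mixture, mixing weights fixed at 0.5
# (known 1:1 randomization), starting from deliberately rough guesses.
lam = np.array([0.02, 0.2])
for _ in range(200):
    # E-step: responsibility of component 1 for each observation.
    d0 = 0.5 * lam[0] * np.exp(-lam[0] * t)
    d1 = 0.5 * lam[1] * np.exp(-lam[1] * t)
    r1 = d1 / (d0 + d1)
    # M-step: responsibility-weighted events / exposure for each component.
    lam = np.array([(1 - r1).sum() / ((1 - r1) * t).sum(),
                    r1.sum() / (r1 * t).sum()])

print(f"recovered rates: {np.sort(lam).round(3)}")
```

Fixing the weights at the known allocation ratio is what makes this mixture reasonably identifiable; exponential mixtures with free weights are much harder to separate.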

1:07:10

Secondly, if that does work, the hope would be that it would slightly ameliorate the issue: obviously, if the trial is going as you hope — which is that the control group is doing worse than the treatment group — then it would get rid of the slight optimism bias that we talked about earlier on.

1:07:31

Oh, there's a few other questions, but I think we're running over time, so I apologize for that and I'll move on from those. I will get back to you by e-mail very soon, probably later today.

1:07:41

So once again, I just want to thank you so much for attending. I hope you have a very good day and I look forward to talking to you next time at the next webinar.

1:07:49

Thank you so much, and goodbye.
