Predicting Key Study Milestones

August 4, 2021

About the webinar

In this free webinar, we explore how simulation can be a powerful tool to assess whether your trial is on track to reach key milestones.

And if not, how to model the changes needed to get your study back on track.

Predicting Key Study Milestones
Using Simulation for Enrollment and Event Predictions

Clinical trials rely on key study milestones being reached before interim or final analyses can be conducted.

Pre-trial assumptions are used to make informed decisions about when these milestones should occur. However, real trials will often have enrollment trends that vary from initial assumptions, requiring a recalibration of expectations and resources. 

For survival (time-to-event) analysis, where key milestones relate to the number of events and the effect size, these considerations are even more important and also more complex to model.

This webinar also uses an early preview of the new nQuery Predict feature which will be released later this year for the prediction of enrollment and event milestones.


In this free webinar you will learn about:

  • Enrollment and Event Milestone Prediction Issues
  • Using Simulation to Model and Project Enrollment
  • Using Simulation to Model and Project Events in Survival Trials




Webinar Transcript
*Please note: this transcript is auto-generated, so some spelling and grammatical errors may occur*

0:09
So, hello, everyone, and welcome to today's webinar, Predicting Key Study Milestones: Using simulation for enrollment and event prediction.
0:17
So, today's webinar will be giving you an idea of what key study milestones are in the context of clinical trials, and how simulations are a very useful tool to allow you to predict when those milestones are likely to be reached, based on the pre-study or during-study trajectory that you're assuming or basing on real data.
1:50
Before we get started, let me introduce myself. My name is Ronan Fitzpatrick and I'm the head of statistics here at nQuery. I've been the nQuery lead researcher since version 3.0, which is about 7 or 8 years ago at this point, and I've given talks at places like the FDA and JSM, and obviously we're all hoping to do more of those in-person meetings in the near future. But for now, I will be attending JSM next week, though purely as an attendee.
2:17
So if anyone is interested, please get in touch with us. We'll be happy to facilitate that.
2:24
So in terms of today's agenda, I think there are a few things we want to cover.
First is just making sure we have the context of what we're talking about: when we say key trial milestones, what are they, and what kinds of decisions and inputs are required for that? Then we'll move on to the methods, basically getting into the nitty-gritty of the statistical methods and options that will be available to make these key milestone predictions.
And then there'll be a hopefully fairly comprehensive worked example using the preview version of the software. So just to emphasize, we will be using a feature that's only available internally at the moment. This is a sneak preview, but this feature will be available later this year. So you can't do this right now in nQuery, but it's something we're actively working on, and this webinar is an ideal opportunity for you to see the kind of stuff that we're working on for the near future in terms of release.
3:11
And then finally, some conclusions and discussion.
3:14
Obviously, to re-emphasize, this webinar is presented by nQuery, the complete solution for optimizing your clinical trial, which covers a variety of different sample size calculations, primarily, and adaptive designs, ranging from early-stage to late-stage clinical trials.
3:30
90% of organizations with clinical trials approved by the FDA have a license for nQuery. It's been around for over 25 years and it's trusted by the industry.
3:41
And hopefully it will provide you with all the trial design aspects you require, including, hopefully, some of the more novel aspects such as simulations for predicting key milestones.
3:54
So, what are the key milestones? What are we talking about here in the context of trial design?
3:58
In this context, just to emphasize, nQuery traditionally focuses on sample size calculations: determining how many people we need in our study to fulfill some success criterion or probability of success, such as statistical power.
4:11
Of course, that is only one of the many aspects that go into designing your trial, ranging from which model to use, to how to design it in terms of how many groups and comparisons we want to make. Type I error is a major consideration when getting regulatory approval — the 0.025 one-sided type I error is a requirement for phase three, for example. And then there are things like adaptive designs and so on. So there are many different aspects that go into designing a trial. But:
4:40
When we get down to brass tacks, and we talk about what's going to happen in that trial, there are certain milestones that we need to reach based on that trial design, including our sample size calculations, that need to occur before we either end our trial, or perhaps do an interim analysis if we're looking at group sequential design.
4:56
So, for example, during the recent COVID-19 vaccine trials, a lot of the initially planned numbers of events were, I believe, around 150, but most of the interim analyses that ended up being sufficient for approval were more around the 50-event milestone. So that's where interim analyses became very important in terms of shortening the length of those trials.
5:20
And so, when we think about those milestones, those really relate either to our enrollment or to our events — the latter if we're talking about survival analysis. So, in terms of enrollment, there are obviously two aspects: there's actually enrolling a subject, and then there's the real time at which you can do the interim analysis, after the follow-up that they have. For most trials, if you're looking at a mean or a proportion, that's usually some fixed follow-up, like, say, two weeks, a month, two months, et cetera.
5:51
So, enrollment and follow-up are highly linked, effectively: you just need to add some fixed parameter to the enrollment time to get what the actual interim time is, for that particular case, if you want to go from enrollment to follow-up. Whereas for events, as we'll discuss in more detail later:
6:12
Obviously, knowing when someone will have an event — for example, when they are likely to die — is much more difficult to predict a priori, because that's basically part of the effect size you're interested in calculating in the trial: the hazard ratio, for example, or the exponential event rates.
6:34
So I think the main thing to note is: we have all these things going into our trial, and we know what the objectives are — the milestones we need to reach in our trial. But both before and during our trial, it will be useful to look at that not from the statistical, or more abstract, point of view of "here's the number of people we need" or "here's the type I error", but more from the practical consideration of: well, how long is this going to take?
6:59
How long do we need to run this trial? How many more people do we need to recruit? How many sites do we need to have open to achieve those objectives within a reasonable timeframe?
7:10
And this is a huge thing that anyone who has done any clinical trials is familiar with, where trials end up being delayed, or they don't reach their recruitment goal. So knowing that as soon as possible is obviously a huge advantage in terms of being able to react and make the required changes — whether that be opening new sites, or choosing to close certain inefficient sites in favor of better ones, or just having that information available to you to talk to your CRO and try to get them to take more action.
7:43
But the main thing, of course, is that you need to know that something is going wrong, and to have at least some rule of thumb or heuristic about how it's going thus far, to really get the most out of that information.
7:55
And that's where simulations, or other types of prediction models, can become very useful — both before the trial, to get an idea of what you think will happen beforehand, and then, as the trial recruits people or as events happen, being able to project what's happened thus far into the future, and then making adjustments that way.
8:15
So, what are those considerations? I like this slide because it summarizes the different options and ideas that would be important to you. Which is: what are the primary milestones? Are there any secondary milestones? I think usually enrollment milestones would be the most common primary ones, but in the case of, say, the survival analysis that we mentioned, it's really the number of events — the number of deaths that have occurred — that is driving the ability to have an interim analysis. So that's quite an important difference. And in this particular case that we're looking to predict: is it an interim analysis, and when that's going to occur, or is it the study end, or both? And then maybe it's something like a safety analysis, if there's particular interest in that. And of course, some of this will probably be requested by external people, such as the sponsor itself at a higher level.
9:09
It could be something of interest to trialists themselves, or to a regulator, in some rare cases.
9:17
Once you know what your milestones are, or which ones are important — well, how are you going to target that milestone? Are there multiple considerations that need to be taken into account?
9:28
So, for the sample size and enrollment side: it's probably easier to just model enrollment, and then add a fixed follow-up on top of that to get to whatever the actual time of the end of the study would be. But perhaps you would prefer to explicitly model when the results will come in — obviously, for survival, as I mentioned, events, and then adverse events if you're looking at something like safety. And of course, I think one of the biggest constraints on what you can and can't do in terms of this prediction is what information is available to you, and who is allowed to predict, effectively.
10:04
Like, is this something that you can outsource to someone junior? Or is it something that needs to be done by someone more senior, or someone external, to ensure that the study is kept robust and without any bias — and again, without any interference?
10:19
Because obviously some of the knowledge that's required to make the most accurate prediction is not knowledge that we would generally share with the trialists themselves. So, for example, as we'll show later on, if you're looking at predicting survival, you're really, in reality, predicting the hazard rate or the event rate in multiple groups.
10:39
But unless you're on the independent data monitoring committee, it's very unlikely — or it should at least be very unlikely — that you would have access to the treatment labels of each of the subjects. And therefore, do you try to take the overall event rates — let's say that X number of events have occurred so far, and you have the times of those events, but you don't know which group they are in — and just treat it as a global process, basically assuming that they all came from the same event process, even though you know that may not be true? Or do you try to use some kind of complex approach that tries to extract what the groups would have been, say by making some assumption about the hazard ratio that you had a priori in your sample size calculation?
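As a rough sketch of that second idea — backing hypothetical per-group rates out of a blinded pooled rate — here's a minimal illustration. The function name and the simplifying assumptions (exponential event times, pooled rate treated as the allocation-weighted average of the group rates) are mine for illustration, not a method from the webinar:

```python
def split_blinded_rate(pooled_rate, hazard_ratio, alloc_control=0.5):
    """Back out per-group exponential event rates from a blinded pooled rate.

    Simplifying assumption: the pooled rate is the allocation-weighted
    average of the two group rates, with lambda_treatment = HR * lambda_control.
    """
    w_c = alloc_control          # proportion allocated to control
    w_t = 1.0 - alloc_control    # proportion allocated to treatment
    # pooled = w_c * lam_c + w_t * (HR * lam_c)  =>  solve for lam_c
    lam_c = pooled_rate / (w_c + w_t * hazard_ratio)
    return lam_c, hazard_ratio * lam_c
```

For example, a blinded pooled rate of 0.075 events per month with an assumed hazard ratio of 0.5 under 1:1 allocation decomposes into roughly 0.10 (control) and 0.05 (treatment).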
11:20
And in fact, that kind of choice is interesting, because it reflects many of the constraints that I've seen personally in previous work, when looking at the difference between blinded and unblinded sample size re-estimation — the more complex approaches to blinded sample size re-estimation, I should say.
11:44
In those cases, there were many complex algorithms put forward that claim to extract what the two individual rates are for various types of endpoints — say, for proportions, or for means, or for survival.
11:59
And they show that, obviously, when they work, they're great. But in many cases they end up being very inaccurate — it really depends on which information you have available. You wouldn't really want to do this with, like, 10 or 20 events; you'd probably want to have at least 100, for example, to even have them work at all. So those are all really important considerations. And obviously, then, there's also the consideration of whether you just have access to summary data.
12:22
Like, "here's how many events have occurred at, you know, time 20" — or whether you have access to individual subject-level data, and/or access to information at the site level — sites and hospitals, and things like that, of course.
12:38
And, as I mentioned, if you're looking at unblinded data, that's probably only going to be accessible to the independent data monitoring committee.
12:45
In the same way that, in adaptive designs, blinded adaptive designs can be done by people internally — such as, say, an internal pilot sample size re-estimation — compared to an unblinded promising zone sample size re-estimation, where the data monitoring committee would usually have control of that, or would outsource it to some other entity to do the calculations on their behalf. But it basically wouldn't be accessible, because the unblinded data is usually not accessible. Obviously, I know there are open-label trials and so on, but in general, keeping that data blinded is what the regulator would generally prefer, unless there's no other option.
13:22
And I suppose there's just an interesting point here: look, if you have access to the power to make these predictions, when and how often should you do them?
13:34
And there are various different guidelines out there. I think obviously your mind straight away goes to doing this nearly continuously — after every single week, or every month — getting information as it comes in.
13:45
But there's a certain argument to be made that if you're continually doing that, it can upset the applecart to a certain extent, and can stop things from really settling down into some kind of pattern.
14:00
And that can lead to overreactions, if this is presented to people who perhaps aren't as familiar with the statistical uncertainty, or just the natural biases that come with seeing information. It's a very common situation, for example, to have slower recruitment initially, which then ramps up over time.
14:21
But you don't want to react to that too early, so that you end up having some overreaction: you open up all these sites, and then perhaps you end up in a situation where you overwhelm the trial at a time when the capacity isn't really able to handle it — doing it just because there was a certain degree of panic. And for events, it's a bit more complex. For events, we're familiar with, say, immunotherapies, where there's a delayed effect; if you were to make projections based on the delayed-effect period, that would give you a very non-representative idea of what would actually happen in the trial going forward. So if you're looking at immunotherapies, where you have a delayed effect, these kinds of issues can become much more important. And then, of course, there's the question of when these predictions happen and what changes you think you need to make — which will rely to a great degree on interaction with the relevant stakeholders, both the sponsor and the regulator; the regulator in particular if you've been looking at unblinded data or at more detailed information when making this decision.
15:28
So that's hopefully a small smattering of some of the complications and considerations there.
15:37
OK, so let's look at the two major prediction areas: one will be enrollment, for most trials, and one will be events, for survival or time-to-event trials. Enrollment prediction is probably the most common type of prediction done in the context of a clinical trial. I think most people have, at some point, seen how their trial is doing — "we're at, like, half the recruitment goal; can someone do a calculation to tell us how likely it is?"
16:06
"And at what time are we likely to finish this trial?" Basically, there are a variety of different approaches to doing that, ranging from very simple to very complex. There's a very good review paper — it's in the references provided in the slides — which goes through the variety of deterministic and stochastic models that are available, varying from just a simple linear equation to using complex simulations.
16:34
So, enrollment prediction is obviously focused on those recruitment milestones. And I will just emphasize again: when we're talking about recruitment, obviously the actual time that an interim analysis will happen, for example, or the end of the study, will probably be related to the time after subjects are recruited at which you actually get their result — effectively, the follow-up.
16:55
But that's usually fixed for most trials: if you're looking at, say, an event rate like a proportion or incidence rate, or typically a clinical mean like a reading for blood pressure, that will usually be measured at a pre-specified length of time after the initial recruitment.
17:18
So for all trial endpoints, recruitment is generally an important consideration.
17:23
Obviously, if you're doing better than expected, then that's probably not as big an issue. For example, last year's COVID-19 vaccine trials — I keep talking about these types of trials because obviously they're of interest, and everyone has read a lot about them at this point. In those trials, they recruited a lot of people, but they were actually required to have a certain number of events — you may remember that they had to have a certain number of people get the disease to be able to make their first interim analyses.
17:53
And what's interesting, of course, is that this happened a lot quicker than expected, because during the period in which those trials — particularly, I believe, the Pfizer and Moderna trials — were occurring, during the summer in the United States, there was a lot of COVID-19. That meant a lot of information arrived quicker than expected, and the trials ended up taking a lot less time than expected. So that's probably on the optimistic side.
18:15
However, as you can see from the first slide of this presentation, that example is not representative of what really happens in a lot of trials, where recruiting people is a much more significant challenge and things end up going a lot slower than expected.
18:31
And indeed, even reaching the original goal in any feasible timeframe can all too often be much more difficult than hoped. Recruiting subjects is not a trivial task. When we look at this enrollment prediction and so on, we're ignoring all of the work and complexity that comes from so many doctors and trialists —
18:52
people who work in support, and people who engage with patients to try to get them to join the trial, and to stay in the trial. There is such a huge amount of hard work that goes into making that a success, and the people who do that every day collectively deserve a great degree of credit for making it happen.
19:14
Because obviously, for a COVID-19 trial, it was probably something where people were quite often willing to sign up, but for a lot of other diseases that's not necessarily true.
19:25
Especially in the early-phase trials, where you have to get healthy volunteers — but even for later-stage trials, where you have to get people to agree to take a treatment that's experimental compared to the current standard treatment.
19:38
So, I just wanted to mention that. But looking at the statistical, or I suppose more abstract, end of it: these people are coming in, we're seeing that process happening at a high level, and now we're trying to make projections or predictions about how things are likely to go from this point onwards, based on that. And there are multiple levels, and obviously multiple models and approaches, that could be used to make those predictions. So in terms of levels, there are three main ones. One: you could treat it as a global problem.
20:09
Basically, consider that everyone is coming from the same enrollment generation process — a single accrual model, effectively — and ignore the subtleties that might come from looking at how individual regions or sites are doing. That's probably very useful for a back-of-the-envelope calculation, which gives you a rough idea of when the trial is going to end based on what you've seen so far. More often, though, particularly in complex multi-center clinical trials — and we know these days these are really global clinical trials, recruiting from all over the world; CROs have obviously done a lot of work to make that happen over the last 20 or 30 years —
20:51
you then have regional and site-level considerations. From a regional point of view, this is important in terms of ensuring that you achieve balance,
21:02
and ensuring you have good coverage in terms of any potential complications or, basically, random effects that occur for certain groups.
21:14
But I think the most common level on which to consider this problem of enrollment is the site.
21:20
Basically, individual hospitals, individual centers, things like that, where effectively we want to model those individually, and then choose, based on how they're doing, whether to drop them, whether to add more in a particular region, or whether to increase the enrollment cap at a particular site — and other decisions like that which would make sense.
21:43
And then, in terms of the approaches for doing that, there are obviously a variety of different ways, ranging from using simple equations, to statistical models (using the prediction intervals from those), to simulation approaches, to Bayesian approaches, which often integrate prior data or expectations. And, of course, the main value of these predictions is to provide you with the ability to make more informed decisions — decisions that reflect the performance of enrollment across the variety of different units that might be of interest.
22:17
So this slide is really just a list of different approaches; it's in no way comprehensive.
22:25
But basically, at the top you have the parametric models. You can imagine, for example, a linear model would simply be: fit a straight line to how enrollment has gone so far cumulatively, and then just project that forward. You could fit a polynomial in the same way — pretty much the same principle. And then perhaps you would do something slightly more complex, perhaps looking for change points — say, a piecewise Poisson approach, where each period of time has a different Poisson rate, and each of those would be fixed a priori.
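As a sketch of the simplest of those parametric options — the linear model — one might fit a least-squares line to cumulative enrollment and solve for the week the target is hit. The function name and the toy data are hypothetical:

```python
def project_target_week(weeks, cumulative, target):
    """Fit a least-squares straight line to cumulative enrollment so far,
    then return the week at which the fitted line reaches `target`."""
    n = len(weeks)
    mean_w = sum(weeks) / n
    mean_c = sum(cumulative) / n
    slope = (sum((w - mean_w) * (c - mean_c) for w, c in zip(weeks, cumulative))
             / sum((w - mean_w) ** 2 for w in weeks))
    intercept = mean_c - slope * mean_w
    return (target - intercept) / slope  # week where the line crosses target
```

So 10, 20, 30, 40 subjects observed over weeks 1 to 4 projects a 100-subject target at week 10 — useful as a quick check, though it carries no uncertainty interval, which is one motivation for the simulation approaches discussed next.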
22:58
But we're not going to focus on those kinds of modeling approaches —
23:01
approaches which basically take a statistical model and extract predictions from it. We're going to focus on a different route — still modeling based on statistical distributions, but via simulation. And the main reason that you would use simulation
23:19
Over a parametric model, it's primarily because, A, it's much more flexible — making changes to a simulation model is fairly trivial in reality — and, B, the extraction of certain types of intervals, like prediction intervals, or credible intervals if you're coming from a more Bayesian point of view, tends to be much easier when dealing with simulations. We can use things like bootstrapping instead of, say, an analytic confidence interval.
23:46
And, you know, you could do that on a global basis, which is basically where you just assume there's a single event generation process.
23:54
with exponential inter-arrival times, which is equivalent to a Poisson process. Or you could explicitly model every site individually, with some fixed Poisson rate per site, or perhaps some piecewise Poisson rate if you really want to get into the weeds.
24:11
Those are the two things we're going to focus on today: fixed per-site simulations, where we're making fixed assumptions about the rate in each site, and global simulation, where there's a single rate altogether.
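A minimal sketch of those two simulation flavors, assuming exponential inter-arrival times (all names and numbers are illustrative, not nQuery's implementation). Since merging independent Poisson processes gives a Poisson process with the summed rate, the fixed per-site case reduces to simulating from the pooled rate:

```python
import random

def simulate_time_to_target(site_rates, n_target, n_sims=2000, seed=1):
    """Simulate the time at which total enrollment reaches n_target.

    Each site enrolls as a Poisson process with a fixed weekly rate; the
    merged process is Poisson with rate sum(site_rates), so inter-arrival
    times are exponential with the pooled rate. Returns the median hit
    time and a 90% simulation interval.
    """
    rng = random.Random(seed)
    total_rate = sum(site_rates)
    times = []
    for _ in range(n_sims):
        t = 0.0
        for _ in range(n_target):
            t += rng.expovariate(total_rate)  # waiting time to next subject
        times.append(t)
    times.sort()
    return times[n_sims // 2], times[int(0.05 * n_sims)], times[int(0.95 * n_sims)]
```

With three sites enrolling 2, 3, and 5 subjects per week and a 100-subject target, the median predicted time lands near 10 weeks, and the simulation interval quantifies the uncertainty around that — the kind of interval that is awkward to get from a simple fitted line.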
24:24
But of course, there are other approaches: for machine learning, there are a variety of different methods being proposed.
24:30
Machine learning perhaps deals better with discontinuities or non-standard types of recruitment profiles, or can just give quicker predictions: if you've already done a lot of simulation, for example, you could use a machine learning model to pick the best available estimate from the previous work you've done, rather than having to rerun the entire simulation again.
24:56
Then, you could model the effect of sites on recruitment not by explicitly modeling each site individually, but by having a comprehensive model for the entire site-level process.
25:12
So, the most commonly cited version of this is the Poisson-gamma type of approach, either Bayesian or non-Bayesian, where effectively the rates in each site follow a gamma distribution.
25:28
So, rather than just extracting a rate for each site, you actually build the distribution of the rates across sites explicitly into your model, and then use that to generate the predictions — via simulation, or perhaps sometimes analytically, though the predictions are usually done via simulation.
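A sketch of that Poisson-gamma idea, with illustrative parameters: site rates are drawn from a gamma distribution (capturing between-site heterogeneity), and enrollment then proceeds as the pooled Poisson process of the drawn rates:

```python
import random

def poisson_gamma_median_time(n_sites, shape, rate_param, n_target,
                              n_sims=1000, seed=2):
    """Median predicted time to reach n_target when each site's weekly
    enrollment rate is gamma-distributed (mean shape/rate_param per site),
    rather than fixed and identical across sites."""
    rng = random.Random(seed)
    times = []
    for _ in range(n_sims):
        # draw heterogeneous site rates; gammavariate takes (shape, scale)
        total = sum(rng.gammavariate(shape, 1.0 / rate_param)
                    for _ in range(n_sites))
        t = 0.0
        for _ in range(n_target):
            t += rng.expovariate(total)  # pooled Poisson inter-arrivals
        times.append(t)
    times.sort()
    return times[n_sims // 2]
```

With 20 sites whose rates average 0.5 subjects per week (shape 2, rate 4), a 100-subject target is typically reached around week 10 — but the gamma spread across sites widens the prediction interval compared with assuming fixed identical rates.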
25:48
I suppose, just to mention, there are many other considerations, ranging from regional effects —
25:53
so, for example, you may want to model that certain sites are more self-similar because they come from the same region, rather than just treating region as a kind of top-level summary — to optimization:
26:07
How do I allocate different sites to optimize my recruitment profile, to ensure both that things go as fast as possible, while also ensuring that there's enough coverage across the different regions or sites? And then things like cost considerations can be integrated as well.
26:27
So, that's hopefully a brief preview of how that works for enrollment prediction. Then we have event prediction, which is broadly quite similar: effectively, now we're focused on event milestones — whether that's death, or progression-free survival, or other endpoints like that.
26:46
Basically, any survival analysis or time-to-event analysis, whatever the event is.
26:51
That's what we're looking to find out for each subject — really, we're interested in how long it took them to have that event since they joined the study. But I think the important thing about survival is that you have to model the other considerations alongside the explicit survival process itself. Namely, if accrual is still ongoing, then you're really having to model accrual to know when additional people are going to come into the study and be available to have the event. And that effectively means that you're coming back to the enrollment model: the survival model basically sits on top of the enrollment model. That's only if the accrual period is still ongoing at the point that you're doing your prediction — you could obviously do it after accrual is complete, and then you'd only have to model the events, or the survival function, for the people who are still available at that point to have the event.
27:44
In other words, those who haven't either dropped out, or had the event, or been cured, or had some other aspect that's caused them to no longer be available. So that's the other main component to worry about: dropout. Perhaps there's a cure process; perhaps there's some kind of responder effect going on. The main thing with those is that any process that would lead to someone not being available to have the event needs to be — or ideally should be — modeled explicitly, to ensure that you get an accurate idea of how many people end up available to have an event at any given time, so that when you simulate, you're not overriding something else that could happen to them besides having the event. And obviously, we're assuming in most cases that you have a standard survival model where, at the end of the study, you're going to censor everyone once you've reached your target number of events. There are a variety of different models — but how do you compare and contrast them to find the best one?
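The dropout point above can be sketched by giving each at-risk subject a competing exponential "dropout clock": an event is counted only if it happens before both dropout and the prediction horizon. The rates and function name here are hypothetical, and real tools would typically model more processes than these two:

```python
import random

def expected_new_events(n_at_risk, event_rate, dropout_rate, horizon,
                        n_sims=2000, seed=3):
    """Mean simulated number of new events within `horizon` time units,
    for subjects currently at risk, with an exponential time-to-event and
    a competing exponential time-to-dropout per subject."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_sims):
        for _ in range(n_at_risk):
            t_event = rng.expovariate(event_rate)
            t_drop = rng.expovariate(dropout_rate)
            if t_event <= min(t_drop, horizon):  # event occurs first, in window
                total += 1
    return total / n_sims
```

With 200 subjects at risk, a 0.10 event hazard and a 0.05 dropout hazard, roughly two-thirds of subjects (about 133) eventually yield an event rather than dropping out; ignoring dropout would overstate the expected events as all 200 — exactly the bias the passage warns about.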
28:41
In reality, there are a variety of different approaches that you can use, and there are a lot of papers on this — we're providing the references in the slides. But looking at the situation here: parametric modeling is not that widely used, I would say. You can obviously fit your standard
28:58
exponential or Weibull to the time on study thus far, or to the event times up to this point — that's not too difficult to do, and you could do it on a piecewise basis as well. For example, the Lakatos paper, which is the basis for sample size calculations that are quite common for survival, uses a non-stationary Markov process that's quite easy to use as the basis for prediction as well, if you're particularly interested in that.
29:30
But I think the most common way is to do this via some kind of simulation using a survival model. So, the exponential model, the piecewise model, the Weibull model.
29:41
Those are probably your three most common.
29:42
But you could fit in any other survival model, like a log-normal, if you wanted to.
29:47
And so we'll talk about those more during the example. But, basically, these are your standard survival models: the exponential, where you assume a constant hazard rate; the Weibull, where there is a dependence on how long you've been in the study; and then the piecewise exponential, which is a very flexible approach that basically allows you to split your trial into as many sections as you want, to then have different event rates for each of those time periods.
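To make the piecewise-exponential idea concrete, here's a short Python sketch (an illustration under assumed breakpoints and rates, not nQuery's code) that draws one event time by inversion: draw a unit exponential and "spend" it against the cumulative hazard, piece by piece:

```python
import random

def sample_piecewise_exponential(breakpoints, rates):
    """Draw one event time from a piecewise-exponential hazard.
    breakpoints: start time of each piece (first must be 0.0), increasing;
    rates: positive hazard within each piece (same length).
    Inversion method: a unit-exponential draw is spent against the
    cumulative hazard until it runs out; the last piece extends forever."""
    target = random.expovariate(1.0)
    used = 0.0
    for i, (start, rate) in enumerate(zip(breakpoints, rates)):
        end = breakpoints[i + 1] if i + 1 < len(breakpoints) else float("inf")
        if rate * (end - start) >= target - used:  # event falls in this piece
            return start + (target - used) / rate
        used += rate * (end - start)

# e.g. a low early hazard that ramps up after weeks 10 and 20 (illustrative)
t = sample_piecewise_exponential([0.0, 10.0, 20.0], [0.02, 0.05, 0.08])
```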
30:10
In terms of those, there's a question of, well, how do you select the best one of them? Well, you can fit each of those models, of course, and find the best estimate of the exponential or Weibull parameters, but there are considerations when you're selecting between different models. Like, how do I pick this model over that model?
30:28
You could obviously just use your standard model-fit statistics, like AIC, and things like that. But there are more explicit algorithms for this particular purpose, in particular for piecewise exponential models, where there's a very thorough type of approach where you have, say, one up to X number of change points proposed.
30:54
You fit each of those, dynamically finding the best change points for each of those assumptions, have some penalty for having too many change points, and then pick the best one based on some standard, like p-value-based, comparison of each of the models in terms of model fit. We won't be going into that today, but it's very interesting; I'm happy to talk about it at some other point. Machine learning can be used here, again, if you're particularly interested. Non-proportional hazards, just to mention, are kind of slightly different. As I said, because
31:24
the hazard rate is changing over time, the projections that you make at some point may not be appropriate for what's going to happen going forward. For example, as I said, for a delayed effect model, you would probably be overestimating the event rate, if you assume the rate stays like that.
31:43
If you did that at an early point, that means that you may end up thinking the trial is gonna end sooner than it actually is, when really the delayed effect ends and the actual divergence between the control and treatment event rates happens later. So, that's something to be very aware of, and there are some individual papers on, say, using a cure-type model, or other types of approaches, to deal with that situation. But really, it's probably best to use your judgement. If you know there's going to be a delayed effect, or you think there could be a delayed effect, then you probably need to be doing some kind of interim monitoring or communicating with the data monitoring board to let them know what's going to happen, in terms of projections and every aspect of the interim analysis and stuff like that; that needs to be factored in. As I just mentioned, other considerations are external data, and being able to use that in a kind of Bayesian approach to make these projections. There's some decent papers on that.
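The AIC comparison mentioned above is mechanically simple; here's a hedged Python sketch under a right-censored exponential likelihood (the data are made up, and a real change-point search would fit each richer candidate model the same way and prefer the lowest AIC):

```python
import math

def exp_loglik(rate, times, flags):
    """Right-censored exponential log-likelihood: each observed event
    contributes log(rate); every subject contributes -rate * time at risk."""
    return sum(flags) * math.log(rate) - rate * sum(times)

def aic(loglik, n_params):
    """Akaike information criterion: 2k - 2 * log-likelihood."""
    return 2 * n_params - 2 * loglik

times = [2.0, 5.0, 3.0, 8.0, 6.0, 1.0, 9.0, 4.0]   # follow-up times (illustrative)
flags = [1, 0, 1, 0, 1, 1, 0, 1]                   # 1 = event, 0 = censored
rate_hat = sum(flags) / sum(times)                 # MLE under one constant hazard
aic_exp = aic(exp_loglik(rate_hat, times, flags), n_params=1)
# fit a candidate piecewise or Weibull model likewise and keep the lower AIC
```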
32:38
And then, just as I mentioned, there's kind of a nice analogy between this case, if you're looking at blinded versus unblinded projections, versus blinded versus unblinded sample size re-estimation, where there's consideration that you could use something like an expectation-maximization or similar algorithm to try and extract what the individual per-group rates are, even though you don't have access to the treatment labels. Whether that's a worthwhile process to pursue, that's actually something that is more based on some work I've done individually. But it's an interesting research area if anyone is particularly familiar with both areas or aspects of the problem.
33:15
OK, so, I think we have about 20 minutes remaining, so I'm going to go through a variety of different scenarios, effectively, for an example with some data; it's real-world data that's available that I've rejigged to make it more amenable to this. But, basically, we're going to assume we're in a survival trial, at which point we've reached 50% of our overall events goal, and an interim analysis is occurring. And now, we want to know: is our trial on track, based on how we've done so far?
33:47
At this point, enrollment happens to be about 80% complete, not exactly 80%, but around 80% complete. And when we consider sites, we have 118 out of 127 sites opened. And in this example, we're going to go through a variety of different aspects, ranging from, like, what happens if you have a global versus a site-level enrollment process?
34:09
What happens if we use an unblinded versus a blinded survival model, and then the different types of survival model that are out there for modeling the survival process. And you can see on the right-hand side here just a brief summary of where the trial is right now, in terms of what time we're at in terms of weeks, our current sample size, events and dropouts, then our targets, and then how many sites are open.
34:37
So, just to say, this is going to be done in an experimental version of nQuery that we're using internally. So, just to re-emphasize, this is based on features that are coming in a very near future update of nQuery, but which are not available right now in nQuery; just to clearly emphasize that upfront, so that you don't go looking for this. But, basically, this is a sneak preview of what this will look like. This is obviously an early build, so apologies if there's any bugs or issues like that; obviously, we're still actively going through this. And if there's things in there that you think should be there, that kind of feedback is very highly welcome, because it helps us to emphasize what needs to be done, either in this initial release, or in one of the many further updates that will occur for this type of feature in the software.
35:30
Just to say, how this works within nQuery: it's not too dissimilar from a standard nQuery table, in the sense that, when you're selecting elements on certain screens, you'll have this help card on the right-hand side, which will give you details of where you are in the current process. And then, on the left-hand side, you'll see the workspace summary, which will contain both any data that you've uploaded to the current workspace, the steps that you have done so far in setting up your prediction, and then, as we'll see later on, the reports, charts, and other ancillary information that you may have an interest in for understanding your prediction based on what's happened so far.
36:12
And then, in this main window, that's where the action will be. This is where we'll set up our prediction, and where we'll see reports and all of the other important information that we want. And when we're going through this wizard process, you'll see there's a variety of different steps; at each step, to go to the next step, we simply have to select OK here, or the Next key down here.
36:32
And, so, we'll go straight into a simple case where we have a survival trial.
36:41
Where, as we mentioned, about 400 of the 500 people have been recruited, and around half, or, sorry, half
36:47
exactly, of the required number of events, 374, has occurred at this point.
36:53
And we initially will assume that we actually, for some reason, have access to the unblinded events, data.
37:00
We know when each person had their event, and also which group they happen to have been in, so which treatment group they happen to be in. And initially we'll use subject data only. So this is to say that we're going to use subject-level data to make our prediction, but we won't know, or won't use, our knowledge of how each individual site is doing for the purposes of making future projections of the enrollment.
37:25
So, I think, just to note upfront that, in terms of the events process, we're not assuming any site-specific effect.
37:32
So, we're assuming that people will have the event, you know, from some kind of common survival process, regardless of where they are in the study. I think that's usually a reasonable assumption.
37:44
It's an assumption that, certainly, you want to be true before you do your analysis. Having a site-specific survival effect, while not being hugely problematic in the sense that, you know, you can deal with that, is not something we would usually make assumptions about a priori, before we get to the end of the study.
38:06
Now, that kind of site-by-treatment-effect interaction is something we can usually not worry too much about. Stevenson talks about that in one of the references here, in terms of controversies over enrollment, and that would cover event rates as well.
38:24
So, this is where we're gonna be setting everything up. I think, just to show briefly what the data looks like.
38:28
We could do that within the software, or I could show it externally; it's all based on CSV data at the moment. We have the subject data, which we'll look at initially.
38:37
And basically, in this case, we have some region; we're not gonna be using this today, but if you want to know, the kind of region that's been assigned, in this case, is the EU, the US, and I think it's Australia, the last one. And then we have this SID, which is our site ID.
38:52
So, this is to let us know which site each subject came from, and that will be important when we get to the site-level data and making predictions on a site-level basis. Then we have the arrival time.
39:03
Which is really all we're interested in predicting if we were looking at, for example, an enrollment-only prediction. So we can do enrollment prediction, of course.
39:11
But in this case, we happen to have a survival-type trial, and we have all the data required to do a survival prediction, because we know when they arrived, and then we know at what time they had their event, and what their current status is. So, that's what we'll be focusing on today.
39:28
But, we know their arrival time. and then we also know their follow up time.
39:34
Remember, the follow-up time, the definition of that depends on their current status.
39:40
So, current here is basically just the status, where you are right now, and in this particular dataset, zero indicates that you have not had the event at the time that this data was taken, which is around 25 weeks, I believe, in this case.
39:56
So, remember, in this case, that, the zero indicates that you haven't had the event or you would have been censored.
40:03
If the study ended at this point, one indicates that you had the event, and then minus one indicates that you dropped out.
40:13
And there's only three in this dataset, so they're kinda hard to find.
40:18
But minus one here, that's if they dropped out, so some other process that led to them no longer being available for follow-up. And then treatment is here: one equals treatment group, zero equals control group.
40:29
So nothing too complex there.
40:31
But this was just to emphasize, again, the follow up definition depends on your current status, which is to say that if you have not had the event, this is basically how long you've been in the study.
40:43
Since our arrival time, until the current time, which for these people is basically equal to the sum of these two things, which is around 24.
40:52
You can see that's 24.9, 25. But for people who have had the event in the current data, this is how long they were in the study until they had the event, or until they dropped out, if they have the dropout status. So, just to note that there's a different definition between these two: this is how long they've been in the study so far, if you're still available, whereas it's just how long you were in the study until you had the event or dropped out, if you already have the dropout or event status, in this particular dataset.
41:25
So, as I said, we're going to do an unblinded events prediction, using subject data only. And then we're going to assume that the enrollment status is ongoing, because we're assuming that only 400 of the required 500 people have been recruited.
41:37
Since we're using real data, we have to select the dataset and then assign the required fields here. The definitions of those are given on the right-hand side, if you're interested. But basically, you want to know when each subject arrived into the study, which is given by arrival.
41:51
In this case, we want to know which treatment group each subject is from, because we're doing an unblinded events prediction; if we were looking at blinded, this would be unavailable and we'd skip this step.
42:00
Basically, we need a status indicator, which is like, what is the status of the subject? And just to note here, that, because in this case, there are three different labels available. It assigns those three labels.
42:12
I think, by default, if you have 1, -1 and 0, it will understand what that means, and put them in this order. If you have other labels, it will try to assign them to these.
42:20
But you can change these yourself, obviously, by going through this menu.
42:24
But just to say that, if one of these indicators was not in the dataset so far, for example, if no one had dropped out, it's perfectly reasonable for you to enter your own label here, like zero, or, you know, D, or whatever other label you want to use, to indicate that you actually do want to assume some kind of dropout process happening from this point onwards. So I think that's important to note here, that you can enter that manually. And then we need to know the time on study, which, as we know, depends on whether you've had an event or dropped out, or whether you're still available. It's called follow-up here, and in some sense, the follow-up will be exactly that only for the people who have already had the event or dropped out.
43:06
But obviously there's a lot more follow up to happen for the subjects who are still available or who've been censored.
43:12
So once we've done that, and we're looking at a survival process, there are kind of two additional steps: we need to model, firstly, the remaining accrual process.
43:22
So we want to know, well, how long is it gonna take for us to reach the 500 people? By default, it's set to twice the current sample size, but in this particular example, it was closer to 500. And then the follow-up option here is just saying that, if you want to censor people after a fixed period of time, you can do that. But in most analyses, this would usually be at the end of the study that we would censor.
43:47
We would right-censor people at the end of the study in survival analysis, rather than finishing our follow-up for each subject after a fixed period.
43:55
This wouldn't be available if we were doing enrollment only. And you can see that the current calendar time is around 25 here, 24.9192. This is a little grey here; this will be changed in the final release. And you see there's only one option for the accrual model here, the Poisson model, but there will probably be more options available in future updates. And then we have our accrual table down here, and by default, we basically assume that whatever the recruitment rate has been so far is going to be the same going forward. In this case, it's about 16 per week. So this is the rate per unit time, depending on the time units used for the arrival times and the time on study. And just like in any survival analysis, you want to ensure that all time-related units have been given on the same time-unit scale; you don't want to mix months and weeks, you want them all in months, or weeks, or some other time unit.
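The Poisson accrual idea can be sketched in a few lines of Python (an illustration, not the nQuery engine): under a Poisson process, inter-arrival gaps are exponential, so projecting the remaining enrollment time is just summing gaps, repeated across simulations. Using roughly the numbers from this example (~100 subjects left at ~16 per week, both assumed):

```python
import random

def project_accrual(remaining, rate_per_week, n_sims=1000, seed=42):
    """Simulate the extra calendar time needed to recruit `remaining`
    subjects under a Poisson process with constant rate `rate_per_week`."""
    rng = random.Random(seed)
    durations = sorted(
        sum(rng.expovariate(rate_per_week) for _ in range(remaining))
        for _ in range(n_sims)
    )
    # percentile summary across simulations: median plus a 95% interval
    return {
        "median": durations[n_sims // 2],
        "lower95": durations[int(0.025 * n_sims)],
        "upper95": durations[int(0.975 * n_sims)],
    }

summary = project_accrual(remaining=100, rate_per_week=16.0)
# the median should sit near 100 / 16 = 6.25 additional weeks
```

Which lines up with the "additional six weeks or so" figure that appears in the report later in this example.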
44:48
We then get to our event and dropout information. The default number of events has been set to 374, twice the initial number of events; that default is correct here. And then we have a variety of different models that we can use. You can ignore
45:00
this; it's just a bug, I believe, and should work regardless. So there's a variety of different models available in this release, and, as I mentioned, a lot of other options that we'll be adding going forward in the future. But for now, the simplest case is a simple exponential model, where we model the treatment and control event rates using exponential rates, assuming a constant event rate.
45:25
So the length of time for, say, an extra 25% of events to happen is the same wherever you are in the study.
45:35
Coming from 50%, I should say.
45:38
And we have the best fitting assumption of what those exponential rates would be. Those are what's given by default here.
45:45
So we've used the actual data to extract the best average event rate for the survival process, based on the data so far, and that's what is given as the default. But you're completely free to change these however you particularly want.
45:58
And, if you want to add a piecewise model, you can also do that very easily by selecting a number of hazard rates greater than one, where you can have different event rates happening
46:09
depending on where you are in the study. And the starting time here is technically from the current time, so if this is 10, this will be 10 weeks after the current time, and these obviously need to be increasing in time.
46:22
And you can then change these.
46:23
So, if you wanted a hazard ratio that changes over time, say, for example, gets more extreme over time, then you can easily do that
46:34
by editing this here, and it will automatically calculate the treatment event rate based on those particular hazard ratios.
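The arithmetic behind that calculation is simple under an exponential model; a quick hedged sketch (the rates below are made up for illustration):

```python
import math

def treatment_rate_from_hr(control_rate, hazard_ratio):
    """Under proportional hazards with exponential survival, the treatment
    hazard is just the control hazard scaled by the hazard ratio."""
    return control_rate * hazard_ratio

def exponential_median(rate):
    """Median time to event for a constant hazard: ln(2) / rate."""
    return math.log(2) / rate

ctrl_rate = 0.04                                   # illustrative, per week
trt_rate = treatment_rate_from_hr(ctrl_rate, 0.7)  # hazard ratio ~0.7, as here
# a hazard ratio below 1 means a lower event rate, hence a longer
# median time to event in the treatment arm
```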
46:42
There's also a Weibull model, which basically extracts the best-fitting Weibull model based on the information so far.
46:50
And of course, you can make those changes as required there as well. But let's for now just assume the simple exponential model with a hazard ratio of around 0.7.
47:00
Just a note here that you can also model dropout, basically, in the same way. The dropout model in this case, because there are only three dropouts, obviously implies a very much lower exponential event rate.
47:11
But if you wanted a piecewise model for dropout, you can do that as well.
47:15
There's no Weibull model for this at the present moment, but that may be available in a future version.
47:25
So now we've selected our accrual model, or more accurately, we've used the default assumption that what's going to happen going forward is based on what's happened so far, and we've entered that we want 500 people and we need 374 events.
47:39
And just to say that, in a survival study, we're assuming that the number of events is primary over the number of subjects. So, if you happen to reach the target number of events before you reach the target sample size, then we would assume you would stop the trial early and not do any more recruitment of people in that particular case, because it's the number of events driving things like the interim analysis, or the power at the end of the study, and stuff like that.
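That stopping rule — events drive the milestone, not subjects — is easy to express; a small Python sketch with made-up calendar event times:

```python
def study_end_time(calendar_event_times, target_events):
    """A survival milestone is the calendar time of the target_events-th
    event; enrollment beyond that point is irrelevant to the milestone."""
    if len(calendar_event_times) < target_events:
        return None  # target never reached in this simulated trial
    return sorted(calendar_event_times)[target_events - 1]

# one simulated trial's event times in weeks (illustrative values)
events = [12.0, 30.5, 18.2, 25.0, 41.3, 22.7]
end = study_end_time(events, target_events=4)  # calendar time of the 4th event
```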
48:05
So the sample size is really just an additional parameter we need to get the correct assumption of how many events are required. Once we've reached the number of events, we don't really need the rest of the subjects in that particular case.
48:18
Because we're doing a simulation, we need stuff like the seed and number of simulations.
48:21
And in this case, we can specify some ancillary, additional information, such as a summary statistic for how each simulation did, and the subject-level data for the first simulation.
48:33
So we click Run. We get a nice little loading menu here, telling you how the study has gone so far.
48:40
And then when we're finished here, we'll get a nice-looking chart here for the various outputs.
48:44
So by default, we'll get the enrollment chart, and you can see here, it's kind of, you know, there might be an issue with this particular model.
48:50
Because you can see, you know, obviously we're starting here with quite a slow rate, then it goes up very, very quickly.
48:58
So at that kind of scale, you can see the rate increases rapidly, and it's kind of constant between around, you know, five weeks and up to about 20 weeks, and then there seems to be some tapering off. Our prediction really is assuming that this kind of rapid enrollment rate here is what's most likely to happen from this point onwards.
49:16
And you can see here both the estimates of what's going to happen at each time point going forward, up to around 31 weeks.
49:24
So, we're assuming that enrollment is going to take around 31 weeks or so in total, and then there's a 95% prediction interval for each of those time points as well.
49:33
And if you want to see the actual information used to generate that plot, they're available in these tables here, down here, and you can see the information explicitly.
49:42
So, if you want to see that in tabular format, you can. Then you can see that we have similar plots available for the events process, and for the dropout process, if you're particularly interested.
49:53
But for the events process, you can see once again, you know, if you ignore this discontinuity, it kind of makes sense.
50:00
But this discontinuity, obviously, is leading to this exponential model maybe not being ideal for this. But I think in any case where there's a discontinuity like that, any kind of explicit model
50:11
or standard model would probably struggle a little bit. But unless there's some reason for this to be a major concern,
50:17
your hope is that, once the trial is back on track, things will go back to normal, and then this would be a reasonable projection based on that.
50:26
Then there's a detailed report of what's happening in the trial so far, providing you full information on the left-hand side of what went into the model.
50:38
And then on the right side, providing you with the actual results of the simulations, saying that, on average, 500 people were recruited and the accrual period took around 31 weeks.
50:49
That's in total. So obviously we're talking about an additional six weeks or so above where we were prior to this.
50:56
The study overall took around 44 weeks to get to the required number of events. And then the target events were usually reached; it's just saying here that 100% of the time we reached the target events. So, for example, if the dropout process was very aggressive, it could be the case that you basically run out of people available to have the events, which in some situations would lead to not having the total required events occur.
51:18
About seven people dropped out on average, and you can see that the average follow-up was around 12.89, which includes people who had the event and people who are being censored. So, on average, someone spent around thirteen weeks on the study. And that same information is available on a per-group basis, and you're also provided the percentile summary of each of those important values.
51:40
Basically giving you, say, the range from around 30 to 32 weeks for how long the accrual period took, and around 41.8 to about 47 weeks to get to the actual end of the study, where they reached the target number of events.
51:57
Actually, I'm not sure; I'm probably breaking my own advice here, and I think this might actually be in months. But regardless, as long as they're all in the same time unit, it's not too much of an issue.
52:09
And then just to mention that, you know, if you wanted to know what happened in each simulation, that's available here.
52:17
So you can see this gives you the information on what happened in simulation one, you know.
52:21
So in this case, the accrual duration took around 30.8 weeks, and the average follow-up is around 12. But in the per-subject simulation data, you can also see the individual simulated data for each of the subjects in the first simulation.
52:37
And you can see here that, for certain cases, there are two columns, and those are the ones that are simulated. And the ones where there's only one value provided, that's basically the data that was already in your dataset: the people who already had the event, or already dropped out, in this particular dataset.
52:56
So, if we return to the first step, we go to adding subject plus site data. And, of course, at this point, we could obviously go back and make lots of changes to the prediction, and we could try different scenarios. I'm not gonna go through that today; I think we're kind of short on time. So I'm going to show, instead, just the effect of modeling sites individually.
53:25
So, basically, dealing with the enrollment process not only as a global process, but on a per-site basis.
53:34
So, the first step is selecting the subject-level dataset. It's still required, and it's basically exactly the same, but we now need to specify the site ID field that we saw previously.
53:44
Then, we need to select a site dataset, and by default, if you've only taken in two datasets, it assumes the second one is the site dataset.
53:49
In this case, it happens to know which is which; if you have the columns in the same order as the template datasets we'll have, this will automatically be filled out for you. But you can see here that we need to have a site ID, so we can link the subject-level dataset to the site-level dataset, and we need to have an estimate of what we think the rate of accrual will be in that site.
54:12
And we need to have a cap. So basically, how many people are we going to allow for that particular site that will be set to, like, something like 999, or a million or something. If you don't want to worry about the cap.
54:22
For sites that are unopened, we need a starting and an ending time for when they could open. But for sites that are open, we need the time at which they actually did open.
54:32
And with that information, we then get an accrual options table, which is a lot more complex than the previous version that we saw, where we just had a very simple table in this spot, and then some summary information up here. But let's assume the target sample size is the same as before.
54:51
And subjects enter the study with 127 sites here, of which 118, I believe, are open.
54:58
So you can see the open ones here happened to be first.
55:00
And we effectively have a situation where the planned accrual rate given for each site is taken from your site-level dataset. So, you can see that here you have your region, your site ID, your enrollment cap, the maximum number of people allowed in that site, and the time that the site opened.
55:20
So, that's in reference to, you know, since the beginning of the study.
55:25
The accrual rate, either estimated or assumed, for that site.
55:33
So, let's return to these options.
55:38
We have the site initiation time, which is when that site opened, and then we have the number of accruals that have actually occurred at that site so far. It's important to emphasize that this particular column is actually taken from the subject-level dataset, because it's counting how many times each site ID has occurred in the site ID column of the subject-level dataset.
55:59
And, in theory, you would expect, for sites that are open and that have recruited people, that the implied recruitment rate from this number of accruals, over the period from the site initiation time to the current time, would be around equal to the one given in the dataset.
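That implied per-site rate is just a ratio; a short Python sketch (numbers illustrative) of the cross-check you could do between the planned rate column and the subject-level data:

```python
def implied_site_rate(n_accrued, site_open_time, current_time):
    """Observed accrual rate implied by a site's history:
    subjects recruited / time the site has been open."""
    open_for = current_time - site_open_time
    if open_for <= 0:
        raise ValueError("site has not opened yet")
    return n_accrued / open_for

# e.g. a site that opened at week 5, with 8 subjects recruited by week 25
rate = implied_site_rate(8, 5.0, 25.0)  # 8 subjects over 20 weeks
```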
56:20
But that is not necessarily true, and it's this accrual rate per site that's going to be used by the recruitment model. So just a note there that they should be consistent, but they don't have to be.
56:33
And you can see that the accrual rate is editable for all sites. So even though this is what we planned in our initial site-level dataset, what we put into the dataset, we are free to edit that as much as we want, you know, within this particular tool.
56:52
And then, just to briefly mention, at the bottom you can find the unopened sites. These are the sites which haven't opened yet, but which have been included in the dataset as being in the original plan.
57:02
So, you can see that we have these ones opening between around 25 months, and 26 months, mostly. So they're gonna open pretty soon.
57:09
And then we can see that the rates are all kind of rough, like standard estimates with various enrollment caps, but these are fully editable as well. Obviously, we have no information about them, so they're going to be fully simulated; from the perspective of the algorithm, they have no fixed parameters for when they started, or how many they've already enrolled, and stuff like that.
57:28
You can also add sites, if you want.
57:32
So if we increase the number of sites from 127 to 128, you'll see there's an additional column here, sorry, additional row here, that we can fill in however we want, with our own site ID, start and end time.
57:44
But we'll just leave that aside for now.
57:50
So, that's basically the only real difference for this simulation compared to the previous one, because, when we get to the event and dropout information, effectively things are exactly the same. As I said, we're not assuming any per-site effects on the survival process, because that's something we would usually not assume, or at least not know, a priori in our clinical trial. We would usually not know whether we think more events are going to happen in the US sites than the EU sites, or whether the survival process is going to vary across those. And in most clinical trials, where we're interested in finding the average treatment effect, that's generally not true; the variation is mostly accounted for by random variation, and not by where people happen to live, in terms of that type of covariate. Though, of course, the covariate of site could easily be related to things like gender, or ethnicity, or age, ones that could have an influence.
58:46
So, obviously, that can happen, but it's not something we're usually as interested in modeling in terms of this type of prediction.
58:56
The only difference at this stage is that you can also get a summary for each site for how they did in all the simulations. And then also, you can get how each site did individually in a certain number of simulation runs.
59:13
The simulations for this case take a little bit longer, typically.
59:20
But you can see here that, by using the per-site data rather than the original global rate, things have ended up being, you know, slightly quicker here.
59:30
So, within this version of the software, and that's possibly something that will change between now and when this is available to you,
59:38
By default, it will just overwrite the current simulation, so you can use a single workspace and overwrite it as many times as you want.
59:44
But you can see here that there has been a reduction in the study duration and the accrual duration compared to the global rate we had previously. It's not that different, though, in the sense that we still see the same rapidly increasing recruitment part of the process being projected.
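The per-site projection described here can be sketched with a small Monte Carlo, assuming each site recruits as an independent homogeneous Poisson process at its own constant rate. This is only an illustrative toy model (the function name, rates, and structure below are assumptions for the sketch, not nQuery's actual implementation):

```python
import heapq
import random

def simulate_accrual_time(site_rates, n_target, n_sims=500, seed=1):
    """Mean calendar time to enrol n_target subjects, pooling per-site
    Poisson recruitment streams (one exponential clock per site)."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_sims):
        # Next arrival time for each active site, drawn from its exponential clock.
        heap = [(rng.expovariate(r), r) for r in site_rates.values() if r > 0]
        heapq.heapify(heap)
        t = 0.0
        for _ in range(n_target):
            t, r = heapq.heappop(heap)               # earliest next enrolment
            heapq.heappush(heap, (t + rng.expovariate(r), r))  # schedule that site's next one
        results.append(t)
    return sum(results) / len(results)

# Illustrative: 20 sites each recruiting ~0.37 subjects per time unit,
# target of 100 subjects; expected time is roughly 100 / (20 * 0.37).
sites = {f"site_{i}": 0.37 for i in range(20)}
print(simulate_accrual_time(sites, 100))
```

Because the superposition of independent Poisson streams is itself Poisson, the mean here should sit close to the target divided by the summed rate; the value of the simulation is that it also gives the spread and per-site breakdowns, as shown in the webinar.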
1:00:09
And these other values aren't really that much different either, which isn't surprising.
1:00:13
Because obviously, these are taken from the same trial effectively.
1:00:16
So we wouldn't expect the effect to be too disproportionate.
1:00:20
And then, just to mention, you have this per-site-level information here, so for each site you can see, on average, how many people were recruited. So in site 101, around 13 people were recruited, with the average max time, i.e. when the last person was recruited at the site, around 26.58.
1:00:38
And then the average duration it was open was around 35 time units, with an average rate of 0.371.
1:00:45
And obviously, we could compare that to the original table, if you're particularly interested. We wouldn't expect them to be exactly the same, because we're talking about quite small sample sizes here, for example, around 15 people per site, so some variation is unlikely not to happen. Here it's only about 0.1 off what was originally specified.
1:01:08
And if we go into the simulation summaries, they're more or less exactly the same, except there are some additional rows giving you information on, for example, how many sites were opened on average. In this case, all sites were opened.
1:01:20
Though if you were to create a new site whose opening time was between, say, 30 and 40 weeks or months, depending on the time unit,
1:01:31
then obviously that site would be unlikely to have opened in this case, because in the vast majority of cases the accrual was already over at around 28.57 time units, and the actual study was over at around 40.
1:01:46
We can see there's more summary data down here as well, including the number of sites opened in each case.
1:01:52
You know, for most cases, we'd expect these ...
1:01:56
and you'd expect these ones to vary in terms of the dropouts and the accrual duration.
1:02:03
OK, so I think we're pretty much at the end of our time; apologies if I run over a little. In terms of discussion and conclusions: delays in clinical trials are very common, and this is the most common reason why you would want to make predictions of when key trial milestones, like recruitment and event milestones, are likely to occur.
1:02:20
There are a number of considerations that you should take into account when making your predictions.
1:02:25
Like: what's your target? What model do you hope to use? What information is available to you, and when do you want to predict?
1:02:31
The enrollment prediction is focused on the recruitment process.
1:02:35
You commonly want to do that on a site-level basis; that's probably the most common situation. And then in survival trials, milestones are based on a fixed number of events.
1:02:44
So, you need to model the survival process while also dealing with the effect that the enrollment and dropout processes have, including modeling enrollment if the accrual process is still ongoing.
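A minimal sketch of that combined model, assuming uniform accrual over a fixed window and independent exponential event and dropout times per subject (all names and parameter values below are illustrative assumptions, not the actual nQuery model):

```python
import random

def time_to_n_events(n_subjects, accrual_time, hazard, dropout_rate,
                     n_events, n_sims=500, seed=7):
    """Mean calendar time until the n_events-th observed event.

    Each subject gets a uniform entry time over the accrual window and
    competing exponential event / dropout times; an event is observed only
    if it occurs before the subject drops out.
    """
    rng = random.Random(seed)
    milestone_times = []
    for _ in range(n_sims):
        observed = []
        for _ in range(n_subjects):
            entry = rng.uniform(0.0, accrual_time)   # uniform accrual
            t_event = rng.expovariate(hazard)        # time from entry to event
            t_drop = rng.expovariate(dropout_rate)   # time from entry to dropout
            if t_event < t_drop:                     # event observed before dropout
                observed.append(entry + t_event)
        observed.sort()
        if len(observed) >= n_events:                # milestone reached in this run
            milestone_times.append(observed[n_events - 1])
    return sum(milestone_times) / len(milestone_times) if milestone_times else None

# Illustrative: 300 subjects accrued over 12 time units, hazard 0.0658,
# small dropout rate, milestone at 150 events.
print(time_to_n_events(300, 12.0, 0.0658, 0.005, 150))
```

Returning `None` when a run never reaches the milestone makes the "milestone may not be reachable" case explicit; a fuller model would also report what fraction of runs hit the target at all.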
1:02:54
So, I think that covers everything I wanted to cover today. I just want to thank you so much for attending. If you have any questions based on today, including about what may or may not be available in this new feature coming later this year, you can email us at info at ... dot com.
1:03:10
If you want further information on nQuery in general, you can go to ... dot com. I believe some initial information on this new feature is there as well.
1:03:15
It's available at this point, but it's fairly sparse at the moment.
1:03:20
Just to say that if you want to take a trial of the software, whether you don't have the software or you don't have access to certain features in it, you can always take a free trial online in your own browser by signing up at ... dot com forward slash trial. And when this feature becomes available, it will be included in the trial version of the software.
1:03:42
So you literally just need to enter your email and you can get access to use it within your browser. If you want any information on the software in general, like tutorials or recordings of previous webinars, you can find them at ... dot com forward slash start.
1:03:56
Just to say that I have a bunch of references here which can give you information on things like how often delays happen, and on the various models proposed for recruitment modeling and event modeling. These slides will be sent to you afterwards. So, I'm going to take a moment here to answer some questions.
1:04:15
Obviously, I think we're slightly past the hour, so if anyone needs to leave, feel free to do so. I'm going to take a moment to look at the questions and answer a couple before I finish up, and any I don't get to, I will answer via email afterwards.
1:04:42
So, there's some more questions about the software, and stuff like that.
1:04:46
Just to emphasize again: this is an in-development feature.
1:04:52
Our plan is to release it later this year, so this was a sneak preview. It's not available right now, just to be clear about that.
1:04:58
So, just to be aware of that.
1:05:01
If you have suggestions for what should be in it that you didn't see here, feel free to get in contact, or to ask whether some feature may or may not be available; that feedback is very valuable to us. There is one question here that I do want to cover right now, which is: what would be the effect of not having access to the treatment labels, i.e. having blinded data? So I'll briefly show what that looks like in terms of the setup.
1:05:26
We'll go back to the subject data only, just to simplify things, and go to the blinded events option here. The only real difference here is that, on the setup part, we obviously don't have a treatment indicator column.
1:05:38
You'll notice there are fewer options available here when we're selecting which columns correspond to our required inputs.
1:05:44
And then on the recruitment part, this is basically exactly the same.
1:05:49
We basically don't assume the recruitment process varies by treatment group.
1:05:53
I think that's probably a reasonable assumption.
1:05:54
We probably shouldn't be recruiting differently for different groups; that's how the randomization should work. But you will see here that, for the event modeling process, the response distribution is now for a global event process. Basically, you don't have a hazard ratio and two individual event rates.
1:06:13
We just have what the hazard rate would be if we assume that everyone comes from the same treatment group, i.e. that the effect size and the event rate were the same in each group. The same stands for the Weibull as well, and for the dropout process as well.
1:06:31
So, that's the major difference here.
1:06:33
We're assuming that everyone's coming from this average hazard rate of 0.0658.
1:06:39
With no distinction that, you know, half of these are coming from the treatment group, whose event rate we're probably hoping is substantially different from the event rate
1:06:48
we're expecting in the control group; this is kind of the average of those two.
1:06:54
And obviously we're hoping that, at this point, the mixture between the control and treatment groups is roughly similar. But that's the major difference in the process there.
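One way to see where a single blinded rate like 0.0658 can come from: under roughly 1:1 allocation, a naive blended hazard is just the allocation-weighted average of the two arms' exponential hazards. The control hazard and hazard ratio below are illustrative values chosen to reproduce that figure, not the webinar's actual inputs:

```python
def blended_hazard(control_hazard, hazard_ratio, alloc_treatment=0.5):
    """Naive blinded 'average' hazard for a mixture of two exponential arms.

    This arithmetic blend is only a first-order working rate: the true
    mixture survival curve of two exponentials is not itself exponential.
    """
    treatment_hazard = control_hazard * hazard_ratio
    return ((1 - alloc_treatment) * control_hazard
            + alloc_treatment * treatment_hazard)

# Illustrative: control hazard 0.08 with hazard ratio 0.645 blends to 0.0658.
print(blended_hazard(0.08, 0.645))
```

This also makes the efficiency loss the speaker mentions concrete: with blinded data you can only estimate this single blended rate, not the control hazard and hazard ratio separately.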
1:07:04
And we can just simulate roughly what this would look like compared to the unblinded events case.
1:07:13
You can see here, after the loading screen, that the study duration is definitely a little bit higher, while the accrual duration is about the same.
1:07:22
So obviously, we lose a certain amount of efficiency by not being able to model the process on a per group basis.
1:07:29
But I think realistically, this scenario is probably more likely to happen than the unblinded events case. It's also, you know, less difficult to get your head around, or less controversial.
1:07:41
Because if you have the information, why wouldn't you model each group individually and model that properly?
1:07:46
But yes, you do lose some efficiency. I think, though, that using blinded data, or only having access to blinded data, basically where this treatment column doesn't exist, is a much more likely scenario for the trialists or the people involved in the trial themselves
1:08:03
being able to do this type of predictive modeling.
1:08:07
I think, you know, having access to the treatment groups is just probably much rarer, effectively.
1:08:13
OK, there are a few other questions, which I will answer via email, but I'm going to finish up now, and I just want to thank you once again so much for attending today.
1:08:20
Hopefully, you've seen something here that might be of interest for the future development of our software, and hopefully the sneak preview got you excited for what's coming in our future updates. There will also be a set of new tables added alongside this, related to prediction modeling and sample size, so if that's a topic you're interested in, please get in touch. But yeah, I think that's more or less done. Once again, thank you so much, and goodbye. I'll talk to you soon.
