The randomized controlled trial is a valuable tool for understanding the effects of interventions. Like any study design, RCTs have limitations, and they must be properly designed, conducted, and analyzed to yield useful insights. While those actually involved in RCTs usually seem to understand this all too well, others seem to think they don't, as I am regularly reminded on social media.

These people are often just well-intentioned researchers working in areas where RCTs aren't really possible, and so haven't had the opportunity or need to understand them. So while it's annoying when someone misexplains some well-understood aspect of RCTs to me for the umpteenth time, it's fairly harmless behavior.

What we can't ignore, however, is when misconceptions about RCTs are published in medical and scientific journals. To be clear, I do not have a problem with valid critiques of RCTs, nor with honest discussions of their limitations. Quite the opposite, in fact: like many statisticians and trialists, I love talking about that stuff. Just ask my barber. But misconceptions dressed in authority are a real problem, since they can be thoughtlessly used by others to justify their otherwise unfounded distrust of trials. And what will happen when we get lots of these misconceptions echoing through the scientific literature?

So, with the naive hope that sticking my finger in the dam could actually make a difference, I want to try to clarify a common misconception about RCTs and explain how it contributes to sub-optimal trial design and analysis.

The misconception I want to discuss is the claim that we use randomization to balance confounders, a claim that has been published about a million times, by experts and novices alike. It's so common, in fact, that you might think I've taken leave of my senses to suggest that it's not true. But it isn't. Not only is the statement wrong, it's wrong twice: we don't use randomization to balance covariates in general, and we certainly don't use it to balance confounders.
When we use an RCT to evaluate an intervention, we do so with respect to one or more endpoints (or outcomes) that will be measured in the future, after the period of intervention. It could be blood pressure, death, quality of life, etc.

We want to understand the causal effect of the intervention on that outcome, but this is tricky. To really understand the effect of the intervention, we would need to give it to someone and measure the outcome to see what happened. Then we would need to reset the universe back to the exact point when the intervention was given, withhold it this time, and see what happened when they were left untreated. The difference in the outcomes between the two scenarios would be the causal effect of the intervention for that person.
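To make the idea concrete, here is a minimal potential-outcomes sketch in Python. This is my own illustration, not anything from the trial itself: all numbers, names, and the assumed treatment effect are invented. Each person has two hypothetical futures, and we can only ever observe one of them.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5

# Hypothetical potential outcomes (e.g. systolic blood pressure, mmHg):
# y0 = outcome if left untreated, y1 = outcome if given the intervention.
y0 = rng.normal(loc=150, scale=10, size=n)
y1 = y0 - 8  # assume, purely for illustration, the intervention lowers BP by 8

# The individual causal effect is y1 - y0, but it is unobservable:
# in reality each person reveals y1 OR y0, never both.
individual_effects = y1 - y0

treated = rng.integers(0, 2, size=n).astype(bool)
observed = np.where(treated, y1, y0)  # the only data a real trial yields

print(individual_effects)  # knowable only inside a simulation
print(observed)
```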
This is clearly a fantasy, but hope is not lost. Thankfully, we can mimic this counterfactual situation by randomizing people into groups; and since we are now talking about groups, we have to start talking about distributions of future outcomes.

So imagine that we have a group of patients who will soon enroll in an RCT, and we send them all to a clairvoyant who tells each of them what their future outcome will be if their life just proceeds as usual (i.e. if they never actually enroll in the trial and thus don't receive any intervention). Everyone writes this information down on a piece of paper, folds it up, and hides it away.

Some people might have a poor future outcome, while others have a better outlook. So while you can't see any individual's future (unlike the clairvoyant), you do know there will be a distribution of outcomes among the study participants; and depending on your clinical experience with the outcome in these kinds of patients, you might be able to make some educated guesses about the nature of that distribution, such as its average and variance. (In fact, if you can't make an educated guess about the distribution of the outcome in your clinical population, I would argue that you aren't qualified to run a trial…but I digress.)
Now let's randomize these participants into two groups, and intervene in one but not the other. Then we measure the outcome at the end and compare the two distributions, finding that they are different (e.g. the mean outcome in the intervention group is substantially better than that of the control). Now you have to answer the question, "Did the intervention work?"

Well, maybe. So let's cheat reality and ask all of the patients to pull out the pieces of paper with their futures written on them. You carefully write down the data and plot the distributions for the two groups, finding that they completely overlap. They are, for all intents and purposes, the same; the groups are exchangeable; they had the same baseline risk when the study began.
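Here is a small continuation of the earlier sketch, again with purely illustrative numbers: we randomize, apply a hypothetical intervention to one arm, and then compare both the observed outcomes and the clairvoyant's notes (the untreated futures, y0) across arms.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200

# The clairvoyant's notes: everyone's future outcome if left untreated.
y0 = rng.normal(loc=150, scale=10, size=n)

# Randomize into two equal arms and intervene in one.
arm = rng.permutation(np.repeat(["control", "treated"], n // 2))
effect = -8  # hypothetical treatment effect, for illustration only
observed = np.where(arm == "treated", y0 + effect, y0)

# The observed outcome distributions differ...
print(observed[arm == "treated"].mean(), observed[arm == "control"].mean())
# ...but the untreated futures (the clairvoyant's notes) look alike:
print(y0[arm == "treated"].mean(), y0[arm == "control"].mean())
```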
So when you intervened in one group but not the other, the distributions of the outcomes were different. But had you done nothing to either group, their distributions would have been the same (according to the clairvoyant). The intervention worked!

So what exactly did randomization do for us here? Did it guarantee that the two groups would have the same distribution of future outcomes? No, there is no such guarantee. However, we know that the chances of there being a substantial difference between them will drop as the sample size increases. So what randomization allows us to do is make probabilistic statements about the likely similarity of the two groups with respect to the outcome.
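You can see this behavior directly by simulation. The sketch below (my own toy example, with invented numbers) repeats the randomization many times at several sample sizes and records how far apart the arms' mean untreated futures tend to be; the typical gap shrinks as n grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def typical_baseline_gap(n, n_sims=2000):
    """Median absolute difference in mean untreated outcome (y0)
    between two randomized arms of size n each."""
    gaps = []
    for _ in range(n_sims):
        y0 = rng.normal(loc=150, scale=10, size=2 * n)
        idx = rng.permutation(2 * n)
        a, b = idx[:n], idx[n:]
        gaps.append(abs(y0[a].mean() - y0[b].mean()))
    return np.median(gaps)

for n in (10, 50, 250, 1250):
    print(n, round(typical_baseline_gap(n), 2))
# The gap falls roughly like 1/sqrt(n): no guarantee of balance,
# only increasingly sharp probabilistic statements about it.
```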
So back to our misconception. Please note that at no time have I mentioned the word covariates. The only thing I care about is the distribution of future outcomes (which is what people really mean when they say balance). To drive the point home, let's say that after we saw the two randomized groups shared the same distribution of future outcomes (because we looked at their notes from the clairvoyant), we noticed that one group had all the people with a strong prognostic indicator for the outcome (e.g. family history of hypertension in a trial where blood pressure was the primary endpoint). Should we care? No: conditional on the knowledge that the two groups have the same distribution of future outcomes, it makes absolutely no difference if there are other dissimilarities between the groups. Let this sink in.
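To make that point concrete, here is a deliberately contrived sketch (entirely hypothetical, and only possible because a simulation lets us play clairvoyant): two arms built to share the very same untreated futures, but with a maximally lopsided prognostic covariate. The comparison of outcomes is still perfectly fair.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100

# By construction, both arms share the SAME untreated futures...
y0 = rng.normal(loc=150, scale=10, size=n)
y0_control, y0_treated = y0.copy(), y0.copy()

# ...while a prognostic covariate is completely imbalanced:
famhx_control = np.zeros(n)  # no family history in one arm
famhx_treated = np.ones(n)   # universal family history in the other

effect = -8  # hypothetical treatment effect
estimate = (y0_treated + effect).mean() - y0_control.mean()
print(estimate)  # recovers -8 exactly: the covariate imbalance is irrelevant,
                 # conditional on equal distributions of future outcomes
```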
At this point you might say, "But we can't know whether the groups are exchangeable; we don't have a clairvoyant!" And of course you would be right. But returning to the point I made before: the goal of randomization isn't to create groups that are certainly the same. It's to help us make probabilistic statements about how similar they might be, with respect to the only thing that matters, the baseline risk (i.e. the distribution of their future outcomes).
Randomization of course does the same thing for all the other characteristics of the patients, measured and unmeasured. That is what CONSORT is getting at when it describes randomization as "balancing…prognostic factors", so the claim is technically correct after all. My objection is that this framing focuses entirely on the prognostic factors and doesn't mention the outcome at all.
This is completely counter to how we actually design trials, which we do with the outcome in mind. If the outcome is highly variable, then you know to run a larger trial and/or use a less variable outcome, in order to drive down the chance that there will be a difference in the baseline risk of the trial arms large enough to affect your interpretation of the trial's results. Importantly, the sample size you choose won't necessarily allow for similarly palatable probabilistic statements about differences in the distributions of other covariates (e.g. those that are even noisier than your outcome), but that's OK, for the reasons we've just discussed.
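As a rough illustration of that design logic, the usual normal-approximation sample-size formula for comparing two means makes the dependence on outcome variability explicit: the required n per arm scales with the outcome variance. A sketch with made-up inputs:

```python
from scipy.stats import norm

def n_per_arm(delta, sd, alpha=0.05, power=0.80):
    """Two-sample normal-approximation sample size per arm:
    n = 2 * ((z_{1-alpha/2} + z_{power}) * sd / delta) ** 2."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sd / delta) ** 2

# Hypothetical trial: detect an 8 mmHg drop in systolic blood pressure.
print(round(n_per_arm(delta=8, sd=10)))  # ~25 per arm with a quieter outcome
print(round(n_per_arm(delta=8, sd=20)))  # ~98 per arm: doubling the SD quadruples n
```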