Q Statistic Meta-analysis

Elder Raman

Aug 5, 2024, 1:16:16 PM, to mehrcenttheju
Meta-analysis is the statistical combination of the results of multiple studies addressing a similar research question. An important part of this method involves computing a combined effect size across all of the studies. As such, this statistical approach involves extracting effect sizes and variance measures from the various studies.[1] Meta-analyses are integral in supporting research grant proposals, shaping treatment guidelines, and influencing health policies. They are also pivotal in summarizing existing research to guide future studies, thereby cementing their role as a fundamental methodology in metascience.

Meta-analyses are often, but not always, important components of a systematic review procedure; see, for instance, PRISMA.[2] A meta-analysis can be conducted in many fields to assess the impact of an intervention whenever multiple studies report data suitable for computing a combined effect size.


The term "meta-analysis" was coined in 1976 by the statistician Gene Glass,[3][4] who stated "Meta-analysis refers to the analysis of analyses".[5] Glass's work aimed at describing aggregated measures of relationships and effects.[6] While Glass is credited with authoring the first modern meta-analysis, a paper published in 1904 by the statistician Karl Pearson in the British Medical Journal[7] collated data from several studies of typhoid inoculation and is seen as the first time a meta-analytic approach was used to aggregate the outcomes of multiple clinical studies.[8][9] Numerous other examples of early meta-analyses can be found including occupational aptitude testing,[10][11] and agriculture.[12]


The first modern meta-analysis was published in 1978, on the effectiveness of psychotherapy outcomes, by Mary Lee Smith and Gene Glass.[4][13] After publication of their article there was pushback on the usefulness and validity of meta-analysis as a tool for evidence synthesis. The first example of this came from Hans Eysenck, who in a 1978 article responding to the work of Mary Lee Smith and Gene Glass called meta-analysis an "exercise in mega-silliness".[14][15] Later, Eysenck would refer to meta-analysis as "statistical alchemy".[16] Despite these criticisms, the use of meta-analysis has only grown since its modern introduction. By 1991 there were 334 published meta-analyses;[15] this number grew to 9,135 by 2014.[3][17]


The field of meta-analysis has expanded greatly since the 1970s and now touches multiple disciplines, including psychology, medicine, and ecology.[3] Further, the more recent creation of evidence synthesis communities has increased the cross-pollination of ideas and methods, and the development of software tools, across disciplines.[18][19][20]


A meta-analysis is usually preceded by a systematic review, as this allows identification and critical appraisal of all the relevant evidence (thereby limiting the risk of bias in summary estimates). The general steps are then as follows:[21]

1. Formulation of the research question
2. Search of the literature
3. Selection of studies according to pre-specified eligibility criteria
4. Extraction of data, including effect sizes and study characteristics
5. Selection of a meta-analysis model (e.g., fixed effect or random effects)
6. Examination of sources of between-study heterogeneity


One of the most important steps of a meta-analysis is data collection. For an efficient database search, appropriate keywords and search limits need to be identified.[23] The use of Boolean operators and search limits can assist the literature search.[24][25] A number of databases are available (e.g., PubMed, Embase, PsycINFO); however, it is up to the researcher to choose the most appropriate sources for their research area.[26] Indeed, many scientists run duplicate search terms in two or more databases to cover multiple sources. The reference lists of eligible studies can also be searched for further eligible studies (i.e., snowballing). The initial search may return a large volume of studies. Quite often, the abstract or the title of the manuscript reveals that the study is not eligible for inclusion, based on the pre-specified criteria, and such studies can be discarded. However, if it appears that the study may be eligible (or even if there is some doubt) the full paper can be retained for closer inspection. These search results need to be detailed in a PRISMA flow diagram,[27] which charts the flow of information through all stages of the review. Thus, it is important to note how many studies were returned after using the specified search terms, how many of these studies were discarded, and for what reason.[26] The search terms and strategy should be specific enough for a reader to reproduce the search. The date range of studies, along with the date (or date period) on which the search was conducted, should also be provided.[28]
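For instance, a query might combine synonyms with OR and link distinct concepts with AND, such as ("exercise" OR "physical activity") AND "depression", together with limits on publication date, language, or study type; these particular terms are purely illustrative and not drawn from any specific review.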


A data collection form provides a standardized means of collecting data from eligible studies. For a meta-analysis of correlational data, effect size information is usually collected as Pearson's r statistic. Partial correlations are often reported in research; however, these may inflate relationships in comparison to zero-order correlations.[29] Moreover, the partialed-out variables will likely vary from study to study. As a consequence, many meta-analyses exclude partial correlations from their analysis.[26] As a last resort, plot digitizers can be used to scrape data points from scatterplots (if available) for the calculation of Pearson's r.[30][31] Data reporting important study characteristics that may moderate effects, such as the mean age of participants, should also be collected.[32] A measure of study quality can also be included in these forms to assess the quality of evidence from each study.[33] There are more than 80 tools available to assess the quality and risk of bias in observational studies, reflecting the diversity of research approaches between fields.[33][34][35] These tools usually include an assessment of how dependent variables were measured, appropriate selection of participants, and appropriate control for confounding factors. Other quality measures that may be more relevant for correlational studies include sample size, psychometric properties, and reporting of methods.[26]
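As a minimal sketch of this extraction step, the Python snippet below converts a reported Pearson's r and sample size into Fisher's z and its approximate sampling variance, a common transformation applied before pooling correlational effect sizes. The function name and the numbers in the example are assumptions for illustration, not part of any particular software package.

```python
import math

def fisher_z(r: float, n: int):
    """Convert a Pearson correlation to Fisher's z and its sampling variance.

    z = arctanh(r); the large-sample variance of z is approximately 1 / (n - 3).
    """
    z = math.atanh(r)
    var = 1.0 / (n - 3)
    return z, var

# Illustrative values only: r = 0.30 reported from a study of n = 50 participants.
z, var = fisher_z(0.30, 50)
```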


A final consideration is whether to include studies from the gray literature, which is defined as research that has not been formally published.[36] This type of literature includes conference abstracts,[37] dissertations,[38] and pre-prints.[39] While the inclusion of gray literature reduces the risk of publication bias, the methodological quality of the work is often (but not always) lower than that of formally published work.[40][41] Reports from conference proceedings, which are the most common source of gray literature,[42] are poorly reported,[43] and data in the subsequent publication is often inconsistent, with differences observed in almost 20% of published studies.[44]


Aggregate data (AD) is more commonly available (e.g. from the literature) and typically represents summary estimates such as odds ratios or relative risks. This can be directly synthesized across conceptually similar studies using several approaches (see below). On the other hand, indirect aggregate data measures the effect of two treatments that were each compared against a similar control group in a meta-analysis. For example, if treatment A and treatment B were each directly compared against placebo in separate meta-analyses, we can use these two pooled results to obtain an estimate of the effect of A vs B in an indirect comparison, as effect A vs placebo minus effect B vs placebo.
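A minimal sketch of such an indirect comparison is given below, assuming the two pooled estimates are on a log scale (e.g. log odds ratios) and statistically independent; the function name and interface are illustrative rather than a standard library API. This is essentially the logic of a Bucher-style adjusted indirect comparison.

```python
import math

def indirect_comparison(effect_a: float, se_a: float, effect_b: float, se_b: float):
    """Indirect comparison of A vs B via a common comparator (illustrative).

    effect_a: pooled effect of A vs placebo (log scale); effect_b: pooled effect of
    B vs placebo. The indirect estimate of A vs B is the difference, and, assuming
    the two pooled estimates are independent, its variance is the sum of the variances.
    """
    effect_ab = effect_a - effect_b
    se_ab = math.sqrt(se_a ** 2 + se_b ** 2)
    return effect_ab, se_ab
```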


Individual participant data (IPD) evidence represents the raw data as collected by the study centers. This distinction has raised the need for different meta-analytic methods when evidence synthesis is desired, and has led to the development of one-stage and two-stage methods.[45] In one-stage methods the IPD from all studies are modeled simultaneously whilst accounting for the clustering of participants within studies. Two-stage methods first compute summary statistics for AD from each study and then calculate overall statistics as a weighted average of the study statistics. By reducing IPD to AD, two-stage methods can also be applied when IPD is available, which makes them an appealing choice when performing a meta-analysis. Although it is conventionally believed that one-stage and two-stage methods yield similar results, recent studies have shown that they may occasionally lead to different conclusions.[46][47]
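The sketch below illustrates the first stage of a two-stage approach under simple assumptions: a continuous outcome and an unadjusted mean difference as the per-study summary. The data, function name, and choice of summary measure are hypothetical and only meant to show the reduction from IPD to AD.

```python
import numpy as np

def stage_one_mean_difference(treatment: np.ndarray, control: np.ndarray):
    """Stage one (illustrative): reduce one study's raw IPD to an aggregate
    estimate and its variance, here an unadjusted difference in means with a
    large-sample variance."""
    diff = float(treatment.mean() - control.mean())
    var = treatment.var(ddof=1) / len(treatment) + control.var(ddof=1) / len(control)
    return diff, float(var)

# Stage two then pools these per-study estimates with a weighted average,
# for example using the fixed effect or random effects weights described below.
```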


The fixed effect model provides a weighted average of a series of study estimates. The inverse of the estimates' variance is commonly used as the study weight, so that larger studies tend to contribute more than smaller studies to the weighted average. Consequently, when studies within a meta-analysis are dominated by a very large study, the findings from smaller studies are practically ignored.[48] Most importantly, the fixed effect model assumes that all included studies investigate the same population, use the same variable and outcome definitions, etc. This assumption is typically unrealistic as research is often prone to several sources of heterogeneity.[49]
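A minimal sketch of inverse-variance fixed effect pooling under these assumptions is shown below; the function is illustrative and not a reference to any particular meta-analysis package.

```python
import numpy as np

def fixed_effect_pool(effects, variances):
    """Inverse-variance fixed effect pooling (sketch).

    Each study is weighted by 1 / variance, so large (precise) studies dominate
    the weighted average."""
    y = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    pooled = np.sum(w * y) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    return pooled, pooled_se
```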


A common model used to synthesize heterogeneous research is the random effects model of meta-analysis. This is simply the weighted average of the effect sizes of a group of studies. The weight that is applied in this process of weighted averaging with a random effects meta-analysis is achieved in two steps:[50]

Step 1: Inverse variance weighting, as in the fixed effect model.
Step 2: Un-weighting of this inverse variance weighting by applying a random effects variance component (REVC), which is derived from the extent of variability of the effect sizes of the underlying studies.


This means that the greater this variability in effect sizes (otherwise known as heterogeneity), the greater the un-weighting, and this can reach a point at which the random effects meta-analysis result becomes simply the un-weighted average effect size across the studies. At the other extreme, when all effect sizes are similar (or variability does not exceed sampling error), no REVC is applied and the random effects meta-analysis defaults to simply a fixed effect meta-analysis (only inverse variance weighting).
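The sketch below illustrates this behaviour using the DerSimonian-Laird estimator, one common way of deriving the REVC from Cochran's Q statistic; the choice of estimator, the function name, and variable names are assumptions for the example rather than a description of any specific tool.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random effects pooling with a DerSimonian-Laird estimate of tau^2 (sketch).

    Cochran's Q measures observed heterogeneity; the between-study variance
    tau^2 (the REVC) derived from Q is added to each study's variance, which
    'un-weights' large studies relative to the fixed effect model."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                   # fixed effect (inverse variance) weights
    pooled_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - pooled_fe) ** 2)          # Cochran's Q statistic
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                 # REVC; zero when Q <= df
    w_re = 1.0 / (v + tau2)                       # random effects weights
    pooled_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return pooled_re, se_re, tau2, q
```

When Q is no larger than its degrees of freedom, tau^2 is set to zero and the weights reduce to the fixed effect (inverse variance) weights, matching the behaviour described above.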


Since neither of these factors automatically indicates a faulty larger study or more reliable smaller studies, the re-distribution of weights under this model will not bear a relationship to what these studies actually might offer. Indeed, it has been demonstrated that redistribution of weights is simply in one direction from larger to smaller studies as heterogeneity increases, until eventually all studies have equal weight and no more redistribution is possible.[51] Another issue with the random effects model is that the most commonly used confidence intervals generally do not retain their coverage probability above the specified nominal level and thus substantially underestimate the statistical error and are potentially overconfident in their conclusions.[52][53] Several fixes have been suggested[54][55] but the debate continues.[53][56] A further concern is that the average treatment effect can sometimes be even less conservative compared to the fixed effect model[57] and therefore misleading in practice. One interpretational fix that has been suggested is to create a prediction interval around the random effects estimate to portray the range of possible effects in practice.[58] However, an assumption behind the calculation of such a prediction interval is that trials are considered more or less homogeneous entities and that the included patient populations and comparator treatments should be considered exchangeable,[59] and this is usually unattainable in practice.
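A sketch of one common form of such a prediction interval (in the style proposed by Higgins and colleagues) is given below, assuming a random effects estimate, its standard error, an estimate of tau^2, and at least three studies; the exact formula varies across authors and the function here is only illustrative.

```python
import math
from scipy import stats

def prediction_interval(pooled: float, se: float, tau2: float, k: int, level: float = 0.95):
    """Approximate prediction interval for the effect in a new study (sketch):
    pooled +/- t_{k-2} * sqrt(tau^2 + SE^2), where k is the number of studies
    (requires k >= 3)."""
    t_crit = stats.t.ppf(1 - (1 - level) / 2, df=k - 2)
    half_width = t_crit * math.sqrt(tau2 + se ** 2)
    return pooled - half_width, pooled + half_width
```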
