Simulation evaluation of statistical properties of methods for indirect and mixed treatment comparisons.  
MedLine Citation:

PMID: 22970794 Owner: NLM Status: MEDLINE 
Abstract/OtherAbstract:

BACKGROUND: Indirect treatment comparison (ITC) and mixed treatment comparisons (MTC) have been increasingly used in network meta-analyses. This simulation study comprehensively investigated the statistical properties and performance of commonly used ITC and MTC methods, including simple ITC (the Bucher method) and frequentist and Bayesian MTC methods. METHODS: A simple network of three sets of two-arm trials with a closed loop was simulated. Simulation scenarios varied the number of trials, the assumed treatment effects, and the extent of heterogeneity, bias and inconsistency. The performance of the ITC and MTC methods was measured by type I error, statistical power, observed bias and mean squared error (MSE). RESULTS: When there are no biases in primary studies, all ITC and MTC methods investigated are on average unbiased. Depending on the extent and direction of biases in different sets of studies, ITC and MTC methods may be more or less biased than direct treatment comparisons (DTC). Of the methods investigated, the simple ITC method has the largest mean squared error (MSE). The DTC is superior to the ITC in terms of statistical power and MSE. Under the simulated circumstances in which there are no systematic biases and inconsistencies, the MTC methods generally perform better than the corresponding DTC methods. For inconsistency detection in network meta-analysis, the methods evaluated are on average unbiased, but the statistical power of commonly used methods for detecting inconsistency is very low. CONCLUSIONS: The available methods for indirect and mixed treatment comparisons have different advantages and limitations, depending on whether the data analysed satisfy the underlying assumptions. To choose the most valid statistical methods for research synthesis, an appropriate assessment of the primary studies included in the evidence network is required. 
Authors:

Fujian Song; Allan Clark; Max O Bachmann; Jim Maas 
Publication Detail:

Type: Journal Article; Research Support, Non-U.S. Gov't. Date: 2012-09-12 
Journal Detail:

Title: BMC medical research methodology Volume: 12 ISSN: 1471-2288 ISO Abbreviation: BMC Med Res Methodol Publication Date: 2012 
Date Detail:

Created Date: 2012-12-18 Completed Date: 2013-06-20 Revised Date: 2013-07-11 
Medline Journal Info:

Nlm Unique ID: 100968545 Medline TA: BMC Med Res Methodol Country: England 
Other Details:

Languages: eng Pagination: 138 Citation Subset: IM 
Affiliation:

Norwich Medical School, Faculty of Medicine and Health Science, University of East Anglia, Norwich, Norfolk, UK. Fujian.Song@uea.ac.uk 
MeSH Terms  
Descriptor/Qualifier:

Bayes Theorem Bias (Epidemiology) Clinical Protocols* Computer Simulation Data Interpretation, Statistical* Humans Research Design* 
Grant Support  
ID/Acronym/Agency:

G0901479//Medical Research Council 
Full Text  
Journal Information Journal ID (nlm-ta): BMC Med Res Methodol Journal ID (iso-abbrev): BMC Med Res Methodol ISSN: 1471-2288 Publisher: BioMed Central 
Article Information Copyright © 2012 Song et al.; licensee BioMed Central Ltd. Open access. Received: 15 June 2012 Accepted: 4 September 2012 Collection publication date: 2012 Electronic publication date: 12 September 2012 Volume: 12 First Page: 138 Last Page: 138 PubMed Id: 22970794 ID: 3524036 Publisher Id: 1471-2288-12-138 DOI: 10.1186/1471-2288-12-138 
Simulation evaluation of statistical properties of methods for indirect and mixed treatment comparisons  
Fujian Song1  Email: Fujian.Song@uea.ac.uk 
Allan Clark1  Email: Allan.Clark@uea.ac.uk 
Max O Bachmann1  Email: M.Bachmann@uea.ac.uk 
Jim Maas1  Email: J.Maas@uea.ac.uk 
1Norwich Medical School, Faculty of Medicine and Health Science, University of East Anglia, Norwich, Norfolk, NR4 7TJ, UK 
Indirect and mixed treatment comparisons have been increasingly used in health technology assessment reviews [1-4]. Indirect treatment comparison (ITC) refers to a comparison of different treatments using data from separate studies, in contrast to a direct treatment comparison (DTC) within randomised controlled trials. Statistical methods have been developed to indirectly compare multiple treatments and to combine evidence from direct and indirect comparisons in mixed treatment comparison (MTC) or network meta-analysis [5-9].
The existing simple [5] or complex [6-8] statistical methods for ITC and MTC are theoretically valid if certain assumptions are fulfilled [2,10]. The relevant assumptions can be classified according to a conceptual framework that delineates the homogeneity assumption for conventional meta-analysis, the similarity assumption for adjusted ITC, and the consistency assumption for pooling direct and indirect estimates by MTC [2,11]. Among these basic assumptions, heterogeneity in meta-analysis and inconsistency between direct and indirect estimates can be quantitatively investigated. The presence of inconsistency between direct and indirect estimates has been empirically investigated in meta-epidemiological studies and numerous case reports [12-16]. A range of statistical methods have been suggested for investigating inconsistency in network meta-analysis [5,7,9,17-19].
The statistical properties of simple adjusted ITC [5] have been evaluated in previous simulation studies [1,20,21]. However, no simulation studies have formally evaluated methods for Bayesian network meta-analysis. In this simulation study, we comprehensively evaluated the properties and performance of commonly used ITC and MTC methods. Specifically, the objectives of the study are (1) to investigate the bias, type I error and statistical power of different comparison models for estimating relative treatment effects, and (2) to investigate the bias, type I error and statistical power of different comparison models for quantifying inconsistency between direct and indirect estimates.
We investigated the performance of the following ITC and MTC statistical models.
This frequentist method is also called Bucher's method [5]. It is based on the assumption that the indirect evidence is consistent with the direct comparison. Suppose that treatments A and B are compared in RCT1 (with d_{AB} as its result, a log OR for example), and treatments A and C are compared in RCT2 (with d_{AC} as its result). Then treatment A can be used as a common comparator to adjust the indirect comparison of treatments B and C:
d_{BC}^{Ind} = d_{AB} − d_{AC} 
Its variance is:
Var(d_{BC}^{Ind}) = Var(d_{AB}) + Var(d_{AC}) 
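As a concrete illustration, the adjusted indirect comparison can be sketched in a few lines of Python (a minimal sketch; the function name and the example log ORs below are hypothetical, not taken from the paper):

```python
import math

def bucher_itc(d_ab, var_ab, d_ac, var_ac):
    """Adjusted indirect comparison of treatments B and C via common
    comparator A (Bucher's method): d_BC_ind = d_AB - d_AC, with the
    variances of the two independent estimates adding."""
    d_bc_ind = d_ab - d_ac
    var_bc_ind = var_ab + var_ac
    se = math.sqrt(var_bc_ind)
    # 95% confidence interval on the log-OR scale
    ci = (d_bc_ind - 1.96 * se, d_bc_ind + 1.96 * se)
    return d_bc_ind, var_bc_ind, ci

# Hypothetical pooled log ORs: d_AB = -0.22 (B vs A), d_AC = -0.51 (C vs A)
d, v, ci = bucher_itc(-0.22, 0.04, -0.51, 0.05)
```

Note that the indirect estimate inherits the variances of both input comparisons, which is why the AITC is less precise than a direct comparison based on the same amount of data.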
When there are multiple trials that compared treatments A and B or treatments A and C, results from individual trials can be combined using a fixed-effect or random-effects model. The pooled estimates of d_{AB} and d_{AC} are then used in the AITC.
The results of frequentist ITC (using Bucher's method) can be combined with the result of frequentist DTC in an MTC. The frequentist combination of the DTC and ITC estimates is weighted by the corresponding inverse variances, as for pooling results from two individual studies in meta-analysis [22].
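The inverse-variance combination can be sketched as follows (an illustrative sketch; the function name is ours, not from the paper):

```python
def pool_inverse_variance(estimates):
    """Fixed-effect inverse-variance pooling of (estimate, variance)
    pairs, e.g. combining a direct and an indirect log OR into a
    frequentist mixed treatment comparison."""
    weights = [1.0 / var for _, var in estimates]
    pooled = sum(w * est for (est, _), w in zip(estimates, weights)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var

# e.g. pool a direct estimate (-0.4, var 0.05) with an indirect one (-0.2, var 0.09)
mtc_estimate, mtc_var = pool_inverse_variance([(-0.4, 0.05), (-0.2, 0.09)])
```

The pooled variance is smaller than either component variance, which is why the MTC gains statistical power over the DTC alone when the consistency assumption holds.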
This MTC is termed a ‘consistency MTC’, as it assumes that the result of the direct comparison of treatments B and C statistically equals the result of the indirect comparison of B and C based on the common comparator A [9]. For a network of three sets of trials that compared A vs. B, A vs. C, and B vs. C, we only need to estimate two basic parameters, d_{AB} and d_{AC}; the third contrast (a functional parameter) can be derived as d_{BC} = d_{AB} − d_{AC}.
Like the CFMTC, this model is based on the assumption that ITC is consistent with DTC [8]. Suppose that several treatments (A, B, C, and so on) are compared in a network of trials. We need to select one treatment (for example treatment A, a placebo or control) as the reference treatment. In each study, we also consider one treatment as the base treatment (b). The general model for the consistency MTC is:
θ_{kt} = μ_{kb}, if t = b (b = A, B, C, …) 
θ_{kt} = μ_{kb} + δ_{kbt}, if t is after b (t = B, C, D, …) 
δ_{kbt} ~ N(d_{bt}, τ^{2}) 
d_{bt} = d_{At} − d_{Ab} 
d_{AA} = 0 
Here θ_{kt} is the underlying outcome for treatment t in study k, μ_{kb} is the outcome of treatment b, and δ_{kbt} is the relative effect of treatment t compared with treatment b in study k. The trial-specific relative effect δ_{kbt} is assumed to have a normal distribution with mean d_{bt} and variance τ^{2} (i.e., the between-study variance). When τ^{2} = 0, this model gives the results of a fixed-effect analysis.
Some authors assume that inconsistencies (that is, the differences between d_{BC} from direct comparisons and d_{BC}^{Ind} from indirect comparisons) have a common normal distribution with mean 0 and variance σ_{ω}^{2} [7,9]. These methods have been termed the “random inconsistency model” [23]. In this study, we evaluated the random inconsistency model of Lu and Ades [9], which can be expressed as follows:
d_{BC} = d_{AB} − d_{AC} + ω_{BC}, 
and
ω_{BC} ~ N(0, σ_{ω}^{2}). 
Here ω_{BC} is termed the inconsistency factor (ICF).
In the inconsistency Bayesian meta-analysis (IBMA), each of the mean relative effects (d_{xy}) is estimated separately, without using indirect treatment comparison information. The IBMA analysis is equivalent to a series of pairwise DTC meta-analyses, although a common between-study variance (τ^{2}) across different contrasts is assumed [24].
We originally intended to include Lumley's frequentist method for network meta-analysis [7]. However, it was excluded because of convergence problems during the computer simulations.
Let d_{BC} denote the natural log OR estimated by the DTC, and d_{BC}^{Ind} denote the log OR estimated by the ITC. The inconsistency (ω_{BC}) between the results of the direct and indirect comparisons of treatments B and C can be calculated as follows:
ω_{BC} = d_{BC} − d_{BC}^{Ind} 
When the estimated ω_{BC} is greater than 0, it indicates that the treatment effect is overestimated by the ITC compared with the DTC. For Bucher's method [5,12], the calculation of inconsistency was based on the pooled estimates of d_{BC} and d_{BC}^{Ind} from meta-analyses. The variance of the estimated inconsistency was calculated as:
Var(ω_{BC}) = Var(d_{BC}) + Var(d_{BC}^{Ind}) 
where Var(d_{BC}) and Var(d_{BC}^{Ind}) are the variances of d_{BC} and d_{BC}^{Ind}, respectively. The null hypothesis that the DTC estimate equals the ITC estimate was tested with the Z statistic
Z_{BC} = ω_{BC} / √Var(ω_{BC}) 
If the absolute value of Z_{BC} is greater than 1.96, the observed inconsistency is considered statistically significantly different from zero.
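The inconsistency test above can be sketched as follows (illustrative only; the function name and the example numbers are ours):

```python
import math

def inconsistency_z(d_bc_direct, var_direct, d_bc_indirect, var_indirect):
    """Bucher-style test of inconsistency between the direct and indirect
    estimates of B vs C: omega_BC = d_BC - d_BC_ind, Z = omega / sqrt(Var)."""
    omega = d_bc_direct - d_bc_indirect
    var_omega = var_direct + var_indirect
    z = omega / math.sqrt(var_omega)
    # |Z| > 1.96 flags statistically significant inconsistency at the 5% level
    return omega, z, abs(z) > 1.96

# Hypothetical direct estimate -0.5 (var 0.02) vs indirect estimate -0.1 (var 0.02)
omega, z, significant = inconsistency_z(-0.5, 0.02, -0.1, 0.02)
```

Because Var(ω) is the sum of two variances, this test is inherently low-powered, which is consistent with the low power to detect inconsistency reported in the results below.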
The estimate of inconsistency is not applicable when the consistency Bayesian MTC model [8] is used. With the inconsistency Bayesian meta-analysis (IBMA), the estimate of d_{BC} is naturally available, and d_{BC}^{Ind} can easily be estimated from d_{AB} and d_{AC}, as in the “node-splitting” method [17,24]. The point estimate of inconsistency in Bayesian MTC was the average (mean value) of the simulated results. The significance of the inconsistency was based on the estimated 95% intervals: if the 95% interval did not contain zero, the observed inconsistency was considered statistically significant.
The random inconsistency Bayesian MTC (RIBMTC) model assumes that the inconsistency within a network of trials is normally distributed with mean ω = 0 and variance σ_{ω}^{2} [9]. We also recorded the ω and σ_{ω}^{2} estimated by the RIBMTC model.
In this study, a simple network of two-arm trials with a closed loop was simulated to compare three treatments: treatment 1 (T_{1}, placebo), treatment 2 (T_{2}, an old drug), and treatment 3 (T_{3}, a new drug) (Figure 1). The comparison of T_{2} and T_{3} was considered the main interest. Trials that compared T_{1} vs. T_{2} and trials that compared T_{1} vs. T_{3} were used for the indirect comparison of T_{2} and T_{3}. Given the available resources, a limited number of simulation scenarios were adopted in this study. The following simulation parameters were decided after considering the characteristics of published meta-analyses (see also Table 1).
• The number of patients in each arm of a pairwise trial is 100. The number of trials for each of the three contrasts is 1, 5, 10, 20, 30 and 40. A scenario of imbalanced number of trials (including a single trial for one of the three sets) is also included.
• We use the odds ratio (OR) to measure the outcome [25]. The assumed true OR_{12} = 0.8, and the true OR_{13} = 0.8 or 0.6. When the OR is less than 1 (or log OR < 0), it indicates that the risk of events is reduced by the second of the two treatments compared.
• The true log OR_{23} is calculated as: log OR_{23} = log OR_{13} − log OR_{12}.
• The baseline risk in the control arm is assumed to be 20% or 10%.
• It is assumed that heterogeneity is constant across different comparisons, with four levels of between-study variance: τ^{2} = 0.00, 0.05, 0.10, and 0.15 [26].
• The trial-specific natural log OR (d_{kij}) in study k used to generate simulated trials is based on the assumed true log OR and the between-trial variance: d_{kij} ~ N(d_{ij}, τ^{2}).
• Given the baseline risk (P_{k1}) and the trial-specific OR, the risk in the treatment arm in study k is calculated as:
P_{kt} = P_{k1} × Exp(d_{k1t}) / (1 − P_{k1} + P_{k1} × Exp(d_{k1t})). 
• Bias in a clinical trial can be defined as a systematic difference between the estimated effect size and the true effect size [27]. It is assumed here that all bias, where it exists, results in an overestimated treatment effect of the active drugs (T_{2} and T_{3}) compared with placebo (T_{1}), and an overestimated treatment effect of the new drug (T_{3}) relative to the old drug (T_{2}). The extent of bias and inconsistency is measured by the ratio of odds ratios (ROR). ROR = 1 indicates no bias; ROR = 0.8 means that the effect (OR) of a treatment is overestimated by 20%.
A network of trials was randomly generated, using assumed input parameters (Table 1). For each arm of the simulated trial, the number of events was randomly generated according to the binomial distribution:
r_{ki} ~ Binomial(N_{ki}, P_{ki}) 
Here, N_{ki} is the number of patients in the arm of treatment i, and P_{ki} is the risk of events given treatment i in study k. If the simulated number of events was zero, we added 0.5 to the corresponding cells of the 2×2 table for conducting inverse-variance weighted meta-analysis.
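The data-generating steps above can be sketched in Python with NumPy (a sketch under the paper's stated parameters; the function names are ours, and the paper's own simulations were run in R):

```python
import math
import numpy as np

def risk_from_log_or(p1, log_or):
    """Treatment-arm risk from the baseline risk and a log OR:
    P_kt = P_k1 * exp(d) / (1 - P_k1 + P_k1 * exp(d))."""
    e = math.exp(log_or)
    return p1 * e / (1 - p1 + p1 * e)

def simulate_trial(rng, true_log_or, tau2, baseline_risk=0.2, n_per_arm=100):
    """Simulate one two-arm trial: draw a trial-specific log OR from
    N(d, tau^2), convert it to a treatment-arm risk, then draw binomial
    event counts for both arms."""
    d_k = rng.normal(true_log_or, math.sqrt(tau2))
    r_control = rng.binomial(n_per_arm, baseline_risk)
    r_treatment = rng.binomial(n_per_arm, risk_from_log_or(baseline_risk, d_k))
    return r_control, r_treatment

rng = np.random.default_rng(12345)
r1, r2 = simulate_trial(rng, math.log(0.8), tau2=0.05)
```

With τ^{2} = 0 every simulated trial shares the true log OR; a positive τ^{2} induces the between-trial heterogeneity that the scenarios vary.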
AITC and MTC were conducted using data from the simulated trials with fixed-effect and random-effects meta-analyses. For frequentist ITC, we used inverse-variance weights to pool results of multiple trials in meta-analysis, and used the DerSimonian-Laird method for random-effects meta-analyses [22].
The performance of the ITC and MTC methods was measured by the type I error rate or statistical power, observed bias and mean squared error (MSE). We estimated the rate of type I error (when the null hypothesis is true) and the statistical power (when the null hypothesis is false) by the proportion of significant estimates (two-sided α < 0.05) for the frequentist methods, or the proportion of estimates whose 95% interval did not contain the zero treatment effect for the Bayesian methods.
We generated 5000 simulated results for each of the simulation scenarios in Table 1, and calculated the bias and mean squared error (MSE) as:
Bias(θ̂) = (1/5000) Σ_{c=1}^{5000} (θ̂_{c} − θ) 
MSE(θ̂) = (1/5000) Σ_{c=1}^{5000} (θ̂_{c} − θ)^{2} 
where θ is the true parameter value and θ̂_{c} is the estimated value from the c^{th} simulated data set. Monte Carlo 95% intervals for the estimated mean bias and inconsistency were based on the 2.5th and 97.5th percentiles of the corresponding estimates.
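These performance measures are straightforward to compute; a minimal sketch (the function name is ours):

```python
def bias_and_mse(estimates, true_value):
    """Monte Carlo bias and MSE of a list of simulated estimates, with a
    95% percentile interval (2.5th and 97.5th percentiles)."""
    n = len(estimates)
    bias = sum(e - true_value for e in estimates) / n
    mse = sum((e - true_value) ** 2 for e in estimates) / n
    s = sorted(estimates)
    # simple percentile lookup; a production version might interpolate
    interval = (s[int(0.025 * (n - 1))], s[int(0.975 * (n - 1))])
    return bias, mse, interval
```

Note that MSE = Bias^{2} + Variance, so an unbiased method can still have a large MSE if its estimates are highly variable, as the results below show for the AITC.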
Bayesian network meta-analyses were implemented using Markov chain Monte Carlo (MCMC) methodology [8]. Vague or non-informative priors were used for the MCMC simulations. Each simulation comprised 20,000 ‘burn-in’ iterations followed by 40,000 sampling iterations. The posterior samples were thinned by a ratio of 5:1, resulting in 8,000 final posterior samples from each MCMC simulation. We used R 2.13.0 [28] and related packages (RJAGS) to generate data and to sample from Bayesian posterior distributions. All simulations were carried out on the High Performance Computing Cluster supported by the Research Computing Service at the University of East Anglia.
For simplicity, we present below only the results of selected representative scenarios.
As expected, mean squared error (MSE) increases with a smaller number of studies and with larger heterogeneity in meta-analysis (Figure 2). Of the comparison methods investigated, the AITC method has the largest MSE. In the presence of heterogeneity, there are no noticeable differences in MSE between the fixed-effect and random-effects models.
When there is no bias in the simulated trials, the results of all comparison methods are on average unbiased (Figure 3a). When all trials are similarly biased, the DTC and the inconsistency Bayesian MTC (RIBMTC) are fully biased, while the AITC is not biased (Figure 3b). When only the trials involved in the AITC are biased, the DTC and inconsistency MTC models are unbiased (Figure 3c). The extent of bias in the consistency MTC models (both CFMTC and CBMTC) lies between that of the DTC and the ITC. The impacts of biases in primary studies on the validity of the different comparison methods are summarised in Table 2.
Assuming zero heterogeneity across studies, there are no clear differences in the rate of type I error between the different MTC methods (Figure 4). The extent of heterogeneity was clearly associated with inflated rates of type I error. In the presence of great heterogeneity, the rate of type I error is particularly large when fixed-effect models are applied. The random-effects models tend to have values closer to 0.05. However, random-effects models no longer have an advantage when there is only a single study available for each of the three comparisons (Figure 4d). In that case, the rate of type I error is zero for the Bayesian random-effects models (CBMTC and RIBMTC), which seems due to the unchanged vague or non-informative priors [26]. Within the fixed-effect models the different methods have similar type I error rates, and likewise within the random-effects models (Figure 4).
As expected, the higher baseline risk (20%) is associated with a higher rate of type I error compared with the lower baseline risk (10%) (data not shown).
As expected, the statistical power (1 − β) is positively associated with the number of studies (Figure 5). Compared with the DTC, the statistical power of the AITC is low. Pooling DTC and AITC evidence in MTC increases the statistical power (Figure 5).
With a larger number of studies, the statistical power of all methods is reduced by the presence of heterogeneity (Figure 5a-b). The association between heterogeneity and statistical power becomes unclear when the number of studies is small (Figure 5c-d). When there is only a single study, the statistical power of all the methods is extremely low, and it is zero for the Bayesian random-effects models (again, due to the vague or non-informative priors) (Figure 5d).
As expected, the statistical power is reduced when the baseline risk is lowered from 20% to 10% (data not shown).
The inconsistencies estimated by the different comparison methods are on average unbiased, but the 95% intervals are wide (Figure 6). The 95% interval of the inconsistency estimated by the RIBMTC method is much wider than those of the other methods.
Heterogeneity is positively associated with the rate of type I error for detecting inconsistency with the fixed-effect models, while the number of studies does not noticeably affect the rate of type I error (Figure 7). However, when there is only a single study for each of the three contrasts, the Bayesian random-effects method has zero type I error (due to the vague or non-informative priors for τ), and the rate of type I error of the frequentist random-effects model is similar to that of the fixed-effect models (Figure 7e). When the number of trials is imbalanced and includes singleton sets, the frequentist random-effects model has larger type I error than the Bayesian random-effects method (Figure 7f).
The statistical power to detect the specified inconsistency (P < 0.05) increases with the number of studies (Figure 8). However, the statistical power is still lower than 70% even when there are 120 studies (200 patients in each study) in the trial network (Figure 8a). With the fixed-effect models, the existence of heterogeneity generally increases the power to detect inconsistency; the impact of heterogeneity on the power of the random-effects models is unclear. When there is only one study for each of the three contrasts, the power of the Bayesian random-effects model is about zero (given vague or non-informative priors for τ^{2}) (Figure 8e).
Mean squared error (MSE) reflects a combination of bias and random error, and is clearly associated with the number of studies, heterogeneity, and the baseline risk. When the simulated studies are not biased, the AITC method has the largest MSE compared with the DTC and MTC methods. For a given comparison approach, there are no noticeable differences in estimated MSE between the fixed-effect and random-effects models.
When the simulated trials are unbiased, all the comparison methods investigated on average correctly estimate the magnitude and direction of the true effect. However, there are simulation scenarios under which the AITC could be biased. When all trials are similarly biased, the results of the AITC will be less biased than the results of the DTC. This finding is consistent with the result of a previous study that evaluated the impacts of biases in trials involved in AITC [29]. Bias in the MTC will lie between the bias in the DTC and the AITC (Table 2).
It should be noted that, in addition to the scenarios simulated in this study, bias in original trials may also be magnified if the two sets of trials for the AITC are biased in opposite directions. For example, it is possible that the relative effect of a treatment versus the common comparator is overestimated in one set of trials, and underestimated in another set of trials. Under this circumstance, the AITC estimate will be biased and the extent of such bias will be greater than the extent of bias in the original studies.
The type I error of the ITC and MTC methods is associated with the extent of heterogeneity, whether a fixed-effect or random-effects meta-analysis is used, and the level of baseline risk. There are no noticeable differences in type I error between the different comparison methods.
As expected, the number of studies is clearly associated with the statistical power to detect a specified true treatment effect. The AITC method has the lowest statistical power. When there is no assumed inconsistency or bias, the MTC increases the statistical power compared with the power of the DTC alone. There are no noticeable differences in statistical power between the different MTC methods.
We found that all comparison methods are on average unbiased for estimating the inconsistency between the direct and indirect estimates. The 95% intervals of the RIBMTC method are much wider than those of the other methods. Heterogeneity inflates the type I error in the detection of inconsistencies by fixed-effect models. When there are singleton studies in the trial network, the frequentist random-effects model has relatively larger type I error than the Bayesian random-effects model.
As expected, the power to detect inconsistency is positively associated with the number of studies and the use of fixed-effect models. For inconsistency detection, heterogeneity increases the power of fixed-effect models, but reduces the power of random-effects models when the number of studies is large.
Methods of frequentist indirect comparison have been investigated in several previous simulation studies [1,20,21]. One study found that Bucher's method and logistic regression generally provided unbiased estimates [1]; the simulation scenarios evaluated in that study were limited by using data from a single trial. In another study, Wells and colleagues simulated the variance, bias and MSE of the DTC and AITC methods [21]. They reported that the observed variance, bias and MSE for the AITC were larger than those for the DTC, particularly when the baseline risk was low [21]. A more recent simulation study by Mills and colleagues reported findings from an investigation of Bucher's ITC method [20]. They found that the AITC method lacks statistical power, particularly in the presence of heterogeneity, and has a high risk of overestimation when only a single trial is available in one of the two trial sets. However, they did not compare the performance of the AITC with the corresponding DTC or MTC [20].
Bayesian MTC methods have not been investigated in previous simulation studies. In the current study, we investigated the performance of statistical methods for DTC, AITC, and frequentist and Bayesian MTC. The simulation results reveal the complex impacts of biases in primary studies on the results of direct, indirect and mixed treatment comparisons. When the simulated primary studies are not systematically biased, the AITC and MTC methods are not systematically biased, although the AITC method has the largest MSE. Depending on the extent and direction of bias in the primary studies, the AITC and MTC estimates could be more or less biased than the DTC estimates.
In the presence of heterogeneity and with a small number of studies, the AITC and MTC methods do have inflated rates of type I error and low statistical power. It is important to note that the performance of the corresponding DTC is similarly affected. The performance of the DTC method is superior to that of the AITC method, but the statistical power of the MTC is generally higher than that of the corresponding DTC.
This is the first time that the power to detect inconsistency in network meta-analysis has been investigated by simulations. The low power to detect inconsistency in network meta-analysis seems similar to the low power to detect heterogeneity in pairwise meta-analysis [30].
Due to resource constraints, a limited number of simulation scenarios were considered. Clearly, the performance of a model depends on whether the simulation scenario matches the model's assumptions. For example, the fixed-effect model should not be used when there is heterogeneity across multiple studies, in order to avoid inflated type I error.
In this paper, a simple network containing three sets of two-arm trials with a single closed loop was considered. We evaluated methods for detecting inconsistency, and did not consider models for investigating the causes of inconsistency. Therefore, further simulation studies are required to evaluate complicated networks involving more than three different treatments and containing trials with multiple arms. In addition, further simulation studies are required to evaluate the performance of regression models that incorporate study-level covariates for investigating the causes of heterogeneity and inconsistency in network meta-analysis [18,19,31].
For the MCMC simulations, we used vague or non-informative priors [32]. When the number of studies involved is large, the findings of the study are unlikely to differ if more informative priors are used. However, further research is required to investigate whether an informed prior for the between-study variance would be more appropriate when the number of studies involved in a Bayesian meta-analysis is very small [26].
The results of any comparison method (including direct comparison trials) may be biased as a consequence of bias in the primary trials involved. To decide which comparison method may provide more valid or less biased results, it is helpful to estimate the extent and direction of possible biases in the primary studies. Empirical evidence indicates the existence of bias in randomised controlled trials [33-35], particularly in trials with outcomes subjectively measured without appropriate blinding [36,37]. Although it is usually difficult to estimate the magnitude of bias, the likely direction of bias may be estimated. For example, it may be assumed that possible bias is likely to result in an overestimation of the treatment effect of active or new drugs when they are compared with placebo or old drugs [38]. More complicated models could also be explored for estimating bias in evidence synthesis [39-41].
For detecting inconsistency, the fixed-effect methods have a higher rate of type I error as well as higher statistical power compared with the random-effects methods. The performances of the Bayesian and frequentist methods are generally similar. When there are singleton trials in the evidence network, the rate of type I error of the frequentist random-effects method is larger than that of the Bayesian random-effects method. This is due to the underestimation of the between-study variance by the frequentist method, while the Bayesian method estimates the between-study variance using all data available in the whole network of trials [32]. However, when there is a single study for each of the comparisons, Bayesian random-effects models should be avoided.
An imbalanced distribution of effect modifiers across studies may be a common cause of both heterogeneity in pairwise meta-analysis and evidence inconsistency in network meta-analysis [17]. However, it is helpful to distinguish heterogeneity in pairwise meta-analysis from inconsistency in network meta-analysis. Under the assumption of exchangeability, the results of direct and indirect comparisons can be consistent even in the presence of large heterogeneity in meta-analyses. For example, the inflated type I error rate in detecting inconsistency with the fixed-effect models can be corrected by the use of random-effects models. It is also possible to observe significant inconsistencies between direct and indirect estimates when there is no significant heterogeneity in the corresponding pairwise meta-analyses. The association between heterogeneity and the statistical power to detect inconsistency is complex, depending on whether the fixed-effect or random-effects model is used and on the number of studies involved.
A major concern is the very low power of commonly used methods to detect inconsistency in network meta-analysis when it does exist. Therefore, inconsistency in network meta-analysis should not be ruled out based only on the statistically non-significant result of a statistical test. For all network meta-analyses, trial similarity and evidence consistency should be carefully examined [2,42].
Of the comparison methods investigated, the indirect comparison has the largest mean squared error and thus the lowest certainty. The direct comparison is superior to the indirect comparison in terms of statistical power and mean squared error. Under the simulated circumstances in which there are no systematic biases and inconsistencies, the performances of mixed treatment comparisons are generally better than the performance of the corresponding direct comparisons.
When there are no systematic biases in primary studies, all methods investigated are on average unbiased. Depending on the extent and direction of biases in different sets of studies, indirect and mixed treatment comparisons may be more or less biased than the direct comparisons. For inconsistency detection in network meta-analysis, the methods evaluated are on average unbiased. The statistical power of commonly used methods for detecting inconsistency in network meta-analysis is low.
In summary, the statistical methods investigated in this study have different advantages and limitations, depending on whether the data analysed satisfy the different assumptions underlying these methods. To choose the most valid statistical methods for network meta-analysis, an appropriate assessment of the primary studies included in the evidence network is essential.
AITC: Adjusted indirect treatment comparison; CBMTC: Consistency Bayesian mixed treatment comparison; CFMTC: Consistency frequentist mixed treatment comparison; DTC: Direct treatment comparison; IBMA: Inconsistency Bayesian meta-analysis; ITC: Indirect treatment comparison; MCMC: Markov chain Monte Carlo; MSE: Mean squared error; MTC: Mixed treatment comparison; OR: Odds ratio; RCT: Randomised controlled trial; RIBMTC: Random inconsistency Bayesian mixed treatment comparison; ROR: Ratio of odds ratios.
The authors declare that they have no competing interests.
FS, AC and MOB conceived the idea and designed research protocol. JM, AC and FS developed simulation programmes and conducted computer simulations. FS analysed data and prepared the draft manuscript. All authors commented on the manuscript. FS had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
The prepublication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2288/12/138/prepub
This study was funded by the UK Medical Research Council (Methodological Research Strategic Grant: G0901479). The research presented was carried out on the High Performance Computing Cluster supported by the Research and Specialist Computing Support service (RSCSS) at the University of East Anglia.
References
1. Glenny AM, Altman DG, Song F, Sakarovitch C, Deeks JJ, D'Amico R, Bradburn M, Eastwood AJ. Indirect comparisons of competing interventions. Health Technol Assess 2005;9(26):1-134, iii-iv.
2. Song F, Loke YK, Walsh T, Glenny AM, Eastwood AJ, Altman DG. Methodological problems in the use of indirect comparisons for evaluating healthcare interventions: a survey of published systematic reviews. BMJ 2009;338:b1147.
3. Donegan S, Williamson P, Gamble C, Tudur-Smith C. Indirect comparisons: a review of reporting and methodological quality. PLoS One 2010;5(11):e11054.
4. Edwards SJ, Clarke MJ, Wordsworth S, Borrill J. Indirect comparisons of treatments based on systematic reviews of randomised controlled trials. Int J Clin Pract 2009;63(6):841-854.
5. Bucher HC, Guyatt GH, Griffith LE, Walter SD. The results of direct and indirect treatment comparisons in meta-analysis of randomized controlled trials. J Clin Epidemiol 1997;50(6):683-691.
6. Higgins JP, Whitehead A. Borrowing strength from external trials in a meta-analysis. Stat Med 1996;15(24):2733-2749.
7. Lumley T. Network meta-analysis for indirect treatment comparisons. Stat Med 2002;21(16):2313-2324.
8. Lu G, Ades AE. Combination of direct and indirect evidence in mixed treatment comparisons. Stat Med 2004;23(20):3105-3124.
9. Lu G, Ades AE. Assessing evidence inconsistency in mixed treatment comparisons. J Am Stat Assoc 2006;101(474):447-459.
10. Jansen JP, Fleurence R, Devine B, Itzler R, Barrett A, Hawkins N, Lee K, Boersma C, Annemans L, Cappelleri JC. Interpreting indirect treatment comparisons and network meta-analysis for health-care decision making: report of the ISPOR Task Force on Indirect Treatment Comparisons Good Research Practices: part 1. Value Health 2011;14(4):417-428.
11. Jansen JP, Schmid CH, Salanti G. Directed acyclic graphs can help understand bias in indirect and mixed treatment comparisons. J Clin Epidemiol 2012;65(7):798-807.
12. Song F, Altman DG, Glenny AM, Deeks JJ. Validity of indirect comparison for estimating efficacy of competing interventions: empirical evidence from published meta-analyses. BMJ 2003;326(7387):472-475.
13. Song F, Xiong T, Parekh-Bhurke S, Loke YK, Sutton AJ, Eastwood AJ, Holland R, Chen YF, Glenny AM, Deeks JJ, et al. Inconsistency between direct and indirect comparisons of competing interventions: meta-epidemiological study. BMJ 2011;343:d4909.
14. Chou R, Fu R, Huffman LH, Korthuis PT. Initial highly-active antiretroviral therapy with a protease inhibitor versus a non-nucleoside reverse transcriptase inhibitor: discrepancies between direct and indirect meta-analyses. Lancet 2006;368(9546):1503-1515.
15. Madan J, Stevenson MD, Cooper KL, Ades AE, Whyte S, Akehurst R. Consistency between direct and indirect trial evidence: is direct evidence always more reliable? Value Health 2011;14(6):953-960.
16. Gartlehner G, Moore CG. Direct versus indirect comparisons: a summary of the evidence. Int J Technol Assess Health Care 2008;24(2):170-177.
17. Dias S, Welton NJ, Caldwell DM, Ades AE. Checking consistency in mixed treatment comparison meta-analysis. Stat Med 2010;29(7-8):932-944.
18. Cooper NJ, Sutton AJ, Morris D, Ades AE, Welton NJ. Addressing between-study heterogeneity and inconsistency in mixed treatment comparisons: application to stroke prevention treatments in individuals with non-rheumatic atrial fibrillation. Stat Med 2009;28(14):1861-1881.
19. Salanti G, Marinho V, Higgins JP. A case study of multiple-treatments meta-analysis demonstrates that covariates should be considered. J Clin Epidemiol 2009;62(8):857-864.
20. Mills EJ, Ghement I, O'Regan C, Thorlund K. Estimating the power of indirect comparisons: a simulation study. PLoS One 2011;6(1):e16237.
21. Wells GA, Sultan SA, Chen L, Khan M, Coyle D. Indirect evidence: indirect treatment comparisons in meta-analysis. Ottawa, Canada: Canadian Agency for Drugs and Technologies in Health; 2009.
22. DerSimonian R, Laird N. Meta-analysis in clinical trials. Controlled Clin Trials 1986;7:177-188.
23. Salanti G, Higgins JP, Ades AE, Ioannidis JP. Evaluation of networks of randomized trials. Stat Methods Med Res 2008;17(3):279-303.
24. Dias S, Welton NJ, Sutton AJ, Caldwell DM, Lu G, Ades AE. NICE DSU Technical Support Document 4: Inconsistency in Networks of Evidence Based on Randomised Controlled Trials. 2011. Available from http://www.nicedsu.org.uk.
25. Eckermann S, Coory M, Willan AR. Indirect comparison: relative risk fallacies and odds solution. J Clin Epidemiol 2009;62(10):1031-1036.
26. Pullenayegum EM. An informed reference prior for between-study heterogeneity in meta-analyses of binary outcomes. Stat Med 2011;30(26):3082-3094.
27. Higgins JP, Altman DG. Chapter 8: Assessing risk of bias in included studies. In: Higgins J, Green S, eds. Cochrane Handbook for Systematic Reviews of Interventions. Chichester: Wiley; 2008.
28. R Development Core Team. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2008.
29. Song F, Harvey I, Lilford R. Adjusted indirect comparison may be less biased than direct comparison for evaluating new pharmaceutical interventions. J Clin Epidemiol 2008;61(5):455-463.
30. Hardy RJ, Thompson SG. Detecting and describing heterogeneity in meta-analysis. Stat Med 1998;17(8):841-856.
31. Nixon RM, Bansback N, Brennan A. Using mixed treatment comparisons and meta-regression to perform indirect comparisons to estimate the efficacy of biologic treatments in rheumatoid arthritis. Stat Med 2007;26(6):1237-1254.
32. Dias S, Welton NJ, Sutton AJ, Ades AE. NICE DSU Technical Support Document 2: A Generalised Linear Modelling Framework for Pairwise and Network Meta-Analysis of Randomised Controlled Trials. 2011. Available from http://www.nicedsu.org.uk.
33. Hartling L, Ospina M, Liang Y, Dryden DM, Hooton N, Krebs Seida J, Klassen TP. Risk of bias versus quality assessment of randomised controlled trials: cross sectional study. BMJ 2009;339:b4012.
34. Juni P, Altman DG, Egger M. Systematic reviews in health care: assessing the quality of controlled clinical trials. BMJ 2001;323(7303):42-46.
35. Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA 1995;273(5):408-412.
36. Wood L, Egger M, Gluud LL, Schulz KF, Juni P, Altman DG, Gluud C, Martin RM, Wood AJ, Sterne JA. Empirical evidence of bias in treatment effect estimates in controlled trials with different interventions and outcomes: meta-epidemiological study. BMJ 2008;336(7644):601-605.
37. Hrobjartsson A, Thomsen AS, Emanuelsson F, Tendal B, Hilden J, Boutron I, Ravaud P, Brorson S. Observer bias in randomised clinical trials with binary outcomes: systematic review of trials with both blinded and nonblinded outcome assessors. BMJ 2012;344:e1119.
38. Chalmers I, Matthews R. What are the implications of optimism bias in clinical research? Lancet 2006;367(9509):449-450.
39. Thompson S, Ekelund U, Jebb S, Lindroos AK, Mander A, Sharp S, Turner R, Wilks D. A proposed method of bias adjustment for meta-analyses of published observational studies. Int J Epidemiol 2011;40(3):765-777.
40. Turner RM, Spiegelhalter DJ, Smith GC, Thompson SG. Bias modelling in evidence synthesis. J R Stat Soc A 2009;172(1):21-47.
41. Welton NJ, Ades AE, Carlin JB, Altman DG, Sterne JA. Models for potentially biased evidence in meta-analysis using empirically based priors. J R Stat Soc A 2009;172(Part 1):119-136.
42. Xiong T, Parekh-Burke S, Loke YK, Abdelhamid A, Sutton AJ, Eastwood AJ, Holland R, Chen YF, Walsh T, Glenny AM, et al. Assessment of trial similarity and evidence consistency for indirect treatment comparison: an empirical investigation. J Clin Epidemiol 2012. In press.
Tables
Simulation input parameters

Parameter                          | Values
-----------------------------------|------------------------------------
Number of studies                  | 3×40; 3×20; 3×10; 3×5; 3×1; 5/1/5
Number of patients per study       | 2×100
Between-trial heterogeneity: τ^{2} | 0.00; 0.05; 0.10; 0.15
Treatment effect: log OR, θ_{12}   | log(0.8)
Treatment effect: log OR, θ_{13}   | log(0.8); log(0.6)
Bias: ROR_{12}                     | 0.00; 0.80
Bias: ROR_{13}                     | 0.00; 0.80
Bias: ROR_{23}                     | 0.00; 0.80
Baseline risk: P_{1}               | 10%; 20%

(Note: these input values could be combined differently for a large number of possible simulation scenarios.)
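For orientation, a fully crossed design over the multi-valued rows of the table would give 6 × 4 × 2 × 2 × 2 × 2 × 2 = 768 cells; as the note says, the authors combined the inputs selectively rather than necessarily running every cell. A hypothetical enumeration sketch:

```python
from itertools import product
from math import log

# Values copied from the table above; the single-valued rows
# (theta_12 and patients per study) do not multiply the count.
n_studies = ["3x40", "3x20", "3x10", "3x5", "3x1", "5/1/5"]
tau2      = [0.00, 0.05, 0.10, 0.15]
theta_13  = [log(0.8), log(0.6)]
ror_12    = [0.00, 0.80]
ror_13    = [0.00, 0.80]
ror_23    = [0.00, 0.80]
p1        = [0.10, 0.20]

scenarios = list(product(n_studies, tau2, theta_13, ror_12, ror_13, ror_23, p1))
print(len(scenarios))  # 768 possible combinations
```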
Impact of simulated biases on the results of different comparison methods

Comparison method                          | Trials not biased | All trials similarly biased | One set of AIC trials biased | DC trials biased
-------------------------------------------|-------------------|-----------------------------|------------------------------|------------------
Direct comparison (DTC)                    | Not biased        | Fully biased                | Not biased                   | Fully biased
Indirect comparison (AITC)                 | Not biased        | Not biased                  | Fully biased                 | Not biased
Consistency frequentist MTC                | Not biased        | Moderately biased           | Moderately biased            | Moderately biased
Consistency Bayesian MTC                   | Not biased        | Moderately biased           | Moderately biased            | Moderately biased
Inconsistency Bayesian meta-analysis       | Not biased        | Fully biased                | Not biased                   | Fully biased
Random inconsistency Bayesian MTC (RIBMTC) | Not biased        | Fully biased                | Not biased                   | Fully biased

(Note: "Fully biased" means the bias equals the bias in the trials; "Moderately biased" results from combining a biased direct estimate with an unbiased indirect estimate, or an unbiased direct estimate with a biased indirect estimate.)
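The "Moderately biased" entries can be made concrete with inverse-variance weighting: when a biased direct estimate is pooled with an unbiased indirect estimate of larger variance, the mixed estimate inherits the direct bias scaled by the direct estimate's weight. A minimal fixed-effect sketch with assumed variances (not the study's actual MTC models):

```python
from math import log

def fixed_effect_pool(d1, v1, d2, v2):
    """Inverse-variance weighted combination of two independent estimates."""
    w1, w2 = 1.0 / v1, 1.0 / v2
    return (w1 * d1 + w2 * d2) / (w1 + w2)

truth = 0.0
bias = log(0.8)                      # direct estimate biased by ROR = 0.8
v_direct, v_indirect = 0.04, 0.08    # indirect variance is larger (sum of two)

mixed = fixed_effect_pool(truth + bias, v_direct, truth, v_indirect)
weight_direct = (1 / v_direct) / (1 / v_direct + 1 / v_indirect)  # = 2/3
print(f"bias of mixed estimate = {mixed - truth:.4f} "
      f"({weight_direct:.0%} of the direct bias {bias:.4f})")
```

Because the direct estimate usually carries more weight, the mixed estimate sits between "Not biased" and "Fully biased", exactly the intermediate pattern shown for the consistency MTC rows above.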
Keywords: Indirect comparison, Mixed treatment comparison, Network meta-analysis, Inconsistency, Bias, Type I error, Statistical power, Simulation evaluation.