Adjustment for reporting bias in network meta-analysis of antidepressant trials.
MedLine Citation:

PMID: 23016799 Owner: NLM Status: MEDLINE 
Abstract/OtherAbstract:

BACKGROUND: Network meta-analysis (NMA), a generalization of conventional MA, allows for assessing the relative effectiveness of multiple interventions. Reporting bias is a major threat to the validity of MA and NMA. Numerous methods are available to assess the robustness of MA results to reporting bias. We aimed to extend such methods to NMA. METHODS: We introduced 2 adjustment models for Bayesian NMA. First, we extended a meta-regression model that allows the effect size to depend on its standard error. Second, we used a selection model that estimates the propensity of trial results being published; trials with lower propensity are weighted up in the NMA model. Both models rely on the assumption that biases are exchangeable across the network. We applied the models to 2 networks of placebo-controlled trials of 12 antidepressants, with 74 trials in the US Food and Drug Administration (FDA) database but only 51 with published results. NMA and adjustment models were used to estimate the effects of the 12 drugs relative to placebo, the 66 effect sizes for all possible pairwise comparisons between drugs, the probabilities of being the best drug, and the ranking of drugs. We compared the results from the 2 adjustment models applied to published data with those from NMAs of published data and NMAs of FDA data, the latter considered as representing the totality of the data. RESULTS: Both adjustment models showed reduced estimated effects for the 12 drugs relative to placebo as compared with the NMA of published data. Pairwise effect sizes between drugs, probabilities of being the best drug, and ranking of drugs were modified. Estimated drug effects relative to placebo from both adjustment models were corrected (i.e., similar to those from the NMA of FDA data) for some drugs but not others, which resulted in differences in pairwise effect sizes between drugs and in ranking. 
CONCLUSIONS: In this case study, adjustment models showed that the NMA of published data was not robust to reporting bias and provided estimates closer to those of the NMA of FDA data, although not optimal. The validity of such methods depends on the number of trials in the network and on the assumption that conventional MAs in the network share a common mean bias mechanism. 
Authors:

Ludovic Trinquart; Gilles Chatellier; Philippe Ravaud 
Publication Detail:

Type: Journal Article; Research Support, Non-U.S. Gov't Date: 2012-09-27 
Journal Detail:

Title: BMC medical research methodology Volume: 12 ISSN: 1471-2288 ISO Abbreviation: BMC Med Res Methodol Publication Date: 2012 
Date Detail:

Created Date: 2013-01-07 Completed Date: 2013-06-20 Revised Date: 2013-07-11 
Medline Journal Info:

Nlm Unique ID: 100968545 Medline TA: BMC Med Res Methodol Country: England 
Other Details:

Languages: eng Pagination: 150 Citation Subset: IM 
Affiliation:

Centre Cochrane Français, Paris, France. ludovic.trinquart@htd.aphp.fr 
MeSH Terms  
Descriptor/Qualifier:

Antidepressive Agents/therapeutic use*; Bias (Epidemiology); Case-Control Studies; Clinical Trials as Topic; Humans; Meta-Analysis as Topic*; Publication Bias*; Regression Analysis; Research Design* 
Chemical  
Reg. No./Substance:

0/Antidepressive Agents 
Comments/Corrections 
Full Text  
Journal Information Journal ID (nlm-ta): BMC Med Res Methodol Journal ID (iso-abbrev): BMC Med Res Methodol ISSN: 1471-2288 Publisher: BioMed Central 
Article Information Copyright © 2012 Trinquart et al.; licensee BioMed Central Ltd. (open access) Received: 12 June 2012 Accepted: 19 September 2012 Collection publication date: 2012 Electronic publication date: 27 September 2012 Volume: 12 First Page: 150 Last Page: 150 PubMed Id: 23016799 ID: 3537713 Publisher Id: 1471-2288-12-150 DOI: 10.1186/1471-2288-12-150 
Adjustment for reporting bias in network meta-analysis of antidepressant trials  
Ludovic Trinquart (1,2,3,4,5)  Email: ludovic.trinquart@htd.aphp.fr 
Gilles Chatellier (2,5,6)  Email: gilles.chatellier@egp.aphp.fr 
Philippe Ravaud (1,2,3,4)  Email: philippe.ravaud@htd.aphp.fr 
1Centre Cochrane Français, Paris, France 

2Université Paris Descartes - Sorbonne Paris Cité, Paris, France 

3INSERM U738, Paris, France 

4Assistance Publique-Hôpitaux de Paris, Hôpital Hôtel-Dieu, Centre d'Epidémiologie Clinique, Paris, France 

5INSERM CIE 4, Paris, France 

6Assistance Publique-Hôpitaux de Paris, Hôpital Européen Georges Pompidou, Unité de Recherche Clinique, Paris, France 
Network meta-analyses (NMAs) are increasingly being used to identify the best intervention among the existing interventions for a specific condition. The essence of the approach is that intervention A is compared with a comparator C, then intervention B with C, and adjusted indirect comparison allows for comparing A and B despite the lack of any head-to-head randomized trial of A versus B. An NMA, or multiple-treatments meta-analysis (MA), synthesizes comparative evidence for multiple interventions by combining direct and indirect comparisons [1-3]. The purpose is to estimate effect sizes for all possible pairwise comparisons of interventions, even when no trial is available for some comparisons.
Reporting bias is a major threat to the validity of results of conventional systematic reviews or MAs [4,5]. Accounting for reporting biases in NMA is challenging, because unequal availability of findings across the network of evidence may jeopardize NMA validity [6,7]. We previously empirically assessed the impact of reporting bias on the results of NMAs of antidepressant trials and showed that it may bias estimates of treatment efficacy [8].
Numerous methods have been used as sensitivity analyses to assess the robustness of conventional MAs to publication bias and related small-study effects [9-20]. Modeling methods include regression-based approaches and selection models. We extended these approaches to NMAs in the Bayesian framework.
First, we extended a meta-regression model of the effect size on its standard error, recently described for MAs [21,22]. In this approach, the regression slope reflects the magnitude of the association between effect size and precision (i.e., the "small-study effect"), and the intercept provides an adjusted pooled effect size (i.e., the predicted effect size of a trial with infinite precision). Second, we introduced a selection model, which models the probability of a trial being selected; this probability is taken into account with inverse weighting in the NMA. Both adjustment models rely on the assumption that biases are exchangeable across the network, i.e., biases, if present, operate in a similar way in trials across the network. Third, we applied these adjustment models to datasets created from US Food and Drug Administration (FDA) reviews of antidepressant trials and from their matching publications. These datasets were shown to differ because of reporting bias [23]. We compared the results of the adjustment models applied to published data with those of standard NMA for published and for FDA data, the latter considered the reference standard.
A previous review by Turner et al. assessed the selective publication of antidepressant trials [23]. The authors identified all randomized placebo-controlled trials of 12 antidepressant drugs approved by the FDA and then the publications matching these trials, by searching literature databases and contacting trial sponsors. From the FDA database, the authors identified 74 trials, among which results for 23 trials were unpublished. The proportion of trials with unpublished results varied across drugs, from 0% for fluoxetine and paroxetine CR to 60% and 67% for sertraline and bupropion, respectively (Additional file 1: Appendix 1). Entire trials remained unpublished, depending on the nature of their results. Moreover, in some journal articles, specific analyses were reported selectively, and effect sizes differed from those in FDA reviews. The outcome was the change from baseline to follow-up in depression severity score. The measure of effect was a standardized mean difference (SMD). Separate MAs of FDA data showed decreased efficacy for all drugs as compared with MAs of published data, the decrease in effect size ranging from 10% and 11% for fluoxetine and paroxetine CR to 39% and 41% for mirtazapine and nefazodone, respectively (Additional file 1: Appendix 1). Figure 1 shows the funnel plots of published data. Visual inspection does not suggest stronger treatment benefit in small trials (i.e., funnel plot asymmetry) for any of the 12 comparisons of each drug and placebo.
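The SMD effect measure above can be illustrated with a minimal sketch (hypothetical numbers, not data from the FDA reviews), assuming a simple Cohen's-d-style pooled standard deviation; the actual reviews may use variant formulas (e.g., with a small-sample correction):

```python
import math

def smd(mean_change_drug, mean_change_placebo, sd_drug, sd_placebo, n_drug, n_placebo):
    """Standardized mean difference of change scores, pooled-SD version."""
    pooled_sd = math.sqrt(
        ((n_drug - 1) * sd_drug ** 2 + (n_placebo - 1) * sd_placebo ** 2)
        / (n_drug + n_placebo - 2)
    )
    return (mean_change_drug - mean_change_placebo) / pooled_sd

# Hypothetical trial: drug arm drops 11 points, placebo arm 8, SD 8 in both arms.
d = smd(-11.0, -8.0, 8.0, 8.0, 100, 100)  # -3.0 / 8.0 = -0.375
```

With this coding a negative SMD favors the drug (larger score decrease); the paper instead codes effect sizes so that positive values indicate superiority.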
The standard model for NMA was formalized by Lu and Ades [2,24,25]. We assume that each trial i assessed treatments j and k among the T interventions in the network. Each trial provided an estimated intervention effect size y_{ijk} of j over k and its variance v_{ijk}. We assume that y_{ijk} > 0 indicates superiority of j over k. Assuming a normal likelihood and a random-effects model, y_{ijk} ~ N(θ_{ijk}, v_{ijk}) and θ_{ijk} ~ N(Θ_{jk}, τ^{2}), where θ_{ijk} is the true effect underlying each randomized comparison between treatments j and k and Θ_{jk} is the mean of the random-effects effect sizes over randomized comparisons between treatments j and k. The model assumes homogeneous variance (i.e., τ_{jk}^{2} = τ^{2}). This assumption can be relaxed [2,26]. The model also assumes consistency between direct and indirect evidence: if we consider treatment b as the overall network baseline treatment, the treatment effects of j, k, etc. relative to treatment b, Θ_{jb}, Θ_{kb}, etc., are considered basic parameters, and the remaining contrasts, the functional parameters, are derived from the consistency equations Θ_{jk} = Θ_{jb} − Θ_{kb} for every j, k ≠ b.
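The consistency equations can be sketched directly (hypothetical basic parameters, not the paper's estimates): once the effects versus the baseline b are known, every pairwise contrast follows.

```python
def functional_parameters(basic):
    """basic maps treatment -> Theta_jb (effect vs. baseline b).
    Returns the functional parameters Theta_jk = Theta_jb - Theta_kb
    for every ordered pair j != k (the consistency equations)."""
    return {(j, k): basic[j] - basic[k]
            for j in basic for k in basic if j != k}

# Three hypothetical drugs with SMDs vs. placebo:
basic = {"drug_A": 0.35, "drug_B": 0.30, "drug_C": 0.24}
contrasts = functional_parameters(basic)
# drug_A vs. drug_C is derived indirectly: 0.35 - 0.24 = 0.11
```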
We used a network meta-regression model extending a regression-based approach for adjusting for small-study effects in conventional MAs [21,22,27-29]. This regression-based approach takes into account a possible small-study effect by allowing the effect size to depend on a measure of its precision. Here, we assume a linear relationship between the effect size and its standard error, and the model involves extrapolation beyond the observed data to a hypothetical study of infinite precision. The extended model for NMA is as follows:
y_{ijk} ~ N(γ_{ijk}, v_{ijk}) 
γ_{ijk} = θ_{ijk} + I_{ijk} · β_{jk} · √v_{ijk} 
β_{jk} ~ N(β, σ^{2}) 
θ_{ijk} ~ N(Θ_{jk}, τ^{2}) 
Θ_{jk} = Θ_{jb} − Θ_{kb} for every j, k ≠ b
Figure A in Additional file 2 shows a graphical representation of the model. In the regression equation, θ_{ijk} is the treatment effect adjusted for small-study effects underlying each randomized comparison between treatments j and k; β_{jk} represents the potential small-study effect (i.e., the slope associated with funnel plot asymmetry for the randomized comparisons between treatments j and k). The model assumes that these comparison-specific regression slopes follow a common normal distribution, with mean slope β and common between-slopes variance σ^{2}. This is equivalent to the assumption that comparison-specific small-study biases are exchangeable within the network. Since we assumed that y_{ijk} > 0 indicates superiority of j over k, β > 0 would mean an overall tendency for a small-study effect (i.e., treatment contrasts tend to be overestimated in smaller trials). Finally, I_{ijk} is equal to 1 if a small-study effect is expected to favor treatment j over k, equal to −1 if a small-study effect is expected to favor treatment k over j, and equal to 0 when one has no reason to believe that there is bias in either direction (e.g., for equally novel active vs. active treatments). In trials comparing active and inactive treatments (e.g., placebo, no intervention), we can reasonably expect the active treatment to be always favored by small-study bias.
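A minimal numeric sketch of the regression equation above (the slope and effect values are illustrative, not fitted estimates): the expected observed contrast equals the adjusted effect θ plus the small-study term I·β·se.

```python
import math

def expected_observed_effect(theta, beta, variance, direction=1):
    """Mean of y_ijk under the meta-regression model:
    theta_ijk + I_ijk * beta_jk * se, with se = sqrt(v_ijk)."""
    return theta + direction * beta * math.sqrt(variance)

# With adjusted effect 0.20 and slope beta = 1.5 (I = 1, active vs. placebo),
# a small trial (se = 0.2) is expected to report a larger effect than a
# large trial (se = 0.02):
y_small = expected_observed_effect(0.20, 1.5, 0.2 ** 2)   # 0.20 + 0.30 = 0.50
y_large = expected_observed_effect(0.20, 1.5, 0.02 ** 2)  # 0.20 + 0.03 = 0.23
```

The intercept of this regression (the value at se = 0) is what the adjusted model reports as the bias-adjusted effect.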
We use a model that adjusts for publication bias through a weight function representing the process of selection. The model includes an effect size model (i.e., the standard NMA model, which specifies what the distributions of the effect size estimates would be with no selection) and a selection model that specifies how these effect size distributions are modified by the process of selection [14,30]. We assume that the probability of selection is a decreasing function of the standard error of the effect size. We adopt an approach based on a logistic selection model, as previously used in conventional MAs [18,31].
y_{ijk} ~ N(γ_{ijk}, v_{ijk}) 
γ_{ijk} = θ_{ijk} / w_{i} 
logit(w_{i}) = β_{0jk} + β_{1jk} · I_{ijk} · √v_{ijk} 
β_{0jk} ~ N(β_{0}, σ_{0}^{2}) and β_{1jk} ~ N(β_{1}, σ_{1}^{2})
θ_{ijk} ~ N(Θ_{jk}, τ^{2}) 
Θ_{jk} = Θ_{jb} − Θ_{kb} for every j, k ≠ b
Figure B in Additional file 2 shows a graphical representation of the model. In the logistic regression equation, w_{i} represents the propensity of the trial results to be published, β_{0jk} sets the overall probability of observing a randomized comparison between treatments j and k, and β_{1jk} controls how fast this probability evolves as the standard error increases. We expect β_{1jk} to be negative, so that trial results with larger standard errors have a lower propensity to be published. The model assumes exchangeability of the β_{0jk} and β_{1jk} coefficients within the network. By setting γ_{ijk} = θ_{ijk}/w_{i}, we define a simple scheme that weights up trial results with a lower propensity of being published so that they have a disproportionate influence in the NMA model. θ_{ijk} is the treatment contrast corrected for the selection process underlying each randomized comparison between treatments j and k. Finally, I_{ijk} is defined in the same way as in the preceding section.
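The weighting scheme can be sketched numerically (the coefficients below are illustrative, not fitted values): a logistic weight function maps the standard error to a publication propensity, and contrasts are divided by that propensity.

```python
import math

def publication_propensity(se, beta0, beta1, direction=1):
    """w_i from logit(w_i) = beta0_jk + beta1_jk * I_ijk * se."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * direction * se)))

def weighted_contrast(theta, w):
    """gamma_ijk = theta_ijk / w_i: low propensity -> larger influence."""
    return theta / w

# Illustrative coefficients: beta0 = 2, beta1 = -10 (negative, as expected).
w_precise = publication_propensity(se=0.05, beta0=2.0, beta1=-10.0)    # ~0.82
w_imprecise = publication_propensity(se=0.30, beta0=2.0, beta1=-10.0)  # ~0.27
scale_precise = weighted_contrast(1.0, w_precise)
scale_imprecise = weighted_contrast(1.0, w_imprecise)
```

The imprecise trial, presumed under-represented among published results, is up-weighted more strongly.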
We estimated 4 models: the standard NMA model applied to published data, the 2 adjustment models applied to published data, and the standard NMA model applied to FDA data. In each case, model estimation involved Markov chain Monte Carlo methods with Gibbs sampling. Placebo was chosen as the overall baseline treatment against which all other treatments were compared. Consequently, the 12 effects of drugs relative to placebo are the basic parameters. For 2 treatments j and k, SMD_{jk} > 0 indicates that j is superior to k. In both the meta-regression and selection models, we assumed that the active treatments would always be favored by small-study bias as compared with placebo; consequently, I_{ijk} is always equal to 1.
In the standard NMA model, we defined prior distributions for the basic parameters Θ_{jb} and the common variance τ^{2}: Θ_{jb} ~ N(0, 100^{2}) and τ ~ Uniform(0, 10). In the meta-regression model, we further chose vague priors for the mean slope β and the common between-slopes variance σ^{2}: β ~ N(0, 100^{2}) and σ ~ Uniform(0, 10). In the selection model, we chose weakly informative priors for the central location and dispersion parameters (β_{0}, σ_{0}^{2}) and (β_{1}, σ_{1}^{2}). We considered p_{min} and p_{max}, the probabilities of publication when the standard error takes its minimum and maximum values across the network of published data, and specified beta priors for these probabilities [32]. The latter was achieved indirectly by specifying prior guesses for the median and the 5th or 95th percentile [33]. For trials with standard error equal to the minimum observed value, we assumed that the chances of p_{min} being < 50% were 5% and the chances of p_{min} being < 80% were 50%. For trials with standard error equal to the maximum observed value, our guess was that the chances of p_{max} being < 40% were 50% and the chances of p_{max} being < 70% were 95%. We discuss these choices further in the Discussion. From this information, we determined Beta(7.52, 2.63) and Beta(3.56, 4.84) as prior distributions for p_{min} and p_{max}, respectively. Finally, we expressed β_{0} and β_{1} in terms of p_{min} and p_{max} and chose uniform distributions in the range (0, 2) for the standard deviations σ_{0} and σ_{1}. For each analysis, we constructed posterior distributions from 2 chains of 500,000 simulations each, after convergence was achieved from an initial 500,000 simulations per chain (burn-in). Analysis involved use of WinBUGS v1.4.3 (Imperial College and MRC, London, UK) to estimate all Bayesian models and R v2.12.2 (R Development Core Team, Vienna, Austria) to summarize inferences and convergence. Code is reported in Additional file 1: Appendix 2.
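An elicited beta prior of this kind can be checked by simulation. A stdlib-only sketch that draws from the reported Beta(7.52, 2.63) prior for p_{min} and compares it with the prior guesses (the elicitation tool used in the paper may fit the two percentiles differently, so agreement is only approximate):

```python
import random

# Draw from the reported prior for p_min and summarize its quantiles.
random.seed(0)
draws = sorted(random.betavariate(7.52, 2.63) for _ in range(200_000))

median = draws[len(draws) // 2]
prob_below_half = sum(d < 0.5 for d in draws) / len(draws)

# Prior guesses were P(p_min < 0.5) = 5% and a median near 0.8; the
# simulated prior lands in that neighbourhood (median near 0.76,
# lower-tail probability of a few percent).
```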
We compared the results of the 2 adjustment models applied to published data with the results of the standard NMA model applied to the published data and to the FDA data, the latter considered the reference standard. First, we compared posterior means and 95% credibility intervals for the 12 basic parameters and the common variance, as well as for the 66 functional parameters (i.e., all 12 × 11/2 = 66 possible pairwise comparisons of the 12 drugs). Second, we compared the rankings of the competing treatments. We assessed the probability that each treatment was best, then second best, third best, etc. We plotted the cumulative probabilities and computed the surface under the cumulative ranking (SUCRA) line for each treatment [34]. Third, to compare the different models applied to published data, we used the posterior mean of the residual deviance and the deviance information criterion [35].
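The SUCRA summary can be sketched directly (hypothetical rank probabilities, not the paper's): it is the normalized area under the cumulative ranking curve, equal to 1 when a treatment always ranks first and 0 when it always ranks last [34].

```python
def sucra(rank_probs):
    """rank_probs[r] = P(treatment has rank r+1), summing to 1.
    SUCRA = sum of the cumulative probabilities over ranks 1..T-1,
    divided by T-1."""
    T = len(rank_probs)
    cum = 0.0
    total = 0.0
    for p in rank_probs[:-1]:  # cumulative probability at ranks 1 .. T-1
        cum += p
        total += cum
    return total / (T - 1)

# Three treatments; this one is first with prob 0.6, second 0.3, third 0.1:
s = sucra([0.6, 0.3, 0.1])  # (0.6 + 0.9) / 2 = 0.75
```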
In the meta-regression model applied to published data, the posterior mean slope β was 1.7 (95% credible interval −0.3 to 3.6), which suggests an overall tendency for a small-study effect in the network. The 12 regression slopes were similar, with posterior means ranging from 1.4 to 1.9. In the selection model applied to published data, the mean slope β_{1} was −10.0 (−18.0 to −2.5), so trials yielding larger standard errors tended overall to have a lower propensity to be published. In both models, all estimates were subject to large uncertainty (Additional file 1: Appendix 3).
Table 1 shows the estimates of the 12 basic parameters between each drug and placebo according to the 4 models. As compared with the NMA of published data, both adjustment models of published data reduced all 12 estimated drug effects relative to placebo. For the meta-regression model, the decrease in efficacy ranged from 48% for venlafaxine XR to 99% for fluoxetine. For the selection model, the decrease ranged from 13% for escitalopram to 26% for paroxetine. When considering the functional parameters (i.e., the 66 possible pairwise comparisons between drugs), we found differences between the results of the adjustment models and the standard NMA model applied to published trials (Figure 2). The median relative difference, in absolute value, between pairwise effect sizes from the regression model and the standard NMA model was 57.3% (25th-75th percentile 30.3%-97.6%); the median relative difference between the selection model and the standard NMA model was 29.2% (15.1%-46.1%).
Figure 3 summarizes the probabilities of being the best antidepressant. Compared to the standard NMA of published data, adjustment models of published data yielded decreased probabilities of the drug being the best for paroxetine (from 41.5% to 20.7% with the regression model or 25.7% with the selection model) and mirtazapine (from 30.3% to 15.7% or 21.9%). They yielded increased probabilities of the drug being the best for venlafaxine (from 7.9% to 10.6% or 12.8%) and venlafaxine XR (from 14.1% to 21.0% or 23.5%).
Figure 4 shows cumulative probability plots and SUCRAs. For the standard NMA of published data, paroxetine and mirtazapine tied for first place and venlafaxine XR and venlafaxine tied for third. The selection model applied to published data yielded a slightly different ranking, with paroxetine, mirtazapine and venlafaxine XR tying for first and venlafaxine fourth. In the regression model applied to published data, venlafaxine XR was first, venlafaxine and paroxetine tied for second and mirtazapine was fifth.
In the adjustment models applied to published data, between-trial heterogeneity and model fit were comparable to those obtained with the standard NMA of published data (Tables 1 and 2).
The estimated drug effects relative to placebo from the regression and selection models were similar to those from the NMA of FDA data for some drugs (Table 1). There were differences when considering the 66 possible pairwise comparisons between drugs (Figure 5). Results also differed across models regarding the probability of being the best drug and the ranking of drugs. In the standard NMA of FDA data, the probability of being the best drug was 7.3% for mirtazapine, 33.9% for paroxetine, 19.3% for venlafaxine, and 25.7% for venlafaxine XR (Figure 3); paroxetine ranked first, and venlafaxine and venlafaxine XR tied for second (Figure 4).
We extended two adjustment methods for reporting bias from MAs to NMAs. The first method combined NMA and meta-regression models, with effect sizes regressed against their precision. The second combined the NMA model with a logistic selection model estimating the probability that a trial was published or selected in the network. The former method basically adjusts for funnel plot asymmetry or small-study effects, which may arise from causes other than publication bias. The latter adjusts for publication bias (i.e., the suppression of an entire trial depending on its results). The two models borrow strength from other trials in the network under the assumption that biases operate in a similar way in trials across the domain.
In a specific network of placebo-controlled trials of antidepressants, based on data previously described and published by Turner et al., comparing the results of the adjustment models applied to published data with those of the standard NMA model applied to published data allowed for assessing the robustness of efficacy estimates and ranking to publication bias or related small-study effects. Both models showed a decrease in all basic parameters (i.e., the 12 effect sizes of drugs relative to placebo). The 66 contrasts for all possible pairwise comparisons between drugs, the probabilities of being the best drug and the ranking were modified as well. The NMA of published data was not robust to publication bias and related small-study effects.
This specific dataset offered the opportunity to perform NMAs on both published and FDA data. The latter may be considered "an unbiased (but not the complete) body of evidence" for placebo-controlled trials of antidepressants [28]. The comparison of the results of the 2 models applied to published data with those of the standard NMA model applied to FDA data showed that the effect sizes of drugs relative to placebo were corrected for some but not all drugs. This observation led to differences in the 66 possible pairwise comparisons between drugs, the probabilities of being the best drug and the ranking. It suggests that the 2 models should not be considered optimal; that is, the objective is not to produce definitive estimates adjusted for publication bias and related small-study effects but rather to assess the robustness of results to the assumption of bias.
Similar approaches have been used by other authors. Network meta-regression models fitted within a Bayesian framework were previously developed to assess the impact of novelty bias and of risk of bias within trials [36,37]. Network meta-regression to assess the impact of small-study effects was specifically used by Dias et al. in a reanalysis of a network of published head-to-head randomized trials of selective serotonin reuptake inhibitors [38]. In line with the regression-based approach of Moreno et al. in conventional MA, the authors introduced a measure of study size as a regression variable within the NMA model and identified a mean bias in pairwise effect sizes. More recently, Moreno et al. used a similar approach to adjust for small-study effects in several conventional MAs of similar interventions and outcomes and illustrated their method using the dataset of Turner et al. [39]. Our approach differed in that we extended this meta-regression approach to NMAs. We used the standard error of the treatment effect estimate as the regressor. As well, we specified an additive between-trial variance rather than a multiplicative overdispersion parameter. With the latter, the estimated multiplicative parameter may be < 1, which implies less heterogeneity than would be expected by chance alone. Selection model approaches have also been considered recently. Chootrakool et al. introduced an approximated normal model based on empirical log-odds ratios for NMAs within a frequentist framework and applied Copas selection models to some groups of trials in the network, selected according to funnel plot asymmetry [40]. Mavridis et al. presented a Bayesian implementation of the Copas selection model extended to NMA and applied their method to the network of Turner et al. [41]. In the Copas selection model, the selection probability depends on both the estimates of the treatment effects and their standard errors. In the extension to NMA, an extra correlation parameter ρ, assumed equal for all comparisons, needs to be estimated. When applied to the published data of the network of Turner et al., the selection model we propose and the treatment-specific selection model of Mavridis et al. yielded close results.
The 2 adjustment models rely on the assumption of exchangeability of selection processes across the network; that is, biases, if present, operate in a similar way in trials across the network. In this case study, all studies were, by construction, industry-sponsored, placebo-controlled trials registered with the FDA, and for all drugs, results of entire studies remained unreported depending on the results [23]. Thus, the assumption of exchangeability of selection processes is plausible. More generally, if we have no information to distinguish different reporting bias mechanisms across the network, an exchangeable prior distribution is plausible: "ignorance implies exchangeability" [42,43]. However, the assumption may not be tenable in other contexts, in which reporting biases may affect the network in an unbalanced way. They may operate differently in placebo-controlled and head-to-head trials [44], in older and more recent trials (because of trial registries), and for drug and nondrug interventions [7]. In more complex networks involving head-to-head trials, the 2 adjustment models could be generalized to allow the expected publication bias or small-study bias in active-active trials to differ from the expected bias in trials comparing active and inactive treatments [36]. In head-to-head trials, the direction of bias is uncertain, but assumptions in defining I_{ijk} could be that the sponsored treatment is favored (sponsorship bias) [45,46] or that the newest treatment is favored (optimism bias) [37,47,48]. If treatment j is the drug provided by the pharmaceutical company that sponsored the trial and treatment k is not, I_{ijk} would be equal to 1. Likewise, I_{ijk} would be equal to 1 if treatment j is newer than treatment k. However, disentangling the sources of bias operating on direct and indirect evidence would be difficult, especially if reporting bias and inconsistency are intertwined or if the assumed bias directions conflict within a loop.
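One hypothetical way to encode such direction assumptions for I_{ijk} (the function and its rules are illustrative, not the paper's implementation):

```python
def bias_direction(j, k, placebo="placebo", sponsored=None):
    """I_ijk: +1 if small-study bias is expected to favor j over k,
    -1 if expected to favor k over j, 0 if no direction is assumed."""
    if k == placebo:       # active vs. inactive: active arm assumed favored
        return 1
    if j == placebo:
        return -1
    if sponsored == j:     # head-to-head: favor the sponsor's drug, if known
        return 1
    if sponsored == k:
        return -1
    return 0               # equally novel active vs. active: no assumed bias

i_active = bias_direction("paroxetine", "placebo")               # 1
i_sponsor = bias_direction("drug_A", "drug_B", sponsored="drug_B")  # -1
```

A novelty-based rule (optimism bias) could replace the sponsorship rule in the same slot; conflicting rules within a loop are exactly the difficulty noted above.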
The models we described have limitations. First, they would result in poor estimation of bias and effect sizes when the conventional MAs within the network include small numbers of trials [21]. Second, for the selection model, we specified the weight function. If the underlying assumptions (i.e., a logistic link and a probability of selection that depends on the standard error) are wrong, the estimated selection model will be wrong. However, alternative weight functions (e.g., a probit link) or conditioning (e.g., on the magnitude of the effect size) could be considered. Finally, the selection model was implemented with a weakly informative prior, which mainly suggested that the propensity for results to be published may decrease with increasing standard error. There is a risk that prior information overwhelms the observed data, especially if the number of trials is low. Although they were somewhat arbitrarily set, our priors for the selection model parameters were in line with the values in previous studies using the Copas selection model [12,49]. Different patterns of selection bias could be tested, for instance, by considering various prior modes for p_{min} and p_{max}, the probabilities of publication when the standard error takes its minimum and maximum values across the network [15].
In conclusion, addressing publication bias and related small-study effects in NMAs was feasible in this case study. Validity may depend on a sufficient number of trials in the network and on the assumption that the conventional MAs constituting the network share a common mean bias. Simulation analyses are required to determine under which conditions such adjustment models are valid. Application of such adjustment models should be replicated on more complex networks, ideally representing the totality of the data as in the dataset of Turner et al., but our results confirm that authors and readers should interpret NMAs with caution when reporting bias has not been addressed.
The authors declare that they have no competing interests.
LT provided substantial contributions to conception and design, analysis and interpretation of data, drafted the article and revised it critically for important intellectual content. GC and PR provided substantial contributions to design and interpretation of data, and revised the article critically for important intellectual content. All authors read and approved the final manuscript.
Grant support was from the French Ministry of Health Programme Hospitalier de Recherche Clinique National (PHRC 2011 MIN0163) and European Union Seventh Framework Programme (FP7 – HEALTH.2011.4.12) under grant agreement n° 285453 (http://www.openproject.eu). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
The prepublication history for this paper can be accessed here:
http://www.biomedcentral.com/14712288/12/150/prepub
Additional file 1. Appendix 1. Summary effect sizes for the 12 comparisons of each antidepressant agent and placebo. Appendix 2. WinBUGS code. Appendix 3. Estimated parameters in the adjustment models applied to published data.
Click here for additional data file (1471-2288-12-150-S1.docx)
Additional file 2
Figures. Graphical representation of the adjustment models (A) regression model and (B) selection model. A solid arrow indicates a stochastic dependence and a hollow arrow indicates a logical function.
Click here for additional data file (1471-2288-12-150-S2.ppt)
The authors thank Laura Smales (BioMedEditing, Toronto, Canada) for editing the manuscript.
Tables
Comparison of network meta-analysis (NMA)-based estimates between the 2 adjustment models applied to published data and the standard NMA model applied to US Food and Drug Administration (FDA) data and to published data

            FDA data             Published data
            Standard NMA model   Regression model   Selection model   Standard NMA model
            Mean (SD)            Mean (SD)          Mean (SD)         Mean (SD)
Θ_BUP       0.176 (0.081)        0.043 (0.256)      0.229 (0.121)     0.271 (0.139)
Θ_CIT       0.240 (0.074)        0.081 (0.171)      0.254 (0.073)     0.306 (0.076)
Θ_DUL       0.300 (0.054)        0.166 (0.190)      0.340 (0.066)     0.402 (0.058)
Θ_ESC       0.310 (0.067)        0.165 (0.193)      0.311 (0.070)     0.357 (0.068)
Θ_FLU       0.256 (0.081)        0.004 (0.160)      0.215 (0.068)     0.271 (0.074)
Θ_MIR       0.351 (0.070)        0.206 (0.331)      0.424 (0.110)     0.567 (0.092)
Θ_NEF       0.256 (0.076)        0.112 (0.260)      0.348 (0.094)     0.437 (0.094)
Θ_PAR       0.426 (0.063)        0.267 (0.346)      0.438 (0.105)     0.593 (0.078)
Θ_PAR CR    0.323 (0.101)        0.174 (0.187)      0.309 (0.083)     0.354 (0.085)
Θ_SER       0.252 (0.077)        0.210 (0.231)      0.359 (0.094)     0.419 (0.094)
Θ_VEN       0.395 (0.071)        0.199 (0.224)      0.403 (0.092)     0.504 (0.075)
Θ_VEN XR    0.398 (0.094)        0.261 (0.273)      0.423 (0.110)     0.506 (0.107)
τ           0.060 (0.037)        0.031 (0.024)      0.024 (0.019)     0.032 (0.025)

Data are posterior means and standard deviations of the basic parameters (Θ) and the between-trial heterogeneity (τ).
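Under the NMA consistency assumption, each of the 66 pairwise effect sizes between drugs is the difference of two placebo-referenced basic parameters, d_XY = Θ_Y − Θ_X. The sketch below illustrates that contrast in Python using the posterior means and SDs reported for the FDA data; it is not the authors' WinBUGS code, and treating the two posteriors as independent normals is a simplifying assumption (the joint MCMC sample would carry posterior correlations).

```python
import numpy as np

rng = np.random.default_rng(0)
n_draws = 100_000

# Hypothetical posterior draws, parameterized by the posterior means/SDs
# reported for the FDA data; independence between the two parameters is a
# simplifying assumption made for illustration only.
theta_par = rng.normal(0.426, 0.063, size=n_draws)  # paroxetine vs placebo
theta_bup = rng.normal(0.176, 0.081, size=n_draws)  # bupropion vs placebo

# Consistency equation: the drug-vs-drug contrast is the difference of the
# placebo-referenced basic parameters (mean near 0.426 - 0.176 = 0.250).
d_par_bup = theta_par - theta_bup
mean, sd = d_par_bup.mean(), d_par_bup.std()
```

In the actual Bayesian analysis the contrast would be computed draw by draw within the joint MCMC sample, so correlations between basic parameters are preserved.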
Comparison of fit and complexity between the 2 adjustment models and the standard NMA model, all applied to published data

                                           Regression model   Selection model   NMA model
Mean posterior residual deviance (D̄res)    31.4               31.5              34.4
Effective number of parameters (pD)        15.9               14.7              13.9
Deviance Information Criterion (DIC)       47.3               46.2              48.3

Lower values of D̄res indicate a better fit to the data. Lower values of the DIC indicate a better compromise between model fit and model complexity. A difference in DICs of 5 or more can be considered substantial (http://www.mrc-bsu.cam.ac.uk/bugs/winbugs/dicpage.shtml).
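The fit statistics above follow the standard DIC decomposition of Spiegelhalter et al.: DIC = D̄ + pD, where D̄ is the posterior mean deviance and pD = D̄ − D(θ̄) is the effective number of parameters. A minimal sketch of that arithmetic from generic MCMC output (the deviance inputs are hypothetical placeholders for quantities WinBUGS reports automatically):

```python
import numpy as np

def dic(deviance_samples, deviance_at_posterior_mean):
    """Return (DIC, pD) from posterior deviance draws and the deviance
    evaluated at the posterior mean of the parameters."""
    dbar = float(np.mean(deviance_samples))    # posterior mean deviance
    p_d = dbar - deviance_at_posterior_mean    # effective number of parameters
    return dbar + p_d, p_d

# Made-up deviance draws for illustration only:
dic_value, p_d = dic([33.0, 30.5, 31.2, 32.1, 30.2], 28.0)
```

The same arithmetic underlies the reported values: for the selection model, D̄res = 31.5 with pD = 14.7 gives DIC = 46.2.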
Keywords: Network meta-analysis, Publication bias, Small-study effect.