Relationship between financial speculation and food prices or price volatility: applying the principles of evidence-based medicine to current debates in Germany

There is an unresolved debate about the potential effects of financial speculation on food prices and price volatility. Germany's largest financial institution and leading global investment bank recently decided to continue investing in agricultural commodities, stating that there is little empirical evidence to support the notion that the growth of agricultural-based financial products has caused price increases or volatility. The statement is supported by a recently published literature review, which concludes that financial speculation does not have an adverse effect on the functioning of the agricultural commodities market. As public health professionals concerned with global food insecurity, we have appraised the methodological quality of the review using a validated and reliable appraisal tool. The appraisal revealed major shortcomings in the methodological quality of the review. These were particularly related to a lack of transparency in the search strategy and in the selection and presentation of studies and findings; the neglect of the possibility of publication bias; and a lack of objective or rigorous criteria for assessing the scientific quality of included studies and for formulating conclusions. Based on the results of our appraisal, we conclude that it is not justified to reject the hypothesis that financial speculation might have adverse effects on food prices/price volatility. We hope to initiate reflections about scientific standards beyond the boundaries of disciplines and call for high quality, rigorous systematic reviews on the effects of financial speculation on food prices or price volatility.


Methodological aspects of the appraisal

General aspects
The appraisal team consisted of seven scientists with a background in public health, medicine and/or nutrition. The initial appraisals of all authors fed into the final analysis without any 'post hoc' revisions arising from face-to-face or telephone discussions, in line with the usual AMSTAR procedures.
Where indicators left room for interpretation, the members of the appraisal team provided information about the reasoning behind their choice of response options. All initial assessments were sent electronically to the first author (KB), who transferred the ratings into a spreadsheet for further analysis in Stata® 12.1 and SPSS 16.0. Where members of the appraisal team made ambiguous judgements on an AMSTAR indicator (e.g. by ticking both 'yes' and 'can't answer' because some essential aspects were reported but others were not), the lead author (KB) chose the judgement in favour of Will et al. (this occurred only once).
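The tie-breaking rule applied to ambiguous judgements can be sketched as follows. This is an illustrative reconstruction only: the function name, the set of response options, and the fallback ordering for other ambiguous combinations are our own assumptions, not part of the original analysis.

```python
# Illustrative sketch of the tie-breaking rule described above: when a
# reviewer ticked more than one option on an AMSTAR indicator, any tick
# of 'yes' was kept, i.e. the judgement in favour of the appraised review.
# Option labels and the fallback ordering are hypothetical.

def resolve_rating(ticked):
    """Collapse the set of options a reviewer ticked for one indicator
    into a single rating. `ticked` is a subset of
    {'yes', 'no', "can't answer", 'not applicable'}."""
    if len(ticked) == 1:
        return next(iter(ticked))
    # Ambiguous judgement: resolve in favour of the appraised review.
    if 'yes' in ticked:
        return 'yes'
    # Assumed fallback for other ambiguous combinations: take the least
    # unfavourable remaining option.
    for option in ("can't answer", 'not applicable', 'no'):
        if option in ticked:
            return option

print(resolve_rating({'yes', "can't answer"}))  # → yes
print(resolve_rating({'no'}))                   # → no
```

Resolving ambiguity in favour of the appraised review is the conservative choice here: it can only make the review's AMSTAR score look better, never worse, so the shortcomings reported in the appraisal cannot be attributed to unfavourable tie-breaking.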

Application of AMSTAR indicators
While some indicators were more or less self-explanatory (e.g. AMSTAR2-6) and easily transferable to the review in question, others left room for interpretation. We therefore provide a brief narrative account of the rationale behind some of the ratings.
For example, the negative ratings for AMSTAR7 (Figure 1, main text) were not due to a lack of 'randomized, double-blind, placebo-controlled studies' (as listed in the AMSTAR notes).
Such criteria are applicable neither to most economic studies nor to observational studies in medicine or public health. Rather, the negative rating was due to the general absence of any 'a priori' method indicating, transparently and replicably, how the quality of the included studies was assessed. The 'ad hoc' assessment of the quality of included studies in Will et al. is prone to bias. In our view, this is reflected in the unbalanced discussion of the strengths and weaknesses of included studies, which gives more weight to studies showing no effects while discrediting the findings of studies that provided evidence for an effect.
Consequently, AMSTAR8 was also rated negatively, since this indicator (according to the AMSTAR online notes) should not score 'yes' in the absence of a transparent method for quality assessment (i.e. if AMSTAR7 scores 'no', AMSTAR8 should not score 'yes'). One reviewer made an exception, allowing AMSTAR8 to be rated 'yes' on the grounds that standards for assessing and weighing the 'quality' of economic studies may differ.
The largest discrepancy in ratings was found for AMSTAR9 (see Figure 1, main text). The majority of reviewers rated this indicator 'not applicable' because they felt that judging whether or not the study findings could have been pooled lay beyond their expertise. A smaller proportion rated the item negatively, mainly because the review makes no reference to heterogeneity among the included studies (see the online AMSTAR note: "Indicate 'yes' if they mention or describe heterogeneity, i.e., if they explain that they cannot pool because of heterogeneity/variability…"). Because of these divergent judgements, we excluded AMSTAR9 from the analysis (see main text).
Finally, AMSTAR10, which refers to the possibility of publication bias, received unanimously negative ratings. These were due to the absence of any reference to the possibility of publication or selection bias, which is particularly striking given the review's narrow focus on publications from a two-year period.