Martini A., “How counterfactuals got lost on the way to Brussels”, Proceedings of the Symposium “Policy and program evaluation in Europe: cultures and prospects”, Strasbourg, July 3-4, 2008.
In this paper we address one of the issues raised by the Call for Papers: “Are European Commission evaluation practices giving birth to a European standardisation?” We are mainly concerned with the approach adopted by the European Commission (EC) for evaluating the impact of the Structural Funds. The EC evaluation guidelines largely ignore the counterfactual methods that the social science community has developed to deal with issues of causal attribution. Over the last two decades, counterfactual analysis has become the standard approach at most research institutions and international organizations, with the notable exception of the EC. We offer two main arguments to support the claim that the EC standard approach cannot deal satisfactorily with the estimation of impacts. First, the EC evaluation guidelines widely recommend the use of impact indicators: we contend that indicators alone neither identify nor estimate any impact in a meaningful way. Only a properly conducted counterfactual analysis allows the quantification of impacts, provided that suitable data are available and some (often stringent) conditions are met. Second, we argue that the emphasis on indicators is a symptom of an overriding concern with accountability for progress toward objectives, which is different from the estimation of causal impacts. We make the case for a partial shift of attention, away from measuring progress toward objectives and in favour of learning “what works”; that is, gathering evidence on whether the Structural Funds actually produce the changes they hold as objectives.