A casual survey of the 2009 index reveals some potential empirical inconsistencies. Do we really believe that North Korea is at roughly the same risk of failure as states like Haiti and Ethiopia? Are China’s Communist Party and Israel’s democracy really as vulnerable as the regimes in Kyrgyzstan, Belarus, or even Russia, for that matter? Regional comparisons are also problematic. For example, Uzbekistan, under the control of Islam Karimov’s authoritarian regime since independence, is rated at greater risk of failure than both Tajikistan, which is less than ten years removed from civil war, and Kyrgyzstan, which, less than five years removed from the Tulip Revolution, is arguably characterized by organized crime groups competing for power.
Part of the reason for these discrepancies is that “Failed States Index” is a bit of a misnomer. I suspect many casual analysts, myself included at times, have used the FSI as an indicator of state strength or weakness. In fact, Foreign Policy argues that the index measures a state’s risk of failure (loosely defined as vulnerability to collapse or conflict). The index aggregates 1-10 rankings on 12 indicators of social, economic, and political cohesion and performance. As such, the FSI is an implied explanation for why states fail, and a poor one at that.
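To make the aggregation concrete, here is a minimal sketch of the kind of additive index the FSI uses: twelve 1-10 indicator scores summed into a single composite. The indicator names and values below are hypothetical placeholders, not actual FSI data.

```python
# Hypothetical 1-10 scores for twelve social, economic, and political
# indicators (names and values are illustrative, not actual FSI figures).
indicators = {
    "demographic_pressures": 7.2,
    "refugees_idps": 6.5,
    "group_grievance": 8.1,
    "human_flight": 5.9,
    "uneven_development": 7.8,
    "economic_decline": 6.3,
    "state_delegitimization": 8.4,
    "public_services": 7.0,
    "human_rights": 8.2,
    "security_apparatus": 7.6,
    "factionalized_elites": 8.0,
    "external_intervention": 6.8,
}

# A simple unweighted sum, ranging from 12 (lowest risk) to 120 (highest).
total = sum(indicators.values())
print(f"Composite score: {total:.1f} / 120")
```

The point of the sketch is how little structure this imposes: every indicator is weighted equally, and social, economic, and political dimensions are collapsed into one number, which is exactly the conflation the next paragraph objects to.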
If your aim is to produce an explanation (or prediction) of state failure, social and economic indicators should be analytically separated from the political ones. Strong political institutions may be able to absorb the demands arising from the economic and social pressures the FSI incorporates. Accordingly, state strength must not be defined in terms of its ability to absorb these demands. A better way to explain state failure would be to establish independent measures of state capacity or efficacy (say, an index including the degree of monopoly on force, the degree of autonomy/meritocracy of the bureaucracy, the quality of public goods provision, the degree of “stateness” problems, etc.). The FSI’s political indicators actually provide a rough approximation of what such a measure should look like (arranging the rankings according to these indicators alone results in some interesting reorderings). Then, having established variation on this index, the degree to which social and economic factors explain that variation is an empirical question. Regressing the political indicators on the social and economic indicators would provide a better idea of the significance and relative weights of these factors’ effects on state failure. An alternative approach would be a survival model using the 12 indicators as independent variables. I’m sure there is no shortage of academic analyses of this type (the State Failure Project/Political Instability Task Force comes to mind), but the inability of social scientists to make their work accessible to the policy community is another matter.
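The regression the paragraph proposes can be sketched in a few lines. This is a toy illustration, not an analysis of real FSI data: the country scores are synthetic, and the “political capacity” outcome is generated so that the regression has known coefficients to recover.

```python
# Sketch of the proposed design: a political-capacity measure as the
# dependent variable, social and economic indicators as regressors.
# All data here is synthetic; a real analysis would use published
# country-level indicator scores.
import numpy as np

rng = np.random.default_rng(0)
n_states = 150

# Six synthetic social/economic indicator scores per state, on a 1-10 scale.
X = rng.uniform(1, 10, size=(n_states, 6))

# Synthetic political-capacity index: driven by two of the indicators
# plus noise, so the fit has known effects to recover (0.5 and -0.3).
y = 2.0 + 0.5 * X[:, 0] - 0.3 * X[:, 4] + rng.normal(0, 0.5, n_states)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n_states), X])
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)

print("intercept:", round(coefs[0], 2))
print("slopes:   ", np.round(coefs[1:], 2))
```

The estimated slopes are the “relative weights” the text asks for: they tell you which social and economic pressures actually move political capacity, rather than assuming (as an additive index does) that everything matters equally.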
The way you conceptualize and explain failed states has real and important implications for policy. For example, the argument that al Qaeda was nourished in an environment of relative state capacity strikes me as revisionism. Going forward, this debate requires the explicit identification of what constitutes strong state capacity and the careful identification of the most important factors that cause variation in capacity. In short, the FSI succeeds in its goal of providing a “starting point for the discussion of why states fail.” However, the uncritical use of these types of indices in policy analysis and decision making is likely to lead to the misallocation of scarce resources, either to A) prop up states that don’t need it, or B) correct problems that have no real effect on state failure.