Every June, Foreign Policy magazine puts out its “Failed States” index, and every June I am sorely tempted to write a jeremiad about its awfulness. However, laziness always intercedes, even today, so I will simply highlight two glaring flaws:
The biggest offense, at least to the empiricist in me, is the basic approach. The editors have taken what is essentially a binary label (“failed” vs. “not failed”) and contorted it into a continuum, which is then stretched to apply not to select cases but to the entire world. They compound this error by introducing and emphasizing an ordinal element, predicated on a series of weighted inputs like the US News college ranking taken to the nth level of absurdity. It’s ridiculous to contemplate which country is more “failed”: the US or Canada, Germany or France, because these states are not failed, nor anywhere near becoming so in the near future. Approaching the data like this might make future regressions easier, but they will ultimately not offer any information worth gleaning. The whole exercise is like trying to determine the prevalence of cancer in a sample population by taking the whole lot of them and ranking them individually based on how cancery everyone is. Those without cancer have no sensible place on a cancer continuum, because they don’t have cancer; and those with cancer, though they might have different kinds of cancer at different stages of development, differ sufficiently in kind from the rest of the population that it is necessary to separate them from the total population first, before distinguishing between the maladies.
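The classify-first, rank-second alternative can be sketched in a few lines. This is a minimal illustration of the statistical point, not FP’s actual methodology: the country names, composite scores, and cutoff below are all invented for the example.

```python
# Hypothetical composite "failure" scores (higher = worse); all names
# and numbers are invented for illustration.
scores = {
    "Stableland": 12.0, "Solidia": 14.5, "Calmistan": 16.0,
    "Collapsia": 95.0, "Fragmentia": 88.5, "Warlordia": 91.0,
}

FAILURE_THRESHOLD = 60.0  # assumed cutoff separating failed from not-failed

# Step 1: apply the binary label first, separating the two populations.
failed = {k: v for k, v in scores.items() if v >= FAILURE_THRESHOLD}
not_failed = {k: v for k, v in scores.items() if v < FAILURE_THRESHOLD}

# Step 2: rank only within the failed subset, where ordinal comparison
# is at least meaningful; the not-failed states get no rank at all.
ranking = sorted(failed, key=failed.get, reverse=True)
print(ranking)  # ['Collapsia', 'Warlordia', 'Fragmentia']
```

The point of the sketch is that `not_failed` never enters the ranking: asking whether Stableland is “more failed” than Solidia is a question the model refuses to pose.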
This statistical innumeracy would be more excusable if the criteria for determining state failure weren’t themselves so scattershot. The literature proposes a variety of definitions and indicators of state collapse, but they all roughly coalesce within some set parameters. FP, however, seems to equate state failure with simply “not good.” Mixed in with indicators of state capacity, intrastate violence, and declining individual welfare are measures of democracy, civil liberties, and political repression. These elements are important for researchers to track, to be sure, but they have little to do with state failure, either causally or symptomatically. Few observers would defend the behavior or political structure of North Korea, but it is hardly considered a “failed state”; indeed, I spent an incommensurate chunk of my honors defense defending an aside in which I labeled the DPRK as simply “weak.” Defining states that deviate from the liberal democratic ideal as “failed” leads the FP Index to make some highly dubious assertions. Most obviously, Libya, right now exhibiting a near-textbook case of state failure, ranks better than Turkey, India, and China on FP’s list. Ireland, in the midst of a massive economic crunch and unfathomable government debt inherited from collapsed banks, rates as “less failed” (whatever that means) than the rest of non-Scandinavian Europe, largely due to liberal trade and taxation policies. Granted, a lot of the more specious outcomes one can dredge up (Indonesia worse than Turkmenistan; Oman tied with Italy and Poland) stem from the ordinal approach FP prefers to take, but the poor inputs only compound the problem.
This is hardly an exhaustive accounting of the sloppiness of FP’s report. Most of the states ranked below sixty sit in massive ties, unreflected in their putative ranks. More bizarrely, the FSI routinely seems to ignore the British North America Act of 1949, every year counting Newfoundland as a British dependency rather than a Canadian province.
A better list would have far fewer states (confined to those currently in failure or near it) and a pared-down set of criteria, but I imagine that is less interesting than aping Freedom House.