Will Moore, Kentaro Fukumoto, and I have been working on a random walk negative binomial model for time series of counts, building on earlier work by Kentaro on a negative binomial integrated (NB I(1)) model. We just presented a related poster at Peace Science in Savannah, Georgia, in which we look at monthly civilian deaths in Iraq. Here is the actual PDF poster (it’s a big file, be warned). The basic point is that neither ARIMA nor classical count models are a good way to deal with time series of counts, like monthly deaths in a conflict, and that we have a tested model for non-stationary counts with some attractive features.
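For intuition, here is a minimal sketch of what a model in this family can look like — a generic parameterization for illustration, not necessarily the exact one in our paper. Counts are negative binomial around a mean that evolves as a random walk on the log scale:

```latex
y_t \sim \text{NegBin}(\mu_t, \omega), \qquad
\log \mu_t = x_t, \qquad
x_t = x_{t-1} + \epsilon_t, \quad \epsilon_t \sim N(0, \sigma^2)
```

where $\omega$ is a dispersion parameter. The random walk in the latent state $x_t$ is what accommodates non-stationary count series that neither ARIMA (which assumes continuous data) nor classical count models (which assume stationarity) handle well.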
We are working on a draft paper, so I don’t want to go through the whole story, but if you’d like to try it out yourself and know how to use JAGS, all the R and JAGS code is available on GitHub.
Almost all states, at least at some point between 1995 and 2005.
The Ill-Treatment and Torture (ITT) project by Courtenay Conrad and Will Moore codes Amnesty International (AI) allegations of government torture, including the perpetrator, motive, and judicial response. The aggregated, country-year version of their data shows whether AI made allegations against a country in a given year and, if so, the extent of alleged torture or ill-treatment, on a 5-point scale from “infrequent” to “systematic”.
Here is a video showing the AI torture allegations from 1995 to 2005 using their country-year data and shape files for world borders from Thematic Mapping.
My initial impression from this was the sheer extent of (alleged) torture and ill-treatment. It looks like pretty much all major states engaged in torture at some point between 1995 and 2005. Only 8 out of 151 states had no allegations of torture at all (Costa Rica, Uruguay, Finland, Benin, Gabon, Qatar, Singapore, and New Zealand), and among the remaining states with AI allegations of torture, on average there were allegations in 7 out of 10 years. More than a quarter of states were accused of torture or ill-treatment in all 10 years covered by the data.
That doesn’t necessarily mean that a lot of torture or ill-treatment is going on in any specific country, nor that it is systematic. Nor does it reflect what the specific acts of torture or ill-treatment were, e.g. whether someone was tortured to death or water-boarded (which may not be different). Nevertheless, unpleasant stuff happens.
R code and source are available. The script produces an image for each year, which I strung together in iMovie.
In 1991 a census was conducted in Bosnia and Herzegovina, which was then still part of the disintegrating federal state of Yugoslavia. Bosnia was the most diverse republic in the former Yugoslavia, with significant populations of Bosnian Muslims (or Bosniaks, 43 percent), Serbs (31 percent), Croats (17 percent), and others. Bosniaks, Serbs, and Croats were more or less well-established identities with historical roots. Unlike in most multiethnic countries, however, census respondents also had the option to identify themselves as Yugoslavs, rather than as a particular ethnic or national group. It turns out that this played an interesting role in the way violence occurred in the Bosnian War from 1992 to 1995.
In 2003 we invaded Iraq with the hope that people would cheer in the streets. They didn’t. We occupied the country anyway, and our presence was, and generally still is, hated by the people.
Starting with the protests on National Police Day, January 25th, 2011, the Egyptian people went to the streets, largely peacefully, and removed Mubarak from power. Now the Egyptian military, however hesitant, will have to get involved in running the country. But how many of Egypt’s past leaders have come from the military? Nasser, Sadat, Mubarak, …
In many circumstances political scientists study binary dependent variables that have been measured with bias. For example, in surveys the strategic interests of actors can lead them to misrepresent an attitude or behavior to the surveyor in a non-random fashion. Data on terror or torture that are coded using media reports likely suffer from a similar bias related to factors such as freedom of the press in a country.
To give you an idea of what this new model allows one to do, consider the issue of self-reported infidelity between romantic partners. In the survey data, the reported rate of infidelity is about 13% of the sample. Yet common sense suggests that this rate should be higher, at least due to social desirability bias, which would lead respondents who did in fact cheat to lie about it and avoid the negative stigma. The split population logit model allows us to estimate respondents’ rates of honesty and infidelity separately, as shown in the table excerpt from our paper. It shows, for example, that 41% of the sample likely cheated on their partner, but also that around three-quarters chose to lie about it when surveyed. Quite a difference from the 13% reported in the observed data.
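To see how this kind of misreporting deflates the observed rate, here is a quick simulation in Python. This is not our estimator, just the data-generating intuition behind it; the cheating and lying probabilities are assumed round numbers in the spirit of the example, not estimates from our paper.

```python
import random

random.seed(1)

n = 100_000
p_cheat = 0.41  # assumed probability a respondent actually cheated
p_lie = 0.75    # assumed probability a cheater lies when surveyed

reports = 0
for _ in range(n):
    cheated = random.random() < p_cheat   # true, unobserved behavior
    honest = random.random() >= p_lie     # would a cheater answer honestly?
    if cheated and honest:                # only honest cheaters report it
        reports += 1

rate = reports / n
print(rate)  # roughly p_cheat * (1 - p_lie) = 0.1025
```

A naive logit fit to the reported answers would recover only this deflated rate; the split population logit instead models the observed report as the product of the probability of the behavior and the probability of reporting it honestly, which is what lets the two be separated.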
Here are replication files for the simulations we used to evaluate our estimator, and replication files for the infidelity example. The simulations were run on the Florida State University High Performance Computing cluster.
Paper to follow in a few weeks.