I blogged earlier at Predictive Heuristics about the Thailand coup and some forecasting work I’ve recently been part of:
This morning (East Coast time), the Thai military staged a coup against the caretaker government that had been in power for the past several weeks, after months of protests and political turmoil directed at the government of Yingluck Shinawatra, who herself had been ordered to resign on 7 May by the judiciary. This follows a military coup in 2006, and more than a dozen successful or attempted coups before then.
We predicted this event last month, in a report commissioned by the CIA-funded Political Instability Task Force (which we can’t quite share yet). In the report, we forecast irregular regime changes, which include coups but also successful protest campaigns and armed rebellions, for 168 countries for the 6-month period from April to September 2014. Thailand was number 4 on our list; our top 20 forecasts are shown below. It was number 10 on Jay Ulfelder’s 2014 coup forecasts. So much for our inability to forecast (very rare) political events, and the irrelevance of what we do.
Some time ago I posted on how to find geographic coordinates for a list of village or city names in R. Somebody emailed me about how to do the reverse: they had a list of French villages, with their populations in 2010, and wanted to find which administrative unit each village was located in. The problem boils down to associating points (the village coordinates) with polygons (the administrative divisions they fall within).
The village data look like this:
library(foreign)
library(gdata)
library(sp)

munic <- read.xls("France-Population.xlsx")
head(munic)

                  Name       long      lat pop_2010
1                 Aast -0.0887339 43.28919 182.5416
2           Abainville  5.4947440 48.53057 327.2407
3            Abancourt  1.7649060 49.69672 687.2479
4            Abancourt  3.2127010 50.23528 448.1252
5            Abaucourt  6.2579230 48.89637 285.9438
6 Abaucourt-Hautecourt  5.5405000 49.19700  93.0353
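In R, this kind of spatial join is typically done with something like `over()` from the sp package, matching each village point against the administrative polygons. The geometric test underneath is simple enough to sketch in plain Python; the square "department" and the two test points below are made-up stand-ins, not the French data:

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test: cast a horizontal ray from (x, y) to the right
    and count how many polygon edges it crosses; an odd count means the
    point is inside. poly is a list of (x, y) vertex tuples."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does the edge straddle the ray's y-coordinate?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical unit-square "administrative division" and two village points
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(point_in_polygon(0.5, 0.5, square))  # True: point inside
print(point_in_polygon(1.5, 0.5, square))  # False: point outside
```

Real administrative boundaries are far messier than a square, but libraries like sp apply essentially this test (plus bounding-box shortcuts) for every point against every candidate polygon.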
After more than half a decade at this, it has finally dawned upon me that instead of downloading the Correlates of War state system membership table, or the Gleditsch and Ward refinement of it, every time I wonder what country “338” is, it might be easier to upload them to Google:
If you had to take a look at the chart below, what would you say about the overall trend in US defense spending? There’s a bump fairly early on for World War 2, but otherwise it seems to generally increase over time. I’m actually surprised to see that we spend more, in terms of constant US dollars, today than we did at the height of the Korean War, and in fact at any point in US history save World War 2.
The short version:
The longer version:
In 2013, my toolbox looks like this:
- Python for text processing and miscellaneous scripting;
- Python (NumPy/SciPy) for numerical computing;
- Python (Neurosynth, NiPy etc.) for neuroimaging data analysis;
- Python (NumPy/SciPy/pandas/statsmodels) for statistical analysis;
- Python (scikit-learn) for machine learning;
- Excursions into other languages have dropped markedly.
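To make the one-stack point concrete, here is a minimal sketch of the kind of everyday statistical task that used to send me to R and now stays in NumPy: an ordinary least-squares fit of a line. The data are toy values, not from any real analysis:

```python
import numpy as np

# Toy data: points on the exact line y = 2x + 1, so the fit
# should recover slope a = 2 and intercept b = 1.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0

# Design matrix [x, 1] for the model y = a*x + b
A = np.column_stack([x, np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(a, b)  # a ≈ 2, b ≈ 1
```

For anything fancier (robust errors, GLMs, panel models), statsmodels picks up where this leaves off, with pandas handling the data wrangling in between.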
I can’t speak to the relative merits of Python versus R, other than a general impression that R is stronger for statistics but has some quirks as a language (pdf), while Python is the more capable general-purpose language but thinner on statistics beyond the basic tools. I did spend some time trying to learn Python during my last year of graduate school, but I was still getting comfortable with R at the time and didn’t put much effort into it. It seems like it’s time to head back in that direction.
I work as a Postdoctoral Fellow in the Ward Lab here at Duke University. The Lab currently consists of Mike Ward, me, and a group of very smart graduate students. There are a lot of exciting projects within the lab, like ICEWS and other work for the US government, but also a broader set of projects by our lab members. One of the things we wanted to do this semester is to publicize this work a little bit more, and to this end we’re taking a new blog live today: Predictive Heuristics.