Amazing. Apparently, they sweep the bicycle paths at the Veluwezoom.
Many people use SurveyMonkey to conduct online surveys. You can get standard PDF reports of your data, but often you’ll want to do some more analysis or have more control over the design of the charts. An obvious option is to read the data into R. But there’s a practical problem: SurveyMonkey uses the second row of its output file for answer categories and puts some other information in that row as well. This has the additional effect that R will treat numerical variables as factors.
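To illustrate the issue: in Python, a quick workaround is to drop the second row before parsing. This is just a sketch with a made-up, simplified export (real SurveyMonkey files have more columns, and the package described below is written in R):

```python
import csv
import io

# Made-up SurveyMonkey-style export: the second row holds answer
# categories instead of data, which trips up naive readers.
raw = """respondent_id,age,satisfaction
,Open-Ended Response,Response
1,34,5
2,51,3
"""

def read_survey(f):
    """Parse a survey CSV, skipping the answer-category row and
    converting values to numbers where possible."""
    rows = list(csv.reader(f))
    header, data = rows[0], rows[2:]  # rows[1] is the category row
    def coerce(value):
        try:
            return float(value)
        except ValueError:
            return value
    return [dict(zip(header, map(coerce, row))) for row in data]

records = read_survey(io.StringIO(raw))
print(records[0]["age"])  # 34.0
```

With the category row skipped, numeric columns stay numeric - the same fix the R package applies.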
I wrote a few lines of code which, I think, deal with that problem and turned them into an R package. Until recently it’d never have occurred to me to create an R package, but then I read this post by Hillary Parker, who describes the process so clearly that it actually appeared doable. I took some additional cues from this video by trestletech. The steps are described here.
I thought of adding a function to read data from Limesurvey, an open source alternative to Surveymonkey. But apparently, that functionality is already available (I haven’t tested it).
The package is available on GitHub.
With the help of posts by Hillary Parker and trestletech I managed to create my first R package in RStudio (here’s why). It wasn’t as difficult as I thought and it seems to work. Below is a basic step-by-step description of how I did it (this assumes you have one or more R functions to include in your package, preferably in separate R-script files):
If you want, you can upload the package to GitHub. Other people will then be able to install it:
Last autumn, Amsterdam politicians discussed on Twitter whether the relations between coalition and opposition have changed since the March 2014 election, which resulted in a new coalition.
One way to look at this is to analyse voting behaviour on motions and amendments over the past two years. From a political perspective, proposals with broad support may not be very interesting:
For example, a party can propose a large number of motions that get very broad support, but materially change little in the stance, let alone the policy, of the government. In the literature, this is sometimes referred to as «hurrah voting»: everybody yells «hurrah!», but is there any real influence? (Tom Louwerse)
In a sense, it could be argued that the same applies to proposals supported by the entire coalition. More interesting are what I’ll call x proposals: proposals that do not have the support of the entire coalition, but are adopted nevertheless. In the Amsterdam situation these are often proposals opposed by the right-wing VVD. The explanation is simple: Amsterdam coalitions tend to lean to the right (relative to the composition of the city council). As a result, left-wing coalition parties have more allies outside the coalition.
Let’s start with the situation before the March 2014 election. The social-democrat PvdA was the largest party. The coalition consisted of green party GroenLinks, PvdA and VVD, but the larger left-wing parties PvdA, GroenLinks and socialist party SP had a comfortable majority. The chart below shows the parties that introduced x proposals. The arrows show who they got support from to get these proposals adopted.
The size of the circles corresponds to the size of the parties; pink circles represent coalition parties. The thickness of arrows corresponds to the number of times one party supported another party’s x proposal. The direction of the arrows is not only shown by the arrow heads but also by the curvature: arrows bend to the right.
The image is clear: PvdA and especially GroenLinks were the main mediators who managed to gain support for x proposals.
And now the situation after March 2014. By now neoliberal party D66 is the largest party and the coalition consists of SP, D66 and VVD. This means that PvdA and GroenLinks are now opposition parties, but it turns out they still play a key role in getting x proposals adopted. GroenLinks initiated as many as half the x proposals.
The most active mediator is Jorrit Nuijens (GroenLinks), followed by Maarten Poorter (PvdA) and Femke Roosma (GroenLinks).
Data is from the archive of the Amsterdam city council. Votes on motions and amendments as of January 2013 can be downloaded as an Excel file. The file (downloaded on 31 January 2015) contains data on 1,165 (versions of) proposals put to a vote up to 17 December 2014.
A few things can be said about the Excel file. On the one hand, it’s great this information is being made available. On the other hand, the file is a bit of a beast that takes quite a few lines of code to control. The way in which voting is described varies (e.g., «rejected with the votes of the SP in favour», «adopted with the votes of the council members Drooge and De Goede against»); the structure of the title changed in November 2014; Partij voor de Dieren is sometimes abbreviated and sometimes not; and sometimes the text describing voting has been truncated, apparently because it didn’t fit into a cell. Given the complexity of the file, it can’t be completely excluded that some proposals have been classified incorrectly.
The analysis (by necessity) focuses on visible influence. The first name on the list of persons introducing a proposal is considered the initiator. In reality, an initiator will probably sometimes let someone else take the credit for a proposal.
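For what it’s worth, the varying phrasings can be handled with regular expressions. A minimal sketch using the (translated) examples quoted above - the pattern and the outcome logic are simplifications, not the actual code used for the analysis:

```python
import re

def classify(text):
    """Return (outcome, named dissenters) for a vote-description string.
    Simplified: the real file contains many more phrasings than this."""
    outcome = "adopted" if text.lower().startswith("adopted") else "rejected"
    match = re.search(r"with the votes of (.+?) (?:in favour|against)", text)
    dissenters = match.group(1) if match else None
    return outcome, dissenters

print(classify("rejected with the votes of the SP in favour"))
# ('rejected', 'the SP')
```

The non-greedy `(.+?)` stops at the first «in favour» or «against», which is what keeps multi-name lists like «Drooge and De Goede» in one piece.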
Anyone mildly interested in data visualisation must have come across examples of shamelessly deceptive Fox News charts. Truncated y-axes, distorted x-axes, messing with units - nothing’s too bold when it comes to manipulating the audience. But does this kind of deception actually work? Anshul Vikram Pandey and his colleagues at New York University decided (pdf) to find out. They showed subjects either control or deceptive versions of a number of charts.
The deceptive versions were: a bar chart with a truncated y-axis; a bubble chart with one bubble too large relative to the other; a line chart with a more spread-out y-axis, resulting in a less steep rise than in the control version; and a chart with an inverted y-axis (inspired by Reuters’ famous Gun Deaths in Florida chart - interesting discussion here). In all cases, the correct numbers were included in the chart.
Of course a truncated y-axis can sometimes be defensible and needn’t be deceptive, as long as it is made clear what’s going on. More problematic is the aspect ratio chart. The authors claim the chart to the right is deceptive and the one to the left not, but how can you tell? You can’t. There’s no rule that says what the number of pixels per year on the x-axis should be.
Be that as it may, the authors found substantial differences in how the deceptive charts were interpreted compared to the control charts. Note that in most cases, they didn’t measure whether deceptive charts were interpreted incorrectly, just whether they were interpreted differently than the control charts. For example, participants were asked how much better access to drinking water was in Silvatown, represented by the bar to the right of the bar plot, relative to Willowtown, represented by the bar to the left (on a 5-point Likert scale ranging from slightly better to substantially better). When shown the control bar chart, the average score was 1.45; with the truncated y-axis the average score was 2.77.
The authors also tried to find out whether factors such as education and familiarity with charts had an influence on how charts were interpreted. It appears that people who are familiar with charts are less easily fooled by a truncated y-axis. Perhaps because truncated y-axes are second on the list of phenomena chart geeks love to hate and criticise (after 3D exploding pie charts, of course).
On Friday, the New York Times published an interesting article by Justin Wolfers about the kinds of experts the paper mentions. Don’t worry, he’s aware of the methodological issues:
While the idea of measuring influence through newspaper mentions will elicit howls of protest from tweed-clad boffins sprawled across faculty lounges around the country, the results are fascinating.
To summarize: by his measure, economists have become the most influential profession among the social sciences, and their influence rises during economic crises. Or at least, that’s the case in the New York Times. I looked up data for the Dutch newspaper NRC Handelsblad, which has data available from 1990.
Some conclusions can be drawn:
- The current ranking is the same as for the NYT, with economists heading the list and demographers at the bottom;
- Apparently, NRC Handelsblad has always had a pretty high regard for historians, but due to the crisis they lost their top position to economists;
- There was a peak in mentions of psychologists in 2012, but some of that can be ascribed to reports of scientific fraud by psychologist Diederik Stapel.
For comparison, I tried reproducing Wolfers’ NYT chart for the years 1990 - 2014. Here’s what I got:
The sudden increase for all professions in 2014 is unexpected - see Method for possible explanations. If we leave 2014 aside, what emerges is that «peak economist» (to borrow an expression from Wolfers) seems to have happened earlier in the NYT than in NRC Handelsblad. Perhaps something to do with the fact that the crisis hit the US earlier than Europe.
The NYT data were downloaded from the NYT Chronicle Tool (I had to separately download the data for each search term). Data from NRC Handelsblad were downloaded using the website’s search function. In order to get the total numbers per year I also did a search using «de» («the») as a search term («de» is the most frequently used word in written Dutch).
As indicated in the article, I got a steep rise in the percentages for all professions in the NYT in 2014. I manually checked some of the percentages I got against those in the chart of the NYT Chronicle Tool, and these appear to be correct. The spike is not visible in Wolfers’ chart, but that may be due to the fact that he uses three-year averages.
There may be an issue with the denominator, i.e. the total number of articles. The number for total_articles_published in the data I downloaded from the NYT was pretty stable at about 100,000 between 1990 and 2005. Then it rose to about 250,000 in 2013 (perhaps something to do with changed archiving practices, or with online publishing?). However, in 2014, it dropped to about one-third of the 2013 level.
The NRC Handelsblad data also has some fluctuations in the total number of articles per year, but less extreme and at first sight they don’t seem to coincide with unexpected fluctuations in the percentages of articles mentioning professions.
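The percentage calculation itself is trivial - mentions divided by total articles per year - but the denominator issue can be screened for programmatically. A sketch with invented numbers (not the actual NYT or NRC figures):

```python
# Invented yearly totals and mention counts - not the actual NYT figures.
totals = {2012: 240_000, 2013: 250_000, 2014: 85_000}
mentions = {2012: 1_200, 2013: 1_300, 2014: 600}

# Share of articles mentioning the profession, in percent.
share = {year: 100 * mentions[year] / totals[year] for year in totals}

def denominator_jumps(totals, threshold=0.5):
    """Flag years whose article total differs from the previous year's
    by more than `threshold` (as a fraction of the previous total)."""
    years = sorted(totals)
    return [year for prev, year in zip(years, years[1:])
            if abs(totals[year] - totals[prev]) / totals[prev] > threshold]

print(denominator_jumps(totals))  # [2014]
```

A flagged year means the percentages for that year shouldn’t be compared with the rest of the series without further checking.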
Code is available here.
This weekend, the Dutch social-democrat PvdA will decide on the list of candidates for the Senate election this spring. The party isn’t doing too well in the polls, but it may be facing an additional problem, as the charts below illustrate.
Since the beginning of the 1980s, the PvdA has nearly always had a weaker position in the Senate than in the Lower House. The main exception is 2002, when the Lower House election took place within days after the murder of right-wing populist Pim Fortuyn and the PvdA, seen by many as a symbol of the establishment, temporarily lost half its seats.
The relatively weak position of the PvdA in the Senate may be a coincidence, but it could also be related to turnout. In elections for the provincial councils, which in turn elect the Senate, almost half the voters stay at home (compared to a 75–80% turnout in Lower House elections). It may well be that the way in which the Senate is elected has a negative impact on the outcome for the PvdA.
The membership magazine of the Fietsersbond, which is celebrating an anniversary, contains a nice article about bicycle parts that used to be taken for granted, such as white mudguards, handlebar blocks, frame-tube gear shifters, pump pegs and tyre dynamos. Members get the magazine delivered; those who aren’t members can take care of that here.
Update 11 January: Spotify data added.
According to the English Wikipedia page, «Generally speaking, a sevillana is very light hea[r]ted, happy music». There’s certainly some bland stuff around, but many sevillanas are explosive and raw. In fact, sevillanas are the punk of Spanish music.
I wanted to back this claim up by pointing to the length of the songs on the legendary Sevillanas de los Cuarenta album. It’s a known fact that punk is a genre with very short songs: on average 2:58 according to this analysis by blogger Dale Swanson. It’s the shortest of all the genres he analysed. Well, the average song length on the Sevillanas de los Cuarenta album is 2:44.
However, there may be some problems with this argument. First, some of the songs on the album have a haunting quality about them (for example, A flamenca no me ganas by Gracia de Triana), which makes you wonder if they haven’t been played too fast when they were recorded for CD. This may be an issue, but even if you correct for this the songs on Sevillanas de los Cuarenta would still be shorter than punk songs (for details see below, Method).
More problematic is the fact that short songs appear to have been normal in the 1940s. According to this analysis by Rhett Allain, average song lengths rarely exceeded 3 minutes until the end of the 1960s (see also the debate in the comments on possible explanations). So the shortness of the songs on the Sevillanas de los Cuarenta album isn’t that impressive. In fact, a (possibly non-representative) sample of 1970s sevillanas has an average song length of 3:22, which appears to be quite typical for the 1970s judging by Allain’s data.
The Musicbrainz database used by Allain doesn’t seem to contain many sevillanas. However, the Discogs website, which has data on millions of songs, does contain a few hundred sevillanas. Since posting the first version of the article, I realised metadata can also be obtained from Spotify. Spotify has over 2,500 songs with «sevillanas» in the title, but genre searches return only a few hundred songs per genre (probably the genre tags aren’t applied consistently). Below is the song length of a number of genres in the Discogs and Spotify databases.
For jazz and house especially, Spotify durations differ from those in Discogs. Other than that, median song durations are very similar. This is actually quite remarkable given the differences between the datasets. In both datasets, sevillanas tend to be somewhat longer than punk songs, but shorter than the other genres in the analysis.
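The median comparison boils down to converting duration strings to seconds and grouping by genre. A toy sketch - the tracks are invented, not the Discogs or Spotify data:

```python
from statistics import median

# Invented records in the shape of the data: a genre tag and a
# duration string.
tracks = [("punk", "2:58"), ("punk", "2:30"), ("sevillanas", "3:05"),
          ("sevillanas", "2:44"), ("house", "6:10")]

def to_seconds(mmss):
    """Convert a «minutes:seconds» string to seconds."""
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

def median_by_genre(tracks):
    """Group durations by genre and return the median per genre."""
    by_genre = {}
    for genre, duration in tracks:
        by_genre.setdefault(genre, []).append(to_seconds(duration))
    return {genre: median(values) for genre, values in by_genre.items()}

print(median_by_genre(tracks))
```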
An analysis by year might be interesting, but tricky: first because the release year in the Discogs data may refer to the year in which an album or song was re-released and second because the number of sevillanas tracks with sufficient information isn’t large enough for that level of precision. The Spotify dataset has no information on the release year of tracks (I guess if I really wanted I could have looked up the release date of the album each track is on).
All in all, the average sevillanas may be somewhat longer than a punk song. But you can still argue that a sevillanas song is in fact a series of even shorter songs, as illustrated by the plot of ¡Ay Sevilla! by Los de la Trocha shown above. The typical sevillanas is a series of short bursts of music that can be as abrupt as any punk song.
Scripts for the analyses are available here.
Songs on Sevillanas de los Cuarenta too fast?
Spotify has three versions of A flamenca no me ganas: the one from Sevillanas de los Cuarenta (2:29 on cd) and two others lasting 2:37 and 2:41. This suggests it’s possible that the «correct» version is up to 8% longer than the one on Sevillanas de los Cuarenta. Even if you assume all the songs on the album should last 8% longer, the average length would become 2:56, still less than for punk. On the other hand, it’s doubtful that all songs on Sevillanas de los Cuarenta are too short. For example, Sevillanas del Espartero by Concha Piquer lasts 2:57 on Sevillanas de los Cuarenta, but Spotify has versions lasting only between 2:27 and 2:35.
The sample of 1970s songs is from albums C, D and F of the HISPAVOX Sevillanas de Oro collection (cd versions), containing songs by los Marismeños, Amigos de Gines and others (not all Sevillanas de Oro albums contain the release year of the songs, but these do).
The Discogs data are available through an API and as monthly data dumps. I thought I’d spare myself the trouble of figuring out how the API works, so I opted for the data dump (the one for 1 December 2014). The downside is that the data is 2.8 GB zipped and 19.2 GB unzipped, so downloading and analysing the data takes a while.
The data dump is xml (the API should return json). I’m not really familiar with xml so I used some not very sophisticated, but effective, regex to sort it out. The data is organised in releases (e.g., albums) that have tags (e.g., for the year in which it was released and for genres and styles). The releases contain tracks that have their own tags, including duration. In order to filter out excessive track lengths, I ignored any release containing the string «mix» and tracks with a duration longer than one hour.
Discogs uses hundreds of genre and style tags, including some quite specific ones like ranchera and rebetiko, but not sevillanas. I decided to include only tracks with «sevillanas» in the title. This will exclude some legitimate sevillanas, but I reckon there probably won’t be too many false positives.
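The regex approach can be sketched as follows - the XML fragments are made up (the real Discogs schema is more elaborate), but the two filters match the ones described above:

```python
import re

# Invented release fragments roughly in the spirit of the Discogs dump.
releases = [
    "<release><title>Sevillanas Corraleras</title>"
    "<track><title>Sevillanas del adios</title>"
    "<duration>2:44</duration></track></release>",
    "<release><title>Ultimate Megamix</title>"
    "<track><title>Mix 1</title><duration>74:02</duration></track></release>",
]

def track_durations(release_xml):
    """Return track durations in seconds, skipping releases containing
    «mix» and tracks longer than one hour."""
    if "mix" in release_xml.lower():
        return []
    durations = []
    for minutes, seconds in re.findall(r"<duration>(\d+):(\d+)</duration>",
                                       release_xml):
        total = int(minutes) * 60 + int(seconds)
        if total <= 3600:
            durations.append(total)
    return durations

print([d for r in releases for d in track_durations(r)])  # [164]
```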
I accessed the Spotify data through their web API. As indicated in the article, genre searches resulted in only a few hundred results per genre, which suggests these tags are often omitted.
Plotting a waveform
Based on this discussion, plotting a waveform from a .wav music file using Python should be simple, but saving the plot turned out to be a problem (googling the error message «OverflowError: Allocated too many blocks» taught me I’m not the only one having that problem, but I didn’t find a solution that worked for me). Instead I turned to R and found that the tuneR package will let you read and plot .wav files without a problem.
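For the record, one Python workaround may be to downsample before plotting, so the plotting library never has to handle millions of points. A sketch using only the standard library, assuming a mono, 16-bit .wav file (this is not the tuneR solution used for the chart):

```python
import struct
import wave

def wav_envelope(path, bins=1000):
    """Reduce a mono 16-bit .wav to roughly `bins` peak amplitudes,
    so a plot only needs on the order of a thousand points."""
    with wave.open(path, "rb") as w:
        assert w.getnchannels() == 1 and w.getsampwidth() == 2
        n = w.getnframes()
        frames = w.readframes(n)
    samples = struct.unpack("<%dh" % n, frames)
    step = max(1, n // bins)
    # Keep the peak of each window as a crude amplitude envelope.
    return [max(abs(s) for s in samples[i:i + step])
            for i in range(0, n, step)]

# matplotlib.pyplot.plot(wav_envelope("song.wav")) would then stay
# well under any allocation limits.
```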
In November, a survey was published which found that 80% of Dutch youth with a Turkish background would not have a problem with the use of violence in jihad and 90% would think that Dutch Muslims who fight in Syria are heroes.
While some were shocked by the findings, others expressed doubts about the methodology of the survey or simply thought the results were improbable. Among other things, the survey was not based on a random sample and non-response wasn’t reported. The researchers did try to recruit a sample that was representative of the wider population on a number of background variables through quota sampling (discussions in Dutch here, here and here; research method here).
A Motivaction spokesperson said the way in which the research had been done was «acceptable» from a social sciences perspective.
Now a group of Turkish organisations demands that Minister Lodewijk Asscher, who contracted the survey, order Motivaction to release all results so they can ask an independent expert to review the study. If necessary, they may go to court to get the data.
I don’t know how likely it is the organisations will get the original data and if they do, it may still be difficult to demonstrate sampling bias. Still, if this is a step towards introducing principles of reproducible research in contracted policy research, then that’s an interesting development.
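Should the data be released, one obvious first check would be to compare the sample’s composition on the background variables with the population margins the quota were meant to match. A toy sketch with invented numbers:

```python
# Invented population margins and sample counts for one background
# variable - purely illustrative, not the Motivaction data.
population = {"15-19": 0.30, "20-24": 0.35, "25-29": 0.35}
sample = {"15-19": 40, "20-24": 30, "25-29": 30}

def max_deviation(sample, population):
    """Largest absolute gap between sample share and population share."""
    total = sum(sample.values())
    return max(abs(sample[k] / total - population[k]) for k in population)

print(round(max_deviation(sample, population), 3))  # 0.1
```

A small deviation wouldn’t prove the sample is unbiased - quota sampling can match the margins and still miss on unmeasured variables - but a large one would be a red flag.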