Joost asks in a direct message:

I’m organizing a Missing Maps event in Antwerp. One of the co-organizers wants to try giving a tweaked JOSM version on a USB stick to all the participants (preloaded settings etc) and use JOSM as a default editor. […] Did anyone try this at an event? Did you have a look at first timers using JOSM having a higher or lower OSM/MM retention? (It might be too much self-selection to really prove anything…)

I thought this was an interesting angle, and it connects with some of the work I’m currently doing, so I had a look at the data and am posting the results here. The short answer, based on a small sample: we’ve actually seen a difference in retention! However, not in the way you might expect. I was surprised.

Before I begin I should say that I’m very interested in other perspectives on this question, particularly actual teaching experiences. This is a good scenario where statistics might be misleading, and where it helps to have actually talked to the mappers and observed what happened. Looking forward to people’s comments!

Preliminary caveats

It’s actually really hard to measure this well and generalise from past experiences, because every mapathon has its own story; different people attending, different things going right or wrong, etc. Different editors are also often used for different kinds of work: JOSM often gets used for field paper tracing and validation as well as satellite tracing. Unfortunately I haven’t been to most of the JOSM training sessions I’ll quantify below, so I don’t know what people actually did!

Furthermore, editor choice affects all kinds of follow-up considerations that may affect the outcomes of such a study; e.g. I’ve seen people forget how to launch JOSM a month after they first installed it, or OS updates causing Java versioning issues, none of which can happen with the browser-based iD.

And so on. You get the idea: many factors to keep in mind when we look at these numbers.

We can still look at general trends across the JOSM newcomers so far. Unfortunately there’s not enough observational data to make any strong statements; however, I do think we can see some trends. And I’d certainly say that there is plenty of scope for further experiments!

Our observations so far…

The following statistics compare two groups of attendees at our monthly Missing Maps event in London: people who started with iD at their first mapathon, and people who started with JOSM. To make the comparison somewhat fair I’m only looking at attendees who have little prior OSM experience, with no more than 5 days of prior OSM contributions before their first mapathon attendance. I’ve also excluded the small number of people who used both editors at their first mapathon.

At our monthly mapathons, 37 people started with JOSM right away, spread across 12 events. On the other hand 298 first-time mappers started with iD (13 events).
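To make the cohort definition concrete, here is a minimal sketch of the selection criteria described above. The data layout (a list of attendee records with `editors` and `prior_days` fields) is my own assumption, not the actual dataset used for this analysis:

```python
def cohort(attendees, editor):
    """Select first-time attendees for one editor cohort.

    `attendees` is a list of dicts with hypothetical keys:
      'editors'    - set of editors used at the first mapathon
      'prior_days' - distinct days of OSM contributions before that event
    """
    return [
        a for a in attendees
        if a["editors"] == {editor}  # exclude people who used both editors
        and a["prior_days"] <= 5     # little prior OSM experience
    ]
```

Requiring `editors == {editor}` drops the small mixed-editor group in one step, since anyone who touched both iD and JOSM has a two-element set.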

Activity at the first event

16% of the JOSM mappers contributed for more than 2 hours in their initial mapathon edit session, about half the rate of those starting with iD, 33% of whom contributed for more than 2 hours. A histogram of their session durations illustrates the difference:

You may notice that the two distributions are quite different. JOSM contributors tend to have shorter contribution sessions. I verified that this is a general pattern across multiple events, and not biased by a single mapathon. Note however that this does not necessarily mean that JOSM trainees tend to lose patience more quickly – they may simply be doing different kinds of work.
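The two summary statistics above boil down to simple aggregations over per-person session durations. A sketch, assuming durations are already available in hours (the binning width is my choice, not necessarily what the histogram above uses):

```python
def share_over(durations_h, threshold=2.0):
    """Fraction of first-session durations exceeding `threshold` hours."""
    return sum(d > threshold for d in durations_h) / len(durations_h)

def histogram(durations_h, width=0.5):
    """Count sessions per `width`-hour bin, for a rough duration histogram."""
    bins = {}
    for d in durations_h:
        b = int(d // width) * width  # left edge of the bin
        bins[b] = bins.get(b, 0) + 1
    return dict(sorted(bins.items()))
```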

Update: As Joost suggests in the comments, it might also simply mean that JOSM collects edit timestamps differently. In past explorations I’ve seen JOSM preserve timestamps for individual edits within a changeset, but I don’t know enough about the editor to understand what exactly is going on.

Short-term retention

Joost however was asking about the impact on retention, so let’s see what happens in the days and weeks after the first attendance. For that we will observe everyone’s subsequent contributions to HOT, at home or at a mapathon, for up to 90 days after their first mapathon attendance.

A month later the picture flips. 32% of JOSM newcomers were still active 30 days after they first came to a mapathon. On the other hand, only 20% of iD users were still mapping.
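The 30-day retention figures can be computed like this; a sketch under the assumption that for each mapper we know their first event date and the dates of their subsequent HOT edits (the exact definition of "still active" used above may differ):

```python
from datetime import date, timedelta

def still_active(first_event, edit_dates, horizon_days=30):
    """True if the mapper made any HOT edit at least `horizon_days` days
    after their first mapathon, within the 90-day observation window."""
    cutoff = first_event + timedelta(days=horizon_days)
    window_end = first_event + timedelta(days=90)
    return any(cutoff <= d <= window_end for d in edit_dates)

def retention_rate(cohort, horizon_days=30):
    """Share of a cohort still active at the horizon."""
    active = sum(still_active(fe, eds, horizon_days) for fe, eds in cohort)
    return active / len(cohort)
```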

To assess these numbers further we can look at survival plots, which show how likely it is that a member of a given group is still active after some time has passed. Most importantly, they tell us whether these trends are statistically significant.

The wide confidence interval for the JOSM group (the shaded region around the curves) illustrates how little data there is: the group contains a variety of retention profiles, and there are not enough samples to determine a clear trend. As a result the confidence intervals of the two curves overlap, which means there is likely not enough data to say with confidence that the groups differ significantly.

However the curves do suggest an apparent trend: at Missing Maps monthly events, people who start with JOSM tend to remain actively engaged for longer.
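Survival curves like these are typically Kaplan-Meier estimates. For illustration, here is a minimal pure-Python sketch of the estimator (no confidence intervals), assuming each mapper contributes a duration of observed activity and a flag for whether they actually stopped or were merely censored at the end of the observation window. I don't know which tool produced the plots above:

```python
def kaplan_meier(durations, observed):
    """Kaplan-Meier survival estimate.

    durations: days until each mapper's last edit (or end of observation)
    observed:  True if the mapper actually stopped, False if censored
    Returns a list of (time, survival probability) step points.
    """
    pairs = sorted(zip(durations, observed))
    n = len(pairs)
    s = 1.0
    curve = []
    i = 0
    while i < n:
        t = pairs[i][0]
        at_risk = n - i          # everyone not yet dropped out or censored
        stopped = 0
        while i < n and pairs[i][0] == t:
            stopped += pairs[i][1]
            i += 1
        if stopped:
            s *= 1 - stopped / at_risk
            curve.append((t, s))
    return curve
```

Censored mappers (those still active when the 90-day window closes) reduce the at-risk count without pulling the curve down, which is what makes this estimator appropriate for retention data.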

Conclusion

To my surprise, we do see some clear differences in outcome at the Missing Maps monthly events in London. Namely:

It looks like newcomers learning JOSM were more likely to stop early in their first session than iD trainees. (Alternatively, JOSM and iD may differ in how they record edit timestamps.)

On the other hand, a larger share of JOSM trainees were retained as mappers over the following weeks.

Although this surprised me, in hindsight it is not entirely unexpected. JOSM use tends to be associated with higher engagement: the most active mappers are often JOSM users.

However this does not necessarily mean that JOSM is the key trigger. It might simply reflect that the JOSM mappers at our events are a great bunch of people, fun to hang out with, and many of them know each other quite well; whereas the people at our iD tables are typically newcomers who are not yet as well-connected to the community. So maybe the difference is in the people, not the editor.

In closing I would say that we need many more observations across different kinds of settings to make these statistics meaningful. At the moment this is little more than anecdotal evidence. There’s definitely space for further experiments!