Actuaries sometimes get stereotyped as being numbers people. Now, this is sometimes a little unfair, particularly when it comes to puzzles. While we enjoy logic puzzles and number puzzles as much as the next guy, there’s a big crossword contingent too. Enough that there’s a regular cryptic one in the monthly The Actuary magazine.

But even so, the hankering after numerical stimulation is strong with us, and so there’s a handy way of getting a numerical element into a normal crossword.

Like this:

- Get an effectively unlimited supply of crosswords, such as those in The Guardian.
- Complete a load of these.
- Keep a record of how long each one takes to complete; the nearest second is accurate enough.
- If you’re in an office, gather a couple of people to join you – this makes it more interesting to do, as you learn something, you get to feel inadequate (or superior), and the amount of analysis that can be done increases drastically.
- But make sure you keep track of who was involved in each puzzle.
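The bookkeeping for the steps above doesn’t need to be fancy. A minimal sketch in Python (the file name, dates, and solver names here are all invented for illustration), logging one CSV row per puzzle:

```python
import csv

# A minimal log: one row per puzzle, with the date, the completion
# time in seconds, and a "+"-separated list of solvers. The file name
# and solver names are invented for illustration.
rows = [
    ("2015-03-02", 347, "anna+ben"),
    ("2015-03-03", 512, "anna+ben+carl"),
]

with open("crossword_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["date", "seconds", "solvers"])
    writer.writerows(rows)
```

Keeping the solver list in each row is what makes the later who-was-involved analysis possible at all.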

Now, in my office, we’ve been doing this for just over 15 months. Which equates to just over 1,230 puzzles. There’s enough data here to answer some decent questions. Such as – are we getting better at completing these over time? The answer to that is yes, as shown by the following graph giving a moving average of the previous ten completion times.
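That moving average is just a trailing mean over the last ten completion times; a quick sketch:

```python
def moving_average(times, window=10):
    """Trailing mean over the previous `window` completion times,
    reported once enough puzzles have been done to fill the window."""
    return [
        sum(times[i - window + 1: i + 1]) / window
        for i in range(window - 1, len(times))
    ]
```

Each point on the graph is then one entry of this list, which smooths out the day-to-day noise enough for the trend to show.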

There are some obvious spikes, where one or two people have worked alone for a few puzzles without obvious success, and some cases where the puzzles themselves have been rather hard (occasionally they’ll have these themes that run over a whole week, which can either make things easier, or much harder). But there’s a definite downward trend at the start, a flatline in the middle, and more recently, a decline again (which may or may not turn out to be permanent).

But that doesn’t really tell you the answer to the question that everyone really wants to know. Which is – how good am I compared to the rest? For that, we need a somewhat more robust statistical analysis. One way is to look at all the puzzles a given person (ok, I admit it, it’s me) was involved in. That’s 148. Split these up into groups with identical participants, and then find all the puzzles that those same participants completed without me. Take the average times with and without me, and compare the two. The good news for me is that I have an impact of around 48s on the average time. In a good way (in case you’re wondering).

But that’s a bit tiresome to do for each of the 15 participants, and so the natural thing to do (well, to an actuary at least) is to haul out R and run a little GLM analysis. Making a few adjustments here and there, we have a similar result for all participants. Giving the table below (anonymised, where I’m number 11).
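The original analysis used R’s glm; a rough Python equivalent, fitting an intercept plus a 0/1 indicator per solver by ordinary least squares (which is what a Gaussian GLM boils down to), looks like this. Note that a solver present in every puzzle has to be left out, since their indicator is collinear with the intercept – which is exactly why the intercept ends up standing in for that person below.

```python
import numpy as np

def solver_effects(records, solvers):
    """Fit time ~ intercept + one 0/1 indicator per solver in `solvers`
    by least squares. `records` is a list of (seconds, set_of_solvers).
    Any solver present in every puzzle must be omitted from `solvers`:
    their indicator column is collinear with the intercept."""
    X = np.array(
        [[1.0] + [1.0 if s in who else 0.0 for s in solvers]
         for _, who in records]
    )
    y = np.array([seconds for seconds, _ in records], dtype=float)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return dict(zip(["intercept"] + list(solvers), coef))
```

Each fitted coefficient is then that solver’s estimated effect, in seconds, on the completion time.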

Now, this tells you a lot. Or at least, it tells *me* a lot. The intercept is effectively the time taken by no. 2 on his own (we do this at his PC, so he’s always involved). Hence the NA across his line. The t value is a test of significance – how likely is it that the results obtained are just random chance, as opposed to genuine skill? As an example, number 10 has a better impact than me, but he’s only taken part in two puzzles, so there’s a good chance it was just a fluke. Well, a 19% chance at least. Hence, no significance.
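That 19% is a two-sided p-value for the t statistic. For large samples the normal approximation gives a quick way to compute one (a solver with only two puzzles really needs the t distribution proper, which R’s summary output handles for you):

```python
import math

def two_sided_p_normal(t):
    """Two-sided p-value for a t statistic, via the normal
    approximation. Adequate for large samples only; small ones
    need the t distribution itself."""
    return math.erfc(abs(t) / math.sqrt(2))
```

The usual convention is to call an effect significant when this comes out below 0.05 – which 19% clearly doesn’t.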

Oh, and in case you’re wondering, the record time for completion is around 1:21. Which is not bad considering that it probably takes close to a minute just to navigate around the crossword and type it all in.

Now, if only we’d kept track of the type of puzzle grid (whether or not there’s an across clue in the top row, for example), or clue types (number of anagrams per puzzle, or even just a clue count), we’d *really* have some data to work with.