Monday, December 23, 2019

Outcomes

At the start of the year I made some predictions. It's now time to see how I did.

In reverse order....


6. The level of CO2 in the atmosphere will increase (p=0.999).

Yup. I don't really need to wait for 1 Jan for that.

5. 2019 will be warmer than most years this century so far (p=0.75 - not the result of any real analysis).

As above, I know we've a few days to go but no need to wait for this one, which has been very clear for a good while now.

4. We will also submit a highly impactful paper in collaboration with many others (p=0.85).

Done, reviews back which look broadly ok, revision planned for early next year when we all have a bit of time (31 Dec is a stupid IPCC submission deadline for lots of other stuff).

3. Jules and I will finish off the rather delayed work with Thorsten and Bjorn (p=0.95).

Yes, the project is done, though the write-up continues. Actually we hope to submit a paper by 31 Dec but there will be more to do next year too.

2. I will run a time (just!) under 2:45 at Manchester marathon (p=0.6).

Nope, 2:47:15 this time. The prediction was made just a few days after I'd run a big PB in a 10k but even then I thought it was barely more likely than not, and it got less likely as the date approached. 

1. Brexit won't happen (p=0.95).

On re-reading the old post, I have to admit I cannot remember the precise intention I had when I wrote this. Given the annual time frame of the remainder of the bets, and the narrative of that time being that we were certainly going to leave on the 29th March (as repeated over 100 times by May - remember her? - and the rest of them) I do believe I must have been referring to leaving during 2019. After all, I could never hope to validate a bet of infinite duration. So yes, I'm going to give myself this one.

On the other hand, I did actually think that we would probably not be stupid enough to leave at all, and clearly I misunderestimated the electorate and also the dishonesty of the Conservative Party, or perhaps as it should be known, the English National Party.

I have learnt from that misjudgment and will not be offering any predictions as to where we end up at the end of next year. Which is sort of inconvenient, as we are trying to arrange a new contract with our European friends for work which could extend into 2021. Our options would seem to include: limiting the scope of the contract to what we can confidently complete strictly within 2020, which is far from ideal, or shifting everything to Estonia (incurring additional costs and inconvenience for us, though it may be the best option in the long term). Or just take a punt and cross our fingers that it all turns out ok, despite there being as yet no hint of a sketch of a plan as to how the sales of services into the EU will be regulated or taxed past 2020. It is quite possible that we'll just shut down the (very modest) operation and put our feet up. 

I'm still waiting for the brexiters to tell me how any of this is in the country's interests. But that's a rant for another day. Perhaps it's something to do with having enough of experts.

As for scoring my predictions: the idea of a “proper scoring rule” is to provide a useful measure of performance for probabilistic prediction. A natural choice is the logarithmic scoring rule L = log(p) where p is the probability assigned to the outcome, and with all of my predictions having a binary yes/no basis I'll use base 2 for the calculation. The aim is to maximise the score (ie minimise its negativity, as the log of numbers in the range 0 to 1 is negative). A certain prediction where we assign a probability of p=1 to something that comes out right scores a maximum 0, a coin toss is -1 whether right or wrong but if you predict something to only have a p=0.1 chance and it happens, then the score is log(0.1) which in base 2 is a whopping -3.3. Assigning a probability of 0 to the event that happens is a bad idea, the score is infinitely negative...oops.

My score is therefore:
0 - 0.42 - 0.23 - 0.07 - 1.32 - 0.07 = -2.11

or about -0.35 per bet, which is equivalent to assigning p=0.78 to the correct outcome each time (this is just the geometric mean of the probabilities I assigned to the outcomes that occurred). Of course some were very easy, but that's why I gave them high p estimates, which means a high score (but a big risk if I'd got them wrong). I could have given a higher probability to the temperature prediction if I'd bothered thinking about it a bit more carefully. The running one was the only truly difficult prediction, because I was specifically calibrating the threshold to be close to the border of what I might achieve. It might have been better presented as a distribution for my finish time, where I would have had to judge the sharpness of the pdf as well as its location (ie mean).
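For anyone who wants to check the arithmetic, here's a short script. The inputs are just the six probabilities above (only the marathon prediction failed, so it scores log2(1-0.6)); the total differs from my hand-rounded sum in the second decimal place.

```python
from math import log2

# Base-2 logarithmic score for the six predictions.  p is the probability
# I assigned; `happened` records whether the predicted event occurred.
probs_assigned = [0.999, 0.75, 0.85, 0.95, 0.6, 0.95]
happened = [True, True, True, True, False, True]

# Score each prediction on the probability given to the actual outcome.
scores = [log2(p if h else 1 - p) for p, h in zip(probs_assigned, happened)]
total = sum(scores)
per_bet = total / len(scores)
implied_p = 2 ** per_bet  # geometric mean probability assigned to the outcomes

print(round(total, 2), round(per_bet, 2), round(implied_p, 2))
```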

Tuesday, December 17, 2019

BlueSkiesResearch.org.uk: Is the concept of ‘tipping point’ helpful for describing and communicating possible climate futures?

There’s a new book just out "Contemporary Climate Change Debates: A Student Primer" edited by Mike Hulme. I contributed a short essay arguing the negative side of the above question. It was originally intended to be "Will exceeding 2C of warming lock the world onto a 'Hothouse Earth' trajectory?" but no-one could be found to argue in favour of that (this was shortly after the publication of the Steffen nonsense) so we settled on something a bit more vague. Maybe I should summarise my compelling argument but don’t have time right now so you’ll have to take my word for it.

I haven’t had time to read the book but my complimentary copy just arrived (hence the post) and the table of contents is quite interesting, so maybe it would make a good Christmas present for the person who is interested in climate change – or even for someone who isn’t!

Introduction: Why and how to debate climate change
Mike Hulme
1. Is climate change the most important challenge of our times?
Sarah Cornell and Aarti Gupta
PART I: What do we need to know?
2. Is the concept of 'tipping point' helpful for describing and communicating possible climate futures?
Michel Crucifix and James Annan
3. Should individual extreme weather events be attributed to human agency?
Friederike E.L. Otto and Greg Lusk
4. Does climate change drive violence, conflict and human migration?
David D. Zhang and Qing Pei; Christiane Fröhlich and Tobias Ide
5. Can the social cost of carbon be calculated?
Reyer Gerlagh and Roweno Heijmans; Kozo Torasan Mayumi
PART II: What should we do?
6. Are carbon markets the best way to address climate change?
Misato Sato and Timothy Laing; Mike Hulme
7. Should future investments in energy technology be limited exclusively to renewables?
Jennie C. Stephens and Gregory Nemet
8. Is it necessary to research solar climate engineering as a possible backstop technology?
Jane C.S. Long and Rose Cairns
PART III: On what grounds should we base our actions?
9. Is emphasising consensus in climate science helpful for policymaking?
John Cook and Warren Pearce
10. Do rich people rather than rich countries bear the greatest responsibility for climate change?
Paul G. Harris and Kenneth Shockley
11. Is climate change a human rights violation?
Catriona McKinnon and Marie-Catherine Petersmann
PART IV: Who should be the agents of change?
12. Does successful emissions reduction lie in the hands of non-state rather than state actors?
Liliana B. Andronova and Kim Coetzee
13. Is legal adjudication essential for enforcing ambitious climate change policies?
Eloise Scotford; Marjan Peeters and Ellen Vos
14. Does the 'Chinese model' of environmental governance demonstrate to the world how to govern the climate?
Tianbao Qin and Meng Zhang; Lei Liu and Pu Wang
15. Are social media making constructive climate policymaking harder?
Mike S. Schäfer and Peter North

Monday, November 11, 2019

BlueSkiesResearch.org.uk: Mina olen Eesti e-resident! 🇪🇪


I believe the title of the post proclaims me to be an Estonian e-resident. jules likewise. This marks the culmination of a very straightforward on-line process which was remarkably painless right up to the moment that we had to attend the Estonian Embassy in London to pick up our identity cards in person, at which point we had to brave Britain’s creaking rail network.

The point of establishing e-residency is to be able to set up a business there, which will enable Blue Skies Research to remain seamlessly in the EU in the event of the UK ever managing to leave. Not that the latter looks very likely, but in order to collaborate on any long-term project based on EU funding we need to be able to prove that there’s a plan in place to cover the theoretical possibility. This must be one of these “Brexit bonus” things that the tories have been promising us for the past few years. Though “bonus” would usually imply some sort of gain rather than added costs and bureaucracy, not to mention the losses in corporation tax which will henceforth be paid in Estonia rather than the UK. Even for our part-time hobby business, that is likely to be several thousands, perhaps up to ten thousand pounds, per year lost to the UK indefinitely into the future. Our combined share of EU membership fees is probably under a hundred quid per year. Even the bare cost of health insurance for when we visit our colleagues there will be more than that when we lose the EHIC. But we will apparently get blue passports and we may eventually get a new 50p piece too when they have worked out the design. Apparently they had almost finalised that a while back, but hadn't worked out what to do about the border. Boom tish. Of course they still haven't, so Johnson is just lying through his teeth every time he opens his mouth, and the same old tory voters will just lap it up cos he's such a cheeky chappy with those clever Latin bons mots.

We did manage to arrange another couple of things during the two-day trip, so it wasn’t a total waste of time. And it was cheaper than expected too, due to three of the four train trips being significantly delayed to such an extent that we can reclaim half of the travel costs.

Saturday, November 09, 2019

BlueSkiesResearch.org.uk: Marty Weitzman: Dismally Wrong.

De mortuis nil nisi bonum and all that, but I realise I only wrote this down in a very abbreviated and perhaps unclear form many years ago, in fact prior to publication of the paper it concerns. I was sad to hear of his untimely death and especially by suicide when he surely had much to offer. But like all innovative researchers, he made mistakes too, and his Dismal Theorem was surely one of them. Since it’s been repeatedly brought up again recently, I thought I should explain why it’s wrong, or perhaps to be more precise, why it isn’t applicable or relevant to climate science in the way he presented it.

His basic claim in this famous paper was that a “fat tail” (which can be rigorously defined) on a pdf of climate sensitivity is inevitable, and leads to the possibility of catastrophic outcomes dominating any rational economic analysis. The error in his reasoning is, I believe, rather simple once you’ve seen it, but the number of people sufficiently well-versed in statistics, climate science and economics (and sufficiently well-motivated to carefully examine the basis of his claim) is approximately zero so as far as I’m aware no-one else ever spotted the problem, or at least I haven’t seen it mentioned elsewhere.

The basic paradigm that underpins his analysis is that if we try to estimate the parameters of a distribution by taking random draws from it, then our estimate of the distribution is going to naturally take the form of a t-distribution which is fat-tailed. And importantly, this remains true even when we know the distribution to be Gaussian (thin-tailed), but we don’t know the width and can only estimate it from the data. The presentation of this paradigm is hidden beyond several pages of verbiage and economics which you have to read through first, but it’s clear enough on page 7 onwards (starting with “The point of departure here”).
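A small simulation illustrates the paradigm (my own toy numbers, nothing from the paper): even when the data really are drawn from a Gaussian, standardising a fresh draw by a width estimated from only a handful of samples gives a Student-t ratio, whose tails are vastly fatter than the Gaussian ones.

```python
import random
import statistics
from math import erf, sqrt

random.seed(1)

# Draw from a standard Gaussian, but pretend we only know its width via the
# sample s.d. of n=4 draws.  The standardised fresh draw x/s then follows a
# Student-t with 3 degrees of freedom, which is fat-tailed.
n, trials, threshold = 4, 100_000, 4.0
exceed = 0
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]
    s = statistics.stdev(sample)          # width estimated from just 4 draws
    if abs(random.gauss(0, 1) / s) > threshold:
        exceed += 1

empirical = exceed / trials               # tail mass of the t ratio
gaussian = 2 * (1 - 0.5 * (1 + erf(threshold / sqrt(2))))  # P(|Z| > 4)
print(empirical, gaussian)  # the empirical tail dwarfs the Gaussian one
```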

The simple point that I have to make is to observe that this paradigm is not relevant to how we generate estimates of the equilibrium climate sensitivity. We are not trying to estimate parameters of “the distribution of climate sensitivity”; in fact, to even talk of such a thing would be to commit a category error. Climate sensitivity is an unknown parameter; it does not have a distribution. Furthermore, we do not generate an uncertainty estimate by comparing a handful of different observationally-based point estimates and building a distribution around them. (Amusingly, if we were to do this, we would actually end up with a much lower uncertainty than usually stated at the 1-sigma level, though in this case it could indeed end up being fat-tailed in the Weitzman sense.) Instead, we have independent uncertainty estimates attached to each observational analysis, which are based on analysis of how the observations are made and processed in each specific case. There is no fundamental reason why these uncertainty estimates should necessarily be either fat- or thin-tailed; they just are what they are, and in many cases the uncertainties we attach to them are a matter of judgment rather than detailed mathematical analysis. It is easy to create artificial toy scenarios (where we can control all structural errors and other “black swans”) where the correct posterior pdf arising from the analysis can be of either form.

Hence, or otherwise, things are not necessarily quite as dismal as they may have seemed.

Friday, October 25, 2019

God's own marathon

I'd like to post about brexit but there's been absolutely nothing of note going on recently - truly it is a tale told by an idiot, full of sound and fury, signifying nothing.

So instead of that, another race report.

Every year the British Masters Athletics Federation organises a Masters Marathon Championship event, usually as part of one of the autumn city marathons ("Masters" = veterans). This year it was in York, or more precisely the Yorkshire Marathon. England Athletics also organised an England vs Celtic Nations Masters Marathon match at the same event. It's all a bit of meaningless fun really but what with it being reasonably local and with nothing much else on, I decided to give it a go this year. I could have qualified for the England team (through being in the top 4 in my age category in Mancs) but offered my services to Scotland instead. 

Training was a bit desultory, I wasn't going to spend hot summer days trudging around on long runs so just squeezed in an abbreviated plan in the 12 weeks leading up to the event. It was never going to be a PB for me, and a half marathon in September where I barely broke 1:25 (about 3 mins slower than my best) confirmed this. Never mind, I had a smart Scotland (veteran) vest to wear!

Had been hoping to see a bit of York over the race weekend, as we had barely visited before, but the location of our Airbnb to the north, combined with the need to visit the start village on the Saturday afternoon to pick up a race bib, meant we didn't really have time or energy for much else. But we managed to find a decent pizza in the impressive old assembly rooms which was stuffed full with a bunch of mostly skinny oldish people eating unusually large dinners early on a Saturday evening. I don't think the staff knew what was going on. It wasn't quite as good a pizza as in Manchester or Leeds, actually, but seemed adequate for the purposes.

The airbnb was a bit further than I'd have liked to be from the start, but we had plenty of time to wander there in the morning mostly down a quiet cycle route. Loads of people were already stuffed into the starting pens - I've never understood why people want to spend so long standing around like that - and I hopped over the fence with about 10 mins to go and found myself among lots of England veteran vests. In what was an interesting novelty to me, the vets were all wearing race bibs with their age categories on their backs - this is what I'd had to pick up the day before - so we could tell who we were supposed to be competing against. This is a really good idea as usually when an old baldie hoves into view mid-race I have little idea whether they are a direct competitor or not and therefore whether I should try to beat them.



The race was fairly uneventful really - I wasn't that sure of my fitness so didn't risk going too hard. Unfortunately that meant that I got dropped just off the back of what would have been a useful group around the 2nd woman and ran most of the race solo which didn't help in what had become a steady breeze. Temperature was comfortable enough, we only had one very light spot of rain along with a bit of sun that never got too hot. For some reason there were a couple of pipe bands, subsequent web searching suggests they were probably local to York as I can't imagine why any would have come down from Scotland.

Went through halfway in just under 1:25, almost exactly the same as my recent half marathon, but slowed slightly for the second half as my lack of motivation and training started to tell. Nothing drastic though, and I finished in 2:52:52, nominally my worst proper marathon since 2016 (not counting the very mountainous Bentham race) but not really that bad considering. With the wind and lots of gentle undulations it didn't feel like quite as fast a course as Manchester, though runbritainrankings seems to think it was just as quick or even quicker. But that may have been biased by the number of vets peaking specifically for this event. Anyway it was a decent event and well organised apart from the minor annoyance of having to pick up the back bib in person rather than having it posted out as with the standard race number.



On the finishing straight I managed to find my pre-arranged flags and crossed the line waving them to a suitably mixed reception.

According to the results I was the 7th overall M50, but 5th in the BMAF championships (which you had to specifically enter separately) and the 1st member of the Celtic Nations team. Though these results don't seem to have been officially announced yet. I don't get a haggis as a prize or anything like that. Just an unfeasibly large garish pink t-shirt which has been donated to jules for her own sartorial experimentation.

Monday, September 23, 2019

My latest brexit prediction

Thought I'd better make a prediction before tomorrow's verdict: the Supreme Court will rule the matter justiciable, it will furthermore conclude that Johnson lied to the Queen, but it will not demand a specific solution such as reopening parliament or declaring that the prorogation was null and void. This option was not even discussed on the "Talking Politics" podcast I listened to over the weekend, and since they've reliably got everything else wrong about brexit for the past three years, it's a slam dunk.

No, honestly, though I do think it's a plausible outcome I wouldn't attach a very high probability to being right on this. The story of brexit has been one of unpredictable twists and turns, even if the final outcome is amply summed up in this pie chart:






And here's some more twitter fun:

Probably takes a click to make the gifs/videos play. [Oh, the second one doesn't seem to work. That's a shame. Well, it was just a long list of brexiters pretending everything was going to be great a couple of years ago, and now pretending that they never claimed it was going to be great. Just the usual lying liars lying.]

Oh how I long for the days when a PM syphoning off 100k of public money to one of their mistresses qualified as a proper scandal. These days it barely rates a mention on the BBC, and only then after people have baited them for a day over why they haven't covered it.

Meanwhile the Labour party conference has managed to create an outcome that is even worse than anyone imagined possible, not merely sitting on the implausible brexit unicorn fence but choosing to do so through a show-of-hands vote that many think was called the wrong way or at least too close to call without a proper count. What a shambles.

At least the LibDems have got there finally. I can snark at how long it took them to get there but they are still well ahead of the other two parties.

Monday, September 02, 2019

Bracing for brexit


So, the Govt has decided to splash £100m of our money on telling us to do what it has signally failed to do for the last 3 years - get ready for brexit. Of course the main aim of this marketing campaign is really to soften up the population for the supposed inevitability of brexit at the end of October, and hoodwink them into thinking that if it "happens" then that would be the end of the matter, rather than the start of decades of negotiation, argument and recrimination over the subsequent arrangements.




I had a look at the govt site, and for a small and simple company such as BlueSkiesResearch, there are pages and pages of vague verbiage that mostly miss the point and nothing that explains whether or not we would be able to travel to the rest of the EU to work there as we did in Hamburg and Stockholm over the last few years. Probably the best strategy will be to just lie and pretend it's a holiday. Of course there's no guidance for that either but we can be fairly confident that this would be sorted out in time for our next trip (probably the EGU meeting in Vienna if any Austrian immigration officials are reading).

More consequentially, I've also applied for - and received - Estonian e-residency (jules has also applied, but a bit later so hers has not come through yet). This will enable us to establish a business over there within the EU and hopefully allow easy participation in such things as Horizon2020 and its successor funding programmes. I know the govt had promised to support existing grants but the point is to be able to apply in the future.


Of course an inevitable consequence of this - on top of the time and money wasted, which will amount to a few hundred pounds by the time it's done and dusted - is that our company will be paying corporation tax in Estonia rather than the UK. Just one more bit of pointless self-harm by the ideologues.

I've still got to go to London to pick up the id card, that's more time and money down the drain. Perhaps after visiting the Estonian Embassy I'll take a stroll along Downing Street and chuck a few petrol bombs at No 10. Only joking, I'll probably take a milkshake.

Of course the most likely outcome - as I have said consistently for over three years now - is that we actually remain in the EU after all, when this colossally stupid act of self-humiliation collapses under its own dishonesty and idiocy. In the meantime, the damage mounts up and whatever happens now, the harm will take decades to recover from.


Monday, July 08, 2019

Parcevall Hall

As it says on my Twitter profile (@julesberrry), I am a bad recorder player. This "skill" enables one to attend things like playing recorder weekends in big old houses with lovely gardens! The recorder is a nice quiet instrument so one really can't go wrong no matter how bad. But I still feel fortunate for not being a bad french horn player.










Wednesday, June 19, 2019

More winning!!

Where winning = doing something, anything, faster than James.

Last year I discovered why the Lake District is called that. I always thought it was a funny name for a bunch of pretty mountains and lots of cars. But it turns out there are all these big deep cold lakes, and you are allowed to swim in almost all of them! 

Ullswater 500m, 1 mile (1610m - don't ask me why it isn't a sensible 1500m!), and 3.5km swims were last Sunday. The 500m (84 finishers) is perhaps the beginners event. The 3.5km is pretty much ironman practice distance and the standard was high. However, it was so cold that this event was reduced to 2.5km. The 1 mile was equally cold (11.8C brrrr.) but they made us do the whole thing! 292 people finished this one, including me and James. I got round 6 minutes quicker than James which makes the difference between us in swimming and running about the same, but the other way round! But somehow James came out more inspired. My race was a bit of a fist fight. Whereas a week ago at Leeds I was swimming among a wave of elegant, lithe, lightweight, coordinated, fit but middle-aged women, when it comes to pure swimming, the big, the fat, the young and the male tend to trounce lightweight middle-aged elegance! I was completely unprepared for being half overtaken by thrashing behemoths doing front crawl who then collapsed into breaststroke for a few strokes, thus entangling all their kicky limbs among mine. The way out is to kick violently, but this does take quite a lot of energy. Next time! Still, a reaction of annoyance rather than panic is encouraging I suppose. I am still not sure how to overtake these people, however, as it is really hard to get around widely flailing limbs in a packed field, and trying to draft behind them doesn't really work.







Tuesday, June 18, 2019

The sociopaths have taken over the asylum

Just in case anyone was in any doubt about the nature of the swivel-eyed loons who will shortly be picking our new PM....


Saturday, June 15, 2019

[jules' pics] World triathletes

James kindly blogged my amazing triumph in the British Triathlon Championships... the triumph being not dying during the event and also BEATING HIS MARATHON TIME!! (hurrah!)

Here are the real ones. 

Cycling (not the lead group)


Running



Georgia Taylor-Brown won in the end (she is in second in the running pic here). Katie Zaferes was second, and Jess Learmonth was third.

Some men also did it later on.


Friday, June 14, 2019

2:46:41

jules has taken up triathloning. I'm a rubbish swimmer so am not really tempted. The cycling and running bit would be ok but there's not much fun in doing a race where I start by half-drowning myself and giving everyone else a 20 minute head start. Anyway she has done a couple of shorter pool-based events over the last couple of years but enjoys open water swimming so wanted to do one of those, which are more often the full Olympic distance (1500m swim, 40k bike, 10km run).

Leeds of course is the centre of the UK for triathlon, with not just the Brownlees but also the women's team (who are probably better than the men these days) mostly based there. So doing the Leeds triathlon was the obvious choice. As well as the UK age-group championships there was an international elite event following (part of the ITU World Triathlon Series).

We started out with the traditional pizza, which was very good but so small we had to get some more slices.



The morning was bright and sunny but quite cold. Compared to Windermere where we had been practising, the water was apparently not too bad at 15C.




One of these pictures contains jules, the other is the wave in front of hers.


This isn't jules, who had apparently just swum past without me noticing. She didn't want to wave in case she got accidentally rescued! She was a little faster than I'd expected and you really can't tell people apart in the water when they are all wearing wetsuits and hats. So I missed the fun of watching her struggle to get out of her wetsuit in transition.


A massive collection of very high-tech bikes. Together with jules' one. All surrounded by high fences and patrolled by security guards all night as you had to leave your bike there the night before.


Not much evidence from the photo but she was actually running in this pic! (It was uphill to be fair). And having been following her round the course, I didn't quite have time to get into the grandstand proper for the finish, due to the circuitous route and closed roads. But her hat is just visible over the barrier. There was also a live stream on the BBC website...ah here it is with no sound.




jules had worked out that she might be able to beat my marathon time....and sure enough...























She's been wearing the medal non-stop since the weekend! So I've got my work cut out over the winter to win back bragging rights....


Friday, June 07, 2019

The risks of financial managers, part 2

With reference to this post.

Unknown commenter pointed out the issue with portfolio E in particular, that although it had an expected gain of 5% per year, investors who persist with this portfolio over the long term would probably lose more in the bad years than they would gain in good ones. Sounds contradictory? Not quite. If you do the sums, you will see that the expected gain over a long sequence of years is generated from a very small probability of an extremely large gain, together with a very large probability of losing almost all your initial investment. The distribution of wins and losses is binomial (which tends towards Gaussian for a lot of years) but in order to come out ahead the investor needs to get lucky roughly 3 out of 5 years, and the probability of this happening will shrink exponentially (in the long term) as the number of years increases because it's moving further and further into the tail of a Gaussian.
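To put some numbers on that (a sketch: I'm taking portfolio E to multiply wealth by 1.58 in a good year and 0.52 in a bad one, with equal probability, consistent with the odds and stake quoted later in this post):

```python
from math import comb

# Portfolio E sketch: equal chances of multiplying wealth by 1.58 or 0.52.
# The expected value grows 5% a year ((1.58 + 0.52)/2 = 1.05), yet the
# typical outcome shrinks.
win, lose, years = 1.58, 0.52, 20

# Chance of ending ahead after `years` years: count the win-counts k for
# which win**k * lose**(years-k) > 1, weighted by binomial probabilities.
p_ahead = sum(comb(years, k) for k in range(years + 1)
              if win ** k * lose ** (years - k) > 1) / 2 ** years

expected = ((win + lose) / 2) ** years   # the mean grows like 1.05**20
median = (win * lose) ** (years / 2)     # the typical investor's multiplier

print(round(p_ahead, 3), round(expected, 2), round(median, 2))
```

The median investor ends up with well under a fifth of their starting wealth, and only about a quarter come out ahead at all, even though the expected final value is more than double the stake.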

As an extreme version of this, consider being invited to place a sequence of bets on a coin toss where the result of a T means you lose whatever your stake was, but H means you get back 3 times your stake (ie you win 2x stake, plus get your stake back - odds of 2:1 in betting parlance). This bet clearly has positive expectation: each pound bet has an expected return of £1.50, so if you want to maximise your expected wealth then rationally this bet is a great offer. If you start with a pound in the pot and do this 20 times in a row, betting your entire pot each time, you either end up with 3^20 pounds (with a 1 in a million probability, when you get 20 heads) or else you lose everything (with 999,999 in a million probability, when a tail turns up at any time). (2^20 is actually 1,048,576 which is close enough to a million for many purposes and can be a useful rule of thumb to remember). The expected gain at the end of the 20 bets is about £3,325 but the vast majority of players will end up with nothing. Would any of my readers pay £1000 for the right to take part in this game?
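The arithmetic is easily checked:

```python
# Bet the whole pot on each of 20 fair coin tosses at 2:1 odds: a head
# triples the pot, a tail wipes it out.
n = 20
p_jackpot = 0.5 ** n             # need 20 heads in a row
jackpot = 3 ** n                 # final pot in pounds, from an initial £1
expected = p_jackpot * jackpot   # same as 1.5**20, about £3,325

print(p_jackpot, jackpot, round(expected))
```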

In fact, for most people, most of the time, increasing wealth by a factor of 10 doesn't really make life 10 times better, but most people would be very averse to a bet where they could lose everything they own, including their house and the clothes off their back, even if the expected return was positive (eg betting the farm on the coin toss as above). A standard approach to account for this is to evaluate uncertain outcomes in terms of expected utility rather than expected value, and a utility function which is the logarithm of value is a plausible function to use.  One typical implication would be that the subject would be ambivalent about taking a bet where they might either double or halve their wealth with equal probability. The expected value of the bet is positive of course, but expected utility (compared to the prior situation) is zero. It should be noted that no-one really behaves as a fully rational utility-maximiser in realistic testing, but it's a plausible starting point widely used for rational decision theory.

This logarithmic utility maximisation idea leads naturally to the Kelly Criterion for choosing the size of the stake in betting games like the coin toss above. The point is that by betting a proportion of your wealth (rather than all of it) you can improve your return in terms of expected utility. Note that the log of 0 is infinitely negative, so losing all you own is best avoided! In 1956, Kelly proposed a formula for the stake which gives the maximum expected gain in logarithmic terms. The Kelly formula of (p(b+1)-1)/b, where p is probability of winning and b is odds in the traditional sense, implies a stake of (0.5*3-1)/2 = 0.25, ie you should bet a quarter of your wealth on each of the "triple or nothing" coin tosses. After the first bet, you will have either 0.75 or 1.5 pounds etc, so you either gain 50% or lose 25% and if you were to have an equal number of wins and losses you will more than triple your money in 20 bets. A smaller win in absolute terms, but a much better outcome in terms of expected utility and the majority of players who follow this strategy will make a profit.
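Here's the Kelly calculation for the coin-toss game, plus a quick (seeded) simulation backing up the claim that the majority of quarter-pot bettors come out ahead:

```python
import random

random.seed(0)

def kelly_fraction(p, b):
    """Kelly stake for win probability p at (traditional) odds b:1."""
    return (p * (b + 1) - 1) / b

f = kelly_fraction(0.5, 2)   # triple-or-nothing coin toss -> bet a quarter

# With equal wins and losses over 20 tosses the pot grows by 1.5**10 * 0.75**10,
# i.e. it more than triples.
typical = 1.5 ** 10 * 0.75 ** 10

# Simulate 10,000 players each making 20 Kelly bets from a £1 pot.
ahead = 0
for _ in range(10_000):
    pot = 1.0
    for _ in range(20):
        stake = f * pot
        pot += 2 * stake if random.random() < 0.5 else -stake
    if pot > 1.0:
        ahead += 1

print(f, round(typical, 2), ahead / 10_000)
```

Around three-quarters of the simulated players finish with more than they started with, versus one in a million for the all-in strategy.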

So what does this have to do with the investment portfolios? Each portfolio can be considered a bet in which you stake a proportion of your wealth at particular odds, with a 50% chance of winning. Eg with portfolio E the investor is betting 0.48 of their wealth at odds of (1.06/0.48 - 1):1 = 1.21:1. Kelly says that with such odds and a 50% win chance, you should really bet only about 9% of your wealth, which would return either 0.91 or 1.11, giving a small gain in log terms. Of course the investor doesn't get to choose their stake here, but it still provides an interesting framework for comparison. The 5 investments have the following implied odds, stakes, geometric mean returns and Kelly-optimal stakes respectively:

Portfolio  Odds  Stake  Geometric return  Kelly stake
A          1.6   0.07   1.02              0.18
B          1.7   0.10   1.03              0.21
C          1.6   0.16   1.02              0.18
D          1.4   0.27   1.00              0.14
E          1.2   0.48   0.91              0.09

C has a better geometric return than A (having the same odds and a stake closer to the Kelly optimum), but the rounding conceals it. B is better than either, due to its better odds. D is useless and E is worse than useless in these terms: it implies a massive bet at rather poor odds, which means that in the long run you'll usually lose money.
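These figures can be recomputed directly from the stated portfolio returns (a sketch; note that with these numbers the Kelly-optimal stake for B comes out at about 0.21):

```python
import math

portfolios = {  # name: (gain if win, loss if lose), each outcome with 50% probability
    "A": (0.11, 0.07),
    "B": (0.17, 0.10),
    "C": (0.25, 0.16),
    "D": (0.37, 0.27),
    "E": (0.58, 0.48),
}

for name, (up, down) in portfolios.items():
    stake = down                             # fraction of wealth at risk
    odds = up / down                         # implied odds, b:1
    geom = math.sqrt((1 + up) * (1 - down))  # geometric mean annual return
    kelly = (0.5 * (odds + 1) - 1) / odds    # Kelly-optimal stake for p = 0.5
    print(f"{name}: odds {odds:.1f}, stake {stake:.2f}, "
          f"geometric return {geom:.2f}, Kelly stake {kelly:.2f}")
```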

It is fair to say that not everyone necessarily wants to maximise the expected log of their wealth, but I was surprised to see investment strategies proposed that were actually loss-making in log space. It's also true that investment E has the largest gain in purely expected value terms, but it would require an extraordinary appetite for risk to take it (rather than mere tolerance or indifference). And this wasn't a single accident: the other similar question had no fewer than 3 of its 6 options with the same property. I wonder if it's partly due to a cognitive error arising from the presentation. One of the respondents said that they wouldn't be bothered by a 40% loss one year if they could expect a 60% gain the next. If that were written as dividing their investment by a factor of 1.7 one year and then multiplying it by 1.6 the next, it might seem less attractive!




Wednesday, June 05, 2019

The risks of financial managers

The following question is a slightly reworded version of a real question from a real financial management company's risk questionnaire that was given to someone locally. I've tried to be fair to the financial company while making their question a bit less vague; they actually had two similar questions which cover this issue in slightly different ways.

"You have the choice of placing your investment in one of the following 5 portfolios, ranging from low to high risk. For each portfolio, you can assume the return over each consecutive year (edit: was 5 years) takes one of two possible values, with 50% probability of each outcome. Which portfolio would you prefer for your investment?

A: 50% chance of either +11% or -7%
B: 50% chance of either +17% or -10%
C: 50% chance of either +25% or -16%
D: 50% chance of either +37% or -27%
E: 50% chance of either +58% or -48%"

So, which option(s) do you like, and why?

Tuesday, June 04, 2019

BlueSkiesResearch.org.uk: How confident should you have been about confidence intervals?

OK, it’s answer time for these questions (also here on this blog). First, a little background. This is the paper, or rather, here it is to download. The questions were asked of over 100 psychology researchers and 400 students and virtually none of them got all the answers right, with more wrong than right answers overall.

The questions were modelled on a paper by Gigerenzer who had done a similar investigation into the misinterpretation of p-values arising in null hypothesis significance testing. Confidence intervals are often recommended as an improvement over p-values, but as this research shows, they are just as prone to misinterpretation.

Some of my commenters argued that one or two of the questions were a bit unclear or otherwise unsatisfactory, but the instructions were quite clear, and the point was not whether one might think the statement probably right, but whether it could be deduced as correct from the stated experimental result. I do have my own doubts about statement 5, as I suspect that some scientists would assert that “We can be 95% confident” is exactly synonymous with “I have a 95% confidence interval”. That’s a confidence trick, of course, but that’s what confidence intervals are anyway. No untrained member of the public could ever guess what a confidence interval is.

Anyway, the answer, for those who have not yet guessed, is that all of the statements were false, broadly speaking because they were making probabilistic statements about the parameter of interest, which simply cannot be deduced from a frequentist confidence interval. Under repetition of an experiment, 95% of confidence intervals will contain the parameter of interest (assuming they are correctly constructed and all auxiliary hypotheses are true) but that doesn’t mean that, ONCE YOU HAVE CREATED A SPECIFIC INTERVAL, the parameter has a 95% probability of lying in that specific range.
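The repetition property is easy to check numerically. A minimal sketch, using a normal sample and the usual interval for the mean (the 1.96 multiplier is the normal approximation, which is fine at n = 100; the parameter values are arbitrary):

```python
import random
import statistics

random.seed(1)
true_mean, n, trials, hits = 0.25, 100, 10_000, 0

for _ in range(trials):
    sample = [random.gauss(true_mean, 1.0) for _ in range(n)]
    m = statistics.fmean(sample)
    se = statistics.stdev(sample) / n ** 0.5
    # approximate 95% confidence interval: mean +/- 1.96 standard errors
    if m - 1.96 * se <= true_mean <= m + 1.96 * se:
        hits += 1

print(hits / trials)  # close to 0.95: the coverage property under repetition
```

This confirms coverage under repetition; it says nothing about the probability that any one specific interval contains the parameter, which is the whole point of the quiz.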

In reading around the topic, I found one paper with an example similar to my own favourite. We can generate valid confidence intervals for an unknown parameter with the following procedure: with probability 0.95, say “the whole number line”, otherwise say “the empty set”. If you repeat this many times, the long-run coverage frequency tends to 0.95, as 95% of the intervals do include the true parameter value. However, for a given example, we can state with absolute certainty whether the parameter is in or outside the interval, so we can never say, once we have generated an interval, that there is a 95% probability that the parameter lies inside it.
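That silly-but-valid procedure can be simulated directly (a sketch; the parameter value 3.7 is arbitrary):

```python
import random

random.seed(0)
true_param, trials, covered = 3.7, 100_000, 0

for _ in range(trials):
    # With probability 0.95 report the whole number line, otherwise the empty set.
    interval = (float("-inf"), float("inf")) if random.random() < 0.95 else None
    if interval is not None and interval[0] <= true_param <= interval[1]:
        covered += 1

print(covered / trials)  # long-run coverage is about 0.95, yet for any single
                         # interval we know with certainty whether it covers
```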

(Someone is now going to raise the issue of Schrödinger’s interval, where the interval is calculated automatically, and sealed in a box. Yes, in this situation we can place 95% probability on that specific interval containing the parameter, but it’s not the situation we usually have where someone has published a confidence interval, and it’s not the situation in the quiz).

And how about my readers? These questions were asked on both blogs (here and here) and also on twitter, gleaning a handful of replies in all places. Votes here and on twitter were majority wrong (and no-one got them all right). Interestingly, all three of the commenters on the Empty Blog were basically correct, though two of them gave slightly ambiguous replies; I think their intent was right. Maybe it helps that I’ve been going on about this for years there 🙂

Wednesday, May 29, 2019

Bentham marathon

I've always intended to limit myself to one marathon a year, as I reckon that's quite enough time and effort to be devoted to serious running. But a couple of years ago I did the 3 peaks race when it was scheduled 4 weeks after Manchester marathon and that worked out ok, so when local road club Bentham Beagles announced they were putting on a marathon 6 weeks after Manchester, it felt like it would be a bit rude to not turn up.

The event was being arranged by a couple there who wanted to mark their 100th marathon with a local event (yes, there really are people who do 100 marathons as some sort of hobby/challenge). The route promised to be extremely hilly, heading first due south over the fells and down to a section of footpath around Stocks Reservoir, before returning over Bowland Knotts - both main climbs reaching altitudes of well over 400m, with plenty of smaller bumps to negotiate and numerous "arrowed" sections where the gradient exceeds 14% and in one place 20%. In fact the total climb of around 1000m comfortably exceeds that of the famous Snowdonia marathon (which goes round Snowdon, not up it). Not quite what Manchester training had prepared me for but I did manage a couple of runs over parts of the course in preparation so had some idea what I was letting myself in for. 

Post-Manchester resting was going ok and the blisters had healed, but then the three weeks leading up to the Bentham race were spent travelling first to Stockholm and then London, finally returning home around 10pm Friday night with the race starting at 9am on Saturday. Not quite ideal pre-race preparation, but never mind. I was determined to make it a fun run rather than going flat out, as it was never going to be a fast time. The on-line registration system provided a list of entries, and with a limit of 100 runners it wasn't too hard to do some stalking and work out that there quite probably wasn't going to be anyone properly fast (which means faster than me, of course). 

I guessed that a few of them might well set off a bit ambitiously at the start, however. Therefore a plan was hatched to try to go as easily as possible while keeping in touch with the leaders for the first half, before potentially pushing on a bit harder in the second half. Sure enough, a few people did charge off ahead but not ridiculously so, stretching out to a lead of up to a minute as we started up the first main climb. I think I was about 6th at one point, but soon enough a couple of the early leaders started to fall back and shortly afterwards I found myself running alongside one other guy who seemed quite experienced (he informed me that he had been pretty good in decades past with a ~28 min 10k to his name). We slowly reeled in the leaders, eventually forming a group of three for the last steep descent to the half-way mark.

The long-time leader stopped for a break at the reservoir (turned out to have foot problems) but Mr 28min was still going well and I didn't want to leave it to the last 10k in case he still had a turn of speed! (For context, my 10k PB is outside 37 mins, anything under 29 mins would be one of the leading times nationally.) So as we hit the second big hill I put a bit of effort in and was pleased to find he didn't respond. 3 miles later I had a quick snack at the top of Bowland Knotts and he was nowhere to be seen. I had time for a couple of pics on the way down...







jules had cycled out to meet me around the 20 mile mark and told me I had a decent gap, so I just kept going at a sensible pace, hoping my legs wouldn't fall off. I hadn't done such a long or hilly run since the three peaks in 2017 so wasn't sure how the last few miles would go. It turned out just about ok and I finished in a time of 3:24, almost 9 mins clear of second.



The allocation of numbers was purely alphabetical!
Pic: Andrew Swales

Road races are almost always just a time trial for me, with the aim being to go as fast as possible over the distance, so it was fun to actually "race" for once without worrying about the time. And the event was very well organised, with plenty of well-stocked refreshment stops. Obviously, having run 99 marathons previously, the organisers knew what needed to be done!

Here's the strava log:


Sunday, May 26, 2019

BlueSkiesResearch.org.uk: How confident are you about confidence intervals?

Found a fun little quiz somewhere, which I thought some of my readers might like to take. My aim is not to embarrass people who may get some answers wrong – in testing, the vast majority of all respondents (including researchers who reported substantial experience) were found to make mistakes. My hypothesis is that my readers are rather more intelligent than average 🙂 Please answer in comments but work out your answers before reading what others have said, so as not to be unduly influenced by them.

I will summarise and explain the quiz when enough have answered…


A researcher undertakes an experiment and reports “the 95% confidence interval for the mean ranges from 0.1 to 0.4”

Please mark each of the statements below as “true” or “false”. False means that the statement does not follow logically from the quoted result. Also note that all, several, or none of the statements may be correct:

1. The probability that the true mean is greater than 0 is at least 95%.

2. The probability that the true mean equals 0 is smaller than 5%.

3. The “null hypothesis” that the true mean equals 0 is likely to be incorrect.

4. There is a 95% probability that the true mean lies between 0.1 and 0.4.

5. We can be 95% confident that the true mean lies between 0.1 and 0.4.

6. If we were to repeat the experiment over and over, then 95% of the time the true mean falls between 0.1 and 0.4.

Sunday, May 12, 2019

BlueSkiesResearch.org.uk: Stockholm

Just had a couple of weeks in Stockholm, courtesy of Thorsten Mauritsen at MISU, whom we had previously visited in Hamburg. Lots of science will be forthcoming but we are too busy doing it to write about it for now 🙂

For the moment, I will just note that Thorsten is Danish, previously working in Germany but now in Sweden, we had discussions with his British and French group members, the Head of Department is Spanish. Discussions in the canteen seemed to be mostly English in a variety of accents (including the Dutch student who had considered coming to the UK but who had been dissuaded by the obvious reason), mixed with a range of unidentifiable Scandinavian languages – presumably mostly, if not all, Swedish. Theresa May and Jeremy Corbyn would be horrified to hear of such an outrageous situation and I’m relieved that they are doing their level best to ensure that no Brits will risk encountering such a terrible situation again.

(Actually, to be honest I am relieved that their level best is so pitiful that we aren’t actually going to leave the EU. But I’m still disgusted that they are so scared at the thought of people living, working and studying in different countries that they are completely fixated on the idea of preventing us from doing so.)

Haga parkrun was close to our hotel, and by a strange quirk of fate on Sunday morning I ran a route which quite closely approximated a lap of the upcoming Stockholm marathon. I hadn't even known there was a marathon. Some were out practising for the famous Stockholm ski marathon too.
Stockholm has a lot of islands, and as a result, there’s a lot of coastline and water.
On our last night we had dinner in a Michelin-starred restaurant which was an interesting experience. However this photo below is just the little castle on the top of Kastellholmen, which may be used as some sort of conference centre, I think.

Monday, May 06, 2019

Oyster Roulette

 If you've ever lost at Oyster Roulette you might not be inclined to try it here. Or are they just being precautionary/truthful?

But, how nice that a full norovirus recovery pack is included among the condiments...
Salt, sugar, handwipes.



Tuesday, April 23, 2019

BlueSkiesResearch.org.uk: Steffen nonsense

Been pondering whether it was worth the bother of blogging this, but I haven't written for a while and in the end I decided the title was too good a pun to pass on (I never claimed to have high standards).

The paper "Trajectories of the Earth System in the Anthropocene" had entirely passed me by when it came out, though it did seem to attract a bit of press coverage eg with the BBC saying
Researchers believe we could soon cross a threshold leading to boiling hot temperatures and towering seas in the centuries to come.
Even if countries succeed in meeting their CO2 targets, we could still lurch on to this “irreversible pathway”.
Their study shows it could happen if global temperatures rise by 2C.
An international team of climate researchers, writing in the journal, Proceedings of the National Academy of Sciences, says the warming expected in the next few decades could turn some of the Earth’s natural forces – that currently protect us – into our enemies.
and continues in a similar vein quoting an author
“What we are saying is that when we reach 2 degrees of warming, we may be at a point where we hand over the control mechanism to Planet Earth herself,” co-author Prof Johan Rockström, from the Stockholm Resilience Centre, told BBC News.
“We are the ones in control right now, but once we go past 2 degrees, we see that the Earth system tips over from being a friend to a foe. We totally hand over our fate to an Earth system that starts rolling out of equilibrium.”
Like I said, I had missed this, and it was only an odd set of circumstances that led me to read it, about which more below. But first, the paper itself. The illustrious set of authors postulate that once global temperatures reach about 2C above pre-industrial, a set of positive feedbacks will kick in such that the temperature will continue to rise to about 5C above pre-industrial, even without any further emissions and direct human-induced warming. Ie, once we go past +2C, we won't be able to stabilise at any intermediate temperature below +5C.

The paper itself is open access at PNAS. The abstract is slightly more circumspect, claiming only that they "explore the risk":
We explore the risk that self-reinforcing feedbacks could push the Earth System toward a planetary threshold that, if crossed, could prevent stabilization of the climate at intermediate temperature rises and cause continued warming on a "Hothouse Earth" pathway even as human emissions are reduced. Crossing the threshold would lead to a much higher global average temperature than any interglacial in the past 1.2 million years and to sea levels significantly higher than at any time in the Holocene.
(the paper fleshes out these words in numerical terms).

The paper lists a number of possible positive carbon cycle feedbacks and quantifies them as summing to a little under half a degree of additional warming (Table 1 in the paper). The authors then wave their hands, say it could all get much worse, and with one bound Jack was free. End of paper. I went through it again to see what I’d missed, and I really hadn’t. It is just make-believe, they don’t "explore the risk" at all, they just assert it is significant. There’s a couple of nice schematic graphics about tipping points too.

The mildly interesting part is what led me to read the paper at all, 6 months after missing its original publication. An editor contacted me a little while ago to ask if I'd write half of a debate (to form a book chapter) over whether exceeding 2C of warming would lock us onto a trajectory for a much warmer hothouse earth. I was charged with arguing the sceptical side of that claim. I was initially a bit baffled by the proposal as I had not (at that point) thought anyone had claimed anything to the contrary, but it soon became clear what it was all about. I said I'd be happy to oblige, but it turns out that my intended opponents, being two of the co-authors on the paper itself, were not prepared to defend it in those terms.