Tuesday, August 28, 2007

Total washout

Honestly. What's the point in being on the other side of the world if not to be able to see interesting astronomical phenomena that are not visible to the rest of you?

Apparently there's a lunar eclipse. But it's cloudy. Bah!

This is what it looked like to people up north in Hokkaido:


(Borrowed from here.)

Jules cynically says it always looks like that through the smog anyway.

Sunday, August 26, 2007

New earthquake (not very) early warning system

The Japanese Meteorological Agency is starting up a new early warning system for earthquakes at the start of October - of course earthquakes are not meteorological, but they are covered by the "natural disaster" remit. Shame foreigners aren't, but that's another story.

According to what I've read, the basic idea seems to be that they hope to detect the initial "p-wave" tremors (the primary pressure wave which travels fast), and warn before the main s-wave (secondary transverse wave which travels slower) hits. OTOH this page talks about a third type of "surface waves" which are slower still and cause the most damage. Anyway, with a speed of something like 4km/sec, it will be challenging to get any warning out in the area close to the epicentre (where the damage will be focussed) early enough to matter. But even a few seconds may be enough for people to duck under their desks. I just hope there won't be too many false alarms, or people will simply ignore them. We are regularly amused by the warnings of heavy rain over the loudspeakers in the street, that we can barely hear over the noise of the rain that is already thundering down :-) To be fair they do sometimes beat the storm.
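The back-of-envelope arithmetic is easy enough to sketch. The ~4 km/s S-wave speed is from above; the P-wave speed is my own assumed typical value, so treat the numbers as purely illustrative:

```python
# Rough warning-time arithmetic for the P-wave/S-wave idea described above.
v_p = 7.0  # km/s, primary (pressure) wave - an assumed typical crustal value
v_s = 4.0  # km/s, secondary (transverse) wave, as quoted above

for d in (10, 50, 100):  # distance from the epicentre, km
    # the warning window is the gap between P-wave and S-wave arrival
    # (detection and broadcast delays eat into it further)
    gap = d / v_s - d / v_p
    print(f"{d:>3} km: {gap:.1f} s")
```

With these numbers you get barely a second of warning at 10 km but over ten seconds at 100 km, which is why getting anything useful out near the epicentre is so hard.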

The Shinkansen has had an automated system like this in operation for some time. Even if the train doesn't have time to stop, any slowing down can only help. The one time there was a derailment, in Niigata in 2004, there still weren't any injuries.

On the importance of Bayesian analysis

An interesting example I spotted on Andrew Gelman's blog.

UK readers will remember a medical test where 6 people took a particular drug and all had an extreme life-threatening reaction ("cytokine storm", whatever that means). Apparently there were also 2 controls, who were not treated, and who (surprise) did not suffer the reaction.

But...with only 8 samples in total, the results are barely significant in frequentist terms. Perhaps the simplest way of analysing the result is to ask the following: since 6 people out of 8 fell ill, and given the null hypothesis that the treatment and control outcomes are probabilistically identical, what is the probability that the 6 ill people would coincide with the 6 treated people? This is a simple combinatorial question, the answer to which is 1/28 or 3.6% (there is some more detailed discussion at the link about the correct test to use). So it is just significant at the p<0.05 threshold but not p<0.01. Given the number of medical trials taking place, we should expect such failures regularly.
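The combinatorial calculation can be checked in a couple of lines:

```python
from math import comb

# Under the null hypothesis, every choice of 6 subjects (out of 8) is equally
# likely to be the ill group, so the chance that the ill group coincides
# exactly with the 6 treated subjects is 1 / C(8,6).
p = 1 / comb(8, 6)
print(p)  # 0.0357... i.e. 1/28, about 3.6%
```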

But we don't, of course. The reason being, our prior expectation of someone naturally having such a life-threatening reaction, absent any provocation, is so low as to be virtually zero. Any plausible Bayesian updating of the prior belief P(treatment is harmful) in the light of the observed data, is going to massively increase this probability, because the alternative hypothesis (that the reactions occurred by chance) is even lower. And this is obviously what all the researchers and commentators have actually done in practice, even if not explicitly and precisely.

Eg let's model it as the test having two possibilities: either it is harmful (all subjects will suffer) or not (reaction has the background probability 0.0001, surely an overestimate). Given an extremely complacent prior belief that the test is harmless with probability 0.999, the posterior after 6 test subjects have all reacted is given by:

P(test is harmful)=1*0.001/(1*0.001+0.0001^6*0.999) = 1, to as many significant digits as I can be bothered writing. That's a very trivial analysis of course, but real maths is hard to do in Blogger (no LaTeX facility).
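For anyone who wants to see the update spelled out, here is the same toy calculation in code (same numbers as above, nothing more):

```python
# Two-hypothesis Bayesian update, using the toy numbers from the post:
# prior P(harmful) = 0.001; if harmful, all six subjects react (likelihood 1);
# if harmless, each reaction independently has background probability 0.0001.
prior_harmful = 0.001
lik_if_harmful = 1.0
lik_if_harmless = 0.0001 ** 6  # six independent "spontaneous" reactions

posterior = (lik_if_harmful * prior_harmful) / (
    lik_if_harmful * prior_harmful
    + lik_if_harmless * (1 - prior_harmful)
)
print(posterior)  # prints 1.0 - indistinguishable from certainty in double precision
```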

Most Japanese think that foreigners are human!

Great news. According to a new survey, almost 60% of Japanese actually think that foreigners are deserving of human rights.

Of course, that means that just over 40% don't.

Mind you, it does sometimes feel to us like we are living on an alien planet.

Saturday, August 25, 2007

More doom and gloom

Interesting to see Nature jumping on the "no job prospects for PhDs and postdocs" bandwagon (via Pharyngula). (Disclaimer - I haven't yet read the Nature article - no access at home - but I'm assuming that it draws the same obvious conclusion from the statistics.) Not so long ago their "jobs editor" Paul Smaglik was having a go at some anonymous blogging post-doc for daring to suggest that anything was less than perfect in their work life. But it is of course blindingly obvious that if every tenured staff member mentors on average even a single PhD student at a time, then the overwhelming majority of these PhDs will not subsequently go on to get tenured positions in academia.

Of course one can legitimately argue that it is fine for the vast majority of PhDs to not get academic jobs (and for a large majority of postdocs to never land a tenured position) - so long as they are aware of the situation and walk into it with eyes open, that's OK. But that hardly justifies the sort of situation where people complain about (and are reported uncritically on) the "shortage" of qualified staff simply because they "only" get 30 applicants per post rather than 75!

Cutting off your finger to spite your hand(2)?

Just in case anyone was under the misapprehension that the USA had cornered the market in right-wing nutcases:

Japan activist posts finger to PM

If I was the PM, I'd proffer a finger in return - but keep it firmly attached. About the only thing that Abe has done right since taking office is to not go to openly worship war criminals.

I’m Jack Bauer - my phone bill is crazy, but my job pays for it

I know that not owning a TV probably hurts my Japanese comprehension. But with adverts like this (for rental of "24" season 6 DVDs), can you blame me?



Tuesday, August 21, 2007

The creation argument

I think it is clear that at least one of the following propositions is true: (1) the human species is very likely to go extinct before developing supernatural powers; (2) any civilization with supernatural powers is extremely unlikely to create a significant number of "universes"; (3) we are almost certainly living in a universe designed and manufactured by a "Creator". It follows that the belief that there is a significant chance that we will one day develop into a race with supernatural powers who create universes is false, unless we are currently living in a universe created by such a being.

I don’t pretend to know which of these hypotheses is more likely, but think that none of them can be ruled out. My gut feeling, and it’s nothing more than that, is that there’s a 20 percent chance we’re living in a created universe (maybe I really think it's much lower, but Pascal's wager and all that).



My argument is either (a) a quasi-religious triviality or (b) a major new scientific breakthrough, depending on your point of view. As for me, I'm staying out of it - I'm good enough at making enemies in my day job without going out and actively looking for new ones elsewhere :-)

Monday, August 20, 2007

New Sharp Zaurus!

Sorry to lead on any Zaurus fans with the title - it's not truly a new Zaurus, but merely new to me. The Zaurus line is long since defunct (indeed even the last few models were little more than cash cows to milk some more profit out of the design, with very little innovation), and my old SL-C860 is getting a little long in the tooth, with the hinge now starting to misbehave. With this in mind, I thought it was time to upgrade before it was too late. So a few weeks ago we headed off to Akihabara where I expected to pick up the newest (least old) SL-C3200 for about ¥50,000 or a little more. You can learn more about the 3200 model here BTW.

Rather to our surprise, we managed to find a small Sofmap shop which sells 2nd hand stuff, which we had last visited several years ago (in fact I might have got my 860 there). Things often change over that sort of time scale here, and our memories were hazy about its location anyway. But it is still there, and had the full range of Zaurus models (along with all sorts of PDAs and cameras). I hadn't really gone looking for the older 3100 model, but I vaguely remembered that it was almost the same as the 3200 - just a slightly smaller disk, and less software for English learners, neither of which I am bothered about. So for a further saving of about ¥8,000 compared to their 3200s (which were already well below the best "new" price I could find), I chose the former. Although nominally 2nd hand, I think they must be unsold shop returns as they are in near-immaculate condition.

So now I've got a new Zaurus to install all my favourite software, and an excuse to have another look at the full range of software available (which to be honest hasn't changed much). The machine is clearly better than the 860 in many minor ways - better keyboard feel and layout, a nice Japanese-English dictionary and encyclopedia (especially now I can read it a bit), obviously massive disk space (comparatively speaking) and a slightly more elegant base shape. There are some other minor updates to the bundled software which are moderately useful/entertaining, like a train timetable/planner. It is also now very clear that my 860's battery was getting rather weak - the new machine lasts much better. Mostly it's the same however, which is what I wanted.

As well as gaining a shiny new SL-C3100, I have of course also gained a spare disposable SL-C860, which means I can play at being a Linux geek and install new distributions like pdaXrom on it - this is a full X11 window manager thing with a huge number of applications available. Installing the basic package was straightforward, setting up things like the internet connection rather less so, but I've even got (one method of) that working OK now (OK, jules fixed the last bit for me). I'm not sure that it is really that useful to me, especially since it means losing a lot of Sharp's inbuilt Japanese language abilities (and/or losing a lot of hair trying to install enough bits and pieces to get back roughly to where I started). But it's something new to play with.

Sunday, August 19, 2007

Schwartz' sensitivity estimate

Via email, I hear that this paper from Stephen Schwartz is making a bit of a splash in the delusionosphere. In it, he purports to show that climate sensitivity is only about 1.1C, with rather small uncertainty bounds of +-0.5C.

Usually, I am happy to let RealClimate debunk the septic dross that still infects the media. In fact, since I have teased them about their zeal in the past, it may seem slightly hypocritical of me to bother with this. However, this specific paper is particularly close to my own field of research, and the author is also rather unusual in that he seems to be a respected atmospheric scientist with generally rather mainstream views on climate science (although perhaps a bit critical of the IPCC here). However, his background is in aerosols, which suggests that he may have stumbled out of his field without quite realising what he is getting himself into.

Anyway, without further ado, on to the mistakes:

Mistake number 1 is a rather trivial mathematical error. He estimates sensitivity (K per W/m^2) via the equation

S=t/C

where C is the effective heat capacity (mostly ocean) and t is the time constant of the system (more on this later).

His numerical values for t and C are 5+-1 and 16.7+-7 respectively (with the uncertainties at one standard deviation). It is not entirely clear what he really intends these distributions to mean (itself a sign that he is a little out of his depth perhaps), but I'll interpret them in the only way I think reasonable in the context, as gaussian distributions for the parameters in question. He claims these values give S equal to 0.3+-0.09, although he also writes 0.3+-0.14 elsewhere. This latter value works out at 1.1C+-0.5C for a doubling of CO2. But the quotient of two gaussians is not gaussian, or symmetric. I don't know how he did his calculation, but it's clearly not right.

In fact, the 16%-84% probability interval (the standard central 68% probability interval corresponding to +- 1sd of a gaussian, and the IPCC "likely") of this quotient distribution is really 0.18-0.52K/W/m^2 (0.7-1.9C per doubling) and the 2sd limit of 2.5% to 97.5% is 0.12-1.3K/W/m^2 (0.4-4.8C per doubling). While this range still focuses mostly on lower values than most analyses support, it also reaches the upper range that I (and perhaps increasingly many others) consider credible anyway. His 68% estimate of 0.6-1.6C per doubling is wrong to start with, and doubly misleading in the way that it conceals the long tail that naturally arises from his analysis.
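A quick Monte Carlo sketch makes the point, interpreting both of his values as gaussians as above (the sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
tau = rng.normal(5.0, 1.0, n)   # time constant estimate, years
C = rng.normal(16.7, 7.0, n)    # effective heat capacity estimate

S = tau / C  # sensitivity, K per W/m^2
print(np.percentile(S, [16, 84]))     # central 68% interval
print(np.percentile(S, [2.5, 97.5]))  # central 95% interval
```

Note also that with a standard deviation of 7 on C, a small fraction of samples have C near or even below zero, giving a long (and partly negative) tail - another illustration that the quotient is nothing like a gaussian.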

Mistake number 2 is more to do with the physics. In fact this is the big error, but I worked out the maths one first.

He estimates a "time constant" which is supposed to characterise the response of the climate system to any perturbation. On the assumption that there is such a unique time constant, this value can apparently be estimated by some straightforward time series analysis - I haven't checked this in any detail but the references he provides look solid enough. His estimate, based on observed 20th century temperature changes, comes out at 5y. However, he also notes that the literature shows that different analyses of models give wildly different indications of characteristic time scale, depending on what forcing is being considered - for example the response to volcanic perturbations has a dominant time scale of a couple of years, whereas the response to a steady increase in GHGs takes decades to reach equilibrium. Unfortunately he does not draw the obvious conclusion from this - that there is no single time scale that completely characterises the climate system - but presses on regardless.

Schwartz is, to be fair, admirably frank about the possibility that he is wrong:

This situation invites a scrutiny of each of these findings for possible sources of error of interpretation in the present study.


He also says:

It might also prove valuable to apply the present analysis approach to the output of global climate models to ascertain the fidelity with which these models reproduce "whole Earth" properties of the climate system such as are empirically determined here.


Perhaps a better way of putting that would be to suggest applying the analysis to the output of computer models in order to test if the technique is capable of determining their (known) physical properties. Indeed, given the screwy results that Schwartz obtained, I would have thought this should be the first step, prior to his bothering to write it up into a paper. I have done this, by using his approach to estimate the "time scale" of a handful of GCMs based on their 20th century temperature time series. This took all of 5 minutes, and demonstrates unequivocally that the "time scale" exhibited through this analysis (which also comes out at about 5 years for the models I tested) does not represent the (known) multidecadal time scale of their response to a long-term forcing. In short, this method of analysis grossly underestimates the time scale of response of climate models to a long-term forcing change, so there is little reason to expect it to be valid when applied to the real system.

In fact there is an elementary physical explanation for this: the models (and the real climate system) exhibit a range of time scales, with the atmosphere responding very rapidly, the upper ocean taking substantially longer, and the deep ocean taking much longer still. When forced with rapid variations (such as volcanoes), the time series of atmospheric response will seem rapid, but in response to a steady forcing change, the system will take a long time to reach its new equilibrium. An exponential fit to the first few years of such an experiment will look like there is a purely rapid response, before the longer response of the deep ocean comes into play. This is trivial to demonstrate with simple 2-box models (upper and lower ocean) of the climate system.
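Here is a minimal sketch of that two-box demonstration. All parameter values are my own illustrative choices, not taken from Schwartz or any particular GCM:

```python
import numpy as np

# Toy two-box (upper ocean / deep ocean) energy balance model,
# integrated with a simple Euler step; all numbers illustrative.
dt, years = 0.1, 100
lam = 1.0               # feedback parameter, W/m^2/K
C_u, C_d = 8.0, 100.0   # heat capacities, W yr m^-2 K^-1
k = 0.5                 # upper/deep exchange coefficient, W/m^2/K
F = 3.7                 # step forcing, W/m^2 (2xCO2-like)

Tu, Td, T = 0.0, 0.0, []
for _ in range(int(years / dt)):
    Tu += dt * (F - lam * Tu - k * (Tu - Td)) / C_u
    Td += dt * k * (Tu - Td) / C_d
    T.append(Tu)
T = np.array(T)

frac = 1 - np.exp(-1)        # the 63% "one time constant" level
T_eq = F / lam               # true equilibrium warming
T_10y = T[int(10 / dt) - 1]  # warming reached after the first decade

# time scale a short-record exponential fit would "see", vs the real one
tau_apparent = dt * (np.argmax(T >= frac * T_10y) + 1)
tau_true = dt * (np.argmax(T >= frac * T_eq) + 1)
print(tau_apparent, tau_true)  # roughly 4y vs roughly 15y with these numbers
```

The fast upper-ocean response dominates the first few years, so an exponential fitted to a short record sees a short time constant, while the deep ocean drags the true equilibration out several times longer.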

Changing Schwartz' 5y time scale into a more representative 15y would put his results slap bang in the middle of the IPCC range, and confirm the well-known fact that the 20th century warming does not by itself provide a very tight constraint on climate sensitivity. It's surprising that Schwartz didn't check his results with anyone working in the field, and disappointing that the editor in charge at JGR apparently couldn't find any competent referees to look at it.

Wednesday, August 15, 2007

Ugh

So, I was googling for some IDL code for statistical tests (why reinvent the wheel) and I came across this ugly documentation of a Kolmogorov-Smirnov test:
; OUTPUTS: Probability two populations are drawn from same
; underlying distribution.
That's from the documentation of the UKMO IDL library (which I don't actually have, although it seems freely available). Of course, it is dead wrong. A K-S test does not calculate this at all!

It's not just climate scientists, though - essentially the same error is contained in this equivalent routine from a German university astronomy and astrophysics department (a high google hit):
PROB gives the probability that the null hypothesis (DATA came from a
Gaussian distribution with unit variance) is correct.
This one is particularly amusing, because the immediately previous two lines are
IDL> data = randomn(seed, 50) ;create data array to be tested
IDL> ksone, abs(data), 'gauss_cdf', D, prob, /PLOT ;Use K-S test
thus indicating beyond any shadow of a doubt that in this example the data did come from a Gaussian distribution, irrespective of the value of "prob" that results from a single application of the test (ignoring quibbles about pseudorandom versus "truly" random).

In case it's not clear enough, the error in both sets of documentation is that the K-S test actually reports the probability that a particular test statistic would be exceeded if the two distributions were the same, in other words, a frequentist P(data=D|hypothesis=H) statement. The alternative P(H|D), which the documentation claims that the routine outputs, is an entirely different beast which demands a Bayesian treatment - in particular, it depends critically on a prior P(H), for which there is (in general) no default, "objective", or "correct" choice.
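To see the distinction concretely, here is a little experiment (hand-rolled K-S statistic, and the standard asymptotic 5% critical value 1.36/sqrt(n), so the exact rejection rate is only approximate) that repeatedly tests data genuinely drawn from the claimed distribution:

```python
import math
import random

random.seed(1)

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ks_stat(sample):
    """One-sample Kolmogorov-Smirnov statistic vs the standard normal CDF."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        c = norm_cdf(x)
        d = max(d, abs(c - i / n), abs((i + 1) / n - c))
    return d

n, trials = 50, 2000
d_crit = 1.36 / math.sqrt(n)  # asymptotic 5% critical value

# The data really ARE standard normal, so the null hypothesis is true
# in every single trial - yet the test still "rejects" about 5% of them.
rejects = sum(
    ks_stat([random.gauss(0.0, 1.0) for _ in range(n)]) > d_crit
    for _ in range(trials)
)
print(rejects / trials)  # ~0.05
```

If the test really reported "the probability the hypothesis is correct", it would say something near 1 every time here; instead it rejects a true null at roughly the nominal rate, exactly as a frequentist P(D|H) statement should.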

Both these routines claim to be based directly on Numerical Recipes. It is notable that the text of NR (at least my edition) avoids making this particular error. However, the same (latter) routine, with the same error in the documentation, turns up all over the place, with apparently no-one in NASA, Princeton, Washington Uni etc (to name but three) ever noticing...

I suppose I could be more sympathetic to the climate scientists who have apparently been seduced by such ubiquitous and intuitively appealing language, and who have as a result tried to reshape probability theory so as to make such statements actually valid. OTOH, I still think they should still be prepared to accept that their theories are wrong when I point out the problems in words of one syllable together with elementary examples :-)

Poor pussy

Via Andy Ridgwell (whose latest paper was featured in Science's "best of the rest" recently [sub required]):

Thailand is dressing up errant policemen in "Hello Kitty" armbands to humiliate them. In order to take off the armband, they have to go up to a random member of the public, purring and meowing. If the member of the public cannot say "poor pussy" three times while stroking the policeman, without smiling, then the policeman gets to take off the armband.


Japan, not to be outdone, has introduced a humiliation system for cats - dressing them up as Thai policemen:


Poor pussy indeed!

Incredible but true

However, unlike the previous two stories, this one appears to be true.

Tuesday, August 14, 2007

Decadal climate predictions

This paper in Science has had a surprisingly muted reaction in the blogosphere. It's almost as if climate scientists aren't supposed to validate their methods and/or make falsifiable predictions.

In contrast to those rather underwhelmed posters, I think it's a really important step forwards, not just in terms of the actual prediction made (which, to be honest is not all that exciting) but what it implies about how people are starting to think more quantitatively and rigorously about the science of prediction. Of course the Hadley Centre is well placed for this trend given their close links to the UKMO. I could probably do the odd bit of trivial nit-picking about the paper if I felt like it, but that would be churlish in the absence of a better result. I am sure they are well on the way to improving their system anyway (the paper was submitted way back in January).

A quick note about the forecast "plateau" in temperatures that was the focus of much of the news coverage: the central forecast may stay slightly below the 1998 observed peak until 2010, but the spread around this forecast assigns significant probability to a higher value. If one assumes that the annual anomalies (relative to the forecast mean) are independent with each of 2008 and 2009 having a 30% chance of exceeding 1998 (just from eyeballing their plot), then that gives a 50% chance of a new record before 2010, and 75% including 2010, which is virtually the same as what I wrote here.
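For the record, the independence arithmetic goes like this. The 30% figures for 2008 and 2009 are the eyeballed values above; the 50% for 2010 is my own assumption (the forecast is warming through the period, so later years should have better odds):

```python
# Assumed per-year chances of beating the 1998 record (see caveats above)
p2008, p2009, p2010 = 0.3, 0.3, 0.5

# probability of at least one record year, assuming independence
p_before_2010 = 1 - (1 - p2008) * (1 - p2009)
p_incl_2010 = 1 - (1 - p2008) * (1 - p2009) * (1 - p2010)
print(p_before_2010, p_incl_2010)  # ~0.51 and ~0.755
```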

More whinging from the CBI

At first glance, I thought this was a sign that the CBI was actually considering putting their money where their mouths are (albeit in a feeble manner), by offering minuscule bursaries to science students. But no, it's cheaper to lobby the Govt for it than to actually dip their hands into their own pockets.

As for "struggling to fill their posts", "struggling to fill their pockets on the backs of underpaid and exploited workers" would be more like it. "Only" 30 applicants per job? My heart bleeds. If their research and development is only viable on the premise of a never-ending supply of lab fodder desperately scrabbling for the scraps on offer then maybe we wouldn't miss them so much. Just how many people do they actually want the Govt to train (at vast expense) for each job on offer?

There's a simple solution in the free market that the CBI claim to believe in: PAY MORE MONEY, MORONS! I don't really mean purely "more money", rather a more general "better conditions" - but of course advocates of Stern think that everything can be reduced to cash :-). Sorry to shout, but it really gets my goat to hear these fat cats, who are sitting pretty at the top of the capitalist pyramid, desperately struggling to stick their snouts in the socialist trough of Govt subsidies with the intention of propping up their businesses with a never-ending supply of compliant and desperate wage slaves.

I rather liked this comment (found via the Adam Smith Institute blog, who I see has linked to my previous post):
You’re a well compensated, shiny-suited male executive spending a week at a conference in Amsterdam. In the evenings you experience a “shortage” of women willing to sleep with you. How do you solve this problem? Do you perhaps write to your MP demanding that the EU offer grants to nubile Ukranian girls to migrate to brothels in western Europe?
Note that the author is an ex-scientist following the abrupt closure of his lab, so may just possibly be even more bitter and twisted than me (note to self: must try harder).

Monday, August 13, 2007

Math class really is tough!

Obviously Barbie was just anticipating the latest research :-)

However, NewScientist is still well behind the times, bleating recently that:
The most urgent problem for UK science is the shortage of enthusiastic new recruits. The proportion of teenagers choosing to study physics at ages 16 and 18 is in free fall. The situation in engineering and maths is little better and in chemistry things are starting to decline too. Just about everyone bar the government accepts that the root cause is a shortage of schoolteachers qualified in these subjects to inspire pupils. There will be no solution until this is officially accepted...
The "shortage of new recruits" is, I assert, merely the free market speaking: achieving a useful level of skill in scientific subjects is hard, and those who are capable of it can get much greater rewards (certainly in financial terms) elsewhere. Note that even with the current supposedly "hard" science A-Levels, some universities have switched to 4 years rather than 3 for their degree courses, at least for people who are considering research.

I find it disturbing that people can seriously propose that all we need is smooth-talking teachers to con pupils into a low-paid and insecure job with stringent intellectual demands, severe competition for jobs and high failure rate, when they would be substantially better off elsewhere. As I've mentioned before, an average estate agent in the UK earns 50% more than a scientist, and if you want to consider careers with perhaps more comparable intellectual demands, an average GP earns about 3 times as much, and has a secure job for life too. I'm not saying that these people aren't worth their salaries, but for anyone who is considering becoming a scientist, and who thinks that they might want to buy a house (say) at some point in their adult lives, bear in mind that this is the sort of financial competition you'll be up against.

Of course I should acknowledge that there are good things about being a scientist, especially for the eccentrics and independently-monied :-) But for normal people, it's a rather poor choice, and I'd rather see people talking openly about the real problems than papering over the cracks. Yet more innocent post-doc fodder is most certainly not what we need.

Sunday, August 12, 2007

"Women, and more severely challenged persons"

Spotted in an advert in NewScientist:
"Women are therefore especially encouraged to apply. The Max Planck Society also wishes to employ more severely challenged persons..."
At least they didn't say "... even more severely challenged persons (if such a thing exists)..." :-)

Saturday, August 11, 2007

Holiday

Another summer, another week of walking along amazing mountain ridges...

Unlike Stoat I don't have any pictures of naked men to show for my time away. Just flowers and mountain views:

A fuller set of pictures will appear in due course on my web site (update: here). Please excuse me if I have a distant look in my eyes for the next week or two...


Saturday, August 04, 2007

'Toxic waste' fed to schoolchildren

No, not a tale of turkey twizzlers, but dolphin meat in Japan. A couple of local politicians have dared to point out the bleeding obvious: that the dolphins "traditionally" slaughtered off the coast of Japan and then stuffed into schoolkids (no-one actually buys the stuff willingly) by politicians in the hope that they will be indoctrinated into this "traditional way of life" are actually not fit for human consumption.

I look forward to the agriculture minister claiming that the Japanese intestines are adapted to mercury-rich food through their unique genetic heritage.

Perhaps not. Mercury poisoning has a long history in Japan - they basically invented the problem, and some people are justifiably touchy about the subject (which was covered up for decades, and lawsuits from the infamous ~1950s pollution scandal continue today).

Actually Prime Minister Abe has just lost his 2nd agriculture minister in as many months, both due to embezzlement scandals (the last may actually have more to do with the ruling LDP's historic defeat in the recent elections).

To be fair, the actual amount of dolphin eaten is probably small enough that the mercury isn't that big a health problem. But it all makes good knockabout politics.

Friday, August 03, 2007

I bet it was a fix

There's a mildly interesting story in the papers about how a "betting market" is investigating funny dealings on a tennis match where someone lost to a substantially worse player, in suspicious circumstances. Of course this is precisely the principle behind the sadly abandoned "Policy Analysis Market" - that for a price, even crooks will part with their information. Nevertheless, we should not ignore the chicken and egg problem: if it was not for the betting market, there would have been no incentive for anyone to throw the match. Similarly, trading futures on the life of a specific politician gives people the chance to make two killings with one bullet.

One thing you can be sure of is that no-one would ever pay a British tennis player to lose a match - why bother when they do it so reliably for free :-)

Wednesday, August 01, 2007

John Quiggin on the costs of climate change

JQ says:
"And even a 10 per cent reduction in income, by 2050, would not actually be noticeable against the background noise of macroeconomic and individual income fluctuations."

10% reduction in income by 2050, or equivalently 20% by 2100, is of course the far (lunatic?) extreme of the worst case that Stern could put together, not a realistic estimate.

Before JQ has a sense of humour failure, I'd better point out that the above quote was addressing the costs of mitigation, not the projected losses due to climate change. But of course in economic terms, 10% is 10%. What's more, a figure an order of magnitude lower (for both mitigation costs, and climate change damage) would probably be more realistic. And note that the question isn't even about changing the net economic growth rate over this period by as much as 0.2% pa (realistically, 0.02% pa) but rather where to draw the balance between mitigation and adaptation so as to minimise the total sum of these costs, which (assuming one believes the models at all) is very unlikely to be zero or less.
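The compounding behind those growth-rate figures is simple to check. The 2.6%/yr baseline is my own assumption, picked only so that incomes grow roughly tenfold over ~3 generations (~90 years); the 0.2% and 0.02% pa reductions are the figures above:

```python
# Compound growth over ~3 generations; illustrative numbers only.
years = 90
g = 0.026  # assumed baseline growth rate per year

baseline = (1 + g) ** years                 # ~10x richer by ~2100
slower = (1 + g - 0.002) ** years           # growth cut by 0.2% pa
barely_slower = (1 + g - 0.0002) ** years   # growth cut by 0.02% pa

print(round(baseline, 1))            # ~10x
print(round(slower / baseline, 2))   # ~0.84: in the ballpark of the worst case
print(round(barely_slower / baseline, 2))  # ~0.98: "9.8 times rather than 10"
```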

What no-one has yet explained to me is why I should be bothered about whether 3 generations down the line are "only" 8 times richer than me, rather than 10 (or more realistically, "only" 9.8 times rather than 10). By all means let's hear the arguments for and against various policy decisions, but don't dress it up in the spuriously authoritative language of economic argument with the facts carefully concealed under claims of AGW-caused "global recession" on the side of the alarmists (including our recently departed Dear Leader Blair through the Stern report) opposed by equivalent comments on mitigation costs on the other side. A plague o' both your houses!