Lewandowsky has a new article at
co-written with his faithful sidekick Michael Mann, plus two medical researchers and a psychologist. His big finding is that because different scientists in different disciplines all get threatening letters, vexatious FOI requests, and harassment from astroturfers, the criticisms they receive cannot be scientific.
It’s tempting to speculate that he may be losing his marbles. He actually cites the Moon Hoax and Recursive Fury papers, and refers to the “re-examination of one of the first author’s papers to eliminate legal risks that is ongoing at the time of this writing”.
I’ve commented below the article. I can see my comment, but I imagine the thread is like Hilda’s at “Scientific American” and my comment is invisible to everyone else. Barry Woods reports that his two comments have failed to appear.
There are articles at
* * *
This article is actually about a forthcoming event at Bristol University:
Stephan Lewandowsky – Taming The Wilful Ignorance Monster: Scientific Uncertainty and Climate Change
7 November 2013, 1 pm: Experimental Psychology Common Room, Priory Road 12a.
ABSTRACT: Uncertainty forms an integral part of many global risks, from “peak oil” to genetically modified foods to climate change. In many contexts, uncertainty is cited in connection with political arguments against mitigative or corrective action. Using climate change as a case study, I show that a proper understanding of uncertainty should compel action rather than forestall it. Although risk judgments are inherently subjective, an analysis of the role of uncertainty within the climate system yields three mathematical constraints that are robust to a broad range of assumptions and that all suggest that greater uncertainty provides greater impetus for mitigative action. The constraints involve (a) the inevitable positive skew of estimates of climate sensitivity; (b) the inevitably convex damage function, and (c) the inevitably bounded aspect of the carbon budget. Those normative constraints are related to human behaviour and the nature of scientific endeavours.
This subject has already been discussed by Lewandowsky in a series of three articles
Ben Pile saw through Lewandowsky’s peculiar logic at
and Ben’s article was spotted and discussed by prominent climate scientist James Annan at
where I unwisely got involved in a discussion about Bayesian statistics, and got roundly sneered at by the regulars for my statistical ignorance. I unwisely started a discussion about the bra sizes of women in woolly jumpers, and got rapidly out of my depth, despite Lucia’s gallantly coming to my rescue. Annan even posted an article about a new fallacy which he attributed to me. How kind. The discussion with some pretentious philosopher spilled over to Climate Resistance, where it got so boring Ben finally wiped it.
The lesson I learnt was not to get involved in an argument with someone with expertise in the subject in a hostile environment, even when you’re right and they’re spouting insanities.
Because insanity it is. What Lewandowsky is saying in the above abstract is that “under a broad range of assumptions” in a wide (though undefined) range of “human behaviour” and “scientific endeavour”, the more uncertain we are, the more reason we have to act, rather than not.
There’s a trivial sense in which this is nonsense, since in most situations except the childishly simple (“O’Grady says…”) either alternative of any pair of possibilities can be defined as “action”: any labelling of one course as “doing something” and the other as “not doing something” can simply be reversed.
But there is a deeper nonsense buried in Lewandowsky’s argument. He is claiming, not only that the less certain we are of the facts, the more reason we have to act, but that this is a mathematical truth: that the shape of a graph can tell us something about our duty to take certain decisions.
This is bonkers, as anyone who struggles through Lewandowsky’s three turgid articles will realise (though James Annan and his fans don’t think so). The trouble is, you need to be a mathematician with a good grasp of Bayesian statistics to be able to demonstrate that it’s bonkers, and I’m not.
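For what it’s worth, here’s a toy sketch (in Python, my own concoction, not anything from Lewandowsky’s papers) of what I take the argument to be: with a convex damage function, widening the uncertainty raises the expected damage even when the best estimate stays put – Jensen’s inequality dressed up as a policy imperative. The damage function and all the numbers are made up purely for illustration.

```python
import random

random.seed(42)

def damage(sensitivity):
    # A convex (accelerating) damage function -- purely illustrative.
    return sensitivity ** 2

def expected_damage(mean, spread, n=100_000):
    # Average the damage over sensitivities drawn from a normal
    # distribution with the given mean and standard deviation.
    return sum(damage(random.gauss(mean, spread)) for _ in range(n)) / n

narrow = expected_damage(mean=3.0, spread=0.5)  # more certain
wide = expected_damage(mean=3.0, spread=1.5)    # less certain

print(f"expected damage, narrow uncertainty: {narrow:.2f}")
print(f"expected damage, wide uncertainty:   {wide:.2f}")
# The wider distribution yields the larger expected damage, even though
# the central estimate (3.0) is identical in both cases.
```

Note that this only shows that, *given* a convex damage function, more spread means more expected damage. Whether that licenses the leap from a graph to a duty to act is exactly what’s in dispute.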
Anyone out there who can help?
(Since you ask, bra sizes came up because I was looking for an analogous situation to estimates of climate sensitivity, which form a skewed distribution with a long tail towards the high end. Breast size seemed a good fit, since most are small to middling, with a sizeable minority tailing off towards the ginormous. If you believe Lewandowsky, the less you know about actual breast sizes, the more reason you have to believe any woman in a floppy jumper to be well-endowed in that department. And that’s a mathematical fact – a necessary truth, in the Leibnizian sense).