I just read Counterknowledge by Damian Thompson. It is a short book that looks at how alternative ‘truths’ can become embedded in the public domain despite being factually wrong. It’s a little depressing, since the rate of nonsense seems to be going up, and he introduced me to several things I was not aware of before, even though there is just as much of an industry in puncturing bubbles as there is in creating them (like Ben Goldacre with Bad Science – the man who debunked ‘Doctor’ Gillian McKeith’s doctorate, amongst many other things). A couple of points stood out for me; I’ll then get to my key observation.
Firstly, the book 1421 is given as an example of mendacious counterknowledge. I read it a while back (see here), and it didn’t pass the sniff test. What I had not realised was the extent to which the publisher was not just slack in publishing it as a historical theory that had merit, but was complicit in crafting it to be compelling – even though it’s nonsense. Nice to see that the Amazon internal review and the first two reader reviews state things more clearly.
Secondly, I had not realised how widespread Intelligent Design-type beliefs are amongst Muslims – and, as a direct result, how little understanding of evolution there is in that group. The extent of the mis-knowledge is frightening, as are the fiats about belief that all religions seem to revel in, which look dangerously foolish, especially when the real world is towed along behind them, as it is in a theocracy. But my biggest surprise was that writers’ self-censorship to avoid offence to Muslims, combined with my own lack of digging, meant that I knew so little about it and had believed that the rise of ID amongst Americans was the worrying element.
This self-censorship, and the complicit averting of eyes by people who should know better on many topics, is his main concern – he gives examples of universities that accredit degrees in homeopathy, for instance, so giving a patina of science to the production of placebo pills (Boots doesn’t get much better press for selling them).
Overall, well worth a read. Now, let me get to the point it prompted me to write about.
What’s the standard?
I share the author’s concern, but from a slightly different position. The thing that has worried me for years, both at work and as I observe the world, is the lack of transparent standards for the quality of knowledge. The challenge is that it is easy to make pronouncements that sound exactly like they are well assessed and statistically valid … but convention in most circles does not require the demonstration of it. The exception is properly conducted science, where the use of peer review and full disclosure of data goes along with the expectation that ideas will be tested to destruction, and only those that survive the rigours can be judged to be well founded (note – not proven; that’s impossible). This is not a personal attack; it’s the scientific method.
What I actually see pretty much everywhere I’ve ever worked or consulted is poor or absent testing for statistical significance in the use of data. Assertions are confidently made based on hair-thin differences across small sample sizes, on specially selected data sets, with no allowance for chance. And mental biases are allowed to affect method with no recognition that such things exist, even though we are all prone to them. The real challenge, though, is not that these mistakes happen – everyone makes mistakes, and that’s a great way to learn … it’s that the consumer often has no way to separate the good from the dross, since they may well appear the same. You cannot be an expert in everything you will see, so you need to rely on the right checks having been made, and yet there is little appetite for the transparency required. I don’t want to return to the ‘aim, method, results, conclusion’ model of science papers – I like the conclusion first – but I do need to have confidence that assertions are not made more boldly than they deserve. I should also note that I have no problem with making fast decisions based on limited data – I’d just like to know that that is what I am doing!
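To show why hair-thin differences on small samples deserve suspicion, here is a minimal sketch of a two-proportion z-test using only the Python standard library. The `two_proportion_z` helper and the numbers are invented for illustration; the point is that the same 52% vs 48% gap is statistical noise at one sample size and compelling at another:

```python
from statistics import NormalDist

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# A 'hair-thin' difference on a small sample: 52/100 vs 48/100.
z, p = two_proportion_z(52, 100, 48, 100)
print(f"small sample: z={z:.2f}, p={p:.3f}")   # p is far above 0.05 - could easily be chance

# The same proportions at 100x the sample size look very different.
z, p = two_proportion_z(5200, 10000, 4800, 10000)
print(f"large sample: z={z:.2f}, p={p:.3f}")   # p is well below 0.05
```

The check takes a dozen lines, yet in my experience decks asserting ‘A beats B’ on exactly this kind of small-sample gap rarely include it.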
The same is true in public, where we have fiascos like the MMR and autism scare, which was based on very poorly interpreted data … but where the concern became ‘public knowledge’, from where it is very hard to remove. The public have no way to assess the evidence presented for themselves, and yet a lack of trust in politicians and civil servants (sometimes with good reason, especially for politicians) means that they have little way of getting robust independent assessments. So they are left to flounder, and society suffers.
Most media positively harm, since there is more mileage in a banner scare headline than in an ‘MMR – it’s still ok’ headline. They believe that they are fine since they usually qualify the assertions, but they present them in a way designed to be compelling, so it should hardly be surprising that many people are misled – for example, through individual case studies that are put over as if they are representative, but are actually just the most gripping.

As a specific example of the latter, I was really irritated to see an article in (I think) the Metro a few months ago about the crime rate in Nottingham, with a clear implication that it was ‘still’ leading the country. The case for was made in the first 70% of the article, but with absolutely no mention of the well-known statistical selection issue (the definition of ‘Nottingham’ includes many poorer areas but not the more affluent ones – much more so than in many other areas … so some crime rates, especially the drug-related ones, look inflated), nor of the removal of a single gangland boss who had been responsible for a huge percentage of the gun crime. In the last couple of paragraphs there was a comment from a council official who said it wasn’t a good survey, but it would be easy to interpret that as ‘well, he would say that, wouldn’t he’. And, taking up much more page space, with a large photo, was the story of a woman in Nottingham who had been burgled or menaced or some such, and now didn’t feel safe. Whilst I felt sorry for her, it was irrelevant to the thesis … but it clearly left the impression that things must be pretty bad. The article was positively misleading, and yet where was the consideration for the reader in that?
Helping consumers see the standard
What I think is missing in both the public and private domains is an agreed way of representing the robustness of the thesis or story being presented. I don’t mean a counter-view here, or suppressing a free press, just a clearly visible mechanism for readers to understand how much credence they should give to what they are reading. As a quick thought, though one that could do with refinement, I’d suggest the use of bronze, silver and gold levels:
At the bronze level should be standards that can be applied without involving others, so that individual bloggers, or quick decks produced at work, can meet them. For my money it would at least include stating all sources, including reference to any material counter-views (which should handle 1421), and ensuring statistical significance/sensitivity was considered in all maths. It would also include making clear where the sources used were of unknown quality. There’s nothing to stop someone not meeting this standard, of course, but the absence of a bronze star should make that obvious, and the reader should treat what they read as nothing more than entertainment. In a work context, I’d expect almost all data presented to a senior manager for review or decision making to meet this standard (for those I work with now, I am thinking about Level 5 and above).
At the silver level I’d expect someone independent, named, and appropriately knowledgeable to have reviewed the work, with a specific remit to find faults and mistakes, and any residual differences of opinion referenced. I’d expect all maths to be robust, and all presentation to be checked to see if a reader could be misled. This is the type of standard I’d expect of the Financial Times, the Economist or a good Wikipedia article. In a work context it’s what I’d expect for information going to executives for formal review (risk committees, scorecards, material decisions – in my current company, typically level 6/7 and above).
At the gold level are the full peer-review standards expected in clinical trials. In a work context it would be the formal accounts and City reporting. Mistakes should be regarded as so rare that they are notable.