Showing posts with label badscience. Show all posts

Monday, 2 March 2009

What should we know?

I decided a while ago that I didn't want this to be a bad-science blog. There are plenty of those, and I really like them, but as that market's a little swamped I thought I'd just talk about stat-mech and hope that someone finds it interesting as well. Last weekend, however, I went to a talk by Ben Goldacre in Bath, and so these things were brought back to mind.

The thrust of the talk was that we, the public, are being misled and lied to by the media when it comes to science. He gave compelling examples in which the media would print unpublished claims from a discredited scientist while ignoring several published articles saying the opposite. These examples are clear cut: the media are willing to lie for a good story. Even a well-educated member of the public has no chance if information is being withheld.

What if it's less clear cut? Could the blame be shared in some cases? Take the story of the Durham fish oil trial (also mentioned in the talk; I don't have anything new to add). Reported uncritically by the media, this "trial" had no control group, no predefined measure of success, and more than a whiff that the organisers knew what the outcome would be before it started. I need go no further describing it; the reasons why this "trial" was of zero scientific value are laid bare for anyone to see. The problem comes when one accepts what the article says (the trial will prove fish oil works) without asking the huge question: "where the hell's the control group?".

Anyone can ask this question. I expect people to ask this question. The concept of a control group is not difficult, and everyone should understand it. In fact a full double-blind trial is also easy to understand, even if you didn't expect it to be necessary. There are certain things that I believe we should all just know about. Some good starting points would be
  1. Double-blind trials. I wouldn't have guessed that trials need to be double-blinded; it's reassuring that scientists don't exclude themselves as a possible source of bias in their own experiments.
  2. Statistical significance. Small-scale experiments can be useful, but you need to be able to say when a result could have arisen by chance.
  3. Pattern recognition. Related to significance. People are pattern-recognition machines; we see patterns where there are none.
If you ask questions about these things then it'll be a lot harder to slip things past you. If not, you can be taken for a ride. There are few other areas of our lives where we leave ourselves so open to abuse. None of these things are too difficult to understand. It's certainly easier than buying a car...
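To make the significance point concrete, here's a toy simulation (the numbers are invented for illustration, not taken from the Durham trial): suppose a trial with no control group reports that 60 out of 100 children "improved". How often would pure chance, a fair coin flip per child, do at least that well?

```python
import random

random.seed(0)

def p_value(observed, n, trials=20_000):
    """Fraction of chance-only experiments (fair 50/50 outcomes)
    that match or beat the observed success count."""
    hits = 0
    for _ in range(trials):
        successes = sum(random.random() < 0.5 for _ in range(n))
        if successes >= observed:
            hits += 1
    return hits / trials

# 60 "improvements" out of 100 with no control group:
# chance alone manages this a few percent of the time.
print(p_value(60, 100))
```

A few percent isn't nothing, and that's before you ask whether "improved" was even defined in advance. This is the question the significance point is telling you to ask.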

Anyway, back to physics next time. There's lots I want people to know about physics but that's another fight for another time.

Sunday, 14 December 2008

Out of place



This was in the Bath Physics Department; it seems a bit out of place...

In other news, there's a bug in my Ising model code which means I'm getting rather beautiful squares appearing across the system. I can't remember the last time I coded something that wasn't riddled with bugs.
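For the curious, here's a minimal single-spin-flip Metropolis sketch of the 2D Ising model. This is a generic (and, I hope, square-free) version, not the actual buggy code referred to above; lattice size and temperature are arbitrary.

```python
import math
import random

random.seed(1)

L = 16          # lattice side
T = 2.5         # temperature in units where J = k_B = 1
spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

def neighbour_sum(i, j):
    """Sum of the four nearest neighbours, with periodic boundaries."""
    return (spins[(i - 1) % L][j] + spins[(i + 1) % L][j]
            + spins[i][(j - 1) % L] + spins[i][(j + 1) % L])

def sweep():
    """One Metropolis sweep: attempt L*L single-spin flips."""
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        dE = 2 * spins[i][j] * neighbour_sum(i, j)   # energy cost of flipping
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[i][j] *= -1

for _ in range(200):
    sweep()

magnetisation = abs(sum(map(sum, spins))) / (L * L)
print(f"|m| per spin at T={T}: {magnetisation:.3f}")
```

Updating spins one at a time in place like this is the standard Metropolis scheme; one classic way to get spurious patterns is to update the whole lattice from a stale copy instead.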

Saturday, 19 July 2008

Plausible theories from experts

This posting, from the rather excellent Mind Hacks, got me all worked up again (this is quite easy to do). It just struck me how easy it is to say something plausible, for example "increasing violence is caused by computer games", and then make no attempt to check whether it's true.

In this case the plausible statement is about the use of Facebook, the internet, and other such things. It even managed to be press-released by the Royal College of Psychiatrists. The press release starts off:
A generation of Internet users who have never known a world where you can't surf on-line may be growing up with a different and potentially dangerous view of the world and their own identity, according to a warning delivered to the Annual Meeting of the Royal College of Psychiatrists.
It could be true; I wouldn't like to say. But things start to smell a little funny when they say:
This is the age group involved with the Bridgend suicides and what many of these young people had in common was their use of Internet to communicate.
OK, stop there. Now I'm suspicious: don't all young people use the internet? By the way, the Bridgend suicides have also been blamed on mobile phone masts and, for all I know, computer games. In fact it feels like there is a rather sinister trend of untested or untestable claims being attached to these tragic events. And why? Because it will get press attention, without a doubt.

It seems that a horrible statistical fluctuation in the all-too-large number of teenage suicides is not a satisfying explanation for the media or the public, and this leaves the door wide open for "experts" to fill the gap.
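The fluctuation point is easy to demonstrate with a toy simulation (all numbers here are invented for illustration): scatter events uniformly at random across many regions, so that no region is special, and some region almost always ends up with a striking-looking cluster anyway.

```python
import random
from collections import Counter

random.seed(2)

# Toy model: 120 events assigned uniformly at random
# to 100 regions, so the mean per region is just 1.2.
events = [random.randrange(100) for _ in range(120)]
counts = Counter(events)

# Even though every region is equally likely, the busiest
# region typically gets several times the mean.
worst = max(counts.values())
print(f"largest cluster in one region: {worst} (mean is 1.2)")
```

The cluster is real, but it needs no cause beyond chance; that is exactly the gap the "experts" are rushing to fill.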

It's just too easy to say you think something is true and then press-release it to an unquestioning media. A classic example is the evolutionary psychology stuff (badscience has lots on this): the claims that we will split into two distinct races, or that we will evolve big willies. The papers just say "Experts say...", washing their hands of responsibility. But who are these experts? Many of the proposals are plausible, but that's not enough.

I could spend all day coming up with things that could be true. Unless a claim is testable, what use is it? Physicists come up with plausible theories all the time, but no one gets the Nobel Prize until the theory can be tested. The famous Feynman quote goes:
"It doesn't matter how beautiful your theory is, it doesn't matter how smart you are. If it doesn't agree with experiment, it's wrong."
I appreciate that physics experiments are much easier (by which I mean more controlled) than social experiments, but that's no excuse for claiming you have the answer when all you have is a plausible explanation. It's a massively important distinction.

To anyone claiming to know the cause of the Bridgend suicides, I beg you to think carefully; teenage suicide is a serious problem, and these young people deserve much, much better.

Edit: Here's the BBC coverage