Wednesday, 10 June 2009

Hummingbirds are the fastest animals on Earth

Relative to their body size, which completely changes everything. According to the Guardian:

They can cover more body lengths per second than any other vertebrate and for their size can even outpace fighter jets and the space shuttle

Which is nice, and the high-speed photo is beautiful. But it's not really the same, is it? In fact the space shuttle statistic sort of makes the whole thing seem silly. All the other important quantities, apart from velocity, don't scale with the animal's size: friction, reaction time, not least the speed of sound. It doesn't help me imagine what it feels like to be a hummingbird.
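Just for fun, here's the arithmetic behind the body-lengths-per-second claim as a little Python sketch. Only the 60mph dive speed comes from the article; the body lengths and the jet and shuttle speeds are rough numbers I've assumed for illustration.

```python
MPH = 0.447   # metres per second per mph

# (top speed in m/s, body length in m) - the lengths and the jet/shuttle
# speeds are my rough assumptions; only the 60 mph dive is from the article.
movers = {"hummingbird (dive)": (60 * MPH, 0.10),
          "fighter jet":        (680.0, 15.0),    # ~Mach 2
          "space shuttle":      (7800.0, 37.0)}   # re-entry speed

for name, (v, length) in movers.items():
    print(f"{name:>20}: {v / length:5.0f} body lengths per second")
```

So per body length the bird really does beat both - it's just not a comparison that means much physically.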

It's somewhat similar to all those statistics you see about insects: fleas jumping hundreds of times their own height, ants carrying many times their body weight. If you had a giant ant I doubt the strength thing would continue. Strength comes from the cross-section of muscles and bones, which scales with length squared, while weight scales with length cubed, so skeletons and legs just don't keep up as you scale an animal up.

The dive tops out at 60mph, which is pretty impressive, though I'd love to see it put in perspective against the birds' reaction times. How does 60mph feel to them? Apparently at the bottom of the dive

the hummingbirds experienced an acceleration force nearly nine times that of gravity, the highest recorded for any vertebrate undergoing a voluntary aerial manoeuvre, with the exception of jet fighter pilots. At 7g, most pilots experience blackouts.

That's definitely cool, so long as by g they don't mean units of bird length again. Anyway, I don't want to be too grouchy - the photo is excellent. Enjoy.

Photo by Christopher J. Clark and Teresa Feo/UC Berkeley

Saturday, 6 June 2009

Daisy World


A bit of lazy linkage here. I went to a talk a while ago by Graeme Ackland from Edinburgh about Daisy World. It's not new - I think it's been around since the 80s - but it is quite cool. It's a really simple model of a planet where the climate conditions (here just the temperature) and the living organisms on the planet feed back on one another.

On Daisy World there are only daisies; there are a million extensions with forests and animals and all sorts, but I think the simplest model gives the nicest story. This page gives a nice explanation, and it has a java applet that you can play with - this is the best bit.

http://www.ph.ed.ac.uk/nania/nania-projects-Daisy.html

My extremely brief explanation is that there are white daisies and black daisies. White daisies cool down their environment, black daisies heat it up. If they get too hot or cold they die. Then there's a bunch of other parameters: how fast temperature diffuses, the daisy mutation rate, birth and death rates and so on. It's about as simple as it can be and, crucially, it is simple enough for mathematicians to come up with solutions.

The nice thing is that for reasonable parameters the system pretty much always self-regulates. When things are slow to react, when mutation rates are low, you get big mass extinctions followed by regrowth. Really the best way to get a feel for it is to play with the simulations - it's very fun.
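For a flavour of how simple it really is, here's a minimal sketch of the original zero-dimensional version of the model in Python (no space, no mutation - for those you want the applet). The parameter values are the standard ones from Watson and Lovelock's 1983 paper, but the code itself is my own reconstruction, not anything from the talk.

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant (W m^-2 K^-4)
S = 917.0         # solar flux constant (W m^-2)
Q = 2.06e9        # heat-redistribution parameter (K^4)
GAMMA = 0.3       # daisy death rate
ALB_W, ALB_B, ALB_G = 0.75, 0.25, 0.5   # albedos: white, black, bare ground

def growth(T):
    """Parabolic growth curve, peaking at 295.5 K, zero outside ~278-313 K."""
    return max(0.0, 1.0 - 0.003265 * (295.5 - T) ** 2)

def steady_state(lum, a_w=0.01, a_b=0.01, dt=0.05, steps=20000):
    """Integrate the daisy cover at fixed solar luminosity until it settles."""
    for _ in range(steps):
        x = max(0.0, 1.0 - a_w - a_b)              # bare ground fraction
        A = a_w * ALB_W + a_b * ALB_B + x * ALB_G  # planetary albedo
        Te4 = S * lum * (1.0 - A) / SIGMA          # (effective temperature)^4
        T_w = (Q * (A - ALB_W) + Te4) ** 0.25      # local temperatures: white
        T_b = (Q * (A - ALB_B) + Te4) ** 0.25      # runs cooler, black hotter
        a_w += dt * a_w * (x * growth(T_w) - GAMMA)
        a_b += dt * a_b * (x * growth(T_b) - GAMMA)
        a_w, a_b = max(a_w, 0.001), max(a_b, 0.001)  # keep a seed population
    return Te4 ** 0.25, a_w, a_b

# As the sun brightens, the daisy mix shifts and holds the planet's
# temperature nearly flat where a dead planet would just warm steadily.
for i in range(6, 16):
    lum = i / 10.0
    T, a_w, a_b = steady_state(lum)
    print(f"L={lum:.1f}  T={T - 273.15:5.1f} C  white={a_w:.2f}  black={a_b:.2f}")
```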

Tuesday, 19 May 2009

A physics of society?

Is it possible that we're not as in control as we think we are? We spend our entire teenage years convincing ourselves that we're individuals, but when it comes to our collective behaviour is that really true? From governments to economists to red-top newspapers, everyone wants to understand why society is the way it is. Physicists are no exception.

I recently finished reading Critical Mass by Philip Ball. Philip Ball is a chemist turned physicist turned science writer, though from reading his book he sounds like a physicist at heart - perhaps I'm biased. The book is enormous and contains many of the things I'd like to write about here. There's statistical mechanics, game theory, networks and much else that I won't review because plenty of other people have done that. I want to focus on the idea that there could be a physics of society: that our complex collective behaviour could be understood in the framework of statistical physics, alongside more traditional methods in sociology and economics.

The problem with a physics of society is that it inevitably reduces us to simple units, completely throwing out our little subtleties, hopes and fears. This is a thought that is pretty distasteful to most people - it's up there with determinism for unpalatable ideas. But when you think about it, it's not so bad. For most of our day-to-day life our choices are relatively restricted. In Britain and America, European elections aside, if I want to vote I'm realistically choosing between two parties. In a market I'm buying or selling, and when I walk to work my routes are limited by obvious geographic constraints. It's for this reason that, in some circumstances, it is OK to draw a box around us and call us a yes or a no, up or down, and so on.

We've already seen through universality that, sometimes, the underlying detail of a system is not the most important thing. As human beings we interact with our neighbours and our colleagues (and perhaps some random internet people). When our choices are limited and our interactions fairly short-ranged, is it that ridiculous to think that some of the models we use every day in statistical physics could be applied to us? I think not.

Not too long ago the BBC (I think it was them, I can't find a link) had a programme about the credit crunch. They looked at complicated psychological reasons as to how over-competitiveness, and I think something to do with chemicals in the brain, could cause an economic bubble to form: that people willingly fool themselves into believing that everything's okay. The trouble with this approach is that it treats the end result of the complex interactions between traders as simply the behaviour of one trader, multiplied up. While I certainly don't want to rubbish the research behind this claim (I don't know where it is), we know that the behaviour of groups is different from that of the individuals in them.

Physics has a much simpler explanation for bubbles forming in markets. It's based on the idea that people tend to follow what the people around them are doing. Even if all the external signs are telling you to get out, the influence of the people around you can often be much stronger. There are lots of models, and I'm going to go through some of them over the next month or so. The point of all of them is that people behave differently when they're around other people. When there are enough of us, strange and interesting things can happen.
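As a taste of things to come, here's a toy herding sketch in Python - a noisy voter model of my own, not one of the specific models from the book. Traders mostly copy a randomly chosen peer and only occasionally make their own minds up.

```python
import random

N = 1000          # traders, each +1 (buying) or -1 (selling)
EPS = 0.0005      # chance a trader ignores the crowd and picks at random
STEPS = 2_000_000

traders = [random.choice([-1, 1]) for _ in range(N)]

for step in range(STEPS):
    i = random.randrange(N)
    if random.random() < EPS:
        traders[i] = random.choice([-1, 1])        # independent choice
    else:
        traders[i] = traders[random.randrange(N)]  # follow a random peer
    if step % 100_000 == 0:
        sentiment = sum(traders) / N   # +1: everyone buying, -1: everyone selling
        print(f"step {step:8d}  sentiment {sentiment:+.2f}")
```

With the independent-choice rate this small the copying wins: the market sits in near-unanimous bubbles for long stretches, then stampedes to the opposite consensus. No psychology required.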

All of this is not to say that physicists know better than sociologists or psychologists (I suspect we know better than economists :-p ), but it does look like we should be sharing our knowledge better. The basic models of collective behaviour are simple enough for anyone to understand. They're not going to be exact, but they can certainly enrich our understanding of the world around us.

I highly recommend Critical Mass. It's very well written and very thoughtful, well worth your time.

Saturday, 9 May 2009

Critical Point

I'm finally getting around to sharing what, for me, is the most beautiful piece of physics we have yet stumbled upon: the physics of the critical point. It doesn't involve enormous particle accelerators, and its introduction can border on the mundane, but once the consequences of critical behaviour are understood it becomes truly awe-inspiring. First, to get everyone on the same page, I must start with the mundane - please stick with it, there's a really cool movie at the bottom...

Most people are quite familiar with the standard types of phase transition: water freezes to ice, boils to water vapour and so on. Take the liquid-gas transition. If you switch on your kettle at atmospheric pressure, then when the temperature passes 100 degrees centigrade all the liquid boils. If you did this again at a higher pressure the boiling point would be higher - and the gas produced would be denser. If you keep pushing up the pressure, the boiling point goes higher and higher and the difference in density between the gas and the liquid becomes smaller and smaller. At a certain point, the critical point, that difference goes to zero, and at any higher pressure and temperature the distinction between liquid and gas becomes meaningless: you can only call it a fluid.

The picture below, taken from here, shows the standard phase diagram, with the critical point marked, for water.




Magnets also have a critical point. Above the critical temperature all the little magnetic dipoles inside the material point in different directions and the net magnetisation is zero. Below the critical temperature they can all line up in the same direction and create a powerful magnet. While the details of this transition are different from the liquid-gas case, it turns out that close to the critical point the details do not matter: the physics of the magnet and the liquid (and many other systems I won't mention) are identical. I'll now try to demonstrate how that can be true.

The pictures below are taken from a computer simulation of an Ising model. The Ising model is a simple model of a magnet; it's been used for much more than that since its invention, but I don't really want to get into that now. In the pictures below squares are coloured white or black. In the Ising model squares can change their colour at any time: white squares like to be next to white squares, and black squares like to be next to black squares. Fighting against this is temperature - at high temperature squares are happier to be next to squares of a different colour. Above the critical temperature, if you could zoom out enough, the picture would just look grey (see T=3 below). Grey, in terms of a magnet, means zero magnetisation.
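If you want to play with this yourself, a minimal Metropolis Monte Carlo version of the model fits in a few lines of Python. The pictures in this post came from a much bigger, optimised simulation, but the rules are exactly these (spin +1 is a white square, -1 a black one):

```python
import math
import random

L = 64       # the grid is L x L with periodic boundaries
T = 2.269    # temperature in units of J/k_B; this value is the critical point

spins = [[random.choice([-1, 1]) for _ in range(L)] for _ in range(L)]

def sweep():
    """One Monte Carlo sweep: L*L attempted single-spin flips."""
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        # Sum of the four neighbouring spins (periodic boundaries).
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * spins[i][j] * nn   # energy cost of flipping spin (i, j)
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[i][j] = -spins[i][j]

for n in range(500):
    sweep()
    if n % 100 == 0:
        m = sum(sum(row) for row in spins) / (L * L)
        print(f"sweep {n:3d}  magnetisation {m:+.3f}")
```

Set T well above 2.269 and the magnetisation hovers around zero - grey. Set it well below and the system settles near +1 or -1.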







If you drop the temperature then gradually larger and larger regions become the same colour. At a certain point, the critical point, the size of these regions diverges. Any colder and the system will become mostly white or mostly black (as above, T=2). Precisely at the critical point (T≈2.269 in these units), however, a rather beautiful thing happens. As the size of the cooperative regions diverges, so too do the fluctuations. In fact at the critical point there is no sense of a length scale at all. If you are struggling to picture what this means then look at the four pictures below. They are snapshots of the Ising model, around the critical point, at four very different scales - see if you can guess which one is which.








Now watch this movie for the answer (I recommend switching to HD and going full screen).





The full picture has 2^34 sites (little squares) - that's about 17 billion. This kind of scale invariance is a bit like the fractals you get in mathematics (the Mandelbrot set etc.), except that here it is not deterministic: it is a statistical self-similarity.

How does this demonstrate that the details of our system (particles, magnetic spins, voting intentions - whatever) are not important? In all these cases the interactions are short-ranged and the symmetry and dimension are the same. Now imagine that you have a picture of your system (like those above) at the critical point and you just keep zooming out. After a while you'll be so far away that you can't tell whether it's particles or zebras interacting at the bottom: that level of detail has been coarse-grained out, and all the pictures look the same. This is not a rigorous proof, I just want to convey that it's sensible.
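You can even make the zooming out literal in code. Block-spin coarse-graining - the first step of a renormalisation group transformation - replaces each block of squares with its majority colour. A sketch, acting on a grid like the spins list from the Ising snippet above:

```python
import random

def coarse_grain(spins, b=2):
    """Replace each b x b block by its majority spin (ties broken at random)."""
    L = len(spins)   # assumes L is divisible by b
    out = []
    for bi in range(0, L, b):
        row = []
        for bj in range(0, L, b):
            s = sum(spins[bi + di][bj + dj]
                    for di in range(b) for dj in range(b))
            row.append(1 if s > 0 else -1 if s < 0 else random.choice([-1, 1]))
        out.append(row)
    return out
```

Apply it repeatedly at the critical point and the snapshots stay statistically the same; off the critical point they drift towards featureless grey or a single colour.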

Of course the details do come into play at some point - the exact transition temperature is system-dependent, for example - but the important physics is identical. This is what's known as universality, and its discovery, in my opinion, is one of the landmarks of modern physics. It means I can take information from a magnet and make sensible comments about a neural network or a complex colloidal liquid. It means that simple models like the Ising model can make exact predictions for real materials.

So there it is. If you don't get it then leave a comment. If you're a physics lecturer and you want to use any of these pictures then feel free. I'd only ask that you let me know, as I'd like to know whether people find it useful for teaching. For now you'll have to leave a comment, as I haven't sorted out a spam-free email address.

UPDATE: Forward link to a post on universality.

Monday, 2 March 2009

What should we know?

I decided a while ago that I didn't want this blog to be a bad-science blog. There are plenty of those and I really like them, but as that market's a little swamped I thought I'd just talk about stat-mech and hope someone thinks it's interesting. Last weekend, however, I went to a talk by Ben Goldacre in Bath, and so these things were brought to mind.

The thrust of the talk was that we, the public, are being misled and lied to by the media when it comes to science. He has compelling examples whereby the media would print unpublished claims from a discredited scientist but ignore several published articles saying the opposite. These examples are clear-cut: the media are willing to lie for a good story. Even a well-educated member of the public has no chance if information is being withheld.

What if it's less clear-cut? Could the blame be shared in some cases? Take this story, the Durham fish oil trial (also mentioned in the talk, I don't have anything new). Uncritically reported by the media, this "trial" had no control group, no predefined measure of success and more than a whiff that they knew what the outcome would be before it started. I need go no further describing it; the reasons why this "trial" was of zero scientific value are laid bare for anyone to see. The problem comes when one accepts what the article is saying (the trial will prove fish oil works) without asking the huge question: where the hell's the control group?

Anyone can ask this question. I expect people to ask this question. The concept of a control group is not difficult and everyone should understand it. In fact a full double-blind trial is also easy to understand, even if you wouldn't have expected it to be necessary. There are certain things that I believe we should all just know about. Some good starting ones would be
  1. Double-blind trials. I wouldn't have guessed they needed to be double-blinded; it's great that scientists don't exclude themselves from potentially ruining their own experiments.
  2. Statistical significance. Small-scale experiments can be good, but you need to be able to say when a result could just have been chance (there's a little simulation of this below).
  3. Pattern recognition. Related to significance. People are pattern-recognition machines; we see patterns where there are none.
If you ask questions about these things then it'll be a lot harder to slip things past you. If not, you can be taken for a ride. There are few other areas of our lives where we leave ourselves so open to abuse. None of these things is too difficult to understand. It's certainly easier than buying a car...
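To put some numbers on point 2, here's a little simulation - my own toy example, not one from the talk. Suppose a treatment does nothing, and children improve half the time on their own. How often does a tiny trial hand you an impressive-looking result by pure luck?

```python
import random

TRIALS = 100_000   # simulated no-effect trials
N_KIDS = 20
THRESHOLD = 14     # "14 of 20 improved!" sounds impressive

lucky = 0
for _ in range(TRIALS):
    improved = sum(random.random() < 0.5 for _ in range(N_KIDS))
    if improved >= THRESHOLD:
        lucky += 1

# A one-sided p-value, estimated by simulation.
print(f"Chance of {THRESHOLD}/{N_KIDS} or better by luck alone: {lucky / TRIALS:.3f}")
```

It comes out at roughly 6%. Fourteen out of twenty sounds convincing, but run enough uncontrolled "trials" and some of them will look like miracles.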

Anyway, back to physics next time. There's lots I want people to know about physics but that's another fight for another time.

Friday, 20 February 2009

Entropy

I've been meaning to post something interesting about stat-mech roughly once a fortnight, and so far I'm not doing so well. For today I thought I'd share my perspective on entropy.

If you ask the (educated) person in the street what entropy is, they might say something like "it's a measure of disorder". This is not a bad description, although it's not exactly how I think about it. As a statistical mechanician I tend to think of entropy in a slightly different way to, say, my Dad. He's an engineer, and as such he thinks of entropy more in terms of the second law of thermodynamics. That is also a good way of thinking about it, but here's mine.

Consider two pictures - I can't be bothered making them (EDIT: see this post, the T=2 and T=3 pictures) so you'll just have to imagine them. First imagine a frozen image of the static on your television, and second imagine a plain white screen. On the basis of the disorder description you might say that the static, looking more disordered, has the higher entropy. However, this is not the case. These are just pictures, and there is one of each, so who is to say which is more disordered?

Entropy does not apply to single pictures, it applies to 'states'. A state, in the thermodynamic sense, is a group of pictures that share some property. For the static we'll say the property is that there are roughly as many white pixels as black pixels, with no significant correlations; for the white screen we'll say it's all pixels the same colour. The entropy of a state is the number of pictures that fit its description (strictly, it's proportional to the logarithm of this number).

For our blank screen it's easy: there are only two pictures, all black or all white. For the static there is a bewildering number of pictures that fit the description - so many that you'll never see the same screen of static twice. For a standard 720x480 screen it'll be something like 10 to the power 100,000*.
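If you want to check that number, the counting takes a few lines of Python (using my loose definition of the static state as "exactly half the pixels white", which is why it doesn't quite match the figure above):

```python
import math

N = 720 * 480   # pixels on the screen

def log10_choose(n, k):
    """log10 of (n choose k), computed via log-gamma so nothing overflows."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1)
            - math.lgamma(n - k + 1)) / math.log(10)

print("blank-screen state: 2 pictures (all white or all black)")
print(f"static state: about 10^{log10_choose(N, N // 2):,.0f} pictures")
```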

So it's the disordered state, all those pictures of static that look roughly the same, that has the high entropy. If we assume that each pixel at any time is randomly (and independently) black or white, then it's clear why you never see a white screen in the static: it's simply outgunned by the stupidly large number of jumbled-up screens.

In a similar way a liquid has a higher entropy than a crystal (most of the time - there is one exception): there are more ways for a load of jumbled-up particles to look like a liquid than like the structured, ordered crystal. So why then does water freeze? This, as you might guess, comes down to energy.

Water molecules like to line up in particular ways that lower their energy. When the temperature is low, energy is the most important thing and the molecules will align on a macroscopic scale to make ice. When the temperature is high, entropy becomes more important, and those nice crystalline configurations are washed out by the sheer number of liquid configurations.

And this is essentially why matter exists in different phases: it's a constant battle between entropy and energy, and depending on which wins we see very different results.
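Here's the battle as a cartoon in code, with all numbers invented purely for illustration: an ordered state with low energy but few configurations against a jumbled state with higher energy but an astronomical number of them. Whichever has the lower free energy F = E - TS wins.

```python
import math

k_B = 1.0   # work in units where Boltzmann's constant is 1

# state: (energy, number of configurations)
states = {"crystal": (0.0, 1e3),    # low energy, few ways to be ordered
          "liquid":  (50.0, 1e30)}  # higher energy, many ways to be jumbled

for T in [0.2, 0.5, 0.8, 1.1]:
    F = {name: E - T * k_B * math.log(omega)   # free energy F = E - T*S
         for name, (E, omega) in states.items()}
    winner = min(F, key=F.get)
    print(f"T={T:.1f}  F_crystal={F['crystal']:6.1f}  "
          f"F_liquid={F['liquid']:6.1f}  ->  {winner}")
```

The winner flips as the temperature rises past the point where the entropy term overtakes the energy gap - a freezing transition in miniature.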

I'll try and update with some links to better descriptions soon.

*this number is only as accurate as my bad definition of the disordered state.

Monday, 12 January 2009

Busy Bees

The second instalment of Swarm was on BBC 1 last night. I missed the first one, but I highly recommend catching this before it goes off iPlayer.

The best bit was the fire ants making a raft out of themselves to escape flooding - ants are ridiculous. They also had bees trying to decide where to make a new home. The scout bees come back with reports on possible locations, conveying the message with a dance. All the scouts sell their location and the others decide who to follow. When one of them gets enough support they all up sticks and move - pretty smart.

On the same theme, I was at a talk recently about consensus decision-making in sticklebacks. Apparently they're very reproducible in experimental terms. Again, they have to make a decision, this time about which way to swim. On their own they make the good decision the majority of the time (say 60%), but when they're in a group they almost always get it right. Each fish is pretty stupid; the group is less stupid.
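That group advantage falls straight out of the maths if you assume each fish chooses independently and the shoal follows its majority - it's Condorcet's jury theorem. A quick sketch using the 60% figure from the talk (the group sizes are my own choice):

```python
import math

def majority_correct(n, p=0.6):
    """Probability that a strict majority of n independent fish choose correctly."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in [1, 5, 15, 51]:   # odd sizes, so there are no ties
    print(f"shoal of {n:2d}: right {majority_correct(n):.0%} of the time")
```

A shoal of 51 fish, each right 60% of the time, gets it right about 93% of the time.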

I love problems like this because, while it is a biology problem, it's simple units (fish, ants, bees) interacting with their peers in some measurable way (well, measurable if you're really clever and patient). From this emerges surprising, complex behaviour that didn't exist at the level of the individual - and that's what statistical mechanics is all about.

The critical-point post is still delayed - when you're debugging code at work all day it's hard to feel motivated to come home and do the same thing. It's coming though.

UPDATE: Just seen part one - those starlings are badass. They look like drops of liquid. Just wait until I get my MD code working and I'm going to be simulating me some birds! (Not in the Weird Science sense, although that would be cool as well.)