Thursday 17 December 2009

LA's big lake of colloids

The New York Times is running a piece about tap water and the regulation thereof called "That Tap Water Is Legal but May Be Unhealthy". One particular contaminant becomes dangerous on exposure to sunlight, so at a lake in Los Angeles they've tipped 400,000 plastic balls onto the water to block out the sunlight.

Perhaps this shows I've been in stat-mech too long. All I could think about upon seeing this picture was - "cool, a massive 2D elastic disc simulation!".


It's quite interesting where the crystal structure is interrupted - each one of those interfaces costs a lot of free energy. You can also see it's not truly 2D as along certain stress lines the particles have gone up and over to reduce the energy.

I wonder if it's in equilibrium or whether it'll age with time...

This is what science can do to you :-s

Don't know what fair use would be for stealing this photo but hopefully if I link to the NYT enough they won't mind - go and click on one of their ads or something...

Wednesday 9 December 2009

Backup news

Anyone who's been here from the start will know I have a slightly unhealthy obsession with backups. A couple of things have changed since I last blogged about this.

Time Machine
Firstly, I now have a Mac at home and I've started using Time Machine. I don't want to pat Apple on the back too much because that really gets on my nerves, but Time Machine is absolutely fantastic.

It's exactly how personal backup software should work. You buy an external hard disk, tell Time Machine to back up there, and then you're done. You never need to worry about it again. Most of the time when I need my backup it's because I've accidentally deleted something I shouldn't have. Time Machine allows you to, as the name suggests, just go back in time and find it before you made the mistake. Works like a dream.

After a botched attempt to upgrade to Snow Leopard I recently had my first call to do a complete system restore. All I can say is that it seemed to work perfectly for me - it didn't even take that long.


Rsync + windows
At work we back up to an external file server. Until recently that was Linux-based and so I had no trouble using rsync. Now we've been moved to a Windows server, which creates all kinds of problems - rsync just doesn't get on with Windows. Anyway, after a bit of poking around I finally have a script that does the job. This is my basic rsync call now:

rsync -rptgoDhP --modify-window=1 --delete --log-file=RSYNCLOG --exclude-from=./exclude /home/username/ username

I'm pretty sure most of those options could be replaced with -a but honestly, now it's working I don't want to touch it! The key option is --modify-window, which accounts for the different ways that Windows and Unix file systems timestamp modified files (FAT-style timestamps only have two-second resolution, so allowing a one-second window stops rsync needlessly re-copying everything).

SVN - Subversion
For programming and writing papers (in LaTeX) I've started using Subversion to take care of version control. I'm also using a shared repository to co-write a paper at the moment - it handles simultaneous editing quite well. There's a start-up cost in getting your head around how it works (I found this page very helpful), but once you're there it works very nicely.

I mention it here because the version control works a bit like a backup. You can step back through committed versions very easily. If you use OS X then it's installed along with Xcode so you probably have it. With Linux it'll be in the standard repositories.

Well, that's enough backup geekery for this year. Anyone using anything that they're particularly happy with? I've kind of given up on backing up over the internet for now but would be interested if there have been any developments.

Sunday 29 November 2009

An unintuitive probability problem

Probability can do strange things to your mind. This week I had a probability problem where every time I tried to use intuition to solve it I ended up going completely wrong. I thought I'd share it as I think it's interesting.

Consider a one dimensional random walk. At each time step my walker will go left with probability $p_l$, and right with probability $p_r$. It stays where it is with probability $1 - p_l - p_r$. Furthermore these probabilities depend on the walker's position in space, so it's really $p_l(x)$ and $p_r(x)$. I'm imagining I'm on a finite line of length $L$, although it doesn't matter too much.

Now if $p_l(x) = p_r(x)$, then we just have a normal random walker. In my problem I have the following setup: $p_l(x) > p_r(x)$, but $p_r(x-1) = p_l(x)$. What does this mean? At any given point, $x$, my walker is more likely to go left than right. If it does go left it will come back with the same rate (although it's more likely to go left again).

So here's the question: if I leave this for a really long time, what is the equilibrium probability distribution for the walker's position, $P(x)$?
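
If you want to play with this yourself, here's a minimal simulation sketch in Python. The particular $p_l(x)$ below is just something I've made up that satisfies the two conditions - it decreases with $x$ - and hops off the ends of the line are simply rejected:

import random
from collections import Counter

L = 50                 # sites x = 0 ... L-1
STEPS = 1_000_000

def p_left(x):
    # made-up example: decreases with x, so p_left(x) > p_right(x) everywhere
    return 0.4 - 0.2 * x / L

def p_right(x):
    # the defining condition: come back at the same rate,
    # p_right(x - 1) = p_left(x), i.e. p_right(x) = p_left(x + 1)
    return p_left(x + 1)

x = L // 2
visits = Counter()
for _ in range(STEPS):
    r = random.random()
    if r < p_left(x):
        if x > 0:                      # reject hops off the left end
            x -= 1
    elif r < p_left(x) + p_right(x):
        if x < L - 1:                  # reject hops off the right end
            x += 1
    visits[x] += 1

for site in range(L):
    print(site, visits[site] / STEPS)

Plot the histogram and see whether it matches your intuition.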

Friday 20 November 2009

Great LHC animation

The purpose of this blog was to showcase types of physics other than the LHC. But I can't resist: this is a really nice animated video showing the stages of getting stationary protons up to 7 TeV.

http://cdsweb.cern.ch/record/1125472

(via @CERN)

Thursday 29 October 2009

Speed limit for computer processors - serial vs parallel

This news item from Nature, about this PRL, talks about how computer processors are eventually going to hit a speed limit set by how fast a system can make transitions between quantum states.

There are two independent bounds on this minimum time — one based on the average energy of the quantum system, the other based on the uncertainty in the system's energy. In their calculations, Levitin and Toffoli unify the bounds and show there is an absolute limit to the number of operations that can be achieved per second by a computer system of a given energy.
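
To get a feel for the numbers, here's a back-of-envelope sketch of the Margolus-Levitin-type bound the paper builds on (my arithmetic, not theirs - I've only read the news piece): one state transition takes at least pi*hbar/(2E), so a system putting energy E into computation can't beat 2E/(pi*hbar) operations per second.

import math

HBAR = 1.054571817e-34   # reduced Planck constant, J s

def max_ops_per_second(energy_joules):
    # one quantum state transition takes at least pi*hbar/(2E)
    return 2 * energy_joules / (math.pi * HBAR)

# e.g. a processor channelling one joule into computation
print(f"{max_ops_per_second(1.0):.1e} ops/s")   # ~6e33

So the ceiling is absurdly far above today's chips - but it is a ceiling.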

I'm not an expert in quantum information so all I can say is that it looks interesting. There are implications for me because most of my work is pretty intensive computer simulation. Some of what I do simply needs fast processors; there are sections of my Monte Carlo simulations that cannot be parallelised (fancy cluster algorithms being one example). So for these, in principle, it limits what could ever be done.

However, mostly my limit is on what statistics I can collect, and that can be solved by using more and more processors. The move from a single core being standard to eight these days has been a revolution in what I can get done in a reasonable time scale.

In fact one very interesting development is using standard computer graphics cards to perform molecular dynamics (MD) simulations. I've only read the abstract of this paper I'm afraid, but they've apparently done this. Graphics cards designed for games have many little processing cores on them, and the GPU can bring them all to bear on the problem more efficiently than one super-powered CPU trying to do it on its own.

So next time you say that computer games are a waste of time think of this...

Monday 19 October 2009

There's more to the LHC than bloody black holes

The LHC is cold again. This is very exciting, and it can't come soon enough. In the absence of any actual science going on, an endless stream of bollocks seems to have been coming out about the collider. The latest being this drivel about things coming from the future to... oh God, I can't be bothered. It rather upsets me that the only things people really know about the LHC are that it might make a black hole and that maybe something is coming through time to sabotage it. So I thought I'd talk about why this machine is ridiculously fantastic and complicated (the more likely cause of breakage).

One of the features of synchrotrons that I've always thought is amazing is the way they cool the beams. By cool I'm not talking about the temperature around the beam pipe (although that's bloody cold too, so that the magnets work). I'll quickly describe what the problem is and how people solve it, although I'm still not 100% sure how they've solved it at the LHC.

Our general collider accelerates particles around a ring using strong electric fields. The particles are bent into a circle by bending magnets, and they are kept in a beam by the focussing quadrupole magnets. The effect of these magnets is that if a particle is heading sideways out of the beam then they push it back towards the centre. In this way the particles kind of snake around the course, never straying too far out of line. The task of cooling the beam is to reduce this snaking as much as possible so that we have a really dense, straight-running beam.

Wednesday 9 September 2009

Quorum decisions in bacteria

Stumbled across a few nice things related to quorum decision making recently. Remember how sticklebacks make their decisions? Well, bacteria do it too; below is a great TED talk by Bonnie Bassler on how they communicate and how they decide to act as an enormous group.

Also came across this article on humans making group decisions in a Kasparov vs The World chess game. It gets the saliva flowing on how you can engineer good decisions.

Addition: Incidentally, I also think this talk is a great example of how to give a science talk. It's a little rushed (probably nerves) but the enthusiasm is fantastic and the use of visual aids is perfect. I'm giving a workshop on presentations so I've been thinking about this stuff a lot recently.

Saturday 1 August 2009

Biological Membranes

It's been ages since my last post. This is because I've been busy doing lots of interesting physics and meeting a bunch of interesting physicists - maybe I'll write something about it. For now, here's something I've been meaning to write about for a while, and for once it's timely.

The journal Soft Matter has an issue out with a membrane biophysics theme. You can read the editorial for yourself if you have access, otherwise make do with my ropey understanding of it. Soft Matter is a relatively new journal that I think is looking really good. Their website needs work but I'll leave that for my science 2.0 rant which is bubbling up.

So why am I interested in membranes (I'm not working on them, I'm just interested)? Well, once again I'm interested in them as a large system of small parts that makes something amazing when it gets together - i.e. statistical physics. So, here's my compressed guide to membranes. Please remember I'm not a biologist: I'm very new to this, only barely understand it, and I tend to oversimplify things.

Wednesday 10 June 2009

Hummingbirds are the fastest animals on Earth

Relative to their body size. Which completely changes everything. According to the Guardian:

They can cover more body lengths per second than any other vertebrate and for their size can even outpace fighter jets and the space shuttle

Which is nice, and the high-speed photo is beautiful. But it's not really the same, is it? In fact the space shuttle statistic sort of makes it seem silly. All the other important quantities - friction, reaction time, not least the speed of sound - don't scale with the animal's size. It doesn't help me imagine what it feels like to be a hummingbird.

It's somewhat similar to all those statistics you see about insects. Fleas jumping hundreds of times their height and ants carrying many times their body weight. If you had a giant ant I doubt the strength thing would continue: strength scales with cross-sectional area (length squared) while weight scales with volume (length cubed), so skeletons and legs just don't keep up with height.

The dive tops out at 60mph, which is pretty impressive; I'd love to see it put in perspective against the reaction times of the birds. How does 60mph feel to them? Apparently at the bottom of the dive

the hummingbirds experienced an acceleration force nearly nine times that of gravity, the highest recorded for any vertebrate undergoing a voluntary aerial manoeuvre, with the exception of jet fighter pilots. At 7g, most pilots experience blackouts.

That's definitely cool. So long as by g they don't mean it in units of bird length again. Anyway, don't want to be too grouchy, the photo is excellent - enjoy.

Photo by Christopher J. Clark and Teresa Feo/UC Berkeley

Saturday 6 June 2009

Daisy world

A bit of lazy linkage here. I went to a talk a while ago by Graeme Ackland from Edinburgh about Daisy World. It's not new - I think it's been around since the 80s - but it is quite cool. It's a really simple model of a planet where the climate conditions (here just the temperature) and the living organisms on the planet feed back on one another.

On Daisy World there are only daisies; there are a million extensions with forests and animals and all sorts, but I think the simplest model gives the nicest story. This page gives a nice explanation and it has a Java applet that you can play with - this is the best bit.

http://www.ph.ed.ac.uk/nania/nania-projects-Daisy.html

My extremely brief explanation is that there are white daisies and black daisies. White daisies cool down their environment, black daisies heat it up. If they get too hot or cold they die. Then there's a bunch of other parameters: how fast temperature diffuses, the rate of daisy mutation, rates for birth and death, etc. It's about as simple as it can be, and crucially it's simple enough for mathematicians to come up with solutions.

The nice thing is that for reasonable parameters the system pretty much always self-regulates. When things are slow to react - when mutation rates are low, say - you get big mass extinctions followed by regrowth. Really the best way to get a feel for it is to play with the simulations; it's good fun.
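
In case the applet ever disappears, here's a rough Python sketch of the simplest version. The parameter values are my recollection of the original Watson and Lovelock paper, so treat them as assumptions rather than gospel:

import numpy as np

SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
S = 917.0          # solar flux constant, W m^-2
Q = 2.06e9         # coupling between local and planetary temperature, K^4
GAMMA = 0.3        # daisy death rate
A_WHITE, A_BLACK, A_GROUND = 0.75, 0.25, 0.5   # albedos

def growth(T):
    # parabolic growth rate, optimal at 295.5 K, zero outside ~278-313 K
    return max(1.0 - 0.003265 * (295.5 - T) ** 2, 0.0)

def step(aw, ab, lum, dt=0.05):
    bare = max(1.0 - aw - ab, 0.0)
    A = aw * A_WHITE + ab * A_BLACK + bare * A_GROUND  # planetary albedo
    Te4 = S * lum * (1.0 - A) / SIGMA                  # planetary temp^4
    Tw = (Q * (A - A_WHITE) + Te4) ** 0.25             # white daisies run cool
    Tb = (Q * (A - A_BLACK) + Te4) ** 0.25             # black daisies run hot
    aw += dt * aw * (bare * growth(Tw) - GAMMA)
    ab += dt * ab * (bare * growth(Tb) - GAMMA)
    return max(aw, 0.001), max(ab, 0.001), Te4 ** 0.25 # keep a seed stock

aw = ab = 0.01
for lum in np.linspace(0.6, 1.6, 11):   # slowly brighten the sun
    for _ in range(2000):               # let the daisies relax
        aw, ab, T = step(aw, ab, lum)
    print(f"L={lum:.1f}  white={aw:.2f}  black={ab:.2f}  T={T:.0f} K")

As the sun brightens, the mix shifts from black daisies to white and the planetary temperature stays pinned near the daisies' comfort zone - until the regulation finally breaks down.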

Tuesday 19 May 2009

A physics of society?

Is it possible that we're not as in control as we think we are? We spend our entire teenage years convincing ourselves that we're individuals, but when it comes to our collective behaviour is that really true? From governments to economists to red top newspapers, everyone wants to understand why society is how it is. Physicists are no exception.

I recently finished reading Critical Mass by Philip Ball. Philip Ball is a chemist turned physicist turned science writer. From reading his book he sounds like a physicist at heart, though perhaps I'm biased. The book is enormous and contains many of the things I'd like to write about here. There's statistical mechanics, game theory, networks and many other things that I won't review because plenty of other people have done that. I want to focus on the idea that there could be a physics of society - that our complex collective behaviour could be understood in the framework of statistical physics, alongside more traditional methods in sociology and economics.

The problem with a physics of society is that it inevitably reduces us to simple units, completely throwing out our little subtleties, hopes and fears. This is a thought that is pretty distasteful to most people - it's up there with determinism for unpalatable ideas. But when you think about it, it's not so bad. For most of our day-to-day life our choices are relatively restricted. In Britain and America, European elections aside, if I want to vote I'm realistically going to be choosing between two parties. In a market I'm buying or selling, and when I walk to work my routes are limited by obvious geographic constraints. It's for this reason that, in some circumstances, it is OK to draw a box around us and call us a yes or a no, an up or a down, and so on.

We've already seen through universality that, sometimes, the underlying detail of a system is not the most important thing. As human beings we interact with our neighbours and our colleagues (and perhaps some random internet people). When our choices are limited and our interactions fairly short-ranged, is it that ridiculous to think that some of the models we use every day in statistical physics could be applied to us? I think not.

Not too long ago the BBC (I think it was them, I can't find a link) had a programme about the credit crunch. They looked at complicated psychological reasons as to how over-competitiveness, and I think something to do with chemicals in the brain, could cause an economic bubble to form. That people willingly fool themselves into believing that everything's okay. The trouble with this approach is that it treats the end result of the complex interactions between traders as simply the behaviour of one trader - multiplied up. While I certainly don't want to rubbish the research behind this claim (I don't know where it is), we know that the behaviour of groups is different from that of the individual.

Physics has a much simpler explanation for bubbles forming in markets. It's based on the idea that people tend to follow what the people around them are doing. Even if all the external signs are telling you that you should get out, the influence of the people around you can often be much stronger. There are lots of models, and I'm going to go through some of them over the next month or so. The point in all of them is that people behave differently when they're around other people: when there's enough of us, strange and interesting things can happen.
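
As a taster, here's a minimal sketch of the sort of thing I mean - essentially my paraphrase of Kirman's ant-recruitment model dressed up as traders. Each trader mostly copies a randomly met trader, and occasionally changes their mind independently:

import random

N = 100          # traders: +1 = buying, -1 = selling
EPS = 0.005      # chance of making your own mind up
STEPS = 100_000

state = [random.choice([-1, 1]) for _ in range(N)]

moods = []
for _ in range(STEPS):
    i = random.randrange(N)
    if random.random() < EPS:
        state[i] = random.choice([-1, 1])        # independent decision
    else:
        state[i] = state[random.randrange(N)]    # copy a random trader
    moods.append(sum(state) / N)                 # net market sentiment

print(min(moods), max(moods))

For small EPS the market spends long stretches stuck near +1 or -1 - everyone buying or everyone selling - even though no individual has any information at all. That's a bubble, with no psychology required.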

All of this is not to say that physicists know better than sociologists or psychologists (I suspect we know better than economists :-p ) but it does look like we should be sharing our knowledge better. The basic models of collective behaviour are simple enough for anyone to understand. They're not going to be exact but they can certainly enrich our understanding of the world around us.

I highly recommend Critical Mass. It's very well written and very thoughtful, well worth your time.

Saturday 9 May 2009

Critical Point

I'm finally getting around to sharing what, for me, is the most beautiful piece of physics we have yet stumbled upon. This is the physics of the critical point. It doesn't involve enormous particle accelerators, and its introduction can border on the mundane; but once the consequences of critical behaviour are understood it becomes truly awe-inspiring. First, to get everyone on the same page, I must start with the mundane - please stick with it, there's a really cool movie at the bottom...

Most people are quite familiar with the standard types of phase transition. Water freezes to ice, boils to water vapour and so on. Taking the liquid-to-gas transition: if you switch on your kettle at atmospheric pressure then when the temperature passes 100 degrees centigrade all the liquid boils. If you did this again at a higher pressure then the boiling point would be at a higher temperature - and the gas produced at a higher density. If you keep pushing up the pressure, the boiling point goes higher and higher and the difference in density between the gas and the liquid becomes smaller and smaller. At a certain point, the critical point, that difference goes to zero, and for any higher pressure/temperature the distinction between liquid and gas becomes meaningless - you can only call it a fluid. (For water this happens at around 374 degrees centigrade and 218 atmospheres.)

The picture below, taken from here, shows the standard phase diagram, with the critical point marked, for water.

Magnets also have a critical point. Above the critical temperature all the little magnetic dipoles inside the material are pointing in different directions and the net magnetisation is zero. Below the critical temperature they can all line up in the same direction and create a powerful magnet. While the details of this transition are different from the liquid-gas case, it turns out that close to the critical point the details do not matter. The physics of the magnet and the liquid (and many other systems I won't mention) are identical. I'll now try to demonstrate how that can be true.

The pictures below are taken from a computer simulation of an Ising model. The Ising model is a simple model for a magnet. It's been used for so much more than that since its invention, but I don't really want to get into that now. In the pictures below, squares are coloured white or black. In the Ising model squares can change their shade at any time: white squares like to be next to white squares, and black squares like to be next to black squares. Fighting against this is temperature - at high temperature, squares are happier to be next to squares of a different colour. Above the critical temperature, if you could zoom out enough, the picture would just look grey (see T=3 below). Grey, in terms of a magnet, would be zero magnetisation.

[Image: Ising model snapshots at T=3 (grey on large scales) and T=2 (mostly one colour)]

If you drop the temperature then gradually larger and larger regions start to become the same colour. At a certain point, the critical point, the size of these regions diverges. Any colder and the system will become mostly white or mostly black (as above, T=2). Precisely at the critical point (T=2.269 in these units), however, a rather beautiful thing happens. As the size of the cooperative regions diverges, so too do the fluctuations. In fact at the critical point there is no sense of a length scale. If you are struggling to understand what this means then look at the four pictures below. They are snapshots of the Ising model, around the critical point, at four very different scales - see if you can guess which one is which.


[Image: four snapshots of the critical Ising model at four very different length scales]

Now watch this movie for the answer (recommend switching to HD and going full screen).

The full picture has 2^34 sites (little squares) - that's about 17 billion. This kind of scale invariance is a bit like the fractals you get in mathematics (the Mandelbrot set etc.), except that this is not deterministic: the self-similarity is statistical.
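
For anyone who wants to generate pictures like these themselves, here's a minimal single-spin Metropolis sketch in Python (units where Tc = 2/ln(1+sqrt(2)) ≈ 2.269). Fair warning: right at the critical point this algorithm suffers terribly from critical slowing down, which is why you'd want a cluster algorithm (Wolff, say) for anything approaching 2^34 sites:

import numpy as np
from PIL import Image

L = 128          # L x L lattice with periodic boundaries
T = 2.269        # roughly the critical temperature
SWEEPS = 500     # one sweep = L*L attempted spin flips

rng = np.random.default_rng()
spins = rng.choice(np.array([-1, 1], dtype=np.int8), size=(L, L))

for _ in range(SWEEPS):
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * nb            # cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1                # accept the flip

# map -1/+1 to black/white and save a snapshot
Image.fromarray(((spins + 1) // 2 * 255).astype(np.uint8)).save("ising.png")

It's slow (pure Python inner loop), but it is the whole algorithm.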

How does this demonstrate that the details of our system (particles, magnetic spins, voting intentions - whatever) are not important? In all these cases the interactions are short-ranged and the symmetry and dimension are the same. Now imagine that you have a picture of your system (like those above) at the critical point and you just keep zooming out. After a while you'll be so far away that you can't tell whether it's particles or zebras interacting at the bottom, as that level of detail has been coarse-grained out and all the pictures look the same. This is not a rigorous proof; I just want to convey that it's sensible.

Of course the details will come into play at some point - the exact transition temperature is system dependent, for example - but the important physics is identical. This is what's known as universality, and its discovery, in my opinion, is one of the landmarks of modern physics. It means I can take information from a magnet and make sensible comments about a neural network or a complex colloidal liquid. It means that simple models like the Ising model can make exact predictions for real materials.

So there it is. If you don't get it then leave a comment. If you're a physics lecturer and you want to use any of these pictures then feel free. I'd only ask that you let me know as, well, I'd like to know if people think it's useful for teaching. For now you'd have to leave a comment as I haven't sorted out a spam-free email address.

UPDATE: Forward link to a post on universality.

Monday 2 March 2009

What should we know?

I decided a while ago that I didn't want this blog to be a bad-science blog. There are plenty of those; I really like them, but as the market's a little swamped I thought I'd just talk about stat-mech and hope that someone thinks it's interesting as well. Last weekend, however, I went to a talk by Ben Goldacre in Bath, and so these things were brought to mind.

The thrust of the talk was that we, the public, are being misled and lied to by the media when it comes to science. He had compelling examples whereby the media would print unpublished stories from a discredited scientist but ignore several published articles that say the opposite. These examples are clear cut: the media are willing to lie for a good story. Even a well educated member of the public has no chance if information is being withheld.

What if it's less clear cut? Could the blame be shared in some cases? Take this story, the Durham fish oil trial (also mentioned in the talk; I don't have anything new). Uncritically reported by the media, this "trial" had no control group, no predefined measure of success and more than a whiff that they knew what the outcome would be before it started. I need go no further describing it. The reasons why this "trial" was of zero scientific value are laid bare for anyone to see. The problem comes when one accepts what the article is saying (trial will prove fish oil works) without asking the huge question: where the hell's the control group?

Anyone can ask this question. I expect people to ask this question. The concept of a control group is not difficult and everyone should understand it. In fact a full double-blind trial is also easy to understand, even if you didn't expect it to be necessary. There are certain things that I believe we should all just know about. Some good starting ones would be:
  1. Double-blind trials. I wouldn't have guessed they needed to be double-blinded; it's great that scientists don't assume they're incapable of ruining their own experiments.
  2. Statistical significance. Small-scale experiments can be good, but you need to be able to say when things could have been chance.
  3. Pattern recognition. Related to significance. People are pattern-recognition machines; we see patterns even where there are none.
If you ask questions about these things then it'll be a lot harder to slip things past you. If not, you can be taken for a ride. There are few other areas of our lives where we leave ourselves so open to abuse. None of these things is too difficult to understand - it's certainly easier than buying a car...
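
To put some flesh on point 2, here's a toy illustration (all numbers invented). Imagine lots of small, uncontrolled trials of a useless pill, where each child has a 50/50 chance of "improving" regardless:

import random

TRIALS = 10_000   # imaginary small studies of a pill that does nothing
KIDS = 20         # participants per study

lucky = 0
for _ in range(TRIALS):
    improved = sum(random.random() < 0.5 for _ in range(KIDS))
    if improved >= 15:        # "75% of children improved!"
        lucky += 1

print(f"{100 * lucky / TRIALS:.1f}% of useless trials look impressive")
# ~2% - run enough uncontrolled trials and some will "succeed" by chance

Two percent sounds small, until you remember how many such trials get run - and that only the impressive ones make the papers.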

Anyway, back to physics next time. There's lots I want people to know about physics but that's another fight for another time.

Friday 20 February 2009

Entropy

I've been meaning to post something interesting about stat-mech about once a fortnight and so far I'm not doing so well. For today I thought I'd share my perspective on entropy.

If you ask the (educated) person in the street what entropy is, they might say something like "it's a measure of disorder". This is not a bad description, although it's not exactly how I think about it. As a statistical mechanician I tend to think of entropy in a slightly different way to, say, my Dad. He's an engineer, and as such he thinks of entropy more in terms of the second law of thermodynamics. This is also a good way of thinking about it, but here's mine.

Consider two pictures - I can't be bothered making them (EDIT: see this post, the T=2,3 pictures) so you can just imagine them. First imagine a frozen image of the static on your television, and secondly imagine a white screen. On the basis of the disorder description you might say that the static, looking more disordered, has the higher entropy. However, this is not the case. These are just pictures, and there is one of each, so who is to say which is more disordered?

Entropy does not apply to single pictures; it applies to 'states'. A state, in the thermodynamic sense, is a group of pictures that share some property. So for the static we'll say the property is that there are roughly as many white pixels as black pixels, with no significant correlations; for the white screen we'll say it's all pixels the same colour. The entropy of a state is the number of pictures that fit its description (strictly, by Boltzmann's formula S = k log W, it's proportional to the logarithm of this number).

For our blank screen it's easy: there are only two pictures, all black or all white. For the static there is a bewildering number of pictures that fit the description. So many that you'll never see the same screen of static twice - for a standard 720x480 screen it'll be something like 10 to the power 100,000*.
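
If you don't believe that number, it's a two-line calculation:

import math

pixels = 720 * 480                # 345,600 pixels, each black or white
digits = pixels * math.log10(2)   # log10 of 2**pixels
print(f"about 10^{digits:,.0f} possible screens")   # ~10^104,000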

So it's the disordered state, all those pictures of static that look roughly the same, that has the high entropy. If we assume that each pixel at any time is randomly (and independently) black or white, then it's clear why you never see a white screen in the static - it's simply outgunned by the stupidly large number of jumbled-up screens.

In a similar way a liquid has a higher entropy than a crystal (most of the time, there is one exception), there are more ways for a load of jumbled up particles to look like a liquid than the structured, ordered crystal. So why then does water freeze? This, as you might guess, comes down to energy.

Water molecules like to line up in a particular way that lowers their energy. When the temperature is low, energy is the most important thing and the particles will align on a macroscopic scale to make ice. When the temperature is high, entropy becomes more important: those nice crystalline configurations are washed out by the sheer number of liquid configurations.

And this is essentially why matter exists in different phases: it's a constant battle between entropy and energy (in the jargon, the system minimises its free energy, F = E - TS), and depending on which wins we see very different results.

I'll try and update with some links to better descriptions soon.

*this number is only as accurate as my bad definition of the disordered state.

Monday 12 January 2009

Busy Bees

The second installment of Swarm was on BBC 1 last night, I missed the first one but I highly recommend catching this before it goes off iPlayer.

The best bit was the fire ants making an ant raft to escape flooding. Ants are ridiculous. They also had bees trying to decide where to make a new home. The scout bees come back with reports on possible locations, conveying the message with a dance. All the scouts sell their location and the others decide who to follow. When one of them gets enough support they all up sticks and move - pretty smart.

On the same theme, I was at a talk recently about consensus decisions in sticklebacks. Apparently they're very reproducible in experimental terms. Again, they have to make a decision, this time about which way to swim. On their own they make the good decision the majority of the time (say 60%), but when they're in a group they almost always get it right. Each fish is pretty stupid; the group is less stupid.
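
This is basically Condorcet's jury theorem, and the numbers are easy to check. The big (and crude) assumption below is that the fish vote independently - real fish copy each other, which is exactly what makes the biology interesting:

from math import comb

def group_accuracy(n, p):
    # probability that a majority of n independent voters, each right
    # with probability p, picks the right answer (odd n, so no ties)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in [1, 5, 15, 51]:
    print(n, round(group_accuracy(n, 0.6), 3))
# 1: 0.6,  5: 0.683,  15: 0.787,  51: 0.926 - stupid fish, less stupid shoal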

I love problems like this because, while it is a biology problem, it's simple units (fish, ants, bees) that interact with their peers in some measurable way (well, if you're really clever and patient it's measurable). From this emerges surprising, complex behaviour that didn't exist at the level of the individual - and that's what statistical mechanics is all about.

Critical-point post is still delayed, when you're debugging code at work all day it's hard to feel motivated to come home and do the same thing. It's coming though.

UPDATE: Just seen part one - those starlings are badass. They look like drops of liquid. Just wait until I get my MD code working and I'm going to be simulating me some birds! (Not in the Weird Science sense, although that would be cool as well.)