Wednesday, 25 April 2012

The Renormalisation Group

A new video which more or less completes the critical phenomena series. Jump straight to it if you want to skip the background.

One of my favourite topics is the critical point. I've posted many times on it, so to keep this short you can go back here for a summary. In brief, we're looking at a small point on the phase diagram where two phases begin to look the same. The correlation length diverges and all hell breaks loose. Well, lots of things diverge. At the critical point all length scales are equivalent and, perhaps most remarkably, microscopic details become almost irrelevant. Different materials fit into a small number of universality classes that share broad properties such as symmetry or dimensionality.

For a long time this universal behaviour was known about, but nobody could say for sure whether it was truly universal or just a really good approximation. Then along came the renormalisation group (RG), which gave critical phenomena a strong theoretical basis and sorted everything out.

The renormalisation group usually comes at the back end of an advanced statistical mechanics course, and that is not the level I'm going for with this blog. However, when making the videos demonstrating scale invariance and universality it became apparent that, even just to make the pictures, I had to use RG – even if I didn't realise it at the time.

First I'll try to explain schematically what RG aims to do. Then I'll show how this is similar to how I make my pictures, and finally we'll get to a demonstration of RG flow at the end. I'll try not to dumb it down too much but I also want to be as quick as possible.

Renormalisation group

Let's look at how we do this with the Ising model: a simple model for a magnet where spins (magnetic dipoles) can point up or down, $latex \sigma=\pm 1$, and like to align with their neighbours through a coupling constant, $latex J$. The energy is a sum over nearest-neighbour pairs

$latex \displaystyle E=-J\sum_{\langle ij \rangle} \sigma_i \sigma_j$
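
For concreteness, here's a minimal sketch of this energy in code. This is my own illustrative function (not the code behind the videos), and it assumes periodic boundaries so that each nearest-neighbour bond is counted exactly once:

```python
import numpy as np

def ising_energy(spins, J=1.0):
    """Total energy of a 2D Ising configuration with periodic boundaries.

    spins: 2D array of +1/-1 values; J: coupling constant.
    Each nearest-neighbour pair is counted once by summing
    interactions with the right and down neighbours only.
    """
    right = np.roll(spins, -1, axis=1)
    down = np.roll(spins, -1, axis=0)
    return -J * np.sum(spins * right + spins * down)

# All spins aligned: every one of the 32 bonds on a 4x4 lattice contributes -J.
aligned = np.ones((4, 4), dtype=int)
print(ising_energy(aligned))  # -32.0
```

With all spins aligned the energy is minimised, which is why the ferromagnet wins at low temperature.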

Where RG enters is to say that, if the physics is the same on all length scales, then we should be able to rescale our problem – to cast it on a different length scale – and get back the same thing. In real-space RG this is done by blocking. We bunch a group of our spins together and form a new super spin that takes on the majority value of its constituents. It's as though the spins in the block get together and vote on how they want to be represented, and then we can deal with them as one.

Here's what it looks like. Take an Ising model configuration
Block them together

And vote

We're left with a pixelated version of the first picture. Now, here I will deviate slightly from standard RG. The next step is to ask: if these super spins were a standalone Ising model, what temperature would they have? If our initial system is right on the critical point then the renormalised (blocked) system should have the same temperature, because it should look exactly the same – scale invariance. If you're even slightly off then the apparent temperature, let's call it $latex T_{RG}$, will flow away from the critical point towards a fixed point.
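
The block-and-vote step can be sketched in a few lines. Again this is my own illustrative code, with an assumed block size of 3 (an odd block avoids tied votes):

```python
import numpy as np

def block_spins(spins, b=3):
    """Coarse-grain an Ising configuration by majority rule.

    Groups spins into b x b blocks and replaces each block by the
    sign of its sum (the block's 'vote'). With an even b, np.sign
    can return 0 for a tied block, so an odd b is safer.
    """
    n = spins.shape[0]
    assert n % b == 0, "lattice size must be divisible by the block size"
    blocks = spins.reshape(n // b, b, n // b, b).sum(axis=(1, 3))
    return np.sign(blocks)

rng = np.random.default_rng(0)
config = rng.choice([-1, 1], size=(9, 9))   # a random (high-T) configuration
super_spins = block_spins(config, b=3)      # a 3x3 lattice of super spins
```

Iterating this map is exactly the repeated zooming-out shown in the video.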

These fixed points are the ferromagnet (all spins the same, $latex T_{RG}=0$) or the paramagnet (completely random, $latex T_{RG} \rightarrow \infty$) as shown below.

Normally RG is done in terms of coupling constants rather than temperature. However, I think in our case temperature is more intuitive.

Zooming out

By now the link between RG and the pictures I make may already be clear. The configurations I will show below are made of something like $latex 10^{10}$ spins. Clearly I can't make a 10-gigapixel JPEG, so I have to compress the data. In fact the way I do it is an almost identical blocking process: spins are bundled into $latex b \times b$ blocks and I use a contrasting function (a fairly sharp tanh) that is not far at all from the majority rule described above.

If we start by zooming in to a 768x768 subsection then each pixel is precisely one spin. As we zoom out we eventually need to start blocking spins together. In the video below there are three systems: one ever-so-slightly below $latex T_c$, one ever-so-slightly above $latex T_c$ and one right on the money. At maximum zoom they all look pretty much the same. If you had to guess their temperatures you'd say they're all critical.

As we start to zoom out we can see structure on longer length scales, and the apparent temperatures start to change; in fact they flow towards the fixed-point phases. Video below – I recommend you switch on HD and watch it full screen.

So there it is. RG in action. If you're not precisely on the critical point then you will eventually find a length scale where you clearly have a ferromagnet or a paramagnet. At the critical point itself you can zoom out forever and it will always look the same. The renormalisation group is a really difficult subject, but I hope this visualisation can at least give a feeling for what's going on, even if the mathematical detail is a bit more challenging.

Wednesday, 18 April 2012

The thermodynamic limit

This post has been at the back of my mind for a while and written in small, most likely disjoint, pieces. I wanted to think about connecting some of the more formal side of statistical mechanics to our everyday intuitions. It's probably a bit half-baked, but this is a blog, not a journal, so I'll just write a follow-up if I think of anything.

I'm often accused of living in a rather idealised world called the thermodynamic limit.

This is completely true.

To see why this is a good thing or a bad thing I should probably say something about what I think it is. I'll start at the colloquial end and work up. First, let's say that in the thermodynamic limit everything is in equilibrium.

Nothing ever changes around here

If you put enough stuff in a jar, keep it sealed in a room that stays the same temperature, and give it enough time, then it will eventually end up in its equilibrium state. One could argue that the real equilibrium is the grey mush at the end of the universe, so clearly I'm going for some time scale that's enough to let everything in the jar settle down but not so long that I get bored waiting. For atoms and molecules this usually gives us a window between roughly a picosecond (10^-12 seconds) and, let's say, 100 seconds (I get bored pretty easily). Once it is in equilibrium the contents of the jar will stay in the same state forever – or until it gets kicked over. The point is that in equilibrium nothing changes.

Or does it? To our eyes we may see no change, but the atoms inside the jar will be wriggling furiously, perhaps even diffusing over great distances. How could such great change on the small scale be consistent with eternal boredom on the macroscopic length scale? The answer has two parts. Firstly, the atoms that make up the world are all frighteningly similar. So if one diffuses away it will quickly be replaced by an indistinguishable substitute. The second part motivates the "enough stuff" part of the previous paragraph.

Listen to a group of people talking and the conversation will ebb and flow, and sometimes go completely quiet. Sit in a busy cafe and all you can hear is general noise. A sort of hubbub that you can easily identify as conversation, maybe you can even get a feel for the mood, but you can't tell what anyone is saying. In the thermodynamic limit there are so many atoms that all we can see is a sort of average behaviour. We can tell what sort of state it is (a liquid, a solid, a magnet – the mood) but the individuals are lost.

So as we lumber towards a stricter definition of the thermodynamic limit we should think about what we mean by a state. I've talked about this before. In statistical mechanics there is a huge difference between a 'state' and a 'configuration'. By configuration we mean the exact position (and sometimes velocity) of every particle in the jar. We're doing this classically so we won't worry about uncertainty. A state, in the stat-mech sense, is an ensemble of configurations that share some macroscopic property. For example their density, or magnetisation, or crystal structure.

To be the equilibrium state, the corresponding configurations must satisfy at least one of two criteria (ideally both). Firstly, they should have a low energy compared to the other configurations: if particles attract they should be close, if dipoles like to point the same way they should try to do that. This is intuitive – balls roll downhill, systems like to lower their potential energy. Secondly, there should be a lot of them. An awful lot of them. This is often referred to as entropy, but really I'm just saying you need to buy a lot of tickets to guarantee winning a prize.

A bit more mathematical

This combination of potential energy, U, and entropy, S, is known as the free energy. You can write it down as:

$latex \displaystyle F = U - TS$

High temperatures, T, favour high entropy (lots of configurations); low temperatures favour low energy. In statistical mechanics, unlike normal mechanics, systems lower their free energy and not just their energy. The state with the lowest free energy is the equilibrium state. No exception.
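
To make the energy–entropy competition concrete, here's a toy two-state example with made-up numbers (units where Boltzmann's constant is 1): an ordered state with low energy but only one configuration, against a disordered state with higher energy but a million configurations.

```python
import numpy as np

# Toy illustration with invented numbers: two competing states.
U_ordered, W_ordered = 0.0, 1.0          # energy, number of configurations
U_disordered, W_disordered = 10.0, 1e6

def free_energy(U, W, T):
    """F = U - T*S, with entropy S = ln(W)."""
    return U - T * np.log(W)

for T in (0.1, 1.0):
    F_o = free_energy(U_ordered, W_ordered, T)
    F_d = free_energy(U_disordered, W_disordered, T)
    winner = "ordered" if F_o < F_d else "disordered"
    print(f"T={T}: equilibrium state is {winner}")
# T=0.1: equilibrium state is ordered
# T=1.0: equilibrium state is disordered
```

At low temperature energy wins and the ordered state is stable; raise the temperature and the entropy term takes over – a cartoon of melting.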

The aim of statistical mechanics is to write down equations that take interactions at the individual-particle level and relate them to the probability of finding the particles in a particular configuration. In the mathematical sense the final step is known as "taking the thermodynamic limit", and this means taking the number of particles in your equation, N, to infinity.

It is these infinities that make states formally stable and give us phase transitions: infinitesimal changes in conditions, such as temperature, can lead to dramatic changes in the equilibrium state. Of course there are not infinitely many particles in the real world. However, with roughly 10^24 water molecules in my cup of tea, it's a pretty good approximation.
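
A quick numerical sketch of why "a lot" is nearly as good as "infinite" (illustrative numbers only): for N independent two-state spins, relative fluctuations in the mean magnetisation shrink like one over the square root of N, so macroscopic properties become sharp.

```python
import numpy as np

# Fluctuations of the mean magnetisation of N independent +/-1 spins,
# estimated from 500 random samples at each size N.
rng = np.random.default_rng(1)
for N in (100, 10_000, 1_000_000):
    heads = rng.binomial(N, 0.5, size=500)   # spin-up counts per sample
    m = (2 * heads - N) / N                  # mean magnetisation per sample
    print(f"N={N:>9}: std of magnetisation ~ {m.std():.5f}")
```

The standard deviation falls by a factor of 10 for every factor of 100 in N; at 10^24 particles the averages are, for all practical purposes, exact.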

To be in the thermodynamic limit, therefore, we need an infinite amount of stuff sitting for an infinite amount of time. The system must be able to explore all configurations to decide which state to settle on. You can see where we're going to run into problems.

Back to the real world

Getting back to the start of this post, why are my accusers being so accusatory? Most likely because the real world, for the most part, is massively out of equilibrium. From stars and galaxies, down to swimming bacteria. Then there are materials, such as glasses, where the relaxation time has become so long that the equilibrium state can't be reached in times longer than the age of the universe. Or some say forever – but I'll come back to ergodicity at a later date.

In colloid land things get quite interesting. As mentioned in a previous post, colloids that are big enough to easily see take about a second to move around enough to start equilibrating. That's very close to me getting bored, so if it's a dense system or there are strong attractions one can expect colloids to quickly fall out of equilibrium.

The theoretical framework for life out of equilibrium is hugely more complicated than at equilibrium. Even quantities such as temperature start to lose their meaning in the strictest sense. In fact, while people are working hard and no doubt making progress, it's safe to say that it will never be as elegant – or let's say as easy – as what we have in the thermodynamic limit.

All is not lost

So does this mean everything we study in equilibrium is useless, given that the thermodynamic limit clearly doesn't exist? Well, it's true that nothing in the universe meets the strict definition of infinite time and infinite stuff, but in reality it's usually alright to have a lot of stuff and enough time. In fact we regularly study systems with only hundreds of particles and correctly predict the phase behaviour. It's usually the 'enough time' part that is the problem.

Knowing what the equilibrium state should be is a bit like knowing the destination but not the journey. In many, many cases this is enough: atoms can rearrange themselves so quickly that it doesn't really matter how they get where they're going. Of course, in many cases that we worry about today we need to know both where the system is going and how it will get there. It could be that on the way to the true equilibrium state we get stuck in a state with low, but not the lowest, free energy – a bit like settling on a favourite restaurant without walking to the end of the street and trying them all. In that case we can perhaps plot a different route through the phase diagram with controls such as pressure and temperature.

Increasingly, these pathways to self-assembly are the focus for many in the statistical mechanics community. We want to design new materials with exotic thermodynamic ground states (equilibrium states), so it is really important to know what will happen in the thermodynamic limit – we will always need phase diagrams. But colloids are pretty impatient and will easily settle for the wrong state, so we also need to think carefully about how we will get to the ground state. It's an exciting time right now because experimentally we can even mess around with the fundamental interactions between particles in real time; numbers that we usually take as constants can suddenly be changed. It really is possible to control every stage of the assembly process, from the start all the way to the end.

Friday, 13 April 2012

Less ill

My sprightly return has been a bit slower than I thought. However, thanks to the lovely people who work in the Dutch medical system, I'm pretty much back.

Long winded post on the thermodynamic limit coming very shortly, then a follow up on ergodicity that I've been toying with. I've also got a new spin on the criticality videos that demonstrates the renormalisation group in action – I'm really really pleased with this video. Oh, and I'm at a conference next week so I'll round up some of the nice talks. There are a couple on critical Casimir forces so I may be compelled to put something down about that.

So lots coming up.