Monday, 7 May 2012

Journal Club: A New Blog

I've just started a new blog www.scmjournalclub.org. It's definitely in what you'd call the beta phase right now. I will certainly be changing the layout and gradually adding more permanent content over the next few weeks.

Contributions welcome to submissions@scmjournalclub.org.

While I do sometimes get a bit technical, Kinetically Constrained is hopefully of interest to people inside and outside of the field. The journal club, by contrast, is aimed at people working in the area of soft matter and statistical mechanics. In particular I want it to be useful for postgraduate students who want help understanding papers they might otherwise find a bit impenetrable.

How it will all work will hopefully evolve. I hope one day enough people check it out that the following scenario happens. A PG student presents, as best they can, a paper they are having trouble understanding. A comment thread follows and the problems get sorted out. Everyone wins.

I also suspect that hundreds of journal clubs happen each week in different universities. While I understand some people might want to keep theirs private, those who don't mind could put their presentations on SCM journal club, where they can benefit even more people.

To kick things off I've started with a recent paper on the arXiv by Andrés Santos on one of my favourite topics – hard spheres.
Brief Summary
In liquid-state theory the hard sphere equation of state is of particular importance because it is a fantastic reference system for a whole host of molecular and, in particular, colloidal liquids. The hard sphere equation of state (EoS) tells you what pressure you need to compress your spheres to a given density. With an analytical form for the EoS one can calculate any thermodynamic property one desires.
Percus-Yevick (PY) is a way to close the Ornstein-Zernike (OZ) equation – an exact relation between correlation functions – and is usually solved by either the compressibility route or the virial route. You're basically choosing how your approximation enters. Here Santos has taken a different route, following the chemical potential, and it gives a slightly different closure to OZ.
Carnahan-Starling is an incredibly simple EoS for hard spheres which is in common use (fluid phase). It can be written as a 1/3-2/3 mix of the virial and compressibility PY routes. In a similar way Santos writes a 2/5-3/5 mix of compressibility and chemical potential routes and gets a similarly simple expression – which is ever-so-slightly better than Carnahan-Starling.
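For the curious, the mixing is easy to check numerically. These are the standard textbook expressions (written in terms of the packing fraction, not code from the Santos paper); the 1/3 virial plus 2/3 compressibility combination reproduces Carnahan-Starling exactly:

```python
def Z_py_virial(eta):
    """PY compressibility factor Z = p/(rho kT), virial route."""
    return (1 + 2*eta + 3*eta**2) / (1 - eta)**2

def Z_py_comp(eta):
    """PY compressibility factor, compressibility route."""
    return (1 + eta + eta**2) / (1 - eta)**3

def Z_cs(eta):
    """Carnahan-Starling equation of state."""
    return (1 + eta + eta**2 - eta**3) / (1 - eta)**3

eta = 0.3  # a typical fluid-phase packing fraction
mix = Z_py_virial(eta) / 3 + 2 * Z_py_comp(eta) / 3
print(Z_cs(eta), mix)  # identical: CS is exactly this 1/3-2/3 mix
```
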
I'm more than happy to take contributions. I think it's nicer if people say who they are but I'll hold back the name if that's the barrier to submitting (provided it's not an anonymous destruction of a rival's paper). You can submit via submissions@scmjournalclub.org. For interested regulars I can look into direct posting via blogger.

Wednesday, 25 April 2012

The Renormalisation Group

A new video which more or less completes the critical phenomena series. Jump straight to it if you want to skip the background.

One of my favourite topics is the critical point. I've posted many times on it, so to keep this short you can go back here for a summary. In brief, we're looking at a small point on the phase diagram where two phases begin to look the same. The correlation length diverges and all hell breaks loose. Well, lots of things diverge. At the critical point all length scales are equivalent and, perhaps most remarkably, microscopic details become almost irrelevant. Different materials fit into a small number of universality classes that share broad properties such as symmetry or dimensionality.

For a long time this universal nature was known, but no one could say for sure whether it was truly universal or just a really good approximation. Then along came the renormalisation group (RG), which gave a strong theoretical basis to critical phenomena and sorted everything out.

The renormalisation group usually comes at the back end of an advanced statistical mechanics course, and that is not the level I'm going for with this blog. However, when making the videos demonstrating scale invariance and universality it became apparent that, just to make the pictures, I had to use RG – even if I didn't realise it.

First I'll try to explain schematically what RG aims to do. Then I'll show how this is similar to how I make my pictures, and finally we'll get to a demonstration of RG flow at the end. I'll try not to dumb it down too much but I also want to be as quick as possible.

Renormalisation group

Let's look at how we do this with the Ising model: a simple model for a magnet where spins (magnetic dipoles) can point up or down, $latex \sigma=\pm 1$, and like to align with their neighbours through a coupling constant, $latex J$. The energy is a sum over nearest-neighbour pairs

$latex \displaystyle E=-J\sum_{\langle ij \rangle} \sigma_i \sigma_j$

Where RG enters is to say that, if the physics is the same on all length scales, then we should be able to rescale our problem, to cast it on a different length scale, and get back the same thing. In real-space RG this is done by blocking. We bunch a group of our spins up together and form a new super spin that takes on the majority value of its constituents. It's as though the spins in the block get together and vote on how they want to be represented, and then we can deal with them as one.

Here's what it looks like. Take an Ising model configuration
Block them together

And vote

We're left with a pixelated version of the first picture. Now here I will deviate slightly from standard RG. The next step is to ask, if these super spins were a standalone Ising model, what temperature would they have? If our initial system is right on the critical point then the renormalised (blocked) system should have the same temperature because it should look exactly the same – scale invariance. If you're even slightly off then the apparent temperature, let's call it $latex T_{RG}$, will flow away from the critical point towards a fixed point.
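The blocking-and-voting step is easy to sketch in code. Here's a minimal toy version in Python (my own illustration, not the script used for the videos); an odd block size avoids tied votes:

```python
import numpy as np

def block_spins(spins, b=3):
    """One real-space RG step by majority rule: bundle the spins into
    b x b blocks and let each block 'vote' on its super-spin value."""
    L = spins.shape[0]
    assert L % b == 0, "lattice size must be divisible by the block size"
    # reshape so axes 1 and 3 run over the interior of each block
    block_sums = spins.reshape(L // b, b, L // b, b).sum(axis=(1, 3))
    return np.where(block_sums >= 0, 1, -1)

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(9, 9))  # a small high-temperature configuration
super_spins = block_spins(spins, b=3)     # 9x9 -> 3x3 super spins
print(super_spins)
```
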

These fixed points are the ferromagnet (all spins the same, $latex T_{RG}=0$) or the paramagnet (completely random, $latex T_{RG} \rightarrow \infty$) as shown below.


Normally RG is done in terms of coupling constants rather than temperature. However, I think in our case temperature is more intuitive.

Zooming out

By now the link between RG and the pictures I make may already be clear. The configurations I will show below are made of something like $latex 10^{10}$ spins. Clearly I can't make a 10-gigapixel JPEG, so I have to compress the data. In fact the way I do it is an almost identical blocking process. Spins are bundled into $latex b \times b$ blocks and I use a contrasting function (a fairly sharp tanh) that is not far at all from the majority rule described above.
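A minimal sketch of this display-oriented coarse-graining; the sharpness parameter k here is made up for illustration, and this is not the actual rendering script:

```python
import numpy as np

def contrast_block(spins, b=8, k=4.0):
    """Coarse-grain a spin configuration for display: average each
    b x b block, then push the block magnetisation through a sharp
    tanh. Large k approaches a hard majority rule; the return values
    in (-1, 1) map naturally onto a grey scale."""
    L = spins.shape[0]
    assert L % b == 0
    m = spins.reshape(L // b, b, L // b, b).mean(axis=(1, 3))
    return np.tanh(k * m)

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(64, 64))
pixels = contrast_block(spins, b=8)  # 64x64 spins -> 8x8 grey pixels
```
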

If we start by zooming in to a 768x768 subsection then each pixel is precisely one spin. As we zoom out we eventually need to start blocking spins together. In the video below there are three systems: one ever-so-slightly below $latex T_c$, one ever-so-slightly above $latex T_c$ and one right on the money. At maximum zoom they all look pretty much the same. If you had to guess their temperatures you'd say they're all critical.

As we start to zoom out we can see structure on larger length scales, and the apparent temperatures start to change; in fact they flow towards the fixed-point phases. Video below – I recommend you switch on HD and watch it full screen.



So there it is. RG in action. If you're not precisely on the critical point then you will eventually find a length scale where you clearly have a ferromagnet or a paramagnet. At the critical point itself you can zoom out forever and it will always look the same. The renormalisation group is a really difficult subject, but I hope this visualisation can at least give a feeling for what's going on, even if the mathematical detail is a bit more challenging.



Wednesday, 18 April 2012

The thermodynamic limit

This post has been at the back of my mind for a while, and written in small, most likely disjointed pieces. I wanted to think about connecting some of the more formal side of statistical mechanics to our everyday intuitions. It's probably a bit half baked, but this is a blog not a journal, so I'll just write a follow-up if I think of anything.

I'm often accused of living in a rather idealised world called the thermodynamic limit.

This is completely true.

To see why this is a good thing or a bad thing I should probably say something about what I think it is. I'll start at the colloquial end and work up. First, let's say that in the thermodynamic limit everything is in equilibrium.

Nothing ever changes around here

If you put enough stuff in a jar, keep it sealed in a room that stays the same temperature, and give it enough time, then it will eventually end up in its equilibrium state. One could argue that the real equilibrium is the grey mush at the end of the universe, so clearly I'm going for some time scale that's enough to let everything in the jar settle down but not so long that I get bored waiting for it. For atoms and molecules this usually gives us a window between roughly a picosecond (10^-12 seconds) and, let's say, 100 seconds (I get bored pretty easily). Once it is in equilibrium the contents of the jar will stay in the same state forever – or until it gets kicked over. The point is that in equilibrium nothing changes.

Or does it? To our eyes we may see no change, but the atoms inside the jar will be wriggling furiously, perhaps even diffusing over great distances. How could such great change on the small scale be consistent with eternal boredom on the macroscopic length scale? The answer has two parts. Firstly, the atoms that make up the world are all frighteningly similar. So if one diffuses away it will quickly be replaced by an indistinguishable substitute. The second part motivates the "enough stuff" part of the previous paragraph.

Listen to a group of people talking and the conversation will ebb and flow, and sometimes go completely quiet. Sit in a busy cafe and all you can hear is general noise. A sort of hubbub that you can easily identify as conversation, maybe you can even get a feel for the mood, but you can't tell what anyone is saying. In the thermodynamic limit there are so many atoms that all we can see is a sort of average behaviour. We can tell what sort of state it is (a liquid, a solid, a magnet – the mood) but the individuals are lost.

So as we lumber towards a stricter definition of the thermodynamic limit we should think about what we mean by a state. I've talked about this before. In statistical mechanics there is a huge difference between a 'state' and a 'configuration'. By configuration we mean the exact position (and sometimes velocity) of every particle in the jar. We're doing this classically so we won't worry about uncertainty. A state, in the stat-mech sense, is an ensemble of configurations that share some macroscopic property. For example their density, or magnetisation, or crystal structure.

To be the equilibrium state, the corresponding configurations must satisfy at least one of two criteria (ideally both). Firstly, they should have a low energy compared to the other configurations. If particles attract they should be close; if dipoles like to point the same way they should do that. This is intuitive: balls roll downhill, systems like to lower their potential energy. Secondly, there should be a lot of them. An awful lot of them. This is often referred to as entropy, but really I'm just saying you need to buy a lot of tickets to guarantee winning a prize.

A bit more mathematical

This combination of potential energy, U, and entropy, S, is known as the free energy. You can write it down as:

$latex \displaystyle F = U - TS$

High temperatures, T, favour high entropy (lots of configurations); low temperatures favour low energy. In statistical mechanics, unlike normal mechanics, systems lower their free energy and not just their energy. The state with the lowest free energy is the equilibrium state. No exception.
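A toy illustration of this free-energy competition, with made-up numbers: state A is ordered (low energy, low entropy), state B is disordered (higher energy, higher entropy), and F = U - TS decides the winner at each temperature.

```python
# Hypothetical two-state system; the numbers are arbitrary.
U_A, S_A = -1.0, 0.0  # ordered: low energy, low entropy
U_B, S_B = 0.0, 1.0   # disordered: higher energy, high entropy

def free_energy(U, S, T):
    """The free energy F = U - TS."""
    return U - T * S

for T in (0.5, 2.0):
    F_A = free_energy(U_A, S_A, T)
    F_B = free_energy(U_B, S_B, T)
    winner = "A (ordered)" if F_A < F_B else "B (disordered)"
    print(T, winner)  # A wins at low T, B wins at high T
```

With these numbers the two free energies cross at T = 1: below that the energy term dominates, above it the entropy term takes over.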

The aim with statistical mechanics is to write down equations that take interactions at the individual-particle level and relate them to the probability of finding the particles in a particular configuration. In the mathematical sense the final step is known as "taking the thermodynamic limit", and this means taking the number of particles in your equation, N, to infinity.

It is these infinities that make states formally stable, and give us phase transitions. Infinitesimal changes in conditions, such as temperature, can lead to dramatic changes in the equilibrium state. Of course there aren't infinitely many particles in the real world. However, with roughly 10^24 water molecules in my cup of tea it's a pretty good approximation.
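A quick toy demonstration of why large N is such a good stand-in for infinity (the numbers here are arbitrary): the relative fluctuations of a macroscopic observable, here the mean magnetisation of N independent up/down spins, shrink like 1/sqrt(N), so bulk properties look perfectly sharp long before N reaches 10^24.

```python
import numpy as np

rng = np.random.default_rng(1)

stds = []
for N in (100, 10_000):
    # 500 independent samples of the mean of N random +/-1 spins
    samples = rng.choice([-1, 1], size=(500, N)).mean(axis=1)
    stds.append(samples.std())
    print(N, stds[-1])  # roughly 1/sqrt(N)
```
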

To be in the thermodynamic limit, therefore, we need an infinite amount of stuff sitting for an infinite amount of time. The system must be able to explore all configurations to decide which state to settle on. You can see where we're going to run into problems.

Back to the real world

Getting back to the start of this post, why are my accusers being so accusatory? Most likely because the real world, for the most part, is massively out of equilibrium. From stars and galaxies, down to swimming bacteria. Then there are materials, such as glasses, where the relaxation time has become so long that the equilibrium state can't be reached in times longer than the age of the universe. Or some say forever – but I'll come back to ergodicity at a later date.

In colloid land things get quite interesting. As mentioned in a previous post, colloids that are big enough to easily see take about a second to move around enough to start equilibrating. That's very close to me getting bored, so if it's a dense system or there are strong attractions one can expect colloids to quickly fall out of equilibrium.

The theoretical framework for life out of equilibrium is hugely more complicated than at equilibrium. Even quantities such as temperature start to lose their meaning in the strictest sense. In fact, while people are working hard and no doubt making progress, it's safe to say that it will never be as elegant – or let's say as easy – as what we have in the thermodynamic limit.

All is not lost

So does this mean everything we study in equilibrium is useless, given that the thermodynamic limit clearly doesn't exist? Well, it's true that nothing in the universe meets the strict definition of infinite time and infinite stuff, but in reality it's usually alright to have a lot of stuff and enough time. In fact we regularly study systems with only hundreds of particles and correctly predict the phase behaviour. It's usually the enough-time part that is the problem.

Knowing what the equilibrium state should be is a bit like knowing the destination but not the journey. In many many cases this is enough, atoms can rearrange themselves so quickly that it doesn't really matter how they get where they're going. Of course in many cases that we worry about today we need to know both where the system is going, and how it will get there. It could be that on the way to the true equilibrium state we get stuck in a state with low, but not the lowest, free energy. A bit like settling on your favourite restaurant before going to the end of the street and trying them all. In this case we can maybe plot a different route through the phase diagram with controls such as pressure and temperature.

Increasingly these pathways to self-assembly are the focus for many in the statistical mechanics community. We want to design new materials with exotic thermodynamic ground states (equilibrium states), so it is really important to know what will happen in the thermodynamic limit – we will always need phase diagrams. But colloids are pretty impatient and will easily settle for the wrong state, so we also need to think carefully about how we will get to the ground state. It's an exciting time right now because experimentally we're even able to mess around with the fundamental interactions between particles in real time; numbers that we usually take as constants can suddenly be changed. It really is possible to control every stage of the assembly process from the start all the way to the end.

Friday, 13 April 2012

Less ill

My spritely return has been a bit slower than I thought. However, thanks to the lovely people who work in the Dutch medical system I'm pretty much back.

Long winded post on the thermodynamic limit coming very shortly, then a follow up on ergodicity that I've been toying with. I've also got a new spin on the criticality videos that demonstrates the renormalisation group in action – I'm really really pleased with this video. Oh, and I'm at a conference next week so I'll round up some of the nice talks. There are a couple on critical Casimir forces so I may be compelled to put something down about that.

So lots coming up.

Tuesday, 6 March 2012

been ill

Apologies for yet another reader-losing break in posts. I've been ill. Nothing terrible, but I'm not getting as far down my priority list as I might otherwise.

Hopefully a spritely return in the next few weeks.

Saturday, 21 January 2012

Just hurry up and sit down!

As a semi-frequent flyer, and incredibly impatient stand-behinderer, I couldn't resist linking to this – Time needed to board an airplane: A power law and the structure behind it, from a Norwegian group, Vidar Frette and Per Hemmer.

Boarding strategy is of great importance to airlines, where the turnaround time of planes – especially short haul – can make a real dent in profits. For the authors of this paper, however, it seems they just think it's a neat model for testing 1D problems where the particles are distinguishable, rather than the more common indistinguishable particles. In a traffic model the cars are usually identical, whereas here the passengers have a specific seat booking. Statistically this makes a difference.

Of course many people do look at specific strategies. For example here, it seems that it's difficult to think up a strategy that beats random loading. One would think that loading back-to-front would be better but this is not the case. A quick google search finds this nice page from Menkes van den Briel. There you can see videos of all the different strategies.

Unfortunately the best strategy seems to involve seating people in order of window/middle/aisle. Not great if you're sitting next to your kids.

All of which did remind me that it is much quicker boarding when you don't have seat bookings. When I fly to England using a certain orange-themed airline, that doesn't book seats, there's an initial mêlée followed by reasonably rapid sitting down. On a certain royal blue-themed airline it takes forever for a plane half the size to get sat down.

My suggestion is that I should be allowed to starting poking, with increasing frequency and verbal abuse, anyone that I deem to be taking too long to put their bag away.

Sunday, 15 January 2012

Clustering in sea-ice floes

I started writing this post as a long winded account of the difference between equilibrium and non-equilibrium statistical mechanics. It turns out that that is hard to discuss without waffling on, so instead I will just talk about an interesting paper from the world out of equilibrium - which is most of the real world.

I've been walking around with this interesting paper, "Molecular-dynamics simulation of clustering processes in sea-ice floes" by Agnieszka Herman, in my bag since November. It was picked up in the spotlight section of Phys. Rev. E (loosely the stat-mech/complexity section). My attention was grabbed by the idea that simple ideas in granular gases could hold sway in the icy seas of the Arctic.

Marginal ice zone

Roughly speaking, it's always icy at the top of the earth and then as you go south it turns into ocean. Around the transition between icy and not icy (only the best technical explanations for my readers) is the so called marginal ice zone (MIZ). This is where bits of ice break away from the main ice pack and float around in the sea. Understanding how this ice moves around, and the effect of external forcing, is important if we're to best understand the impact of global climate change.

The ice-floes studied in this paper are in an intermediate region between densely packed and very low density. The sizes of the ice fragments are roughly distributed with a power-law tail and they float about and hit each other inelastically. It is here that one can make the link to something closer to my own field, it is a 2D granular gas.

Granular gases

In the world of the small everything is constantly being battered by random thermal noise. It's so random that it, in fact, becomes rather predictable and Boltzmann distributed. In the world of a bit bigger, this thermal noise doesn't really affect the individual particles any more and we're now dealing with grains. I've talked about this before in the context of colloids – the last bastion of thermodynamics before everything goes granular.

In a granular context the ice fragments are particles that move ballistically in between collisions, and when they collide energy is lost. This system of dissipative colliding grains is known to have interesting dynamics, including the clustering of particles and other complicated correlations.

The really nice thing about this paper is that Agnieszka Herman has simulated such a granular gas with realistic numbers for all sorts of effects – friction, wind, currents, restitution coefficient (how inelastic it is) – to see if it can reproduce what is observed in the oceans. This cannot have been easy to set up!
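To give a flavour of the granular-gas ingredient (this is emphatically not Herman's model, which includes wind, currents and realistic floe geometry), here is a mean-field toy with made-up numbers: random pairwise inelastic collisions, with restitution coefficient e, drain the velocity variance – the granular "temperature" – while conserving momentum.

```python
import numpy as np

rng = np.random.default_rng(2)

def collide(v1, v2, e=0.8):
    """Inelastic collision of two equal-mass grains along one axis:
    momentum is conserved, and the relative velocity is reversed and
    shrunk by the restitution coefficient e (e=1 would be elastic)."""
    dv = v1 - v2
    return v1 - 0.5 * (1 + e) * dv, v2 + 0.5 * (1 + e) * dv

# Collide random pairs and watch the granular temperature decay.
v = rng.normal(size=1000)  # initial velocities
p0, T0 = v.sum(), v.var()  # initial momentum and granular temperature
for _ in range(20_000):
    i, j = rng.integers(1000, size=2)
    if i != j:
        v[i], v[j] = collide(v[i], v[j])
print(T0, v.var())  # the variance has decayed substantially
```
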

Comparing to real life

The image below is the sort of sea ice clustering that is seen in the MIZ. One sees that the smaller floes tend to accumulate on one side of the larger floes.

This is also seen in the simulation results. This is because, as well as losing energy in collisions, the floes are being driven by wind and currents. The larger floes catch up with the smaller ones, pushing them along for a while until they fall off. The colour bar shows the velocities of the different floes.
At higher densities – more collisions – you can still see the gaps behind the large floes, although the distribution of velocities is now narrower.
I don't know how rigid this system is, it'd be interesting to know if there's a breakout point where the ice floes can suddenly escape. It's really neat to think that you can connect such different systems, not to mention such different scales, and still be able to say something sensible.

Big thanks to Agnieszka for providing the colour images. Images, copyright APS, are reproduced with permission from the paper Phys. Rev. E 84, 056104 (2011).