
Wednesday, 25 April 2012

The Renormalisation Group

A new video which more or less completes the critical phenomena series. Jump straight to it if you want to skip the background.

One of my favourite topics is the critical point. I've posted many times on it, so to keep this short you can go back here for a summary. In brief, we're looking at a small point on the phase diagram where two phases begin to look the same. The correlation length diverges and all hell breaks loose. Well, lots of things diverge. At the critical point all length scales are equivalent and, perhaps most remarkably, microscopic details become almost irrelevant. Different materials fit into a small number of universality classes that share broad properties such as symmetry or dimensionality.

For a long time this universal behaviour was known about, but nobody could say for sure whether it was truly universal or just a really good approximation. Then along came the renormalisation group (RG), which gave a strong theoretical basis to critical phenomena and sorted everything out.

The renormalisation group usually sits at the back end of an advanced statistical mechanics course, and that is not the level I'm going for with this blog. However, when making the videos demonstrating scale invariance and universality it became apparent that, just to make the pictures, I had to use RG – even if I didn't realise it at the time.

First I'll try to explain schematically what RG aims to do. Then I'll show how this is similar to how I make my pictures, and finally we'll get to a demonstration of RG flow at the end. I'll try not to dumb it down too much but I also want to be as quick as possible.

Renormalisation group

Let's look at how we do this with the Ising model: a simple model for a magnet where spins (magnetic dipoles) can point up or down, $latex \sigma=\pm 1$, and like to align with their neighbours through a coupling constant, $latex J$. The energy is a sum over nearest-neighbour pairs

$latex \displaystyle E=-J\sum_{\langle ij \rangle} \sigma_i \sigma_j$

Where RG enters is to say that, if the physics is the same on all length scales, then we should be able to rescale our problem, to cast it on a different length scale, and get back the same thing. In real-space RG this is done by blocking, as in the sketch below. We bunch a group of our spins up together and form a new super spin that takes on the majority value of its constituents. It's as though the spins in the block get together and vote on how they want to be represented, and then we can deal with them as one.
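
To make the blocking concrete, here's a minimal sketch in Python of the majority-rule step (my own illustration, not the code behind the actual pictures; the array layout and the random tie-breaking are my assumptions):

    import numpy as np

    def block_spins(spins, b, rng=None):
        """Coarse-grain an L x L array of +/-1 spins into (L/b) x (L/b)
        super spins by majority vote within each b x b block."""
        rng = rng or np.random.default_rng()
        L = spins.shape[0]                   # assumes b divides L exactly
        sums = spins.reshape(L // b, b, L // b, b).sum(axis=(1, 3))
        ties = sums == 0                     # only possible for even b
        sums[ties] = rng.choice([-1, 1], size=int(ties.sum()))
        return np.sign(sums)

Applying block_spins over and over is exactly the repeated zooming out you'll see in the video.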

Here's what it looks like. Take an Ising model configuration
Block them together

And vote

We're left with a pixelated version of the first picture. Now here I will deviate slightly from standard RG. The next step is to ask: if these super spins were a standalone Ising model, what temperature would they have? If our initial system is right on the critical point then the renormalised (blocked) system should have the same temperature because it should look exactly the same – scale invariance. If you're even slightly off then the apparent temperature, let's call it $latex T_{RG}$, will flow away from the critical point towards a fixed point.

These fixed points are the ferromagnet (all spins the same, $latex T_{RG}=0$) or the paramagnet (completely random, $latex T_{RG} \rightarrow \infty$) as shown below.


Normally RG is done in terms of coupling constants rather than temperature. However, I think in our case temperature is more intuitive.

Zooming out

By now the link between RG and the pictures I make may already be clear. The configurations I will show below are made of something like $latex 10^{10}$ spins. Clearly I can't make a 10-gigapixel JPEG, so I have to compress the data. In fact the way I do it is an almost identical blocking process. Spins are bundled into $latex b \times b$ blocks and I use a contrast function (a fairly sharp tanh) that is not far away at all from majority rule as described above.
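
In code, the compression step might look something like this (a sketch; the exact contrast function and sharpness I really use are assumptions here):

    import numpy as np

    def block_to_grey(spins, b, sharpness=4.0):
        """Average b x b blocks of +/-1 spins, then push the block
        magnetisation through a sharp tanh - close to a majority vote."""
        L = spins.shape[0]
        m = spins.reshape(L // b, b, L // b, b).mean(axis=(1, 3))
        return 0.5 * (1.0 + np.tanh(sharpness * m))   # grey level, 0 to 1

The sharper the tanh, the closer this gets to the majority rule above.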

If we start by zooming in to a 768x768 subsection then each pixel is precisely one spin. As we zoom out we eventually need to start blocking spins together. In the video below there are three systems: one ever-so-slightly below $latex T_c$, one ever-so-slightly above $latex T_c$ and one right on the money. At maximum zoom they all look pretty much the same. If you had to guess their temperatures you'd say they're all critical.

As we start to zoom out we can see structure on larger length scales, and the apparent temperatures start to change – in fact they flow towards the fixed-point phases. Video below; I recommend you switch on HD and watch it full screen.



So there it is. RG in action. If you're not precisely on the critical point then you will eventually find a length scale where you clearly have a ferromagnet or a paramagnet. At the critical point itself you can zoom out forever and it will always look the same. The renormalisation group is a really difficult subject, but I hope this visualisation can at least give a feeling for what's going on, even if the mathematical detail is a bit more challenging.



Wednesday, 18 April 2012

The thermodynamic limit

This post has been at the back of my mind for a while and written in small, most likely disjoint pieces. I wanted to think about connecting some of the more formal side of statistical mechanics to our everyday intuitions. It's probably a bit half-baked, but this is a blog not a journal, so I'll just write a follow-up if I think of anything.

I'm often accused of living in a rather idealised world called the thermodynamic limit.

This is completely true.

To see why this is a good thing or a bad thing I should probably say something about what I think it is. I'll start at the colloquial end and work up. First, let's say that in the thermodynamic limit everything is in equilibrium.

Nothing ever changes around here

If you put enough stuff in a jar, keep it sealed in a room that stays the same temperature, and give it enough time, then it will eventually end up in its equilibrium state. One could argue that the real equilibrium is the grey mush at the end of the universe, so clearly I'm going for some time scale that's enough to let everything in the jar settle down but not so much that I get bored waiting for it. For atoms and molecules this usually gives us a window between roughly a picosecond (10^-12 seconds) and, let's say, 100 seconds (I get bored pretty easily). Once it is in equilibrium the contents of the jar will stay in the same state forever – or until it gets kicked over. The point is that in equilibrium nothing changes.

Or does it? To our eyes we may see no change, but the atoms inside the jar will be wriggling furiously, perhaps even diffusing over great distances. How could such great change on the small scale be consistent with eternal boredom on the macroscopic length scale? The answer has two parts. Firstly, the atoms that make up the world are all frighteningly similar. So if one diffuses away it will quickly be replaced by an indistinguishable substitute. The second part motivates the "enough stuff" part of the previous paragraph.

Listen to a group of people talking and the conversation will ebb and flow, and sometimes go completely quiet. Sit in a busy cafe and all you can hear is general noise. A sort of hubbub that you can easily identify as conversation, maybe you can even get a feel for the mood, but you can't tell what anyone is saying. In the thermodynamic limit there are so many atoms that all we can see is a sort of average behaviour. We can tell what sort of state it is (a liquid, a solid, a magnet – the mood) but the individuals are lost.

So as we lumber towards a stricter definition of the thermodynamic limit we should think about what we mean by a state. I've talked about this before. In statistical mechanics there is a huge difference between a 'state' and a 'configuration'. By configuration we mean the exact position (and sometimes velocity) of every particle in the jar. We're doing this classically so we won't worry about uncertainty. A state, in the stat-mech sense, is an ensemble of configurations that share some macroscopic property. For example their density, or magnetisation, or crystal structure.

To be the equilibrium state, the corresponding configurations must satisfy at least one of two criteria (ideally both). Firstly they should have a low energy compared to the other configurations. If particles attract they should be close; if dipoles like to point the same way they should try to do that. This is intuitive: balls roll downhill, systems like to lower their potential energy. Secondly there should be a lot of them. An awful lot of them. This is often referred to as entropy, but really I'm just saying you need to buy a lot of tickets to guarantee winning a prize.

A bit more mathematical

This combination of potential energy, U, and entropy, S, is known as the free energy, F. You can write it down as:

$latex \displaystyle F = U - TS$

High temperatures, T, favour high entropy (lots of configurations); low temperatures favour low energy. In statistical mechanics, unlike normal mechanics, systems lower their free energy and not just their energy. The state with the lowest free energy is the equilibrium state. No exceptions.

The aim of statistical mechanics is to write down equations that take the interactions at the individual particle level and relate them to the probability of finding the particles in a particular configuration. In the mathematical sense the final step is known as "taking the thermodynamic limit", and this means taking the number of particles in your equation, N, to infinity.
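
In symbols (these are the standard definitions): each configuration $latex C$ gets a Boltzmann weight, and the free energy per particle comes out of exactly this limit,

$latex \displaystyle P(C)=\frac{e^{-E(C)/k_B T}}{Z}, \qquad Z=\sum_C e^{-E(C)/k_B T}$

$latex \displaystyle f=-k_B T \lim_{N \rightarrow \infty} \frac{1}{N} \ln Z$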

It is these infinities that make states formally stable, and give us phase transitions. Infinitesimal changes in conditions, such as temperature, can lead to dramatic changes in the equilibrium state. Of course there aren't infinitely many particles in the real world. However, with roughly 10^24 water molecules in my cup of tea, it's a pretty good approximation.

To be in the thermodynamic limit, therefore, we need an infinite amount of stuff sitting around for an infinite amount of time. The system must be able to explore all configurations to decide which state to settle on. You can see where we're going to run into problems.

Back to the real world

Getting back to the start of this post, why are my accusers being so accusatory? Most likely because the real world, for the most part, is massively out of equilibrium. From stars and galaxies, down to swimming bacteria. Then there are materials, such as glasses, where the relaxation time has become so long that the equilibrium state can't be reached in times longer than the age of the universe. Or some say forever – but I'll come back to ergodicity at a later date.

In colloid land things get quite interesting. As mentioned in a previous post, colloids that are big enough to easily see take about a second to move around enough to start equilibrating. That's very close to me getting bored, so if it's a dense system or there are strong attractions one can expect colloids to quickly fall out of equilibrium.

The theoretical framework for life out of equilibrium is hugely more complicated than at equilibrium. Even quantities such as temperature start to lose their meaning in the strictest sense. In fact, while people are working hard and no doubt making progress, it's safe to say that it will never be as elegant – or let's say as easy – as what we have in the thermodynamic limit.

All is not lost

So does this mean everything we study in equilibrium is useless, given that it clearly doesn't exist? Well, it's true nothing in the universe meets the strict definition of infinite time and infinite stuff, but in reality it's usually alright to have a lot of stuff and enough time. In fact we regularly study systems with only hundreds of particles and correctly predict the phase behaviour. It's usually the 'enough time' part that is the problem.

Knowing what the equilibrium state should be is a bit like knowing the destination but not the journey. In many, many cases this is enough: atoms can rearrange themselves so quickly that it doesn't really matter how they get where they're going. Of course, in many cases that we worry about today we need to know both where the system is going and how it will get there. It could be that on the way to the true equilibrium state we get stuck in a state with low, but not the lowest, free energy. A bit like settling on your favourite restaurant before going to the end of the street and trying them all. In this case we can maybe plot a different route through the phase diagram with controls such as pressure and temperature.

Increasingly these pathways to self-assembly are the focus for many in the statistical mechanics community. We want to design new materials with exotic thermodynamic ground states (equilibrium states), so it is really important to know what will happen in the thermodynamic limit – we will always need phase diagrams. But colloids are pretty impatient and will easily settle for the wrong state, so we also need to think carefully about how we will get to the ground state. It's an exciting time right now because experimentally we're even able to mess around with the fundamental interactions between particles in real time; numbers that we usually take as constants can suddenly be changed. It really is possible to control every stage of the assembly process from the start all the way to the end.

Monday, 7 November 2011

A phase diagram in a jar

One of the things I love about colloids is just how visual they are. Be it watching them jiggling around under a confocal microscope, or the beautiful TEM images of crystal structures, I always find them quite inspirational, or at least instructional, for better understanding statistical mechanics.

Sedimentation

Just to prove I'm on the cutting edge of science, I recently discovered another neat example from 1993. At the liquid matter conference in Vienna, Roberto Piazza gave a talk titled "The unbearable heaviness of colloids". As a side note, there was a distinct lack of playful titles; maybe people were too nervous at such a big meeting. Anyway, the talk was about sedimentation of colloids.

Sedimentation is something I don't usually like to think about because gravity, as any particle physicist will agree, is a massive pain in the arse. Nevertheless, my experimental colleagues are somewhat stuck with it (well, most of them). As is often the way, it turns out you can turn this into a big advantage. What Piazza did, and then others later, was to use the sedimentation profile of a colloidal suspension to get the full equation of state, in fact the full phase diagram, from a single sample.


The nicest example is from Paul Chaikin's lab (now at NYU, then at Princeton), where they used a colloidal suspension that was really close to hard spheres. They mixed a bunch of these tiny snooker balls in suspension and then let it settle for three months. What they got is this lovely sample, with crystal at the bottom (hence the strange scattering of the light), then a dense liquid which eventually becomes a low-density gas at the top. It's as though the whole phase diagram is laid out before you.

Equation of State

This is a very beautiful illustration, but it's not the best bit. In the same way that atmospheric pressure is due to the weight of the air above you, if you can weigh the colloids above a particular point in the sample then you can calculate the pressure at that point. This is exactly what they did. There are many different ways to measure the density of colloids at a particular height; if you can do it accurately enough (which was the big breakthrough in Piazza's 1993 paper) then you can calculate the density as a function of pressure. In a system such as this, where temperature plays no role, that is exactly the equation of state (EoS).
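
As a sketch of the bookkeeping (my notation, not the paper's): if $latex n(z)$ is the measured number density at height $latex z$ and $latex m_b$ is the buoyant mass of one colloid, then the osmotic pressure at height $latex h$ is just the weight per unit area of everything above,

$latex \displaystyle P(h)=m_b \, g \int_h^{\infty} n(z) \, dz$

Pairing $latex P(h)$ with $latex n(h)$, height by height, traces out the equation of state.
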
When compared with theoretical calculations for hard spheres, the experimental data lies perfectly on the theory curves, complete with the first order phase transition where it crystallises. This is really a lovely thing. EoSs are very sensitive to exact details, so in the same way that in my group we compare our simulated EoS against theory to check our code, this showed very accurately that their colloids really were hard spheres.

So I think this is all very nice. I nicked the above images from Paul Chaikin's website; I recommend having a poke around, there's loads of great stuff (you really need to see the m&ms).


Saturday, 9 July 2011

Universality at the critical point

Time for more critical phenomena.

Another critical intro

I've talked about this a lot before so I will only very quickly go back over it. The phase transitions you're probably used to are water boiling to steam or freezing to ice. Now water is, symmetrically, very different from ice. So to go from one to the other you need to start building an interface and then slowly grow your new phase (crystal growth). This is called a first order phase transition and it's the only way to make ice.

Now water and steam are, symmetrically, the same. At most pressures the transition still goes the same way – build an interface and grow. However, if you crank up the pressure enough there comes a special point where the distinction between the two phases becomes a bit fuzzy. The cost of building an interface goes to zero so there's no need to grow anything. You just smoothly change between the two. This is a second order, or continuous, phase transition and it's what I mean by a critical point.

As I've demonstrated before, one of the consequences of criticality is a loss of a sense of scale. This is why, for instance, a critical fluid looks cloudy. Light is being scattered by structure at every scale. This insight is embodied in the theory of the renormalisation group, and it got lots of people prizes.

Universality

A second feature of critical phenomena is universality. Close to the critical point it turns out that the physics of a system doesn't depend on the exact details of what the little pieces are doing, but only on broad characteristics such as dimension, symmetry or whether the interaction is long or short ranged. Two systems that share these properties are in the same universality class and will behave identically around the critical point.

At this stage you may not have a good picture in your head of what I mean, it does sound a bit funny. So I've made a movie to demonstrate the point. The movie shows two systems at criticality. On the left will be an Ising model for a magnet. Each site can be up or down (north or south) and neighbouring sites like to line up. The two phases at the critical point are the opposite magnetisations represented here by black and white squares.
On the right will be a Lennard-Jones fluid. This is a model for how simple atoms like Argon interact. Atoms are attracted to one another at close enough range but a strong repulsion prevents overlap. The two phases in this case are a dense liquid and a sparse gas.

One of these systems lives on a lattice; the other is particles in a continuous space that are free to move around. Very different, as you can see from the pictures. However, what happens when we look on a slightly bigger length scale? Roll the tape!


At the end of the movie (which you can view in HD) the scale is about a thousand particle diameters across, containing about 350,000 particles, and similar for the magnet. At this distance you just can't tell which is which. This demonstrates an important point: these pictures I've been making don't just show a critical Ising model, they pretty much show you what any two-dimensional critical system looks like (isotropic, short range...). Even something complicated from outside of theory land. And this is why the theory of critical phenomena is so powerful: something that works for the simplest model we can think of applies exactly - not approximately - to real life atoms and molecules, or whatever's around the kitchen.

Wednesday, 13 April 2011

Paper review: Hexatic phases in 2D

I'm doing my journal club on this paper by Etienne Bernard and Werner Krauth at ENS in Paris:

First-order liquid-hexatic phase transition in hard disks

So I thought that instead of making pen-and-paper notes I'd make them here so that you, my huge following, can join in. If you want we can do it proper journal club style in the comments. For now, here's my piece.

Phase transitions in 2D

Two is the lowest dimension in which we see phase transitions. In one dimension there just aren't enough connections between the different particles – or spins, or whatever we have – to build up the necessary correlations to beat temperature. In three dimensions there are loads of paths between A and B and the correlations really get going. We get crisp phase transitions and materials will readily gain long range order. Interestingly, while it should be easier and easier to form crystals in higher dimensions, there do exist pesky glass transitions that get worse with increasing dimension. But I digress.

In two dimensions slightly strange things can happen. For one thing, while we can build nice crystals, they are never quite as good as the ones you can get in 3D. What do I mean by this? Well, in 3D I can give you the position of one particle and the direction of the lattice vectors, and you can predict exactly where every particle in the box will sit (save a bit of thermal wiggling). In 2D we get close: if I give you the position and lattice vectors then that defines the relative position and orientation for a long way – but not everywhere.

By "a long way" I mean correlations decay algebraically (distance to the power something) rather than exponentially (something to the power distance), which would be short ranged. We can call it quasi-long ranged.

Nevertheless, this defines a solid phase, and this solid can melt into a liquid (no long range order of any kind). What is very interesting in two dimensions is that this appears to happen in two stages. First the solid loses its positional order, then it loses its orientational order as well. This is vividly demonstrated in Fig. 3 of the paper. The phase in the middle, with quasi-long range orientational order but short range positional order, is known as the hexatic phase.

When the lattice is shifted a bit the orientation can be maintained but the positions become disordered.

Thursday, 3 February 2011

Colloids are just right

All being well, it looks like I've secured employment for a tiny while longer. Hooray!

The place I'm moving to is a big place for synthetic colloids, so it seems like a good time to go through what I know about colloids. If nothing else it'll be interesting to compare this to what I'll know in a year's time! So, here is a theorist's perspective on colloid science.

I'll spare the usual introduction about how colloids are ubiquitous in nature; you can go to Wikipedia for that. The type of colloids I'm interested in here are synthetic colloids made in the lab. They're usually made from silica or PMMA (perspex); you can make a lot of them, make them roughly the same size, and have them floating around in a solution. By playing with the solution you can have them density matched (no gravity) or have them sinking/floating, depending on what you want to study.

The colloids that people make sit nicely in a sweet spot of size and density that makes them perfect for testing our fundamental understanding of why matter arranges itself in the way it does. Colloids can undergo most of the same phase transitions that we get in molecular systems, but here we can actually see them. Take for example this beautiful electron microscope image of a colloidal crystal from the Pine group at NYU.



1. They're big enough to image

Colloids are usually of the order of a micron across. At this size it is still possible to use confocal microscopy to image the particles. While nothing like the resolution of the electron microscope, the confocal can actually track the positions of individual particles in real time, in solution. It's almost like a simulation without the periodic boundary conditions! A confocal can take lots of 2D slices through the sample, such as the one below from the Weeks group. The scale bar is 5 microns.


If you do it quickly enough then you can keep track of a particle's motion before it loses its identity. The Weeks group did some very famous work visualising dynamic heterogeneity in liquids near the glass transition (see their Science paper if you can).

If we want to think about colloids as model atoms, which we do, then there's another property apart from just their size that we need to be able to control.

2. You can control their interactions

Being the size they are, if we didn't do anything to our colloids after making the spheres, they would stick together quite strongly due to van der Waals forces - this is the attraction of any smooth surface to another, as used by clingfilm. To counteract this, the clever experimentalists are able to graft a layer of polymers onto the surface of the colloid.

It's like covering it with little hairs. When the hairs from two particles come into contact they repel, overcoming the van der Waals attraction. The particles are "stabilised". In this way it's possible to make colloids that interact pretty much like hard spheres. So not only can we use them as model atoms, but we can use them to test theoretical models as well!

Further to this, the colloids can be charged, and by adding salt to the solvent one can control the screening length of the interaction with other colloids. Finally there's the depletion interaction. I want to come back to this, so for now I'll just say that by adding coiled-up polymers into the soup we can create, and tightly control, attractions between the colloids. With these handles experimentalists can tune their particles to create a zoo of different behaviours.
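
For the charged case, the interaction between two colloids is usually modelled with a screened-Coulomb (Yukawa) form – a standard textbook result, though the details of any particular system will differ – where the salt concentration sets the inverse screening length $latex \kappa$:

$latex \displaystyle u(r) \propto \frac{e^{-\kappa r}}{r}$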

3. They're thermal

If colloids need to be big enough to image, why not make them even bigger? If we made them, say, 1cm across then we could just sit and watch them, right? Well, not really. If you filled a bucket with ball bearings and solution, density matched them so they don't sink or float, and then waited, you'd be there a long time. The only way to move them in a realistic amount of time is to shake them - this is granular physics.

Granular physics is great but it's not what we're doing here. Real atoms are subject to random thermal motions and they seek to fit the Boltzmann distribution. For this to work with colloids they need to be sensitive to temperature.

When a colloid is immersed in a fluid it is subject to a number of forces. If it's moving then there will be viscous forces, and on an atomistic level it is constantly being bombarded by the molecules that comprise the fluid. In the interests of keeping this post to a respectable size I can't go through the detail, but suffice it to say that this is an old problem in physics - Brownian motion.

Under Brownian motion the large particle will perform a random walk that is characterised by its diffusion constant, $latex D$. The bigger this number the quicker it moves around. A more intuitive number is the time it takes for a particle to move a distance of one particle diameter. When you solve the equation of motion for a large particle in a Stokesian fluid you find that this time is given by

$latex \displaystyle \tau = \frac{a^2}{D} = \frac{3 \pi \eta a^3}{k_B T}$

where $latex \eta$ is the viscosity, $latex a$ is the particle diameter, and $latex k_B T$ is Boltzmann's constant times temperature. Now this does get more complicated in dense systems, and the properties of the fluid matter, but this is a good start. This could be a topic for another post.
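
Plugging in rough textbook numbers for a micron-sized particle in water (a back-of-envelope sketch; the values are standard estimates, not measurements):

    import math

    eta = 1.0e-3    # viscosity of water, Pa s
    a   = 1.0e-6    # particle diameter, m
    kBT = 4.1e-21   # thermal energy at room temperature, J

    tau = 3 * math.pi * eta * a**3 / kBT
    print(f"time to diffuse one diameter: {tau:.1f} s")   # about 2 s

which is the "about a second" quoted below.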

For a typical colloidal particle, around a micron in size, you have to wait about a second for it to move its own diameter. For something only as big as a grain of sand you can be waiting hours or days. Even by 10 microns it's getting a bit too slow. But close to 1 micron, not only does it move about in an acceptable time frame, but we can easily track it with our confocal microscope. If it's diffusing around then we can hope that it will be properly sampling the Boltzmann distribution - or at the very least be heading there. So once again, that micron-size sweet spot is cropping up.

So what else?

Hopefully this serves as a good starting point to colloids. Obviously there's a lot more to it. An area that I'm very interested in at the moment is what happens when the colloids are not spheres but some other shape. I'll be posting more about it in the coming months.

If you don't remember anything else just remember that colloids are the perfect size to test statistical mechanics and to be visible.

So well done colloids, you're just the right size.

Sunday, 25 July 2010

Statistical mechanics of tetris

I'm finding that I'm becoming increasingly fascinated by shape. It seems such a simple thing yet scratch the surface only a little and the complexity comes pouring out. Take simple tiling problems; I can tile my floor with squares or regular hexagons, but not regular octagons - they'll always leave annoying gaps. From a statistical mechanics point of view those gaps are very important, little sources of entropy that you can't get rid of. In three dimensions understanding the packing of tetrahedra has proved no simple task. But that's a story for another day.

So it came as no surprise that I was very taken with Lev Gelb's talk on polyominoes at the Brno conference. Polyominoes are connected shapes on a two dimensional lattice. A monomino is a square, a domino you know. Tetrominoes are made of four squares and are exactly like the pieces from Tetris. Assuming that they're stuck in the plane (so you can't flip them over) there are 7 tetrominoes.


Sunday, 29 November 2009

An unintuitive probability problem

Probability can do strange things to your mind. This week I had a probability problem where every time I tried to use intuition to solve it I ended up going completely wrong. I thought I'd share it as I think it's interesting.

Consider a one-dimensional random walk. At each time step my walker will go left with probability $latex p$, and right with probability $latex q$. It stays where it is with probability $latex 1-p-q$. Furthermore these probabilities depend on the walker's position in space, so it's really $latex p(x)$ and $latex q(x)$. I'm imagining I'm on a finite line of length, L, although it doesn't matter too much.

Now if $latex p(x)=q(x)$, then we just have a normal random walker. In my problem I have the following setup: $latex p(x)>q(x)$, but $latex q(x)=p(x+1)$. What does this mean? At any given point, x, my walker is more likely to go left than right. If it does go left it will come back with the same rate (although it's more likely to go left again).




So here's the question: if I leave this for a really long time, what is the equilibrium probability distribution for the walker's position, $latex P(x)$?
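
If you want to have a go before any follow-up post, here's a minimal simulation sketch (the particular choice of $latex p(x)$ is mine; anything decreasing in x with $latex q(x)=p(x+1)$ will do):

    import numpy as np

    rng = np.random.default_rng(0)
    L = 50
    p = lambda x: 0.4 - 0.2 * x / L    # left-hop probability, decreasing in x
    q = lambda x: p(x + 1)             # right hop matches the left hop from x+1

    x, visits = L // 2, np.zeros(L, dtype=int)
    for _ in range(1_000_000):
        r = rng.random()
        if r < p(x):
            if x > 0:                  # attempted hops off the line are rejected
                x -= 1
        elif r < p(x) + q(x):
            if x < L - 1:
                x += 1
        visits[x] += 1

    print(visits / visits.sum())       # long-time estimate of P(x)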

Wednesday, 9 September 2009

Quorum decisions in bacteria

Stumbled across a few nice things related to quorum decision making recently. Remember how sticklebacks make their decisions? Well, bacteria do it too; below is a great TED talk by Bonnie Bassler on how they communicate and how they decide to act as an enormous group.



Also came across this article on humans making group decisions in a Kasparov vs The World chess game. It gets the saliva flowing on how you can engineer good decisions.

Addition: Incidentally, I also think this talk is a great example of how to give a science talk. It's a little rushed (probably nerves) but the enthusiasm is fantastic and the use of visual aids is perfect. I'm giving a workshop on presentations so I've been thinking about this stuff a lot recently.

Saturday, 1 August 2009

Biological Membranes

It's been ages since my last post. This is because I've been busy doing lots of interesting physics and meeting a bunch of interesting physicists – maybe I'll write something about it. For now, something I've been meaning to write about for a while, and for once it's something that's timely.

The journal Soft Matter has an issue out with a membrane biophysics theme. You can read the editorial for yourself if you have access, otherwise make do with my ropey understanding of it. Soft Matter is a relatively new journal that I think is looking really good. Their website needs work but I'll leave that for my science 2.0 rant which is bubbling up.

So why am I interested in membranes (I'm not working on them, I'm just interested)? Well, once again I'm interested in them as a large system of small parts that makes something amazing when it gets together - i.e. statistical physics. So, here's my compressed guide to membranes. Please remember I'm not a biologist; I'm very new to this, only barely understand it, and I tend to oversimplify things.

Saturday, 9 May 2009

Critical Point

I'm finally getting around to sharing what, for me, is the most beautiful piece of physics we have yet stumbled upon. This is the physics of the critical point. It doesn't involve enormous particle accelerators and its introduction can border on the mundane. Once the consequences of critical behaviour are understood it becomes truly awe-inspiring. First, to get everyone on the same page, I must start with the mundane - please stick with it, there's a really cool movie at the bottom...

Most people are quite familiar with the standard types of phase transition. Water freezes to ice, boils to water vapour and so on. Taking the liquid to gas transition, if you switch on your kettle at atmospheric pressure then when the temperature passes 100 degrees centigrade all the liquid boils. If you did this again at a higher pressure then the boiling point would be at a higher temperature - and the gas produced at a higher density. If you keep pushing up the pressure the boiling point goes higher and higher and the difference in density between the gas and the liquid becomes smaller and smaller. At a certain point, the critical point, that difference goes to zero and for any higher pressure/temperature the distinction between the liquid and gas becomes meaningless, you can only call it a fluid.

The picture below, taken from here, shows the standard phase diagram, with the critical point marked, for water.




Magnets also have a critical point. Above the critical temperature all the little magnetic dipoles inside the material are pointing in different directions and the net magnetisation is zero. Below the critical temperature they can all line up in the same direction and create a powerful magnet. While the details of this transition are different from the liquid-gas case, it turns out that close to the critical point the details do not matter. The physics of the magnet and the liquid (and many other systems I won't mention) are identical. I'll now try to demonstrate how that can be true.

The pictures below are taken from a computer simulation of an Ising model. The Ising model is a simple model for a magnet. It's been used for so much more than that since its invention, but I don't really want to get into it now. In the pictures below squares are coloured white or black. In the Ising model squares can change their shade at any time; white squares like to be next to white squares, and black squares like to be next to black squares. Fighting against this is temperature: when the temperature is high, squares are happier to be next to squares of a different colour. Above the critical temperature, if you could zoom out enough, the picture would just look grey (see T=3 below). Grey, in terms of a magnet, means zero magnetisation.
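
For the curious, here's a minimal Metropolis sketch of the model just described (not the code behind these pictures, which are far bigger; units where $latex J=k_B=1$):

    import numpy as np

    def metropolis_sweep(s, T, rng):
        """One Monte Carlo sweep of the 2D Ising model, periodic boundaries."""
        L = s.shape[0]
        for _ in range(L * L):
            i, j = rng.integers(L, size=2)
            # energy cost of flipping spin (i, j), from its four neighbours
            nn = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
            dE = 2 * s[i, j] * nn
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] *= -1

    rng = np.random.default_rng(1)
    s = rng.choice([-1, 1], size=(64, 64))
    for _ in range(500):                        # pure Python is slow, keep it small
        metropolis_sweep(s, T=2.269, rng=rng)   # right at the critical temperature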







If you drop the temperature then gradually larger and larger regions start to become the same colour. At a certain point, the critical point, the size of these regions diverges. Any colder and the system will become mostly white, or mostly black (as above, T=2). Precisely at the critical point (T=2.269 in these units), however, a rather beautiful thing happens. As the size of the cooperative regions diverges, so too do the fluctuations. In fact at the critical point there is no sense of a length scale. If you are struggling to understand what this means then look at the four pictures below. They are snapshots of the Ising model, around the critical point, at four very different scales - see if you can guess which one is which.








Now watch this movie for the answer (recommend switching to HD and going full screen).





The full picture has 2^34 sites (little squares) – that's about 17 billion. This kind of scale invariance is a bit like the fractals you get in mathematics (Mandelbrot set etc), except that this is not deterministic – it is a statistical distribution.

How does this demonstrate that the details of our system (particles, magnetic spins, voting intentions - whatever) are not important? In all these cases the interactions are short ranged and the symmetry and dimension are the same. Now imagine that you have a picture of your system (like above) at the critical point and you just keep zooming out. After a while you'll be so far away that you can't tell if it's particles or zebras interacting at the bottom, as that level of detail has been coarse-grained out and all the pictures look the same. This is not a rigorous proof; I just want to convey that it's sensible.

Of course the details will come into play at some point – the exact transition temperature is system dependent, for example – but the important physics is identical. This is what's known as universality, and its discovery, in my opinion, is one of the landmarks of modern physics. It means I can take information from a magnet and make sensible comments about a neural network or a complex colloidal liquid. It means that simple models like the Ising model can make exact predictions for real materials.

So there it is. If you don't get it then leave a comment. If you're a physics lecturer and you want to use any of these pictures then feel free. I'd only ask that you let me know as, well, I'd like to know if people think it's useful for teaching. For now you'd have to leave a comment as I haven't sorted out a spam-free email address.

UPDATE: Forward link to a post on universality.

Friday, 20 February 2009

Entropy

I've been meaning to post something interesting about stat-mech about once a fortnight and so far I'm not doing so well. For today I thought I'd share my perspective on entropy.

If you ask the (educated) person in the street what entropy is they might say something like "it's a measure of disorder". This is not a bad description, although it's not exactly how I think about it. As a statistical mechanician I tend to think of entropy in a slightly different way to, say, my Dad. He's an engineer, and as such he thinks of entropy more in terms of the second law of thermodynamics. This is also a good way of thinking about it, but here's mine.

Consider two pictures; I can't be bothered making them (EDIT: see this post, the T=2,3 pictures) so you can just imagine them. First imagine a frozen image of the static on your television, and secondly imagine a white screen. On the basis of the disorder description you might say that the static, looking more disordered, has a higher entropy. However, this is not the case. These are just pictures, and there is one of each, so who is to say which is more disordered?

Entropy does not apply to single pictures; it applies to 'states'. A state, in the thermodynamic sense, is a group of pictures that share some property. So for the static we'll say that the property is that there are roughly as many white pixels as black pixels, with no significant correlations; for the white screen we'll say it's all pixels the same colour. The entropy of a state is the number of pictures that fit its description (strictly, it's proportional to the logarithm of this number).

For our blank screen it's easy: there are only two pictures, all black or all white. For the static there are a bewildering number of pictures that fit the description. So many that you'll never see the same screen of static twice; for a standard 720x480 screen it'll be something like 10 to the power 100,000*.
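
Where does a number like that come from? If every one of the $latex 720 \times 480 = 345600$ pixels were independently black or white, the count would be

$latex \displaystyle \Omega = 2^{345600} \approx 10^{104000}$

which is the ballpark quoted above (the footnote applies: a more careful definition of the state trims this a little).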

So it's the disordered state, all those pictures of static that look roughly the same, that has the high entropy. If we assume that each pixel at any time is randomly (and independently) black or white, then it's clear why you never see a white screen in the static - it's simply outgunned by the stupidly large number of jumbled-up screens.

In a similar way a liquid has a higher entropy than a crystal (most of the time; there is one exception): there are more ways for a load of jumbled-up particles to look like a liquid than the structured, ordered crystal. So why then does water freeze? This, as you might guess, comes down to energy.

Water molecules like to line up in a particular way that lowers their energy. When the temperature is low then energy is the most important thing and the particles will align on a macroscopic scale to make ice. When the temperature is high entropy becomes more important, and those nice crystalline configurations are washed out by the sheer number of liquid configurations.

And this is essentially why matter exists in different phases, it's a constant battle between entropy and energy and depending which wins we will see very different results.

I'll try and update with some links to better descriptions soon.

*this number is only as accurate as my bad definition of the disordered state.

Monday, 12 January 2009

Busy Bees

The second installment of Swarm was on BBC 1 last night, I missed the first one but I highly recommend catching this before it goes off iPlayer.

The best bit was the fire ants making an ant raft to escape flooding. Ants are ridiculous. They also had bees trying to decide where to make a new home. The scout bees come back with reports on possible locations, conveying the message with a dance. All the scouts sell their location and the others decide who to follow. When one of them gets enough support then they all up sticks and move - pretty smart.

On the same theme, I was at a talk recently about consensus decisions in sticklebacks. Apparently they're very reproducible in experimental terms. Again, they have to make a decision, this time about which way to swim. On their own they make the good decision the majority of the time (say 60%), but when they're in a group they almost always get it right. Each fish is pretty stupid; the group is less stupid.

I love problems like this because, while it is a biology problem, it's simple units (fish, ants, bees) that can interact with their peers in some measurable way (well, if you're really clever and patient it's measurable). From this emerges surprisingly complex behaviour that didn't exist at the level of the individual - that's what statistical mechanics is all about.

The critical-point post is still delayed; when you're debugging code at work all day it's hard to feel motivated to come home and do the same thing. It's coming though.

UPDATE: Just seen part one, those starlings are badass. They look like drops of liquid, just wait until I get my MD code working and I'm going to be simulating me some birds! (not in the weird science sense, although that would be cool as well).

Wednesday, 30 July 2008

Glass in the New York Times

When people think of physics they tend to think of particle accelerators, string theory, E=mc² and so on, so when I tell them I'm studying glass they always look a little disappointed. Anyway, a couple of weeks ago we got a New York Times article from a guy called Kenneth Chang so we're all quite pleased about it. I had written a long post about it but I ended up just repeating what's in the article, so I've decided to list some main points and provide a few extra links.

He managed to give a good sense of how much debate there is in the field. One thing everyone agrees on, however, and where the article begins, is that cathedral windows do not sag because the glass has flowed.
"Medieval stained glass makers were simply unable to make perfectly flat panes, and the windows were just as unevenly thick when new."
If you want something that does do that then let me point you in the direction of pitch, which drips about once a decade but shatters when hit with a hammer. So what is glass then? Is it a liquid or what?

Glass has the same structure as a liquid. If you took a photo you couldn't really tell the difference. A liquid that's on its way to being a glass, a supercooled liquid, is the same as well. If instead of a photo you look at a video, you'll see that it's actually really different. Weeks and company have actually done this, and you can see regions really close to one another, some with lots of motion, some hardly moving at all. This is the dynamic heterogeneity, mentioned in the article, that goes along with the hugely increasing viscosity. Their website has loads of great stuff, including movies and a link to a freely available version of the Science paper; I recommend taking a look.

The region that I'm roughly poking about in at the moment is to do with vibrations and rigidity. This is touched on in the article a couple of times. Matthieu Wyart and others have spent a lot of time developing the idea of a glass as a marginally rigid solid (the introduction to Wyart's thesis is actually quite readable and freely accessible). It's looking at how the random liquid structure affects things at low temperatures.

Anyway, I'll leave it there. If I've missed any important links just stick them in a comment. Been a bit too busy writing my thesis to do this properly. Dear God let it end soon!