Thursday 22 December 2011

Networks in Nature Physics

For those with access, it looks like Nature Physics has a complexity issue. With articles by Barabási, Newman and the like, it has a solid networks bent.

There's a paper on community structure by my favourite physicist, Mark Newman, that I'm looking forward to reading.

Enjoy!

Tuesday 15 November 2011

We all do economics

The very interesting blog, Mind Hacks, has a post on a theory of a bipolar economy.
A 1935 Psychological Review article proposed a ‘manic-depressive psychoses’ theory of economic highs and lows based on the idea that the market has a form of monetary bipolar disorder.
I find it quite interesting how people like to reframe the problem of economic crashes in terms of their own subject. In psychology it seems perfectly natural to ascribe it to individual human behaviour. As a physicist I'm completely convinced that it's a collective effect arising from many relatively simple individuals, each trying to win a game, interacting in a highly complex system. Of course one could possibly say the same about the brain itself.

I wonder if biochemists have some hormone explanation and neuroscientists some neurotransmitter reason. Perhaps all these perspectives are equally right (or wrong) – I guess the only thing for sure is that we don't really know!

Monday 7 November 2011

A phase diagram in a jar

One of the things I love about colloids is just how visual they are. Be it watching them jiggling around under a confocal microscope, or the beautiful TEM images of crystal structures, I always find them quite inspirational, or at least instructional, for better understanding statistical mechanics.

Sedimentation

Just to prove I'm on the cutting edge of science, I recently discovered another neat example from 1993. At the Liquid Matter Conference in Vienna, Roberto Piazza gave a talk titled "The unbearable heaviness of colloids". As a side note, there was a distinct lack of playful titles; maybe people were too nervous at such a big meeting. Anyway, the talk was about the sedimentation of colloids.

Sedimentation is something I don't usually like to think about because gravity, as any particle physicist will agree, is a massive pain in the arse. Nevertheless, my experimental colleagues are somewhat stuck with it (well, most of them). As is often the way, it turns out you can turn this into a big advantage. What Piazza did, and then others later, was to use the sedimentation profile of a colloidal suspension to get the full equation of state, in fact the full phase diagram, from a single sample.


The nicest example is from Paul Chaikin's lab (now at NYU, then at Princeton), where they used a colloidal suspension that was really close to hard spheres. They mixed a bunch of these tiny snooker balls in suspension and then let the suspension settle for three months. What they got is this lovely sample, with crystal at the bottom (hence the strange scattering of the light), then a dense liquid which eventually becomes a low-density gas at the top. It's as though the whole phase diagram is laid out before you.

Equation of State

This is a very beautiful illustration, but it's not the best bit. In the same way that atmospheric pressure is due to the weight of the air above you, if you can weigh the colloids above a particular point in the sample then you can calculate the pressure at that point. This is exactly what they did. There are many different ways to measure the density of colloids at a particular height; if you can do it accurately enough (which was the big breakthrough in Piazza's 1993 paper) then you can calculate the density as a function of pressure. In a system like this, where temperature plays no role, that is exactly the equation of state (EoS).
When compared with theoretical calculations for hard spheres, the experimental data lies perfectly on the theory curves, complete with the first-order phase transition where it crystallises. This is really a lovely thing. EoSs are very sensitive to exact details, so in the same way that my group compares simulated EoSs against theory to check our code, this showed very accurately that their colloids really were hard spheres.
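In outline the analysis is simple enough to sketch. If you have measured a volume-fraction profile phi(z) up the sample, the pressure at any height is just the buoyant weight per unit area of all the colloids above it, and pairing that pressure with the local density traces out the EoS. Here's a minimal sketch in Python; the sigmoidal profile and every number in it are made up for illustration, they are not Piazza's or Chaikin's data:

    import numpy as np

    # Hypothetical input: a volume-fraction profile phi(z) measured at heights z (metres).
    z = np.linspace(0.0, 0.02, 200)                  # height up the sample
    phi = 0.5 / (1.0 + np.exp((z - 0.01) / 2e-3))    # made-up sigmoidal profile

    g = 9.81                    # gravity, m/s^2
    delta_rho = 200.0           # colloid-solvent density mismatch, kg/m^3 (assumed)
    a = 1.0e-6                  # particle diameter, m
    v_p = np.pi * a**3 / 6.0    # particle volume

    # Pressure at height z = buoyant weight per unit area of everything above it:
    # P(z) = delta_rho * g * integral from z to the top of phi(z') dz'
    P = delta_rho * g * np.array([np.trapz(phi[i:], z[i:]) for i in range(len(z))])

    # Pair P with phi at the same height to get the EoS; in dimensionless form
    # this is the compressibility factor Z = P v_p / (phi k_B T).
    k_B_T = 1.380649e-23 * 293.0
    Z = P * v_p / (phi * k_B_T)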

So I think this is all very nice. I nicked the above images from Paul Chaikin's website; I recommend having a poke around, there's loads of great stuff (you really need to see the m&ms).


Friday 4 November 2011

Back from the dead

I can't remember the number of times I've said I've been away because I've been busy, but this time it'll be different. Well, it probably won't be different; it looks like I'm destined to be an inconsistent blogger!

It's now been three months since I arrived in the Netherlands for my new job and I'm enjoying it a lot here. The pace in the group is much faster than I'm used to, but I'm enjoying the buzz of lots of interesting things getting done. Now that I'm more settled I'm hoping for a spectacular return to blogging - there's certainly enough to talk about here!

The Dutch are good at science

In general the Netherlands has a fantastic history in the sciences. I was watching Carl Sagan's Cosmos the other day (the best telly ever made); Sagan loved the Netherlands, it would seem. There's a whole episode where people dress up in pointy hats and re-enact bits of Dutch scientific history.


I'm no historian, so there's no point making a huge list. Some notable greats, though, include Christiaan Huygens, famous for the wave theory of light, who also worked on telescopes and even the pendulum clock. The microscope was invented in the Netherlands, allowing Antonie van Leeuwenhoek to discover "a universe in a drop of water".

What about statistical mechanics?

Closer to the focus of this blog, the name Johannes van der Waals is never far away. His theories allowed us to begin to understand why matter undergoes phase transitions. Two names that are important for us here in Utrecht are Peter Debye and Leonard Ornstein.

Peter Debye is another one of those names that just seems to pop up all the time. It's littered through my thesis because of his work on phonons. Debye was a professor at the University of Utrecht for a very short time; I believe the university didn't deliver on his startup money, so he left. The picture is from our coffee room in the Debye Institute.

As well as working in the Debye Institute I also work in the Ornstein Lab, named after Leonard Ornstein. For me his name is most famous from the Ornstein-Zernike relation in liquid state theory; however, I think he did a lot of varied stuff. He followed on from Debye at Utrecht in 1914, where he remained until 1940. Ornstein was Jewish and at the beginning of the war was dismissed from his position at the university. He died only six months later. It seems to me it should be the Ornstein Institute; anyway, we also have his picture up.
Enough history

So the Dutch weren't too bad at science. The living ones aren't too shabby either. Hopefully there will be lots of interesting things to post in the coming weeks.

Saturday 9 July 2011

Universality at the critical point

Time for more critical phenomena.

Another critical intro

I've talked about this a lot before so I will only very quickly go back over it. The phase transitions you're probably used to are water boiling to steam or freezing to ice. Now water is, symmetrically, very different from ice. So to go from one to the other you need to start building an interface and then slowly grow your new phase (crystal growth). This is called a first order phase transition and it's the only way to make ice.

Now water and steam are, symmetrically, the same. At most pressures the transition still goes the same way – build an interface and grow. However, if you crank up the pressure enough there comes a special point where the distinction between the two phases becomes a bit fuzzy. The cost of building an interface goes to zero so there's no need to grow anything. You just smoothly change between the two. This is a second order, or continuous, phase transition and it's what I mean by a critical point.

As I've demonstrated before, one of the consequences of criticality is a loss of a sense of scale. This is why, for instance, a critical fluid looks cloudy. Light is being scattered by structure at every scale. This insight is embodied in the theory of the renormalisation group, and it got lots of people prizes.

Universality

A second feature of critical phenomena is universality. Close to the critical point it turns out that the physics of a system doesn't depend on the exact details of what the little pieces are doing, but only on broad characteristics such as dimension, symmetry or whether the interaction is long or short ranged. Two systems that share these properties are in the same universality class and will behave identically around the critical point.

At this stage you may not have a good picture in your head of what I mean; it does sound a bit funny. So I've made a movie to demonstrate the point. The movie shows two systems at criticality. On the left will be an Ising model for a magnet. Each site can be up or down (north or south) and neighbouring sites like to line up. The two phases at the critical point are the opposite magnetisations, represented here by black and white squares.
On the right will be a Lennard-Jones fluid. This is a model for how simple atoms like Argon interact. Atoms are attracted to one another at close enough range but a strong repulsion prevents overlap. The two phases in this case are a dense liquid and a sparse gas.
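To give a flavour of what's behind these movies, here is a minimal sketch of the left-hand system: a 2D Ising model run at its exactly known critical temperature with single-spin Metropolis moves. This is my own toy version, not the code used to make the movie (and at the critical point this simple algorithm decorrelates painfully slowly, which is partly why cluster methods exist):

    import numpy as np

    rng = np.random.default_rng(0)
    L = 128                                   # lattice size
    T_c = 2.0 / np.log(1.0 + np.sqrt(2.0))    # exact 2D Ising critical temperature (J = k_B = 1)
    beta = 1.0 / T_c
    spins = rng.choice([-1, 1], size=(L, L))  # random initial configuration

    def metropolis_sweep(spins, beta):
        """One sweep of single-spin-flip Metropolis updates with periodic boundaries."""
        n = spins.shape[0]
        for _ in range(n * n):
            i, j = rng.integers(n, size=2)
            # sum of the four neighbouring spins
            nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
                  + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
            dE = 2.0 * spins[i, j] * nb       # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] *= -1

    for sweep in range(200):
        metropolis_sweep(spins, beta)
    # black/white snapshots of `spins` are what the left-hand side of the movie shows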

One of these systems lives on a lattice; the other is particles in a continuous space that are free to move around. Very different, as you can see from the pictures. However, what happens when we look on a slightly bigger length scale? Roll the tape!


At the end of the movie (which you can view in HD) the scale is about a thousand particle diameters across, containing about 350,000 particles, and similar for the magnet. At this distance you just can't tell which is which. This demonstrates an important point: these pictures I've been making don't just show a critical Ising model, they pretty much show you what any two-dimensional critical system looks like (isotropic, short range...). Even something complicated from outside of theory land. And this is why the theory of critical phenomena is so powerful: something that works for the simplest model we can think of applies exactly - not approximately - to real life atoms and molecules, or whatever's around the kitchen.

Wednesday 15 June 2011

Meeting is good

Once again I find myself making some excuse as to why it's been over a month since my last post. My first reason is that I'm finishing up my current postdoc. My other reason is that I've been doing lots of travelling. This is much more exciting, as I've been finding out more about all the cool soft matter / stat-mech work going on in the UK, some of which I will blog about in time. I've also learned that half the people in soft matter in the UK have worked at some point in the Netherlands, which is handy because I'm moving to the Netherlands!

Getting to the point

All this travelling is related to the topic I wanted to get to today - the value of meeting. I started thinking about this thanks to Alice Bell's article in the THE on the value of the seminar. Here Alice calls for seminars to be posted online, something I agree with very much, as a way to reach more people (and to improve the standard a bit). In my experience I've had to use hundreds of pounds of grant money touring the country giving the same seminar. While I value that experience - meeting the people in the groups, direct interaction and so on - it's a shame that people at other universities can't see the talk as well.

Of course, if people knew it was online they might not turn up, but hopefully not. I might start sticking mine up here.

The more efficient way of reaching many like-minded academics is of course the conference. A good conference can do wonders for your creativity and enthusiasm, it can give you an instant snapshot of the state of the art, and you can meet future employers/collaborators.

But they can be a bit stuffy and long. And expensive. So I'd like to fly the flag for a third kind of academic interaction, the informal science "retreat". Not long ago we had our annual Cornish Soft Matter weekend. A small group of physicists and chemists from a couple of universities got together for a more relaxed meeting. Talks were projected onto a sheet, we were sitting on sofas or the floor, and the start of a talk would be delayed due to people making a last minute cup of tea (usually this was me). All this in a really nice setting.

The demographic was largely PhD students and postdocs, and everyone had to give a short talk. If it overran, fine; if people had questions they'd ask them right away. Students were encouraged to ask as many questions as possible and academics resisted the urge to tear anyone to bits with their sharpened critical skills.

Scientifically it's great. I got to hear from the people who make all these synthetic colloids that I always cite. Their concerns weren't always about phase diagrams or dynamic arrest; sometimes it was simply how much stabiliser or chemical X is needed to get the polydispersity down. These are problems I don't usually get to hear about, and it's particularly nice to hear about them from the people at the coal face.

Because the atmosphere is more relaxed you can give a different kind of talk. At a conference you're so worried about being jumped on that you tend to take all the personality out of a talk, all the wild speculation and, well, the fun side of science. Here we could kind of let rip. If we wanted.

Socially it is also a good thing. It's easy to get a little isolated with your own little problem, especially when you're doing a PhD, so it's nice to mix a bit. Science, like most jobs, requires a degree of networking. While I hate this word and all that it implies, these informal gatherings are a much better way to get to know people than conferences. People at conferences are always trying to look smart and generally suck up to the established professors. Makes me shiver just thinking about it.

A snappy conclusion

The main thing that made this meeting nice was the atmosphere. I highly recommend organising something similar if you can. Sure, it was no Copenhagen, but the science was good and it helped create that sense of being in a scientific community.

While it's not free, it's a lot cheaper than a conference. I guess you don't need to go all the way to Cornwall, but it is nice to get out of the department for a couple of days - especially when you usually sit at a desk all the time.

Wednesday 27 April 2011

An early look at simulation

While I was putting together the post on 2D disks I came across a lovely paper from 1962 on 2D melting by Alder and Wainwright. From there I found this paper from 1959: Studies in Molecular Dynamics. I. General Method by the same authors.

They describe the "event-driven" molecular dynamics (MD) algorithm. Normally with MD you calculate forces, and thus accelerations, and update the positions in small time steps. Hard disks or spheres behave more like snooker balls: the forces are more or less instantaneous impulses that conserve momentum, so it's better to deal with collision events and leave out the acceleration part.
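The heart of an event-driven step is predicting when a given pair will next touch: for relative position r and relative velocity v you solve |r + v t| = sigma for the smallest positive t. Here's a minimal sketch of that calculation in Python, my own illustration rather than anything from Alder and Wainwright:

    import numpy as np

    def collision_time(r, v, sigma):
        """Time until two hard spheres of diameter sigma touch, given their relative
        position r and relative velocity v, or np.inf if they never collide."""
        b = np.dot(r, v)
        if b >= 0.0:                       # moving apart (or tangentially): no collision
            return np.inf
        v2 = np.dot(v, v)
        disc = b * b - v2 * (np.dot(r, r) - sigma**2)
        if disc < 0.0:                     # closest approach is further than sigma: they miss
            return np.inf
        return (-b - np.sqrt(disc)) / v2   # earliest root of |r + v t| = sigma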

The paper gives a fascinating insight into the early days of computer simulation (they still refer to them as "automatic computers"), what their limitations were and what details were worth worrying about. To give you an idea, in 1959 they say:
With the best presently available computers, it has been possible to treat up to five hundred molecules. With five hundred molecules it requires about a half-hour to achieve an average of one collision per molecule.
So in their case it was CPU speed that was the problem: they got about a thousand collisions per hour. To put that in perspective, a modern event-driven simulation of a similar system will maybe hit about a billion collisions per hour on a reasonable desktop [source]. I don't say this to mock their efforts, these are the giants on whose shoulders we stand. I'll come back to why that number is so comparatively big these days; first I want to look at visualisation.

Visualisation

In 1959 there were no jpegs or postscripts and certainly no Povray or VMD; I'm not sure they even had printers. So how do you visualise your simulation? Well, they had a rather elegant answer to that. They could output the current state of the system to a cathode-ray tube as a bunch of dots at the positions of the particles. Then they pointed a camera at the screen and left the shutter open while they ran a simulation. What you get are the beautiful images below showing the particle trajectories. First, in the crystal phase, you can see the particles rattling around their lattice sites.


This is a projection of the FCC lattice (the squares confused me at first). In the fluid phase they do a little bit of cage rattling and then start to wander off.

[Figures reprinted with permission Alder and Wainwright, J. Chem. Phys. 31, 459 (1959). Copyright 1959, American Institute of Physics].

I honestly couldn't show it better today. Some people dismiss visualisations as pretty pictures that only exist to attract attention. Perhaps this is sometimes true but it only takes one look at this to see how they can stir the imagination and shape the intuition – and that's what creates new ideas.

Algorithms

I'd quickly like to come back to the speed difference between 1959 and today. A lot of the difference can be put down to Moore's law. After an annoying amount of Googling I can't really say how much faster modern CPUs are. A lot probably. However, I'd like to focus on an often overlooked factor – the development of algorithms.

A general event-driven algorithm calculates the collision time for each pair of particles and, if it is under a cutoff, stores it in an event queue. It then fast-forwards to the earliest event, whereupon it needs to update the queue with the new events that appear after the collision. Initially this requires checking all pairs; Alder and Wainwright call this the "long cycle", and it has complexity of order N-squared, O(N^2). This means that if you double the number of particles, N, then you have four times as many calculations to perform.

After a collision you only need to update events involving the particles that collided so you can get away with doing N updates. This is the "short cycle" and is O(N). It's not mentioned in this paper but I think there's an issue with sorting the event queue so this is probably still O(N^2). Either way, for their early simulations the total number of collisions per hour tanked as N was increased.

And this is where algorithms come in. You can use all sorts of tricks. In dense systems you can use a cell structure to rule out collisions between pairs that are far apart. Modern algorithms focus largely on keeping the event queue properly sorted: a binary tree will do it at O(log(N)) per event, and here they claim to have it at O(1). Of course the complexity is not the only important factor, there may be other more important overheads, but it gives an idea of the limitations.
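To make the queue part concrete: a standard modern trick is a binary heap (priority queue), which costs O(log N) per insertion or removal, with events invalidated by earlier collisions handled lazily by stamping each particle with a collision counter and discarding stale entries as they are popped. Here's a rough sketch of that bookkeeping, an illustration of the general idea rather than any particular published scheme:

    import heapq

    events = []            # binary heap of (time, i, j, count_i, count_j)
    collision_count = {}   # how many collisions each particle has had so far

    def schedule(t, i, j):
        """Push a predicted collision between particles i and j at time t."""
        heapq.heappush(events, (t, i, j,
                                collision_count.get(i, 0),
                                collision_count.get(j, 0)))

    def next_valid_event():
        """Pop events until one is found whose particles haven't collided since it
        was scheduled; stale entries are simply discarded (lazy deletion)."""
        while events:
            t, i, j, ci, cj = heapq.heappop(events)
            if (collision_count.get(i, 0) == ci
                    and collision_count.get(j, 0) == cj):
                return t, i, j
        return None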

In equilibrium statistical mechanics specialised computer algorithms have made a spectacular impact. Techniques such as the Wolff algorithm, umbrella sampling, and many, many more have outstripped any speed-up from Moore's law by many orders of magnitude. I could go on about algorithms for hours (maybe there's a post brewing); instead I'll just make the point that it doesn't always pay to sit and wait for a faster computer.
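To give a flavour of the sort of trick involved: the Wolff algorithm grows and flips whole correlated clusters of spins in a single move, which largely removes the critical slowing down that cripples single-spin updates near the critical point. A minimal sketch for the 2D Ising model (again my own toy version):

    import numpy as np

    def wolff_step(spins, beta, rng):
        """One Wolff cluster update on a periodic 2D Ising lattice (J = 1)."""
        n = spins.shape[0]
        p_add = 1.0 - np.exp(-2.0 * beta)        # probability of adding an aligned neighbour
        i, j = rng.integers(n, size=2)           # random seed spin
        seed = spins[i, j]
        cluster = {(i, j)}
        stack = [(i, j)]
        while stack:
            x, y = stack.pop()
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = (x + dx) % n, (y + dy) % n
                if ((nx, ny) not in cluster and spins[nx, ny] == seed
                        and rng.random() < p_add):
                    cluster.add((nx, ny))
                    stack.append((nx, ny))
        for x, y in cluster:                     # flip the whole cluster at once
            spins[x, y] = -seed
        return len(cluster)

    # usage: rng = np.random.default_rng(1); spins = rng.choice([-1, 1], (64, 64))
    #        then call wolff_step(spins, beta=0.4407, rng=rng) repeatedly (beta_c of the 2D Ising model)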

We've come a long way

These early simulation studies weren't just important for developing methods, they were able to answer some serious questions that were hopelessly out of reach at the time. Since then simulation has firmly established itself in the dance between theory and experiment, testing ideas and generating new ones. And it shows no sign of giving up that position.

Friday 15 April 2011

Lipid membranes on the arXiv

A while ago I discussed lipid membranes and how they could exhibit critical behaviour. There were some lovely pictures of criticality in giant unilamellar vesicles (GUVs), which are a sort of model cell membrane. That work was done by Sarah Keller and friends in Seattle.

This morning on the arXiv I saw this new paper, also by Sarah:

Dynamic critical exponent in a 2D lipid membrane with conserved order parameter

They look at the critical dynamics of the GUV's surface. Being embedded in a 3D fluid does have its consequences, so they've attempted to account for the effect of hydrodynamic interactions. I haven't pored over their model but the paper looks really nice.

Wednesday 13 April 2011

Paper review: Hexatic phases in 2D

I'm doing my journal club on this paper by Etienne Bernard and Werner Krauth at ENS in Paris:

First-order liquid-hexatic phase transition in hard disks

So I thought that instead of making pen-and-paper notes I'd make them here so that you, my huge following, can join in. If you want we can do it proper journal club style in the comments. For now, here's my piece.

Phase transitions in 2D

Two dimensions is the lowest dimension in which we see phase transitions (at least for short-ranged interactions). In one dimension there just aren't enough connections between the different particles – or spins, or whatever we have – to build up the necessary correlations to beat temperature. In three dimensions there are loads of paths between A and B and the correlations really get going. We get crisp phase transitions and materials will readily gain long-range order. Interestingly, while it should be easier and easier to form crystals in higher dimensions, there do exist pesky glass transitions that get worse with increasing dimension. But I digress.

In two dimensions slightly strange things can happen. For one thing, while we can build nice crystals they are never quite as good as the ones you can get in 3D. What do I mean by this? Well, in 3D I can give you the position of one particle and the direction of the lattice vectors, and you can predict exactly where every particle in the box will sit (save a bit of thermal wiggling). In 2D we only get close: if I give you the position and lattice vectors then that defines the relative positions and orientations for a long way – but not everywhere.

By "a long way" I mean correlations decay algebraically (distance to the power something) rather than exponentially (something to the power distance), which would be short ranged. We can call it quasi-long ranged.
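In symbols (standard notation, nothing specific to this paper), a quasi-long-ranged correlation function decays as a power law, while a short-ranged one decays exponentially with some correlation length xi:

    c(r) \sim r^{-\eta}    \quad \text{(algebraic: quasi-long-ranged)}
    c(r) \sim e^{-r/\xi}   \quad \text{(exponential: short-ranged)}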

Nevertheless, this defines a solid phase, and this solid can melt into a liquid (no long-range order of any kind). What is very interesting in two dimensions is that this appears to happen in two stages. First the solid loses its positional order, then it loses its orientational order as well. This is vividly demonstrated in Fig. 3 of the paper. The phase in the middle, with quasi-long-range orientational order but short-range positional order, is known as the hexatic phase.

When the lattice is shifted a bit the orientation can be maintained but the positions become disordered.

Thursday 17 March 2011

Six degrees - the documentary you can't see

A while back the BBC put on a documentary about networks called "Six Degrees". Normally when you see a documentary about a field that you're vaguely related to, you feel a bit sick because they did it all wrong.

Well, I have worked in networks a bit and I thought Six Degrees was excellent. It struck a great balance between the historical study of networks and its own version of the Milgram experiment, which was mostly used as a plot device to keep the story driving forwards. The people involved (Watts, Strogatz, Barabási) were all very entertaining and successfully transmitted the excitement of scientific discovery.

Suffice to say it was great. I had planned to link to it and then discuss it a little bit. Annoyingly the BBC have switched off the iPlayer version of the programme and they now appear to have shut down the version at top documentaries.

I know the beeb don't want to give away content for free, but it strikes me that a resource this useful (I'd even recommend it to scientists new to the field) should be kept live. Instead it's buried away where it's now useless. Scientists are always being told about public engagement; well, unblock this film - engage!

I'm going to write to them and encourage them to set it free; then perhaps instead of a rant about the BBC we can talk about some science.

UPDATE:
As you can see from the comments, the BBC didn't make the film so they can't keep it online. I can't work out how to get a DVD yet, but when I find out I'll put up a link and then we can get on with talking about networks. In the meantime, the book "Small World" by Mark Buchanan is well worth a read.

Thursday 10 March 2011

Thoughts of a first-time peer reviewer

Most of my time is spent tirelessly chipping away at the scientific rock face, probably bogged down fixing a bug in my code or staring at some noisy looking data. Every now and then it all comes together and I want to tell people about it. So I write up my results as best I can, spend hours tinkering with figures, another few hours getting the fonts right on the axes, and after drafts and re-drafts, eventually I'll send it away to a journal to be published. This is where I become caught up in the process of peer review.

Usually it goes like this: The editor of the journal will check that the paper is basically interesting and then send it out to two reviewers who are chosen for their expertise in your area. These reviewers, or referees, will then read the paper, check it for basic errors and then comment on its originality and its pertinence to the field. This is sent back to the editor who will decide whether or not to publish. Usually the referees make you fix something, sometimes nothing, sometimes you can have a right old ding dong.

The main point is that the process is anonymous and behind closed doors. This is good and bad. Better blogs than this one discuss different options. It's not really my intention to criticise or support peer review, just to share my experiences.

Recently I was sent my first ever article to review. I can't say anything about the details, but it has been strange crossing over to the other side. I've had to ask questions that I have never thought much about before. So I wanted to put it down before I forget what all the fuss is about.

Reviewed

Up to now my only experience has been on the reviewed side of peer review. I've certainly had mixed experiences here. The first paper I had reviewed went through after lots of useful comments by the reviewers. It gave us more work but it made the paper better. Good experience. Another time a reviewer spotted a small error in our equations - also a good experience.

My worst experience involved two bad scenarios. Our first reviewer had not understood the paper, nor taken the time to follow the references that would have allowed him/her to do so. Instead of passing it on to someone more qualified, they just said it didn't make sense and was not interesting. The second referee had some interesting points but appeared to block it mainly on the basis that it didn't agree with other (presumably their own) results. As you can see, I'm still bitter about this paper! It took 18 months to eventually get it through, by which time it was thoroughly buried.

Of course I'm biased - our paper could have been crap. Either way, the experience was bad enough that I was close to leaving science because of it. Receiving sneering anonymous reviews is a crushing blow to your ego - even if they're right.

Reviewer

So now I've reviewed my first paper. I won't say what I did; most of the questions I found myself asking would apply to any paper.

I'm quite used to reading other people's work, occasionally making a scoffing remark, or more likely not fully understanding it. The prospect of checking a paper for errors and assessing its quality filled me with dread. The only way I could deal with it was telling myself that it doesn't matter if I don't understand absolutely everything. The main thing is to check that they haven't done anything completely stupid.

This part of peer review I think is not too bad. There is an element of trust that someone has collected their data properly, but checking that it's not completely upside down is not too difficult or controversial.

Where it starts to get subtle is questioning the interpretation. Pulling someone up on their conclusion requires quite a bit of guts. Or, I suppose, an over-inflated ego - of which there are many in science. This is related to another question: when should a scientific argument happen before publication and when should it happen afterwards? If the signal-to-noise ratio is to be kept reasonably high then some things will need to be filtered out before hitting public view. I have not worked out an answer to this.

The final problem I had was with the question, "Is this work of sufficient quality to be published in journal X?". Again this is really tricky; scientists can be real bitches when deciding what is or isn't interesting. On the other hand, some scientists try to get away with putting out any old crap just to lift their publication count. I found being asked to be the arbiter of quality quite stressful. Most results need to be on the scientific record somewhere, but should something be blocked for being too "incremental"? I suppose that's the journal's decision.

Is it worth it?

Apart from some initial stress I found the whole experience quite enjoyable. It makes you feel part of the scientific collective and it really tunes your critical skills. It will be interesting to see what becomes of peer review in the web 2.0 era; I would quite like to see it open up a little. I worry that unregulated, open, anonymous comments could be unhelpful. People are arseholes when they're anonymous - just ask a peer reviewer.

Tuesday 22 February 2011

New domain

I've taken the domain name kineticallyconstrained.com. For now don't change anything, as the exact address might move about a bit. I haven't quite worked out what subdomains to use, blah blah blah.

Eventually I plan to put some permanent content and develop the site a bit. For now, to be honest, I'm mostly testing that the RSS feed is still working!

Thursday 3 February 2011

Colloids are just right

All being well, it looks like I've secured employment for a tiny while longer. Hooray!

The place I'm moving to is a big place for synthetic colloids, so it seems like a good time to go through what I know about colloids. If nothing else it'll be interesting to compare this to what I'll know in a year's time! So, here is a theorist's perspective on colloid science.

I'll spare the usual introduction about how colloids are ubiquitous in nature; you can go to Wikipedia for that. The type of colloids I'm interested in here are synthetic colloids made in the lab. They're usually made from silica or PMMA (perspex), you can make a lot of them, they can be made so they're roughly the same size, and you'll have them floating around in a solution. By playing with the solution you can have them density matched (no gravity) or you can have them sinking/floating, depending on what you want to study.

The colloids that people make sit nicely in a sweet spot of size and density that makes them perfect for testing our fundamental understanding of why matter arranges itself in the way it does. Colloids can undergo most of the same phase transitions that we get in molecular systems, but here we can actually see them. Take for example this beautiful electron microscope image of a colloidal crystal from the Pine group at NYU.



1. They're big enough to image

Colloids are usually of the order of a micron across. At this size it is still possible to use confocal microscopy to image the particles. While nothing like the resolution of the electron microscope, the confocal can actually track the positions of individual particles in real time, in solution. It's almost like a simulation without the periodic boundary conditions! A confocal can take lots of 2D slices through the sample, such as the one below from the Weeks group. The scale bar is 5 microns.


If you do it quickly enough then you can keep track of a particle's movement before it loses its identity. The Weeks group did some very famous work visualising dynamic heterogeneity in liquids near the glass transition (see their Science paper if you can).

If we want to think about colloids as model atoms, which we do, then there's another property apart from just their size that we need to be able to control.

2. You can control their interactions

Being the size they are, if we didn't do anything to our colloids after making the spheres they would stick together quite strongly due to van der Waals forces - the attraction of any smooth surface to another, as exploited by clingfilm. To counteract this, the clever experimentalists are able to graft a layer of polymer onto the surface of the colloid.

It's like covering it with little hairs. When the hairs from two particles come into contact they repel, overcoming the van der Waals attraction. The particles are "stabilised". In this way it's possible to make colloids that interact pretty much like hard spheres. So not only can we use them as model atoms, we can use them to test theoretical models as well!

Further to this, the colloids can be charged, and by adding salt to the solvent one can control the screening length of the attraction or repulsion to other colloids. Finally there's the depletion interaction. I want to come back to this, so for now I'll just say that by adding coiled-up polymers into the soup we can create, and tightly control, attractions between the colloids. With this, experimentalists can tune their particles to create a zoo of different behaviours.
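As a cartoon of how these knobs combine, here's a sketch of a model pair potential: a hard core, a screened-Coulomb (Yukawa) repulsion whose range is set by the added salt, and an Asakura-Oosawa-style depletion attraction whose depth and range are set by the polymer concentration and size. The functional forms are the textbook ones, and every parameter below is made up purely for illustration:

    import numpy as np

    def pair_potential(r, sigma=1.0, eps_yukawa=5.0, kappa=10.0, phi_p=0.1, sigma_p=0.1):
        """Toy colloid pair potential in units of k_B T, with lengths in colloid diameters.

        r        centre-to-centre distance(s), r > 0
        kappa    inverse screening length (set by the salt concentration)
        phi_p    polymer volume fraction (sets the depletion strength)
        sigma_p  polymer diameter (sets the depletion range)
        """
        r = np.asarray(r, dtype=float)
        U = np.full_like(r, np.inf)                   # hard core for r < sigma
        outside = r >= sigma
        # screened electrostatic repulsion (Yukawa form)
        U_yuk = eps_yukawa * np.exp(-kappa * (r - sigma)) / (r / sigma)
        # Asakura-Oosawa depletion: -(polymer osmotic pressure) x overlap of exclusion zones
        R_eff = 0.5 * (sigma + sigma_p)               # radius of each exclusion sphere
        d = np.minimum(r, 2.0 * R_eff)
        V_ov = (4.0 * np.pi / 3.0) * R_eff**3 * (1.0 - 0.75 * d / R_eff + (d / R_eff)**3 / 16.0)
        n_p = 6.0 * phi_p / (np.pi * sigma_p**3)      # polymer number density
        U_dep = -n_p * V_ov * (r < 2.0 * R_eff)       # zero once the exclusion zones stop overlapping
        U[outside] = U_yuk[outside] + U_dep[outside]
        return U

    # e.g. pair_potential(np.linspace(1.0, 1.3, 100)) gives the potential from contact outwards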

3. They're thermal

If the colloids need to be big enough to image, why not make them bigger still? If we made them, say, 1 cm, then we could just sit and watch them, right? Well, not really. If you filled a bucket with ball bearings and solution, density matched them so they don't sink or float, and then waited, you'd be there a long time. The only way to move them in a realistic amount of time is to shake them - this is granular physics.

Granular physics is great but it's not what we're doing here. Real atoms are subject to random thermal motion and their configurations follow the Boltzmann distribution. For this to work with colloids they need to be sensitive to temperature.

When a colloid is immersed in a fluid it is subject to a number of forces. If it's moving then there will be viscous forces, and at an atomistic level it is constantly being bombarded by the molecules that comprise the fluid. In the interests of keeping this post to a respectable size I can't go through the details, but suffice it to say that this is an old problem in physics - Brownian motion.

Under Brownian motion the large particle will perform a random walk that is characterised by its diffusion constant. The bigger this number the quicker it moves around. A more intuitive number is the time it takes for a particle to move a distance of one particle diameter. When you solve the equation of motion for a large particle in a Stokesian fluid you find that this time is given by

τ ≈ a²/D = 3πηa³/(k_B T),

where η is the viscosity, a is the particle diameter, D is the Stokes-Einstein diffusion constant, and k_B T is Boltzmann's constant times the temperature. Now this does get more complicated in dense systems, and the properties of the fluid matter, but this is a good start. This could be a topic for another post.

For a typical colloidal particle, around a micron in size, you have to wait about a second for it to move its own diameter. For something only as big as a grain of sand you can be waiting days or a lot longer. Even by 10 microns it's getting a bit too slow. But close to 1 micron, not only does it move about in an acceptable time frame, but we can also easily track it with our confocal microscope. If it's diffusing around then we can hope that it will be properly sampling the Boltzmann distribution - or at the very least be heading there. So once again, that micron-sized sweet spot crops up.
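Plugging numbers into the expression above makes the sweet spot obvious. A quick back-of-the-envelope script, assuming the solvent is water; the exact prefactor depends on how you define the diffusion time, so treat these as order-of-magnitude estimates:

    import math

    k_B = 1.380649e-23    # Boltzmann's constant, J/K
    T = 293.0             # room temperature, K
    eta = 1.0e-3          # viscosity of water, Pa s

    def brownian_time(a):
        """Rough time (s) for a sphere of diameter a (m) to diffuse its own diameter."""
        D = k_B * T / (3.0 * math.pi * eta * a)   # Stokes-Einstein, written with the diameter
        return a**2 / D                           # = 3 pi eta a^3 / (k_B T)

    for a in (1e-6, 10e-6, 50e-6):                # a colloid, a big colloid, a fine grain of sand
        print(f"diameter {a * 1e6:5.0f} um  ->  {brownian_time(a):12.0f} s")
    # roughly 2 seconds, 40 minutes, and a few days respectively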

So what else?

Hopefully this serves as a good starting point to colloids. Obviously there's a lot more to it. An area that I'm very interested in at the moment is what happens when the colloids are not spheres but some other shape. I'll be posting more about it in the coming months.

If you don't remember anything else just remember that colloids are the perfect size to test statistical mechanics and to be visible.

So well done colloids, you're just the right size.

Wednesday 19 January 2011

It's been a while

But I'm planning a dramatic comeback - just as soon as I've sorted my next job!

I've got some more critical-scaling stuff in the pipeline, some nice crystallisation videos and it may be time for some chat about self-assembly seeing as I'm now officially a self-assembler (self-assemblist?).

So don't delete Kinetically Constrained from your RSS reader just yet!