Tuesday, 26 August 2014

Post post doc

Continuing what has become a quarterly posting habit, here is the summer's post:

One reason for the drop in blogging productivity is that I've been severely distracted by the thing that distracts many postdocs -- career. It's a pretty neat job being a postdoc. Like a PhD you've got time to dig deep into a topic, but you're more on top of your skills. It's flexible, it's a great way to get work abroad, you can wear a T-shirt to work, and so on. It's a nice job.

Eventually you do start to want a permanent position. Now in academia these are hard to get and there's very little geographic control. Many have written at length, and angrily, about this; that's not what I want to get into. It is what it is. From a practical point of view it is an excellent time to think about whether you really want that academic job -- do you want to stick or twist? In my case I've gone for twist.

This autumn I'll be making the move from statistical physics to statistical, er, statistics. My physics work was gradually slipping towards data science and I can't resist any longer (it's the sexiest job of the 21st century don't you know!). Actually I want to post about that physics/data work at some point but need to wait for reviews etc.

I'm really excited about my new position and all the new things I'll be learning. If possible I'd like to keep this blog going; I think there's a decent crossover with what I'll be doing next. The themes might switch more towards data than soft matter, but statistics are statistics. It'll be a little sad leaving physics, but I'll always see the world through a stat-mech lens, which is no bad thing for anyone in my opinion.

Thursday, 15 May 2014

Journals for e-readers

One thing that makes me cross is that, despite the terrifying amount of money our library pays to buy back our research in the form of journals, they're still not terribly easy to read. I've got an e-reader now and I'd like to read papers on that -- exactly the sort of value-adding the publishers could offer. Unfortunately everything is still just a pdf file designed only to be printed on A4.

There are some utilities for coping with this but it's not really ideal.

I wanted to see how tough it is, so I tried to convert my latest paper into something that would look nicer on an e-reader (in my case a Kindle). The paper was written for an APS journal using the REVTeX 4.1 package, which makes it very easy to produce papers formatted for APS (and possibly some other publishers as well). The result below was the best I could get using REVTeX:

which originally looked like this:
or here if you have access

It's actually not too bad! The abstract's gone a little wrong and the font is not strong enough, but it's not a disaster. It convinces me that the people who make REVTeX could quite easily add a beautiful e-reader mode full of useful options -- I made the citations clickable links, for example.

To get this working I simply replaced this line
\documentclass[aps, prl, twocolumn,superscriptaddress,amsmath,amssymb,floatfix]{revtex4-1}

with these lines
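In outline, the substitution was: switch REVTeX to onecolumn and shrink the page with the geometry package. Something like this (I've had to guess back the exact dimensions, so treat them as illustrative):

```latex
\documentclass[aps, prl, onecolumn, superscriptaddress, amsmath, amssymb, floatfix]{revtex4-1}
\usepackage[paperwidth=9cm, paperheight=12cm, margin=2mm]{geometry}
```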


Without the second one it doesn't seem to work very well. If you've got a better (and just as easy) method then leave a comment. I could have made it better by dropping REVTeX and customising every detail, but frankly that was proving a lot of work, and that's not what I believe LaTeX should be about.

From now on I'll be uploading an e-reader version as well as an A4 version for my papers.

The Royal Society of Chemistry template, with a bit of fiddling, looks fantastic on the Kindle:
I had to dig about a bit to get this one-column (there's a \twocolumn[ halfway down the page in the template that needs removing), and I fiddled with the margins and widths a lot. See the .tex file here. I might go back to the APS paper and reduce the paper size even more. This seems to work quite nicely:



I got the RSC send-to-kindle button working, but it just sends the two-column pdf. I guess the best hope is converting the html version then.

I think I've now found the best way: skip pdf altogether and go via html. Using htlatex I compiled the same original LaTeX file into this webpage:
It's a bit broken here and there (you must use revtex4 and not revtex4-1 -- one day I'll go back through and work out all the settings needed to properly convert a REVTeX-made file), but mostly it works.

The next step was to download the KindleGen utility and use that to convert the html to a .mobi file, which is what I then put onto my kindle. If you download this and put it onto your kindle you'll see that it is basically perfect for what you want from a Kindle version of a paper. For sure some things need fixing -- the equations are a bit small, for example. I'll work all of that out and make a separate post.
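The whole pipeline then boils down to two commands. Here's a little sketch of it in Python (my own wrapper; the tool names are as above but check each tool's documentation for flags):

```python
def conversion_steps(texfile="paper.tex"):
    """The two commands of the pipeline:
    htlatex (TeX4ht, LaTeX -> HTML), then KindleGen (HTML -> .mobi)."""
    htmlfile = texfile.rsplit(".", 1)[0] + ".html"
    return [
        ["htlatex", texfile],     # produces paper.html (plus css)
        ["kindlegen", htmlfile],  # produces paper.mobi
    ]

for cmd in conversion_steps():
    print(" ".join(cmd))
```

In practice you'd pass each command to your shell (or `subprocess.run`) with both tools on your PATH.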

The bottom line of this whole thing is that if I, a mostly lazy man, can get halfway-decent conversions of my LaTeX files onto an e-reader in an evening, then the journals could easily do this.

Friday, 14 June 2013

Networks of networks

This week the Bath centre for "Networks and Collective Behaviour", of which I'm a member but by no means an official spokesman, had its official launch event: a fun two-day meeting on "Uncertainty in interaction networks". For people such as myself who are into statistical mechanics and the collective behaviour of systems with many parts, networks are a natural extension to what we do. It's a wonderfully interdisciplinary field with lots of cool problems to solve. Besides, networks is where I started out.

So, due to all this, my first few posts since my return from paternity leave might be networks-based. One thing from this week's meeting that got me interested was people turning their attention to "multiplex networks". This is a phrase that can mean lots of things, so to be clear: we're talking about separate networks that interact (the interaction is key). Usually it is assumed that the same set of nodes carries different sets of edges, representing different relationships between the nodes.

Cartoon of a multiplex network with two different types of link

An example given at the meeting was the power grid, which interacts with the water supply network and other services. Another example could be your social network, where you distinguish between work colleagues and friends or family. Transport networks (for example bus and tube in London) clearly interact in their function even though they are separate networks.

Ginestra Bianconi from Queen Mary gave a nice talk in which she cited this PNAS paper on a network of computer gamers in a massively multiplayer online game. Data can be obtained for different types of interaction: friendship, trade and communication (positive), or enmity and attacks (negative). The authors claim that in order to properly understand the social structure one must consider all these different types of interaction. Among other things they found that positive relationships tended to form clustered networks with exponential degree distributions, whereas negative networks were unclustered with power-law degree distributions -- meaning a small number of players accumulate a large number of negative connections. Different players also played important roles in different networks. It certainly seems that with this extra layer of data one is in a better position to fully understand the system.
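As a toy illustration of the point, here's a minimal two-layer network in plain Python (the nodes and edges are invented for the example): the same node can be peripheral in one layer and a hub in another, which an aggregated single network would hide.

```python
from collections import defaultdict

# Hypothetical multiplex: the same players, two kinds of edge.
layers = {
    "friendship": [("a", "b"), ("b", "c"), ("c", "a"), ("d", "a")],
    "enmity":     [("a", "e"), ("b", "e"), ("c", "e"), ("d", "e")],
}

def degrees(edges):
    """Degree of each node within a single layer."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return dict(deg)

for name, edges in layers.items():
    print(name, degrees(edges))
# Node "e" is a hub in the enmity layer but absent from the friendship
# layer -- merging the layers into one network would hide that role.
```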

Ginestra's own work (in this EPL) concerned political affiliation networks. Imagine two networks representing social connections within two political parties. Nodes can be active in both networks, but closer to an election they will tend to become inactive in the network opposite to their current opinion. There is a stat-mech model in which "tend to" is represented by a field (call it temperature) that couples to how happy a node is (call it energy). I'm going to come back to this theme.

I think multiplex networks are going to be really fun to work with. Seeing as networks are a useful simplification of reality multiplex networks seem to represent the next level of detail.

Wednesday, 30 January 2013

Back in the summer

I realise things have gone pretty quiet around here. It turns out having a baby takes away pretty much any spare time you thought you had.

I still have big plans for this blog and as the number of times I'm wide awake at 3am decreases I can see the light at the end of the sleep deprived tunnel. So I'm planning to have a quiet relaunch in the summer when I'll hopefully get some regularity back.

See you soon.

Monday, 7 May 2012

Journal Club: A New Blog

I've just started a new blog www.scmjournalclub.org. It's definitely in what you'd call the beta phase right now. I will certainly be changing the layout and gradually adding more permanent content over the next few weeks.

Contributions welcome to submissions@scmjournalclub.org.

While I do sometimes get a bit technical, Kinetically Constrained is hopefully of interest to people inside and outside the field. The journal club, by contrast, is aimed at people working in the area of soft matter and statistical mechanics. In particular I want it to be useful for postgraduate students, helping them understand papers they might otherwise find a bit impenetrable.

How it will all work will hopefully evolve. I hope one day enough people check it out that the following scenario happens. A PG student presents, as best they can, a paper they might be having trouble understanding. A comment thread follows and the problems get sorted out. Everyone wins.

I also suspect that hundreds of journal clubs happen each week in different universities. While I understand people might not want theirs to be public, those that don't mind could put their presentations on SCM journal club, where they can benefit even more people.

To kick things off I've started with a recent paper on the arXiv by Andrés Santos on one of my favourite topics – hard spheres.
Brief Summary
In liquid-state theory the hard sphere equation of state is of particular importance because it is a fantastic reference system for a whole host of molecular, and in particular colloidal, liquids. The hard sphere equation of state (EoS) tells you what pressure you need to compress your spheres to a given density. With an analytical form for the EoS one can calculate any thermodynamic property one desires.
Percus-Yevick (PY) is a way to close the Ornstein-Zernike (OZ) equation – an exact relation between correlation functions – and is usually solved via either the compressibility route or the virial route; you're basically choosing how your approximation enters. Here Santos has taken a different route, following the chemical potential, and it gives a slightly different closure to OZ.
Carnahan-Starling is an incredibly simple EoS for hard spheres that is in common use (for the fluid phase). It can be written as a mix of one-third the virial and two-thirds the compressibility PY routes. In a similar way Santos writes a 2/5-3/5 mix of the compressibility and chemical potential routes and gets a similarly simple expression – which is ever-so-slightly better than Carnahan-Starling.
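To make that concrete, here's a quick numerical check (my own sketch, using the standard PY and Carnahan-Starling expressions, with $latex \eta$ the packing fraction) that the one-third virial, two-thirds compressibility mix reproduces Carnahan-Starling exactly:

```python
def Z_py_virial(eta):
    """PY compressibility factor Z = p/(rho kT), virial route."""
    return (1 + 2*eta + 3*eta**2) / (1 - eta)**2

def Z_py_comp(eta):
    """PY compressibility factor, compressibility route."""
    return (1 + eta + eta**2) / (1 - eta)**3

def Z_cs(eta):
    """Carnahan-Starling equation of state."""
    return (1 + eta + eta**2 - eta**3) / (1 - eta)**3

eta = 0.3
mix = Z_py_virial(eta) / 3 + 2 * Z_py_comp(eta) / 3
print(mix, Z_cs(eta))  # the two agree
```

Santos's 2/5-3/5 mix could be checked the same way, once the chemical-potential-route expression is written down.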
I'm more than happy to take contributions. I think it's nicer if people say who they are but I'll hold back the name if that's the barrier to submitting (provided it's not an anonymous destruction of a rival's paper). You can submit via submissions@scmjournalclub.org. For interested regulars I can look into direct posting via blogger.

Wednesday, 25 April 2012

The Renormalisation Group

A new video which more or less completes the critical phenomena series. Jump straight to it if you want to skip the background.

One of my favourite topics is the critical point. I've posted many times on it, so to keep this short you can go back here for a summary. In brief, we're looking at a small point on the phase diagram where two phases begin to look the same. The correlation length diverges and all hell breaks loose. Well, lots of things diverge. At the critical point all length scales are equivalent and, perhaps most remarkably, microscopic details become almost irrelevant. Different materials fit into a small number of universality classes that share broad properties such as symmetry or dimensionality.

For a long time this universal nature was known about, but it couldn't be said for sure whether it was truly universal or just a really good approximation. Then along came the renormalisation group (RG), which gave a strong theoretical basis to critical phenomena and sorted everything out.

The renormalisation group usually comes at the back end of an advanced statistical mechanics course, and that is not the level I'm going for with this blog. However, when making the videos demonstrating scale invariance and universality it became apparent that, even just making the pictures, I had to use RG -- even if I didn't realise it.

First I'll try to explain schematically what RG aims to do. Then I'll show how this is similar to how I make my pictures, and finally we'll get to a demonstration of RG flow at the end. I'll try not to dumb it down too much but I also want to be as quick as possible.

Renormalisation group

Let's look at how we do this with the Ising model: a simple model for a magnet where spins, $latex \sigma$, (magnetic dipoles) can point up or down, $latex \sigma=\pm 1$, and like to align with their neighbours through a coupling constant, $latex J$. The energy is a sum over nearest-neighbour pairs

$latex \displaystyle E=-J\sum_{\langle ij\rangle} \sigma_i \sigma_j$

Where RG enters is to say that, if the physics is the same on all length scales, then we should be able to rescale our problem, to cast it on a different length scale, and get back the same thing. In real-space RG this is done by blocking. We bunch a group of our spins together and form a new super-spin that takes on the majority value of its constituents. It's as though the spins in the block get together and vote on how they want to be represented, and then we can deal with them as one.

Here's what it looks like. Take an Ising model configuration
Block them together

And vote
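The blocking-and-voting step is easy to sketch in code. Here's a minimal pure-Python version (my own illustration, not the exact script used for the pictures):

```python
def block_spins(spins, b=3):
    """Coarse-grain an L x L grid of +/-1 spins into (L//b) x (L//b)
    super-spins by majority vote within each b x b block."""
    L = len(spins)
    blocked = []
    for bi in range(L // b):
        row = []
        for bj in range(L // b):
            total = sum(spins[bi*b + i][bj*b + j]
                        for i in range(b) for j in range(b))
            row.append(1 if total > 0 else -1)  # odd b*b means no ties
        blocked.append(row)
    return blocked

# A 3x3 block with five up-spins votes "up":
config = [[1, 1, -1],
          [1, -1, -1],
          [1, 1, -1]]
print(block_spins(config, b=3))  # [[1]]
```

For odd $latex b$ the vote can never tie; a real implementation needs a tie-breaking rule for even blocks.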

We're left with a pixelated version of the first picture. Now here I will deviate slightly from standard RG. The next step is to ask: if these super-spins were a standalone Ising model, what temperature would they have? If our initial system is right on the critical point then the renormalised (blocked) system should have the same temperature, because it should look exactly the same – scale invariance. If you're even slightly off then the apparent temperature, let's call it $latex T_{RG}$, will flow away from the critical point towards a fixed point.

These fixed points are the ferromagnet (all spins the same, $latex T_{RG}=0$) or the paramagnet (completely random, $latex T_{RG} \rightarrow \infty$) as shown below.

Normally RG is done in terms of coupling constants rather than temperature. However, I think in our case temperature is more intuitive.

Zooming out

By now the link between RG and the pictures I make may already be clear. The configurations I will show below are made of something like $latex 10^{10}$ spins. Clearly I can't make a 10-gigapixel jpeg, so I have to compress the data. In fact the way I do it is an almost identical blocking process: spins are bundled into $latex b \times b$ blocks and I use a contrasting function (a fairly sharp tanh) that is not far at all from the majority rule described above.

If we start by zooming in to a 768x768 subsection then each pixel is precisely one spin. As we zoom out we eventually need to start blocking spins together. In the video below there are three systems: one ever-so-slightly below $latex T_c$, one ever-so-slightly above $latex T_c$ and one right on the money. At maximum zoom they all look pretty much the same. If you had to guess their temperatures you'd say they're all critical.

As we start to zoom out we can see structure on longer length scales, and the apparent temperatures start to change -- in fact they flow towards the fixed-point phases. Video below; I recommend you switch on HD and watch it full screen.

So there it is. RG in action. If you're not precisely on the critical point then you will eventually find a length scale where you clearly have a ferromagnet or a paramagnet. At the critical point itself you can zoom out forever and it will always look the same. The renormalisation group is a really difficult subject, but I hope this visualisation can at least give a feeling for what's going on, even if the mathematical detail is a bit more challenging.

Wednesday, 18 April 2012

The thermodynamic limit

This post has been at the back of my mind for a while and was written in small, most likely disjoint, pieces. I wanted to think about connecting some of the more formal side of statistical mechanics to our everyday intuitions. It's probably a bit half baked, but this is a blog not a journal, so I'll just write a follow-up if I think of anything.

I'm often accused of living in a rather idealised world called the thermodynamic limit.

This is completely true.

To see why this is a good thing or a bad thing I should probably say something about what I think it is. I'll start at the colloquial end and work up. First, let's say that in the thermodynamic limit everything is in equilibrium.

Nothing ever changes around here

If you put enough stuff in a jar, keep it sealed in a room that stays the same temperature, and give it enough time, then it will eventually end up in its equilibrium state. One could argue that the real equilibrium is the grey mush at the end of the universe, so clearly I'm going for some time scale that's enough to let everything in the jar settle down but not so much that I get bored waiting for it. For atoms and molecules this usually gives us a window between roughly a picosecond (10^-12 seconds) and let's say 100 seconds (I get bored pretty easily). Once it is in equilibrium the contents of the jar will stay in the same state forever – or until it gets kicked over. The point is that in equilibrium nothing changes.

Or does it? To our eyes we may see no change, but the atoms inside the jar will be wriggling furiously, perhaps even diffusing over great distances. How could such great change on the small scale be consistent with eternal boredom on the macroscopic length scale? The answer has two parts. Firstly, the atoms that make up the world are all frighteningly similar. So if one diffuses away it will quickly be replaced by an indistinguishable substitute. The second part motivates the "enough stuff" part of the previous paragraph.

Listen to a group of people talking and the conversation will ebb and flow, and sometimes go completely quiet. Sit in a busy cafe and all you can hear is general noise. A sort of hubbub that you can easily identify as conversation, maybe you can even get a feel for the mood, but you can't tell what anyone is saying. In the thermodynamic limit there are so many atoms that all we can see is a sort of average behaviour. We can tell what sort of state it is (a liquid, a solid, a magnet – the mood) but the individuals are lost.

So as we lumber towards a stricter definition of the thermodynamic limit we should think about what we mean by a state. I've talked about this before. In statistical mechanics there is a huge difference between a 'state' and a 'configuration'. By configuration we mean the exact position (and sometimes velocity) of every particle in the jar. We're doing this classically so we won't worry about uncertainty. A state, in the stat-mech sense, is an ensemble of configurations that share some macroscopic property. For example their density, or magnetisation, or crystal structure.

To be the equilibrium state, the corresponding configurations must satisfy at least one of two criteria (ideally both). Firstly, they should have a low energy compared to the other configurations. If particles attract, they should be close; if dipoles like to point the same way, they should try to do that. This is intuitive: balls roll down hill, systems like to lower their potential energy. Secondly, there should be a lot of them. An awful lot of them. This is often referred to as entropy, but really I'm just saying you need to buy a lot of tickets to guarantee winning a prize.

A bit more mathematical

This combination of potential energy, U, and entropy, S, is known as the free energy. You can write it down as:

$latex F = U - TS$

High temperatures, T, favour high entropy (lots of configurations); low temperatures favour low energy. In statistical mechanics, unlike normal mechanics, systems lower their free energy and not just their energy. The state with the lowest free energy is the equilibrium state. No exception.

The aim with statistical mechanics is to write down equations that take interactions on the individual particle level and relate this to the probability of finding the particles in a particular configuration. In the mathematical sense the final step is known as "taking the thermodynamic limit", and this means taking the number of particles in your equation, N, to infinity.

It is these infinities that make states formally stable, and give us phase transitions. Infinitesimal changes in conditions, such as temperature, can lead to dramatic changes in the equilibrium state. Of course there are not infinitely many particles in the real world. However, with roughly 10^24 water molecules in my cup of tea, it's a pretty good approximation.
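To get a feel for why $latex 10^{24}$ is "close enough" to infinity, here's a toy sketch (my own illustration, using independent $latex \pm 1$ spins rather than a real interacting model): the fluctuations of an average over $latex N$ parts shrink like $latex 1/\sqrt{N}$.

```python
import random

random.seed(1)

def relative_fluctuation(n, trials=400):
    """Standard deviation of the mean of n independent +/-1 'spins' --
    a toy stand-in for magnetisation fluctuations in a system of n parts."""
    means = [sum(random.choice((-1, 1)) for _ in range(n)) / n
             for _ in range(trials)]
    mu = sum(means) / trials
    return (sum((m - mu) ** 2 for m in means) / trials) ** 0.5

# Fluctuations shrink roughly like 1/sqrt(n):
for n in (100, 1600):
    print(n, round(relative_fluctuation(n), 3))
```

With interacting particles the prefactor changes (and grows large near a critical point), but this shrinking of fluctuations is why, at macroscopic N, all we can see is the average behaviour.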

To be in the thermodynamic limit, therefore, we need an infinite amount of stuff sitting around for an infinite amount of time. The system must be able to explore all configurations to decide which state to settle on. You can see where we're going to run into problems.

Back to the real world

Getting back to the start of this post, why are my accusers being so accusatory? Most likely because the real world, for the most part, is massively out of equilibrium. From stars and galaxies, down to swimming bacteria. Then there are materials, such as glasses, where the relaxation time has become so long that the equilibrium state can't be reached in times longer than the age of the universe. Or some say forever – but I'll come back to ergodicity at a later date.

In colloid land things get quite interesting. As mentioned in a previous post, colloids that are big enough to easily see take about a second to move around enough to start equilibrating. That's very close to me getting bored, so if it's a dense system or there are strong attractions one can expect colloids to quickly fall out of equilibrium.

The theoretical framework for life out of equilibrium is hugely more complicated than at equilibrium. Even quantities such as temperature start to lose their meaning in the strictest sense. In fact, while people are working hard and no doubt making progress, it's safe to say that it will never be as elegant – or let's say as easy – as what we have in the thermodynamic limit.

All is not lost

So does this mean everything we study in equilibrium is useless, given that it clearly doesn't exist? Well, it's true that nothing in the universe meets the strict definition of infinite time and infinite stuff, but in practice it's usually alright to have a lot of stuff and enough time. In fact we regularly study systems with only hundreds of particles and correctly predict the phase behaviour. It's usually the enough-time part that is the problem.

Knowing what the equilibrium state should be is a bit like knowing the destination but not the journey. In many many cases this is enough, atoms can rearrange themselves so quickly that it doesn't really matter how they get where they're going. Of course in many cases that we worry about today we need to know both where the system is going, and how it will get there. It could be that on the way to the true equilibrium state we get stuck in a state with low, but not the lowest, free energy. A bit like settling on your favourite restaurant before going to the end of the street and trying them all. In this case we can maybe plot a different route through the phase diagram with controls such as pressure and temperature.

Increasingly these pathways to self-assembly are the focus for many in the statistical mechanics community. We want to design new materials with exotic thermodynamic ground states (equilibrium states), so it is really important to know what will happen in the thermodynamic limit – we will always need phase diagrams. But colloids are pretty impatient and will easily settle for the wrong state, so we also need to think carefully about how we will get to the ground state. It's an exciting time right now because experimentally we're even able to mess around with the fundamental interactions between particles in real time; numbers that we usually take as constants can suddenly be changed. It really is possible to control every stage of the assembly process from the start all the way to the end.