If this is true:
Blogger Buzz: New Comments System on Blogger
Then it will halt my planned relocation to Wordpress. It's just in the nick of time. It's not enabled for me yet but as soon as it is I'll be switching off the moderation in favour of a spam filter.
UPDATE: In true Blogger fashion they've managed to arse this up. The spam filter only comes on when you enable full comment moderation. The whole point of a spam filter is that I don't want to moderate comments! I want them to go straight on the blog unless they look like obvious spam.
I'm beginning to wonder if any of these people actually run their own blog. Wordpress migration preparation continues...
UPDATE Nov 2011: As far as I'm concerned comment moderation is fixed; Blogger has been humming along nicely for me for a while now.
Sunday, 25 July 2010
Statistical mechanics of tetris
I'm finding that I'm becoming increasingly fascinated by shape. It seems such a simple thing yet scratch the surface only a little and the complexity comes pouring out. Take simple tiling problems; I can tile my floor with squares or regular hexagons, but not regular octagons - they'll always leave annoying gaps. From a statistical mechanics point of view those gaps are very important, little sources of entropy that you can't get rid of. In three dimensions understanding the packing of tetrahedra has proved no simple task. But that's a story for another day.
So it came as no surprise that I was very taken with Lev Gelb's talk on polyominoes at the Brno conference. Polyominoes are connected shapes on a two dimensional lattice. A monomino is a square, a domino you know. Tetrominoes are made of four squares and are exactly like the pieces from Tetris. Assuming that they're stuck in the plane (so you can't flip them over) there are 7 tetrominoes.
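As a sanity check on that count, here's a quick Python sketch (my own throwaway code, not Lev's) that grows polyominoes one square at a time and counts the distinct shapes, treating rotations of a shape as the same piece but reflections as different:
def canonical(cells):
    # Pick the lexicographically smallest of the four rotations,
    # each translated so its bottom-left corner sits at the origin
    best = None
    pts = list(cells)
    for _ in range(4):
        pts = [(y, -x) for (x, y) in pts]   # rotate by 90 degrees
        mx = min(x for x, _ in pts)
        my = min(y for _, y in pts)
        form = tuple(sorted((x - mx, y - my) for x, y in pts))
        if best is None or form < best:
            best = form
    return best

def polyominoes(n):
    # Every n-omino can be built by adding one square to an (n-1)-omino
    shapes = {((0, 0),)}
    for _ in range(n - 1):
        bigger = set()
        for shape in shapes:
            for (x, y) in shape:
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    cell = (x + dx, y + dy)
                    if cell not in shape:
                        bigger.add(canonical(shape + (cell,)))
        shapes = bigger
    return shapes

print(len(polyominoes(4)))   # 7 - the Tetris pieces
Allow flips as well and the count drops to 5; distinguish every orientation and it rises to 19.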
Labels:
stat-mech
Thursday, 1 July 2010
Tree diagrams solve everything
Just a quick one. I saw this post, When intuition and math probably look wrong, via Ben Goldacre's mini blog. The problem is set as follows:
I have two children, one of whom is a son born on a Tuesday. What is the probability that I have two boys?
Intuition tells you the answer is 1/2, mathematicians tell you it's something else. I'll leave the answer until the end of the post in case you want to run off and solve it first. It's essentially a fancier version of the Monty Hall problem.
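If drawing the full tree by hand feels laborious, you can get a computer to count its leaves instead. Here's a quick enumeration sketch (my own throwaway code, and a spoiler if you run it): each child is a (sex, weekday) pair, with all 14 combinations equally likely.
from itertools import product
from fractions import Fraction

# Every family is an ordered pair of children; each child is (sex, weekday)
kids = list(product("BG", range(7)))
families = list(product(kids, kids))

# Keep only the families consistent with the statement:
# at least one child is a boy born on a Tuesday (call Tuesday day 1)
tuesday_boy = ("B", 1)
consistent = [f for f in families if tuesday_boy in f]
both_boys = [f for f in consistent if f[0][0] == "B" and f[1][0] == "B"]

print(Fraction(len(both_boys), len(consistent)))
Counting leaves like this is exactly what the tree diagram does, just without the drawing.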
Labels:
probability
Tuesday, 15 June 2010
Comment spam
I'm getting killed by comment spam at the moment. You'd really think Google would be better at stopping it, but I don't want to get into that. Anyway, given I don't have much internet access at the moment I can't mop up the spam quickly enough, so for a little while I'm going to have either moderation or require OpenID sign-in. It's a shame because I really want to keep comments open.
Update: login didn't work so for now it's bloody moderation.
Update 2: Constantly catching comment spam. Wordpress has a facility to block comments with more than one hyperlink. I'm seriously looking at moving the site over.
Labels:
announcements
My iPhone is an evil vindictive bastard
Yes I know you're supposed to say phone and not iPhone, but I think it's relevant here.
Being a clever smarty pants, when I landed in the Czech Republic I switched my phone to Prague time and it took care of everything. Oo, isn't it clever! No. For some bizarre reason, overnight last night it decided that I couldn't possibly still be in the Czech Republic and that I must be back in the UK. So this morning I managed to miss the one talk I really wanted to see (Sharon Glotzer's tetrahedra talk) because I was on bloody London time.
So you can keep your retina display and your megapixels, Mr Jobs - get the bloody time right! Just about ready to throw this thing at the wall. Oh, but it is so pretty...
Labels:
technology
Monday, 14 June 2010
Brno - best poster spot ever
It's 10pm and I'm writing this during a talk on modelling water. In case you didn't know, water is about as far from a simple liquid as you can get. It's very interesting, although from the baffling number of water models you'd think it would have been solved by now. But they do keep going. And going...
Anyway, I'll talk more about the science later; for now, a couple of thoughts from today. First, and most certainly without naming names, there is a huge gulf in standard between the good talks and the bad talks. Experience seems to be a factor: the two best talks, by a long way, were from the invited speakers. The best introduced me to a new subject, coarse graining, and I felt like I had a good idea how it all worked by the time the speaker finished. The worst talks lost my attention within a minute.
It surprised me that people can do this for years and still be no good at it; I guess they don't care. But being able to tell people what you're doing is so, so important! Not to be too negative though - the good ones were good, and they really make it worth being here.
Second thought: this poster fiasco is getting ridiculous. I was lucky enough to get the board at the back of the room, facing a wall two feet away! Not exactly a prime spot, as you can see from the photos. At A0, I doubt anyone will be able to focus on it from that distance! C'est la vie.
I'll let you know how many punters I get.
Labels:
communication
Thursday, 10 June 2010
Conferences
Seems like it's been a very long time since I've posted anything. This is mostly because things have been a bit of a blur recently, preparing a paper, a talk and a poster for some upcoming conferences. As soon as conference season is over I'll be back on the regular posts.
The science poster is a bizarre and demoralising ritual. You know that hardly anyone will see it (at one of my conferences there will be something like 500 posters), but you daren't not do it properly just in case. So you spend days putting this thing together, £30 getting it printed, only to have hundreds of people walk straight past it. Who knows, maybe you can catch one or two people who will write down a reference.
Anyway, this is what I'm going to be standing in front of next week, I think it's quite pretty:
It occurred to me that I haven't really blogged about my own work (is it not done?) but I'll start doing so when I return. In the meantime, it's all on the poster!
On the off chance anyone is going to Brno next week then come and find me and say hello.
Labels:
announcements,
communication
Sunday, 25 April 2010
Using a blog as a Logbook
Last productivity post for a while I promise, then back to proper physics.
I'm trying out using a private blog (secured and unlisted etc) as a logbook. Logbooks are so important, the first time you realise you need one it's too late. My reasoning for going online goes:
- I can access it from anywhere, including emailing in posts from my phone. For example, I could take a photo of a whiteboard discussion and send it to the blog so I won't forget about it.
- It's safely backed up on the servers of whoever's hosting it.
- I'm more likely to actually make entries because it's easy.
- A blog is much like a logbook anyway so it's naturally suited.
I've chosen to use WordPress for my logbook for one simple reason: the LaTeX integration is fantastic. It's so good I'd consider moving this blog if it weren't such a pain (for the record, I otherwise like Blogger). In WordPress you do this:
I will now insert an equation here, $latex E=mc^2$, inline with the text.
which renders the equation as an inline image - although the superscripts do appear to have messed up the alignment... Otherwise it does a brilliant job of interpreting the TeX and inserting the image. If you need a lot of LaTeX then there are programmes that convert between regular .tex files and the WordPress format.
There are similar things available for Blogger, but I think you lose your source code in a more drastic way. Anyway, I'm going to see how it goes.
Labels:
software
Wednesday, 21 April 2010
Pipes and Python
I spent ages writing a post about some tricks I use to do quick analysis of data but it got incredibly bloated and started waffling about work flows and so on. Anyway, I woke up from that nightmare so I thought I'd just bash out a couple of my top tips.
This is a pretty nerdy post, you may want to back away slowly.
Pipes
Pipes are, in my opinion, why the command line will reign for many years to come. Using the pipe I can quickly process my data by passing it between different programmes gradually refining it as it goes. Here's an example that makes a histogram (from a Bash terminal):
> cat myfile.data | awk 'NR>100 {print $5}' | histogram | xmgrace -pipe
The first command prints the data file. The | is the pipe: it redirects the output to the next programme, Awk, which here we're simply using to pick out the 5th column of every row after the 100th and print the result. Our pruned data is piped down the line to a programme I wrote called histogram, which does the histogram and sends the final result to my favourite plotting programme so I can have a look at it.
So we've used three programmes with a single "one liner" (some of my one-liners become ginormous). Once you start getting the hang of this sort of daisy chaining it can speed things up incredibly. The one bit that took me a while the first time was the histogram programme itself, which took an annoying amount of time to set up because I wrote it in C.
This is where Python now comes in.
Python
I won't even try to give a Python tutorial. I'm a decade late to the party and have barely scratched the surface. However, I've found that for relatively little effort you can get access to thousands of functions, libraries and even graphics. Most importantly you can quickly write a programme, pipe in some data, and do sophisticated analysis on it.
With the scipy and numpy libraries I've done root-finding and integration. The pylab module seems to provide many of the functions you'd get in MatLab. Python is a bit of a missing link for me, it's much lighter than huge programmes like Mathematica or MatLab and I just get things done quickly. Here's that histogram programme, Python style.
#! /usr/bin/env python
import sys
import pylab
import numpy

# Check the inputs from the command line
if len(sys.argv) != 3:
    print "Must provide file name and number of bins"
    sys.exit(1)

# Read in the data file
f = open(sys.argv[1], 'r')
histo = []
for line in f.readlines():
    histo.append(map(float, line.split()))

dimension = len(histo[0])

if dimension == 1:
    pylab.hist(histo, bins=int(sys.argv[2]))
    pylab.xlabel("x")
    pylab.ylabel("N(x)")
    pylab.show()
elif dimension == 2:
    # Need to chop up the histo list into two 1D lists
    x = []
    y = []
    for val in histo:
        x.append(val[0])
        y.append(val[1])
    # This function is apparently straight out of MatLab
    # I killed most of the options
    pylab.hexbin(x, y, gridsize=int(sys.argv[2]))
    pylab.show()
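For the record, a hypothetical invocation (assuming the script above is saved as an executable called histogram.py) would be:
> ./histogram.py myfile.data 50
with the data file as the first argument and the number of bins as the second. Note that, as written, it reads a named file rather than standard input, so to drop it into a pipeline like the one above you'd either point it at a temporary file or tweak it to read sys.stdin when no file name is given.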
The programme conveniently detects how many dimensions we're histogramming in, so you don't need two programmes. And it's pretty short for a programme that does what it does.
I hate wasting my time trying to do something that my brain imagined hours ago. I wouldn't say that these techniques are super easy, but once you've learned the tools they are quick to reuse. I'd say they're as important to my work now as knowing C. Got any good tricks? Leave a comment.
Something less nerdy next week I promise.
Labels:
software
Wednesday, 7 April 2010
Bootstrapping: errors for dummies
The trouble with science is that you need to do things properly. I'm working on a paper at the moment where we measured some phase diagrams. We've known what the results are for ages now, but because we have to do it properly we have to quantify how certain we are. Yes, that's right. ERRORS!
I've come a long way with statistics, I've learned to love them, but I defy anyone to truly love errors. However, I took a step closer this month after discovering bootstrapping. It's a name that has long confused me - I seem to see it everywhere. It comes from the phrase "to pull yourself up by your bootstraps". My old friend says it's "a self-sustaining process that proceeds without external help". We'll see why that's relevant in a moment.
Doing errors "properly"
Calculating errors properly is often a daunting task. You can spend thousands on the software, many people make careers out of it. This will often involve creating a statistical model and all sorts of clever stuff. I really don't have much of a clue about this and, to be honest, I just want a reasonable error bar that doesn't undersell, or oversell, my data. Also, in my case, I have to do quite a bit of arithmetic gymnastics to convert my raw data into a final number so knowing where to start with models is beyond me.
Bootstrapping
I think this is best introduced with an example. Suppose we have measured the heights of ten women and we want to make an estimate of the average height of the population. For the sake of argument our numbers are:
135.8, 145.0, 160.2, 160.9, 145.6, 156.3, 170.5, 192.7, 174.3, 138.2 (all in cm)
The mean is 157.95cm, the standard deviation is 16.88cm. Suppose we don't have anything except these numbers. We don't necessarily want to assume a particular model (Normal distribution in this case), we just want to do the best with what we have.
The key step with bootstrapping is to make a new "fake" data set by randomly selecting from the original (allowing duplicates). If the measurements are all independent and randomly distributed etc, then the fake data set can be thought of as an alternate version of the data. It is a data set that you could have taken the first time if you'd happened to get a different sample of people. Each fake set is thought equally likely. So let's make a fake set:
156.3, 192.7, 160.9, 135.8, 135.8, 156.3, 156.3, 170.5, 156.3, 192.7
Mean = 161.36cm, standard deviation = 18.59cm.
As you can see, there's quite a bit of replication of data. For larger sets it doesn't look quite so weird: on average each fake set keeps about 63% of the original data points (1 - 1/e, for large samples) and the rest are replicated. Now let's do this again lots and lots of times (say 10,000), using a different fake data set each time, generating different means and standard deviations. We can make a histogram of the results.
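For the curious, here's a minimal sketch of that resampling loop (my own code, assuming numpy is available; the variable names are mine):
import numpy as np

heights = np.array([135.8, 145.0, 160.2, 160.9, 145.6,
                    156.3, 170.5, 192.7, 174.3, 138.2])

rng = np.random.default_rng()
n_fakes = 10000
means = np.empty(n_fakes)
stds = np.empty(n_fakes)
for i in range(n_fakes):
    # A "fake" data set: sample with replacement, same size as the original
    fake = rng.choice(heights, size=heights.size, replace=True)
    means[i] = fake.mean()
    stds[i] = fake.std(ddof=1)

# 67% confidence interval on the mean: the central 67% of the bootstrap means
low, high = np.percentile(means, [16.5, 83.5])
print("mean %.1f cm, interval %.1f to %.1f cm" % (heights.mean(), low, high))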
From this distribution we can estimate the error on the mean to whatever confidence interval we like. If it's 67% (+/- sigma) then we can say that the error on the mean is +/-5.2cm. Incidentally that's nearly what we'd get if we'd assumed a normal distribution and done 16.88/sqrt(10). Strangely the mean of the means is not 157.95 as the input data was, but 160.2. This is interesting because I drew the example data from a normal distribution centred at 160cm.
We can also plot the bootstrapped standard deviation.
What's interesting about this is that the average is std = 15.2cm, whereas the actual standard deviation that I used for the data was 19.5cm. I guess this is an artefact of the tiny data set. That said, 19.5 looks within "error".
So, without making any assumptions about the model we've got a way of getting an uncertainty in measurements where all we have is the raw data. This is where the term bootstrap comes in; the error calculation was a completely internal process. If it all seems a bit too good to be true then you're not alone. It took statisticians a while to accept bootstrapping and I'm sure it's not always appropriate. For me it's all I've got and it's relatively easy.
To make these figures I used a python code that you can get here. Data here.
Update: It's been pointed out to me that working out the error on the standard deviation is a bit dodgy. I think that the distribution is interesting - "what standard deviations could I have measured in a sample of 10?" - but perhaps one should be a little careful extrapolating to the population values. Like I said, I'm not a statistician!
Labels:
probability,
statistics
Wednesday, 24 March 2010
Even colder still
In a previous post I talked about how you can use a laser to cool atoms: by tuning the laser to just below the energy of an atomic transition you can selectively kick atoms that are moving towards the laser. If you fire in six lasers (one for each face of an imaginary cube) you can selectively kick any atom that is trying to leave the centre. So we've made a trap!
There is a hitch, unfortunately: there is a minimum to which one can cool the atoms. Once an atom's energy is comparable to the kick it gets from a single photon, that's about as low as it can go - after all, there's only so much you can cool something by kicking it. We're already pretty cold - around 100 micro Kelvin - but we'd like to go a bit colder if we can. Now we're into magnetic traps.
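To put a number on "only so much", here's a back-of-envelope sketch (my assumed numbers: a sodium atom and its 589 nm line) of the kick a single photon delivers:
# Recoil from a single 589 nm photon hitting a sodium atom (assumed numbers)
h = 6.626e-34         # Planck constant, J s
k_B = 1.381e-23       # Boltzmann constant, J/K
m = 23 * 1.661e-27    # mass of a sodium atom, kg
lam = 589e-9          # transition wavelength, m

v_recoil = h / (lam * m)           # velocity change from one photon's momentum
T_recoil = m * v_recoil**2 / k_B   # the temperature scale that kick sets

print("recoil velocity ~ %.1f cm/s" % (v_recoil * 1e2))
print("recoil temperature ~ %.1f microkelvin" % (T_recoil * 1e6))
Each kick changes the atom's velocity by about 3 cm/s, which sets a floor of a few microkelvin: below that, the photons you're cooling with jostle the atoms as much as they slow them.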
Magnetic Traps
Up to now we've been acting quite aggressively towards the atoms - kicking anything that's moving too quickly. To do better we're going to try to round them up somewhere we can control things better. Fortunately there's a neat way to do this: we can make use of an inhomogeneous magnetic field and the Zeeman effect.
If you apply a magnetic field to our gas of atoms then the magnetic dipoles of the atoms tend to line up with the field. This being quantum physics, they can only do so in a discrete number of ways. What happens is that the transition that used to be a single line splits and shifts into a number of different lines.
If we use a stronger field then the shift is larger, so we can finely tune the energy at which our laser will interact with the atoms. So now we do this: if we apply a magnetic field that is zero in the middle of the trap and gets bigger as you move away from the centre (you can do this), then we can control how hard we kick the atoms depending on where they are. If we do it right then inside the trap we hardly kick them at all, and outside the trap we kick them back in.
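To get a feel for the scale, here's a rough sketch (my assumed numbers: a Lande g-factor of 1 and the m = 1 sublevel, using the simple linear Zeeman shift) of how far the line moves as the field grows:
# Linear Zeeman shift of a transition, quoted as a frequency (assumed numbers)
mu_B = 9.274e-24   # Bohr magneton, J/T
h = 6.626e-34      # Planck constant, J s
g, m_F = 1.0, 1    # assumed Lande g-factor and magnetic quantum number

for B_gauss in (1, 10, 100):
    B = B_gauss * 1e-4                 # convert gauss to tesla
    shift_Hz = g * m_F * mu_B * B / h  # Delta E / h
    print("B = %4d G  ->  shift ~ %6.1f MHz" % (B_gauss, shift_Hz / 1e6))
Roughly 1.4 MHz per gauss: comparable to the natural width of a typical transition, which is why a modest field gradient is enough to decide which atoms the laser talks to.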
Evaporation
We've managed to confine the atoms in our trap. The final step is to switch off the lasers (to stop all that noisy kicking and recoiling) and use evaporation to get rid of as much energy as possible. It is understandably quite complicated to stop them all flying out once you've switched off the lasers, and unfortunately it's at this point I start getting lost! The actual cooling mechanism, though, is nothing more complicated than the reason your cup of tea goes cold.
After all this we're down to the micro Kelvin level - a millionth of a degree above absolute zero! At these sorts of temperatures the atoms can undergo a quantum phase transition and become a Bose-Einstein Condensate (BEC). This is a new state of matter, predicted by theory and finally observed in the nineties. As far as I know this is as cold as it gets anywhere in the universe.
Well I think I'm done with cooling things now. It starts off beautifully simple and then gets a bit harder! Needless to say I salute anyone that can actually do this - it's back to simulations for me.
EDIT: I over-link to wikipedia but this is a good page on Magneto-optical traps
Labels:
experiments,
physics
Wednesday, 17 March 2010
Ghost Jams
via Lester, a nice video showing ghost jams in action
See New Scientist for more.
The drivers were asked to drive around at a constant speed. For a while this works OK, but eventually a ghost jam develops and propagates at the same speed as those observed in real traffic. I don't know if they tried to apply any external stimulus to see if they could guide it better.
Monday, 22 February 2010
Simulating a molecule with a quantum computer
There's a fairly nifty paper out in PRL on simulating a molecule with a quantum computer. In principle doing calculations on quantum systems will be much faster with quantum computers (when they become a reality) thanks to being able to hold the computer in a superposition of states. These guys have had a bash using an NMR based "computer" - it's pretty fun.
Labels:
experiments,
links
Tuesday, 16 February 2010
Help with twitter name
What do you think this blog's twitter feed should be called?
KineticallyConstrained is a bit long (will hurt the retweets)
KineticCon?
KConstrained?
Kinetically?
KinCon (taken)
TwittersPointlessDontBother?
So many important decisions...
Labels:
announcements,
technology
Wednesday, 10 February 2010
Do you like my new header?
OK, so I'm no Banksy, but I do like green. I'll probably be playing around with themes a bit over the next few weeks.
Hopefully the header captures how "kinetically constrained" can apply to complex statistical systems and that sort of stuck feeling that I can never quite shake off. Look at me full of bollocks, maybe I can get on Newsnight review or something.
Labels:
announcements
Tuesday, 9 February 2010
How should we teach Maths
I came across this new feature in the NYT, by Steven Strogatz, via Science Blogs. You may remember him from his paper with Duncan Watts on small-world networks that arguably kick-started modern network theory. It looks like it's going to be a regular series, so I highly recommend adding the feed to your RSS reader.
The article that first caught my eye was called Rock Groups. It starts by differentiating between the serious side of arithmetic and the playful side. This is something I've long gone on about but never quite had as nice a way of putting it as these guys do. Maths teaching for kids is like torture. I was having a discussion a while ago where I questioned whether we really need to make 10-year-olds recite endless times tables - a suggestion that drew scorn from my opposite number. But really, why?
The article heavily quotes the book "A Mathematician's Lament" by Paul Lockhart, which starts with a musician having a nightmare in which children are not allowed to touch an instrument until they have mastered the theory of music and how to read a score. Only after many painful years are they allowed to lay their hands on an instrument.
This is a powerful analogy. You don't have to learn all the nuts and bolts of mathematics before you can start playing with numbers. Back in the Strogatz article he shows how much you can discover without being able to do any addition at all, just by grouping rocks. I wish I could quickly multiply two large numbers in my head but it wouldn't make me a better mathematician. It's like arguing that the best playwright should be able to spell every word in the dictionary.
The beautiful thing about the rocks is that it shows how much you can learn about number by pushing things around with your hands and being creative. Perhaps all those people who complain to me that "oo I could never do maths me" would have enjoyed it more if it was based on this rather than being expected to master "a complex set of algorithms for manipulating Hindi symbols".
Make sure you keep up with the Strogatz series. I found a pdf of the essay that inspired the Lockhart book. If I ever get through my Christmas backlog I might get around to getting the book.
Labels:
books,
communication,
links,
maths
Thursday, 28 January 2010
Laser Cooling
Last semester I was helping out teaching a bit of quantum and atomic physics. It was quite fun going back over stuff I was a little hazy on the first time - I finally understand the periodic table, for one thing. Another thing that I knew about but never really got the details of is laser cooling. It's really nice, and I'll blast through it here. Watch out for the stat-mech bit: blink and you'll miss it.
In an atom, electrons are not free to sit anywhere they want (more or less); they inhabit precisely defined quantum states that have well defined energies, angular momenta and so on. Therefore if you give an atom a kick, it will release the energy you gave it in precisely defined packets. So if you take the light emitted by the atoms and put it through a spectrometer (which could just be a prism) you'd see something like this, from here, for sodium.
You'll recognise the orange line from the street lamps that are slowly on their way out. I did a version of this experiment as an undergrad where we did the opposite: we shone white light through sodium gas, and while most of it goes straight through, the frequencies that match the transitions get absorbed and are missing from the final spectrum. It might look like this, ish:
Notice that the lines aren't all that sharp, whereas I said they should be precise. This is for a number of reasons. One is that the uncertainty principle doesn't like precise energies: there's an uncertainty attached to the finite lifetime of atomic transitions, and to collisions. Another, more important, effect is Doppler shifting due to the temperature of the gas. We can assume that the atoms in the gas have a distribution of velocities that comes from the famous Boltzmann distribution; for the velocity component along our line of sight it's P(v) proportional to exp(-mv^2/2kT).
Light emitted from a moving atom will be Doppler shifted, which takes our precise emission line and spreads it out around the average. This property turns out to be very useful, and it's the one we'll exploit. First, a word about the laser.
Lasers are brilliant. With a laser you can send in a beam of photons with a highly tuned, narrow-band frequency. When a photon arrives with a frequency that matches the absorption frequency of the atom, they collide and scatter. When it's too high or too low it will most likely just go straight through.
So finally we get to how you cool the gas. If you send a laser pulse into a warm gas of atoms then different atoms will see different things. Thanks to the Doppler shift, an atom moving with speed v will see the laser frequency f_0 shifted to f = f_0(1 +/- v/c), where c is the speed of light.
Atoms moving away from the laser see it red shifted (lower frequency), atoms moving toward the laser see it blue shifted (higher frequency). If we tune the laser to just below the absorption frequency of the atom then the only atoms that collide with the beam are those moving towards it (the ones that see the blue shift).
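To get a feel for the numbers, here's a rough sketch (my assumed values: a 589 nm sodium-like line and a few representative speeds) of the size of that shift:
# First-order Doppler shift, f0 * v / c, for an assumed 589 nm transition
c = 3.0e8          # speed of light, m/s
f0 = c / 589e-9    # transition frequency, about 5e14 Hz

for v in (1.0, 30.0, 300.0):   # atom speed, m/s
    shift = f0 * v / c         # how far off resonance the atom sees the laser
    print("v = %5.1f m/s  ->  shift ~ %7.1f MHz" % (v, shift / 1e6))
A thermal speed of a few hundred m/s shifts the line by hundreds of MHz - a tiny fraction of the optical frequency, but much bigger than the natural width of the line (typically a few MHz), which is what makes the trick so selective.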
Were it not for the precision of the transition level the laser would equally kick atoms moving towards it and atoms moving away - adding no net energy into the system. However, if we only collide with atoms moving towards the beam then we can actually remove energy. What's even more staggering is that this actually works!
Laser cooling can make things seriously cold. You may have seen the headlines that the LHC is colder than space. Impressive given the size of the thing, but space is about 2 Kelvin, which is peanuts compared to laser cooling: it can get a gas down to around 1 mK - that's a factor of a thousand. You can get even colder with new techniques, but somehow laser cooling pleases me the most.
So that's laser cooling. It's beautifully simple, uses basic ideas from quantum mechanics, relativity, statistical mechanics and then makes something brilliant thanks to a laser.
Labels:
experiments,
physics