“The history of mathematics is a Markov Chain”. This is a joke (probably not apparent to non-mathematicians!), but like the best jokes, it is based in truth.
Because not everyone here knows what a Markov Chain is, I should explain. A Markov Chain describes things that change in a way that has no memory. What happens in the future doesn’t depend on what happened in the past. Picture a drunk, staggering home after a night out. Each step he takes is in a random direction. He might recognise the local shop, and walk towards it – but if he gets lost, and finds the shop again, there is nothing to stop him making the same mistake twice and walking in circles – because he can’t remember where he’s been, but only where he is.
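That drunkard’s walk is easy to sketch in code. This is a minimal illustration, assuming a walker on a 2-D grid; the function name and parameters are my own invention, not from any particular library:

```python
import random

def drunkards_walk(steps, seed=None):
    """Simulate a memoryless (Markov) walk on a 2-D grid.

    Each step depends only on the current position, never on the
    path already taken -- the walker cannot remember where he's been.
    """
    rng = random.Random(seed)
    x, y = 0, 0
    path = [(x, y)]
    for _ in range(steps):
        # Pick one of the four directions at random, with no memory.
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

path = drunkards_walk(1000, seed=42)
# Revisits are common: with no memory, nothing stops the walker
# from stumbling past the same corner twice.
revisits = len(path) - len(set(path))
print(f"Final position: {path[-1]}, positions revisited: {revisits}")
```

Run it a few times with different seeds and you will see the point: the walker covers the area near the start thoroughly, but takes a very long time to get anywhere in particular.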
The joke says that mathematics reinvents the same concepts over and over – which is true of this concept. It was independently invented in physics by Einstein for his description of Brownian Motion, and in mathematics by Andrey Markov in work on probability theory.
A post at Scienceblogs reminded me of this, and it is interesting because (as they argue well) all of Science is like this. Scientists are not historians, so we only remember what is important enough to get into textbooks. If a result doesn’t make it in, the next generation doesn’t know about it, and it is forgotten – doomed to be repeated again and again.
I wonder if the Internet will help us overcome this? When (say) PhD students of 2050 do a literature search for some obscure gene, will they find today’s information about it in databases and papers, and will that be of any use to them? Can we use this to steer science in useful directions, by remembering the approaches that were failures? Or will failures of the past be viewed as caused by ignorance or lack of equipment, which the enlightened folk of “today” can deal with without problem?
Without it, Science will be doomed to proceed as a Drunkard’s Walk, lurching between discoveries on the same old path of failures. We can easily explore the area around the pub like this, but it takes an awfully long time to stumble back home.
So the purpose of this blog, in a rough sense, was to give my brain something to think about. I’ve noticed that compared to my Uni days, I spend a lot more time watching TV than I used to. I also feel that I spend a lot less time thinking in a meaningful way. Correlation or causation?
A lot has been written about Too Much TV Making You Stupid. We KNOW this, from our very souls; anyone who reads a newspaper, watches the news, or just speaks to people knows this. It is anecdotally evident that having the brain mildly activated seems to prevent it from trying to make its own entertainment, i.e. thinking. Certainly regular brain activity prevents brain decay. It seems plausible that I’m not stimulating my brain in the right way any more.
There are plenty of other reasons for feeling slow. Maybe I get enough stimulation from my research to gain nothing more from this, and it really is an age effect? Or perhaps my youthful thoughts were, in fact, pretty stupid – and now I don’t need to go down those fruitless routes again? Maybe it’s a nostalgia effect – I’m the same as I ever was, but I just don’t know it. My favourite theory is that it’s the reduction in social interaction since moving to Aberdeen. I have the feeling that most ideas are built in conversations, not in sitting quietly. But I’m no expert – I’ve read very little about this, though I’m not convinced science knows much more than I do…
Anybody else managing to think as they get older? Do you find your imagination limited to the same old thoughts you always have had, jokes that aren’t funny and never were, ideas that relate only very directly and practically to life?
Blotworst: Blood sausage (missing umlauts!)
Chorizo: A type of spiced pork sausage
Pescado: Also fish
Vegi burger: Apparently in the US they sometimes use animal fats to cook everything in burger chains…
Whilst Beer hardly counts as thinking, thinking about beer does, and since I’m writing about it here, I’m thinking about thinking about beer. For others who do, I’m contributing to Andy’s Beer Blog. Check it out.
Finally, an all purpose resubmission cover letter.
Or maybe you are just too stupid. These people would like to sell you a throw featuring the bacterial flagellum, which is a pretty darned complex piece of kit that allows bacteria to move around. But as explained here there is a perfectly sensible explanation for them, from origin to as we see them now.
Why are people arrogant enough to assume that because they don’t understand something, it is beyond understanding? After over 6000 years of history, we still make the same mistakes. Of course, thinking we understand something is an equally dangerous error: we should always be open to the possibility that we are wrong. All people, including scientists, make both types of mistake – but science as a system is open to change, and religion is not. This is why religion should never have anything to do with science, and why it will keep being forced back into the realm of the unexplained. It’s up to the individual to decide whether God has a place for them – they should not use God to try to explain the world, nor science to disprove God.
It seems that I have started blogging about science at the wrong time – there will soon be nothing to talk about. This is according to Wired: The End of Theory: The Data Deluge Makes the Scientific Method Obsolete. Roughly speaking, we are not going to need scientific theory anymore, because the data themselves are a better description than any theory. (Incidentally, Google has all the data.)
Of course, such a revelation has made waves in the press, and of course the blogosphere. As a theoretician I feel somewhat defensive about having a job, so I’m going to join the crowds by explaining why this is just very, very wrong. It’s quite simple really. The scientific method works as follows:
1. Observe a phenomenon (i.e. data).
2. Form a hypothesis about the cause (i.e. explain the data).
3. Think of a new way to test the hypothesis (i.e. get more data).
4. Perform the test. If it fails, go to 2. If it succeeds, go to 3.
5. Use the hypothesis as a prediction.
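The loop above is literally an algorithm, so it can be sketched as one. This is a toy illustration under invented assumptions – the hidden “phenomenon” and the helper names are made up for the example, and real hypothesis-forming is of course not a two-point line fit:

```python
def phenomenon(x):
    """The hidden rule of nature (step 1: the source of the data)."""
    return 2 * x + 1

def form_hypothesis(observations):
    """Step 2: fit the simplest rule consistent with what we've seen --
    here, a straight line through the first two observations."""
    (x0, y0), (x1, y1) = observations[:2]
    slope = (y1 - y0) / (x1 - x0)
    intercept = y0 - slope * x0
    return lambda x: slope * x + intercept

observations = [(0, phenomenon(0)), (1, phenomenon(1))]
hypothesis = form_hypothesis(observations)

# Steps 3 and 4: keep testing the hypothesis on data it has never seen.
for x in range(2, 10):
    observations.append((x, phenomenon(x)))          # get more data
    if hypothesis(x) != phenomenon(x):               # test failed?
        hypothesis = form_hypothesis(observations)   # back to step 2

# Step 5: the surviving hypothesis is used for prediction.
print(hypothesis(100))  # prints 201.0
```

The point of the loop is that the hypothesis only earns the right to predict by surviving repeated attempts to break it.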
The scientist keeps gathering data, and the hypothesis gains evidence and becomes accepted over time – unless of course some new piece of evidence is found that the hypothesis cannot explain. Hence science is formed as a set of theories that can change with the evidence. The hypothesis can be used to make predictions, and provides an explanation for the phenomenon.
The new concept notes that we have more evidence than we know what to do with right from the beginning. In fact, by statistically describing the data, we can skip straight from 1 to 5: the data is the model. Predictions can be made without ever having to think about what the data means.
This sounds great, until one thinks a little about the nature of prediction. There are two main types of prediction: interpolation and extrapolation. Interpolation means considering what happens in between regions that we have measurements for, and statistical models are perfect for this. Extrapolation means considering things outside the measured data. Statistical techniques are really bad at this, because two descriptions can fit the data itself equally well while giving wildly different predictions outside it. The only solution is to know what on earth is going on and to predict based on that. The only way to achieve this is to build a model of the system’s behaviour, based upon repeated testing of what it does.
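The interpolation/extrapolation gap is easy to demonstrate. Here is a minimal sketch (the models are ones I’ve chosen for illustration): two models that agree exactly on every measured point, yet disagree wildly once you leave the measured range.

```python
# Three measurements, and two models that both fit them exactly.
data = [(0, 0), (1, 1), (2, 2)]

def model_a(x):
    return x                          # a straight line through the data

def model_b(x):
    return x + x * (x - 1) * (x - 2)  # a cubic through the same points

# Interpolation: both models reproduce the data perfectly...
assert all(model_a(x) == y and model_b(x) == y for x, y in data)

# ...extrapolation: outside the data they diverge badly.
print(model_a(10), model_b(10))  # prints 10 730
```

No amount of extra data at x = 0, 1, 2 can distinguish these models; only understanding the mechanism (or measuring outside the range) can.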
To give an example: in my own research I consider the behaviour of bacteria in the human gut. You can’t measure it directly. People eat food, and stuff comes out the other end. That’s all you get. But: you can model the gut by building an experiment with similar properties. And you can model the experiment with maths, and use that to predict what is happening in our gut. You can’t do this any other way, because the data don’t tell you anything about what is going on inside the gut itself. You can mathematically PROVE that there is not enough information in a googolplex of measurements on a live person. You need the experiment, and to account for the differences between the experiment and the body, you need the model.
Google data may be coming out of our arses, but we’ve no idea where it’s been in between.
This is a blog by Daniel Lawson about Science, life, geekiness and anything else that occupies my mind whilst within range of a computer. Welcome! I hope it develops into something interesting enough to bring you here.