The Replicated Typo blog has two great posts summarizing the current state of animal cognition research. You might remember that I blogged about this from a personal, and also a more “philosophical”, point of view – using the possibility that animals have similar feelings to us to argue for vegetarianism.
The broad brush is that animals pass most of the tests that humans do:
“1. Mirror self-recognition
2. Tests of metacognition;
3. Metacognition of others’ mental states”
Mirror self-recognition is the realisation that the animal in the mirror is really you. This test has now been passed by a wide range of species: “including the great apes (chimpanzees, bonobos, orangutans, gorillas), but also elephants, dolphins and magpies (Blackmore 2010: 210-214)”. Humans pass this around age 3.
Metacognition means having an understanding of your own thought process. The standard lab test is whether you know that you know something, tested with a reward for getting a question right and a lesser reward for declining to answer. Many higher primates seem to pass this test.
The final test is the understanding that others may or may not know something. On this the results are ambiguous. Chimpanzees definitely know whether another chimp may know something, e.g. you can only steal some food if the head honcho doesn’t know about it. But they only partially pass other tests, and perform worse than humans.
So basically, “animals other than humans are conscious and have subjective experiences that rely on some degree of consciousness” (Burkhard & Bekoff 2009: 42).
(All quotes and references are taken from the replicated typo blog and you should go there for the full story).
Ingenious Monkey has linked to two excellent talks on morality that are definitely worth a read. These are my thoughts leading from them.
The first is Jonathan Foer speaking on the morality of what we eat. He is a vegetarian, and provides a very compelling discussion of it, but takes a very inclusive point of view that I admire. His basic point is that most people care. They might make different decisions about how to change the world, but every person who makes a conscious decision to do something good is on the same side. In this sense, people who eat free-range meat are essentially the same as vegans: both are making moral choices about how their actions will impact the world. People who care are all on the same side, against people who do not.
His argument is mostly about the impact of choosing what we eat: in America, the food industry is so unrestricted that its practices would be abhorrent to everybody if they knew what was going on. In Europe the situation is somewhat different, with slightly stronger restrictions preventing the most cruel practices. However the basic point stands: most people would know just by visiting a high-density farm that its approach was immoral.
I find his faith in humanity somewhat comforting, and it is very reassuring to hear his stories of his grandmother in the war, describing why, as a homeless starving Jew scavenging for food, she would still not eat pork: “If nothing matters, there’s nothing to save”. Most people would say that some things do matter, that caring for others and ourselves matters, and that is why there is a point to living.
However, it is also somewhat naive, and my main thought watching this was that things don’t matter as much to people as they might claim. How many people would really stand by their values in the face of death? How many people will change the way they behave in order to be consistent with what they believe, rather than changing what they believe to fit with how they live their lives? These are tough questions, but I wonder: how many of the food company employees who see the terrible conditions on these awful farms he discusses actually care? I bet that it’s most of them. However, when faced with the conditions that they subject animals to, they teach themselves not to care. People are good at that. So is it really enough to appeal to people’s sense of morality, when we all know deep down that we would sacrifice most of our morals if we felt we had to?
Or perhaps I’m being too cynical: watch it and see for yourself. I guarantee that anyone who cares will find something to like in his arguments.
(As a side point, he claims that the worst thing to eat in terms of its impact on the lives of others is eggs. Even free range eggs, in America at least, are produced under the most shocking conditions. This upsets me because as a conscientious vegetarian I rely on eggs for lots of things… I need to look up what “free range” means in the UK.)
The second speaker is Sam Harris speaking at TED on why science can answer moral questions. This is an extremely important topic: I can’t emphasise enough how important this is. Essentially, he points out that on questions of moral values, we have learned to consider all societies as equal. If a country believes that women in their own country should wear a burka, we say that is fine. If they believe in corporal punishment, we say let them do it. This is despite strong opposition to these things at home – we can believe that things are wrong for some people and right for others.
Harris argues that actually, if we take a broad definition of what morality is for (to maximise the well being of conscious creatures) then there are provably right answers for what are the best set of moral values. These facts can be obtained through the application of science to the brain, and to society. For example, does corporal punishment increase the physical and mental well being of all that grow up and then live in the society, or does it not? This has a simple, “yes or no” answer that we can obtain through scientific study.
We should stop pretending that all opinions on morality are equal. Some moralities are better than others, and people who have attempted to find the best answers via rational scientific approaches will be better placed to judge than those who haven’t. In other words, we should stop pandering to all cultures (including our own) and focus on making changes that are in everyone’s best interest.
Not everything is getting put up for question here. There are some issues that are genuinely different in different societies: he shows a plot of the “morality landscape” that has several maximum points on it. Where such differences exist they should be respected – his point is that there are some universal truths to morality and we should not pretend to be ignorant of them.
I really like his point, and it is a very important and well made one. There are however some dangers. What about when we disagree over the best thing to do? Presumably the correct answer would be to change nothing until the answer could be decided by science – but this could take hundreds of years to resolve in some cases, particularly if there was a dispute that involved claiming that the “best” thing would make life worse in one particular culture. But without such an approach, we would be seen as imposing our own moral system onto others, which could cause more problems than it solves. Diversity is celebrated mostly because it’s too much trouble to try to prevent people doing what they want. Additionally, moral codes are used as identifiers: if you tell Muslims they shouldn’t force women to wear burkas, then many women may actually choose to wear them, simply to state to the world, “I am Muslim, and it is important to me”. This will reinforce the wearing of burkas and make the society less likely to accept non-wearing. (I should say that burkas are not legally compulsory in most Muslim states – but the social implications of not wearing them vary to the point that they could be considered effectively compulsory in some cases.)
But generally, I’m very much for this. We really do need to stop pretending that all opinions on morality are equal, because they quite simply are not. And that is not just my opinion: some moral frameworks really do result in higher well being for all than others.
There are dinosaurs in my garden. They watch me warily with beady eyes as they hunt for food. Suddenly, a larger dinosaur appears and the smaller ones scatter in a frenzy of activity; it bites some food which dangles helplessly from its mouth.
Luckily for me, these dinosaurs are not about to break through the window and feast on my flesh. That is because they are about the same size as a crow – because the one I’m looking at is, in fact, a crow.
You may have heard that birds evolved from dinosaurs. Well, according to scientific nomenclature, that means that birds actually are dinosaurs. The following evolutionary tree from The Loom illustrates why:
The only groups that it makes (evolutionary) sense to give a name to are monophyletic: all the descendants of a particular organism. For example, all dinosaurs have a common ancestor, at the far left of the figure. If you look at the bottom of the figure, you’ll see that birds share this common ancestor, and therefore are dinosaurs.
The confusion arises because dinosaurs were discovered before we understood their evolutionary relationship to living organisms. So we called all of the extinct creatures we found fossilised in rocks by the one name, dinosaur (“terrible lizard”). Once we discovered that birds were direct descendants of dinosaurs, we were already using the name to mean only the scary lizard-type dinosaurs, rather than the winged feathered friends we feed seed to. So now, the correct term for a “dinosaur” is “non-avian dinosaur” (i.e. all the dinosaurs except the birds).
Similarly, birds are descended from reptiles, so they technically are reptiles. Again, in everyday language it is useful to talk about all reptiles except the birds. However, crocodiles turn out to be more closely related to birds than to turtles, for example, so a monophyletic “reptile” group has to include the birds too. (Birds, crocodiles and the extinct dinosaurs together are called Archosauria.)
Of course, birds changed a lot since the time of the dinosaurs, so they are really very different. All this just goes to show that technical language is always going to be at odds with everyday language, even though the technical words (technically speaking…) hold more real information.
I recently discussed whether races exist and claimed that we should ignore apparent differences between people in the name of morality. A comment about IQ differences made me realise that the main reason that we should ignore these differences is because there is a feedback between how we see the world, and how it is. This probably applies to lots of social phenomena, but I’m thinking about observed differences in IQ.
I was fairly ignorant about IQ research; it is not very evenly discussed in the media. So it came as a shock to me to discover that social scientists have observed large differences in IQ between different groups of people. After accounting for all the variables the scientists could think of, it still turns out that black people lag behind white people (and white people behind Asian people) by an appreciable amount. The widespread conclusion is that this must have a genetic explanation.
On matters of measuring IQ, and what it means, I bow to the experts. However, after a brief examination of their statistical methods, I became concerned about the definition of “accounting for variables”. Accounting for, say, income means looking at whether income correlates with IQ, and subtracting some multiple of income from IQ so that there is no longer any correlation. (There are also more sophisticated methods trying to achieve a similar aim.)
The correction above works when the effects are “linear”. But IQ is not at all linear. The IQ of a parent affects the IQ of a child, via poor nutrition in the womb, lower access to education, larger family sizes, and different childhood priorities, amongst other things. This makes trends in IQ much harder to understand (although the more sophisticated methods mentioned above are designed to correct for these, to some extent).
But much worse than this is social feedback. We know that perceived IQ can affect people’s actual IQ; if the world considers them stupid (say, relative to the average IQ), then this can make a person consider themselves stupid. If people consider themselves stupid then they are unlikely to pursue an intellectual lifestyle. This leads to a low measured IQ, which is passed on to children in a “vicious circle”. On top of that, genuine discrimination can act and make the problem dramatically worse.
Can the model above “correct” for this sort of bias too? It depends how the bias works in reality; but for a broad range of possibilities the answer is “No”. It could even be mathematically impossible.
Let’s assume from now on there is no genetic difference in IQ between two groups – which could be “black” and “white”. We will also assume that investment in IQ is the same for both groups, and there is no unfair discrimination. But IQ changes slowly; the IQ of a person’s parents affects their own IQ.
Without any investment IQ is at some minimum level Imin. The initial IQ of the two groups is above this. The investment level is called r, which controls the rate that IQ grows towards Imax, where it stops. This is described by the following equation for “Change in IQ” (per generation) for group i :
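The equation itself was an image in the original post and has not survived into this copy. One plausible reconstruction, consistent with the surrounding description but not necessarily the author’s exact form, is logistic growth between the two bounds:

```latex
\Delta I_i = r \,\frac{(I_i - I_{\min})\,(I_{\max} - I_i)}{I_{\max} - I_{\min}}
```

With r > 0 the change is positive and IQ climbs towards Imax; with r < 0 it falls back towards Imin.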
This says that the change in IQ increases at rate r towards Imax (if r is positive) or towards Imin (if r is negative). A genetic difference in IQ would mean that Imax was different between the two groups. The standard methods would correctly calculate Imax in this case, and so determine if there were an innate difference in IQ.
Now we introduce social bias, which acts to change the rate r that IQ changes. The bias b is proportional to how far IQ is from the average over all groups. The bias parameter b controls how much a 1 point difference from the average IQ (over both groups) slows growth by. Now the equation looks like this:
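Again, the equation was an image in the original post. One plausible reconstruction (mine, not necessarily the original), in which the bias simply modifies the growth rate of the bounded growth towards Imax, is:

```latex
\Delta I_i = \bigl(r - b\,(\bar{I} - I_i)\bigr)\,
             \frac{(I_i - I_{\min})\,(I_{\max} - I_i)}{I_{\max} - I_{\min}}
```

where Ī is the average IQ over both groups. Setting b = 0 recovers plain growth at rate r, while for b > 0 a group below the average has its growth rate reduced, possibly below zero.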
This is the same equation when b=0, but the growth rate is lower for IQ less than the average IQ (I with a line over it) when b>0. Even with bias, things might not be too bad. Here is what happens when we start the two populations close to each other:
Without social bias, both IQs increase towards the same Imax. With the social bias, the group with the lower initial intelligence increases less rapidly, but still goes to the same Imax. The above methods would struggle, but eventually get the correct answer for Imax in this case.
Now consider what happens when the initial difference is a bit larger.
In this case with social bias, the IQ of the population that starts lower goes down! The social bias leads to a growing difference in IQ between the two groups, and so the lower group “gives up” – perhaps surviving by focussing on avenues that don’t require a high IQ, or a high perceived IQ.
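The two scenarios can be reproduced with a toy simulation. The update rule and every number below are my own guesses for illustration, not the post’s exact model: IQ grows logistically between Imin and Imax at an investment rate r, and a social bias b lowers the growth rate of whichever group sits below the average.

```python
# Toy model: two groups' IQs evolve per generation. A group below the
# average IQ has its growth rate reduced by the bias b (all parameters
# are arbitrary choices, not fitted to anything).
I_MIN, I_MAX = 70.0, 120.0
R = 0.05  # baseline investment rate

def step(iqs, b):
    avg = sum(iqs) / len(iqs)
    out = []
    for iq in iqs:
        rate = R - b * (avg - iq)  # bias slows groups below the average
        growth = rate * (iq - I_MIN) * (I_MAX - iq) / (I_MAX - I_MIN)
        out.append(min(max(iq + growth, I_MIN), I_MAX))
    return out

def run(iqs, b, generations=300):
    for _ in range(generations):
        iqs = step(iqs, b)
    return iqs

# No bias: both groups converge to I_MAX regardless of starting point.
print(run([85.0, 100.0], b=0.0))
# Bias, small initial gap: both groups still reach I_MAX.
print(run([98.0, 100.0], b=0.004))
# Bias, larger initial gap: the lower group is driven down towards I_MIN.
print(run([85.0, 100.0], b=0.004))
```

The same parameters thus produce both outcomes, depending only on the initial gap – which is the bistability discussed below.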
The above “correction” method fails entirely in this scenario. It’s extremely difficult to detect whether an observed difference in IQs is due to this sort of social feedback with the same Imax, or due to a real difference in Imax between the two populations.
Of course, this model is wrong. It’s far too simple and only captures some rough features of the truth about IQ. But a huge class of models exhibit this sort of behaviour – called bistability – which can lead to two “genetically” identical populations ending up with different observed IQs, and we certainly don’t know enough to rule them out at the moment. I found very little work trying to look into models for how IQ might change. Until this possibility is ruled out, observed IQ differences should (scientifically!) be attributed to social feedback acting on historical differences. If scientists don’t address the problem of social feedback, we can’t expect the world to! Instead, if we assume that the observed difference is probably a genetic difference, this will increase the overall bias b and we may never know the truth.
Notes: I did a very brief literature search, detailed in comment number 2 of this post. For completeness, this is repeated below.
The mathematical model is entirely arbitrary and comes from the simplest model that is bistable (which fortunately is a plausible first model in this case). I don’t have any good references for models, although Turchin’s 2002 book “Historical Dynamics” contains a nice summary of what level of complexity is required to produce which features. “Accounting for variables” above means performing a linear regression (sometimes twice), which is a standard statistical procedure detailed in any statistics book. Clearly, for this criticism to be watertight, I need to establish that nobody has tried to fit dynamical models such as the one above to IQ data. I don’t know whether anything like that exists in the literature (I bet it does somewhere) but it is not a standard feature of IQ studies. I’ve seen it in other contexts, such as IQ of individuals over time, but not for social groups. To be convincing, the above model would have to be replaced by one in which different individuals have different IQs, and real decision processes are used to establish how the bias behaves. This sounds like a difficult, but not impossible, task.
My IQ literature search: Nature recently featured a two-sided discussion on whether science should even study IQ in the context of race at all (http://www.nature.com/nature/journal/v457/n7231/full/457786a.html). As you point out, twin studies show remarkable correlation between genetics and IQ. This is no surprise – we all believe that intelligence is inherited to some degree within families. Additionally, there is a consistent IQ difference of 4-5 points between the sexes (Lynn 1998, Personality and Individual Differences 24:289-290; Blinkhorn 2005, Nature 438:31-32). There are huge continental differences of tens of points (nicely illustrated by http://alfin2100.blogspot.com/2009/04/iq-by-nation-iq-by-race-us-iq-inherited.html), and large (10 point) geographical differences between areas of the single country of Italy (Lynn 2010, Intelligence 38:93-100). A summary of such results is given by Rushton and Jensen (2005, Psychology, Public Policy, and Law 11:235-294).
These researchers look for differences and find them. The problem is the circular nature of intelligence: low IQ in parents leads to poor nutrition, low childhood support, poor education, and hence low IQ in children (this is called the “self-fulfilment hypothesis”). Again, the evidence that such a cycle exists is uncontroversial – whether it explains the whole distribution of IQs is strongly disputed. Much apparently “genetic” variation in IQ is explained by conditions in the womb (Devlin et al. 1997, Nature 388:468-71). Perceptions of IQ change behaviour and hence lifelong learning potential (“The Confounding of Perception of I.Q. on a Measure of Adaptive Behavior”, Bobner, Ronald F. et al. – sorry, this is a conference proceeding; also Sutherland and Goldschmid 1974, Child Development 45:852-856). Early parenting factors are important for long term academic achievement (Englund et al. 2004, Journal of Educational Psychology), which means that family size and social class are going to be important too.
The consensus in the literature is that self-fulfilment does occur but nobody has modelled it in such a way that it accounts for all observed IQ differences (Jussim and Harber 2005, Personality and Social Psychology Review, 9:131-155; also http://psychology.uwo.ca/faculty/rushtonpdfs/PPPL1.pdf).
I recently posted a discussion on whether races exist. I argued that races might exist, but that it wasn’t useful to use the word.
A comment by JL made me rethink my argument. I haven’t changed my conclusions (notice how rarely this happens? It’s almost as if we use logic to justify our conclusions rather than to deduce them… but that is a different post entirely). But I have realised that I missed an extremely important point, one which changes the whole concept of a scientific hypothesis.
Whether we believe a-priori that IQ differences between races exist can affect whether it is true.
Consider this. Imagine that scientists say “differences in IQ between races might exist”. We all see differences in IQ in the real world. People say, “yes, this could be true”, and act accordingly. Perhaps, all other things being equal, schools invest in children from the perceived higher-IQ race (let’s call them race 1). Perhaps people give jobs to those from race 1 preferentially – all other things being equal. Of course, when someone from race 2 is better for the job, they get it.
This generation of children grow up; they are educated in the same way as their parents; they get jobs the same way as their parents. They have children, and so it goes on.
Now imagine that IQ is determined by both race and upbringing. People from race 2 have, on average, worse jobs. They can’t afford high-quality education. So they do, in fact, have lower IQs. The scientists can measure this; the hypothesis is confirmed. Breaking news!
Does this all sound familiar? That’s because it already happened. It is of course trivial; both scientists and non-scientists alike have seen this in action. But the ramifications for social science are immense. Normally science works by starting with a “null hypothesis” (how we believe the world might work) and comparing it to a “hypothesis” (something we want to test). But in measuring IQ differences, our choice of “null hypothesis” can affect the truth of the hypothesis! If we say, as above, that differences might exist in our null hypothesis, then they do. If we instead choose the null hypothesis that all races are equal – and insist on this to the world at large – then, and only then, might we be able to measure that IQ differences do not exist.
In other words, the whole world is an experiment: by banning racism we have started a test of whether racial differences really exist. Only time will tell whether it is true or false – whether IQ differences persist or are equalising. But this is only possible because we chose to treat the world as if racial differences do not exist.
This conclusion could be reached on any social science problem where our measures are imperfect. The problem lies in measures of IQ being biased to an unknown degree by upbringing; but finding a perfect measure is a hopeless task. It means that science has to work intimately with policy; to measure people we have a scientific and moral obligation to treat all people as equal, because without doing so we can never know if they are.
Whether people really are equal is in some sense irrelevant. They can only be equal if we assume that they are.
Don’t drink red wine! You’ll increase your risk of colon cancer!
But wait, it also lowers the risk of lung cancer. Cheers! Mine’s a large.
The nutritional science media splits food into things that will kill you and things that will save you. Sometimes they are the same thing. How are we to figure out what we should actually eat?
The first thing is that you need to know the absolute magnitude of the effects. It doesn’t matter if the chance of death is increased by 500% if there was only a one-in-a-million chance of getting it anyway. Conversely, a modest increase of 20% could be important if your baseline risk is high.
David Spiegelhalter of Cambridge University is advocating a new unit of measurement for risk – the micromort. This is a one-in-a-million chance of dying in a day. We are all exposed in everyday life to a background 50 micromorts of danger – i.e. 50 out of every million people die of normal (but non-natural) causes per day. Additional activities we do can increase our risk relative to this average. You need to clock up a further 50 in order to double your chances of dying.
So how can we “spend” our micromorts? Well, we can travel – 200 miles by car costs 1 micromort, as does 20 miles by bike or a paltry 6 by motorcycle. However, to measure the real value in doing something you need to allow for all the benefits and dangers. It’s well known that cycling brings health benefits that outweigh the risks when compared to driving, so we must “gain” some micromorts in fitness benefits. But at least we can compare the risks.
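The arithmetic is simple enough to sketch, using only the figures quoted above (1 micromort per 200 car miles, 20 bike miles or 6 motorcycle miles, against a background of roughly 50 micromorts per day):

```python
# Micromort cost of a trip, using the per-mile figures quoted in the post.
MILES_PER_MICROMORT = {"car": 200, "bike": 20, "motorcycle": 6}
BACKGROUND_PER_DAY = 50  # everyday background risk, as quoted

def micromorts(mode, miles):
    """Extra one-in-a-million chances of death incurred by the trip."""
    return miles / MILES_PER_MICROMORT[mode]

# A 60-mile trip: 10 micromorts by motorcycle (a 20% bump on the daily
# background), versus 3 by bike and a mere 0.3 by car.
trip = 60
for mode in MILES_PER_MICROMORT:
    extra = micromorts(mode, trip)
    print(mode, round(extra, 1), f"{100 * extra / BACKGROUND_PER_DAY:.1f}%")
```

The unit makes otherwise incomparable risks directly comparable, which is exactly its appeal.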
For example, Equasy (addiction to horse riding) is as dangerous as Ecstasy. The risk of dying (in micromorts) for taking a pill of ecstasy is roughly the same (or less) than taking a ride on a horse – both around 1. Now of course there are other factors – long-term problems associated with addiction to an illegal drug – but the point remains that drug laws cannot be justified on the basis of absolute risks alone. Some things we think of as dangerous, well, aren’t. And other things really are.
Returning to the wine, often you don’t get the information needed to calculate risk in a media article. I couldn’t obviously see the relative risks associated with the wine in the top two articles, for example. But these numbers are definitely out there, and they can be measured in a simple and clear way. Why not tell us that, so that we can make an informed decision?
When I step on my Wii Fit and it tells me I’ve gained 2lb, how worried should I be? Well, that depends on how variable my weight is, day to day. Anna and I have done a simple study, and found that each other’s weight accounts for 50% of the variation in our own weight. And large variation occurs over the scale of days – meaning that it is all water. Our weights are (on a day-to-day basis) determined by the things we share in common – food and drink intake.
There are some surprising results here. The range of values is 5% of the average – meaning that if I weighed 10 stone, I could measure myself twice in a month and differ by half a stone! The standard deviation is 1%, meaning that though I weigh on average 10 stone I would on an average day be 1.4 lbs away from it. And daily we vary by 0.8%, so I’d differ by just over a pound on average. So for every day with no change, there is a day with 2 lbs difference.
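The arithmetic behind those figures, for a hypothetical 10-stone (140 lb) person, is just percentages of the mean:

```python
# Convert the study's percentage figures into pounds for a 10-stone person.
POUNDS_PER_STONE = 14
mean_lb = 10 * POUNDS_PER_STONE  # 140 lb

range_lb = 0.05 * mean_lb    # spread between lightest and heaviest readings
sd_lb = 0.01 * mean_lb       # typical distance from the mean
daily_lb = 0.008 * mean_lb   # typical change from one day to the next

print(range_lb, sd_lb, daily_lb)  # ~7 lb (half a stone), ~1.4 lb, ~1.1 lb
```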
The numbers become more meaningful when our weights are compared. My weight and Anna’s correlate at 49%, meaning that half of our variation is explained by a common factor. During the months of note-taking, I was cycling to work and Anna was exercising at home. We were only really getting exercise together at weekends. But we ate dinner together every evening, and we drank beer and wine at the same times. That is what is controlling that 50%. And because it comes off so quickly, it can only be weight stored as water – we vary this much simply by varying how much water we are retaining in our bodies.
Most trends in weight are gone in 4 days, but there is strong evidence (p<0.001) for a (weak) trend over the study period. Yet our weights now match the mean of the data, so this trend is also variation – it’s just happening over very long times. In other words, we vary day-to-day, and we vary month-to-month, yet we don’t vary year-to-year.
Problems with the study
To start with, the data isn’t taken over a very long time (or for enough people). It would be interesting to see if there were weekend effects or monthly effects. Secondly, we didn’t record any useful information about food intake, exercise levels, etc, so we can’t examine where the correlation really does come from and what other factors help to explain it. Additionally, like all long-term measurements, the conditions aren’t always identical. The readings are all in the mornings but sometimes before, sometimes after breakfast.
However, the weights recorded here are statistically identical to our recent weights, so they were drawn from our typical variation – there were no long-term trends that could have affected them.
Don’t fret over small changes in weight! It takes a long time to lose fat, and small changes in water retention can mask it all. What we eat clearly does matter a lot, but over the long term it comes down to the simple equation:
Weight gained (energy units) = energy consumed – energy used
Over the short term, all diets will simply change water retention, so keep an eye on your weight over months to be sure that the trend is real! Even if weight is gained or lost for a month it would return to where it was if there are no lifestyle changes. Simply put: lifestyle determines weight, and that is a very difficult thing to modify.
And don’t let Wii Fit tell you off for a couple of extra lbs 🙂
* Units of measurement
In order to protect Anna’s and my own privacy on the web, the results have been presented in convenient units. Anna’s weight is measured in “Metric Anna’s”, so the average weight is one. Dan’s weight is measured in “Imperial Stormtroopers”, since he is one (in his head at least) and therefore his weight also averages to 1.
Interestingly, the “Imperial Stormtrooper” is also the traditional unit of measurement for ineffectiveness – 1 Stormtrooper achieves exactly nothing, although it can shoot wildly and miss. However, this causes problems in this study when Dan measures more than 1 Stormtrooper, as he becomes negatively effective. This is sometimes apparent when he washes up, as plates can mysteriously get dirtier with washing. The measurement for Annas used to be Imperial as well, but they declared themselves Queen and insisted the servants had to do the washing up (clearly a bad idea with only Stormtroopers around). Hence the need for a more modern measurement that neatly averaged to 1 as well as tidying up after themselves in the kitchen.
This news is related to the research I did when working at BioSS in Aberdeen:
There are two important facts in here:
- that only 10% of cells in our body are human: the rest are bacteria and other microorganisms, and these vary a lot between people.
- What we eat affects which bacteria thrive inside our digestive system, and which bacteria are inside us affects us dramatically. For example, our bacteria can change whether we get fat when eating food. It can also affect our chances of getting cancer.
The findings of this research are a little depressing for anyone wanting to lose weight: if you are fat, your gut bacteria will give you more calories back from food than if you are thin. But if you diet properly you can get a thin person’s gut bacteria whatever your weight – so all is not lost!
The bugs may have power over you, but you can still control the bugs.
Most games are idle distraction from reality. However, sometimes we can learn things from them. I think I’ve uncovered something very important playing Civilisation 4.
How, you might well ask? In the game, you control a civilisation from its origin through to the colonisation of another planet. Your civilisation grows from a small band of farmers to a world spanning empire. And here is the thing: it only ever gets bigger and better.
Can you think of another empire throughout history for which this is true? There isn’t one. The ancient Mesopotamian empires were very short-lived. The Greeks were culturally powerful but soon lost their influence. The Romans controlled the Mediterranean basin with an unmatched army, yet fell within a few hundred years of their empire being established. China failed to capitalise on its huge cultural, scientific and organisational lead during the European dark ages, was repeatedly overtaken by barbarians, and later dominated by European merchants. Simply put: in history the powerful have always failed to keep their power.
Why is this – what is missing from the game? Civilisation is designed to be fun, not realistic – perhaps it misses out some key scientific knowledge. After a year’s worth of scientific reading, I can conclusively say – nobody knows! I find this shocking, and exciting. There is huge potential here for research – about a fundamental process that has shaped our world as much as religion has, and will determine our future.
I don’t mean to say there is nothing known. There are several good books on the subject – “Historical Dynamics: Why States Rise and Fall” by Peter Turchin is a good place to start, as is “Guns, Germs and Steel” by Jared Diamond. There are three basic explanations offered:
- Internal economics. As a state gets powerful, it develops various methods for doing things. These might be successful initially but eventually they cause problems, and the society can’t change as fast as some other, weaker societies. Essentially: a strong society causes problems that bring about its decline.
- External events, such as barbarians and other empires.
- Environmental change. This can be either caused by the society (so is really internally caused) or natural changes such as mini ice-ages (and thus an external event).
Clearly, external events aren't enough on their own to explain why a big empire falls, because bigger societies have more resources available to cope with the event than smaller ones. So there must be some internal explanation, and there is little agreement about why different societies cope with the same events so differently.
I'll make another blog post another time to describe some things that cause societies to become weaker, and whether they mean dramatic changes for the future of our society. But as things currently stand there is:
- No causal understanding of what leads societies to weaken, or when. (1)
- No accepted way to interpret the evidence to support or reject explanations.
What does this mean, in terms of computer games? It means there is a good set of ideas about how societies might get weaker, but no knowledge of how "game rules" could be made from them. And nobody really knows which rules influenced the decline of specific empires in history.
Both of the issues could be addressed through a mathematical framework for societal change (which the game of Civilisation actually is!). So, to get a more realistic game of civilisation, we need to do some fundamental research – maybe Firaxis Games will pay my wages?
Note (1): Turchin's book is actually the first to try to address this using mathematical models, though he focusses more on larger-scale issues such as European versus eastern influence (which he calls "World Systems"). This sort of modelling is the only way to establish that a given mechanism really causes societal weakening, and under which conditions.
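To give a flavour of what this kind of modelling looks like, here is a toy simulation in the spirit of the demographic-fiscal idea from Turchin's book: population grows towards a carrying capacity that is boosted by the state's resources, but the state only accumulates resources while the population is below what the land supports unaided. To be clear, the equations and every parameter value below are my own illustrative simplification, not Turchin's actual model.

```python
def simulate(steps=2000, dt=0.05, r=0.1, k0=1.0, c=3.0, rho=1.0, beta=0.25):
    """Toy rise-and-fall dynamics: population N and state resources S.

    k0 is the unaided carrying capacity; state resources raise it by c*S.
    All parameters are made up purely for illustration.
    """
    N, S = 0.1, 0.0
    history = []
    for _ in range(steps):
        k = k0 + c * S                           # strong state -> higher capacity
        dN = r * N * (1 - N / k)                 # logistic population growth
        dS = rho * N * (1 - N / k0) - beta * N   # surplus vanishes as N nears k0
        N += dN * dt
        S = max(0.0, S + dS * dt)                # the state cannot go into debt
        history.append(N)
    return history

pop = simulate()
# The population overshoots the unaided carrying capacity while the state is
# strong, then declines once the state's resources are exhausted: a rise and
# a fall, from purely internal dynamics with no barbarians required.
peak, final = max(pop), pop[-1]
```

Even this cartoon reproduces the qualitative pattern the game lacks: growth sows the seeds of decline, which is exactly the "internal economics" explanation above in equation form.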
This very interesting article by a scientist and former world-class cyclist discusses why taking drugs is likely to be the norm for many high-end athletes. Why? Because the system favours it. This can, surprisingly, be demonstrated very easily with the mathematics known as "game theory" (don't worry, the article is free of maths).
Roughly speaking, taking beneficial drugs ("doping") is worth it because it both increases your chances of winning and decreases your chances of being cut from the team. Playing by the rules is the "sucker's" option, since your chances of getting caught are so low.
Of course, the analysis is somewhat simplistic. In reality, whether someone should (rationally) cheat depends on their skill level, whether their teammates are cheating, whether their team supports the cheating, and more. The suggestions for fixing the cheating problem do address these issues, however. They include: punishing the whole team if one member is caught cheating, lifetime bans for the cheater, and allowing cheaters to speak out after the event without punishment. These measures would increase the chances of being caught and decrease the likelihood of teams having "organised" cheating. They would also help to identify teams that persistently cheat so that they can be more extensively tested.
The theory is based on the "prisoner's dilemma". In sports terms, the problem can be written in terms of a "payoff" (i.e. money) that the athlete may expect depending on whether they cheat, and whether their opponent does. If neither player cheats, they both have an even chance of winning. If they both cheat, they still have an even chance of winning but might get caught. But if one cheats and the other doesn't, the cheater will most likely win and most likely not get caught.
| Your action | Opponent plays fair | Opponent cheats |
|---|---|---|
| Play fair | Even chance of winning | Almost certainly lose |
| Cheat | Almost certainly win, rarely caught | Even chance, with a risk of being caught |
From this, it is clear that whatever the opponent does, it is best to cheat. The dilemma is that if neither player cheats, then they are both better off than if they both cheat! Of course, the simplest solution is to reduce the payoff for cheaters, so that whatever the opponent does, your payoff is higher for not cheating.
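The argument above can be checked mechanically. Here is a minimal sketch with made-up payoff numbers (any numbers with the same ordering would do): cheating is the best response whatever the opponent does, yet mutual fair play pays both players more than mutual cheating.

```python
# Illustrative payoffs for the doping dilemma (higher is better for you).
# The specific numbers are assumptions; only their ordering matters.
FAIR, CHEAT = 0, 1

# payoff[your_action][opponent_action] = your expected payoff
payoff = [
    [3, 0],  # you play fair: even chance (3), or lose to a cheater (0)
    [5, 1],  # you cheat: likely win (5), or even chance minus the risk of being caught (1)
]

def best_response(opponent_action):
    """Return the action that maximises your payoff against a fixed opponent."""
    return max((FAIR, CHEAT), key=lambda a: payoff[a][opponent_action])

# Cheating is a dominant strategy: it is the best response either way...
assert best_response(FAIR) == CHEAT
assert best_response(CHEAT) == CHEAT

# ...and yet both players would be better off if neither cheated: the dilemma.
assert payoff[FAIR][FAIR] > payoff[CHEAT][CHEAT]
```

The proposed fixes amount to lowering the cheat-row payoffs (bigger punishments, higher chance of being caught) until fair play becomes the best response instead.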