How to increase co-operation through Tit-for-Tat

Adapted extracts from a paper by Kevin McFarlane entitled 'The Rational Self-Interest of Reciprocity - Robert Axelrod and the Evolution of Co-operation'. This is one of the excellent occasional papers produced by the Libertarian Alliance, 25 Chapter Chambers, Esterbrooke Street, London SW1P 4NN. Kevin McFarlane's paper explores Robert Axelrod's book The Evolution of Co-operation, first published by Basic Books of New York in 1984 and by Penguin in the UK in 1990.

Robert Axelrod begins his analysis by explaining the so-called Prisoner's Dilemma game, devised originally by Merrill Flood and Melvin Dresher in about 1950. In this game there are two players who are awarded differing points according to whether they co-operate or defect. (To defect means not to co-operate.) The game works like this.

Let the two players be A and B. If A and B both co-operate they both get 3 points. This outcome for both players is called R, the reward for mutual co-operation.

If A co-operates but B defects then A gets 0 points and B gets 5 points. A's outcome is called S, the sucker's pay-off, and B's outcome is called T, the temptation to defect.

Finally, if A and B both defect they each get 1 point. This outcome for both players is called P, the punishment for mutual defection. Note that the pay-offs satisfy T > R > P > S: each player is tempted to defect, yet mutual defection leaves both worse off than mutual co-operation. This is what makes the game a dilemma.
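
The pay-off structure can be summarised in a few lines of code. The following is a minimal sketch in Python (the names PAYOFFS and score are purely illustrative, not from Axelrod's book); it simply records the four outcomes described above:

    # Pay-offs for one round of the Prisoner's Dilemma, keyed by
    # (A's move, B's move). 'C' = co-operate, 'D' = defect.
    # Values follow the text: T = 5, R = 3, P = 1, S = 0.
    PAYOFFS = {
        ('C', 'C'): (3, 3),  # R, R - reward for mutual co-operation
        ('C', 'D'): (0, 5),  # S, T - sucker's pay-off vs temptation
        ('D', 'C'): (5, 0),  # T, S
        ('D', 'D'): (1, 1),  # P, P - punishment for mutual defection
    }

    def score(move_a, move_b):
        """Return (A's points, B's points) for a single round."""
        return PAYOFFS[(move_a, move_b)]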

Axelrod organised several computer tournaments in which the participants' computer programs played iterated Prisoner's Dilemma on a round-robin basis, i.e., every program played every other program and also a copy of itself. The winner was the program which amassed the greatest number of points summed over all its interactions. The program which won the main tournaments was the simplest and shortest of all, called Tit for Tat. It always opened with co-operation, but would answer a defection with a defection of its own.
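
The Tit for Tat rule is simple enough to state in a few lines. Below is an illustrative sketch of the strategy and of a single iterated pairing, building on the score function sketched above; the round-robin tournament simply ran such pairings between every pair of entries. This is a sketch of the rule as described, not Axelrod's actual tournament code:

    def tit_for_tat(my_history, their_history):
        """Open with co-operation; thereafter copy the opponent's
        previous move."""
        if not their_history:
            return 'C'             # first move: co-operate
        return their_history[-1]   # echo their last move

    def play_iterated(strategy_a, strategy_b, rounds=200):
        """Play one iterated pairing; return (A's total, B's total)."""
        hist_a, hist_b = [], []
        total_a = total_b = 0
        for _ in range(rounds):
            move_a = strategy_a(hist_a, hist_b)
            move_b = strategy_b(hist_b, hist_a)
            pts_a, pts_b = score(move_a, move_b)
            total_a, total_b = total_a + pts_a, total_b + pts_b
            hist_a.append(move_a)
            hist_b.append(move_b)
        return total_a, total_b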

The broad conclusions which Axelrod draws from his analysis are these:

(1) Co-operation can get started even in a world of unconditional defection (everyone following a policy of always defecting). It can evolve from small clusters of individuals who base their co-operation on reciprocity and have even a small proportion of their interactions with each other (a numerical sketch of this follows the list). But it cannot emerge if such individuals are too scattered and have a negligible proportion of their interactions with each other.

(2) A strategy based on reciprocity can thrive in a world where many different kinds of strategies are being followed (robustness).

(3) Co-operation, once established, can protect itself from invasion by less co-operative strategies (also robustness).
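
Conclusion (1) can be checked numerically. In the sketch below, a small cluster of Tit for Tat players lives among unconditional defectors, and p is the proportion of a cluster member's interactions that are with other cluster members. For simplicity it assumes the cluster is so small that the natives' own scores are unaffected; the function names are illustrative:

    def all_d(my_history, their_history):
        """Unconditional defection."""
        return 'D'

    def cluster_advantage(p, rounds=200):
        """Average score of a cluster member minus a native's score."""
        tft_tft, _ = play_iterated(tit_for_tat, tit_for_tat, rounds)
        tft_alld, _ = play_iterated(tit_for_tat, all_d, rounds)
        alld_alld, _ = play_iterated(all_d, all_d, rounds)
        tft_average = p * tft_tft + (1 - p) * tft_alld
        return tft_average - alld_alld

With 200 rounds the pairwise scores are 600 (Tit for Tat with itself), 199 (Tit for Tat against a defector) and 200 (two defectors), so the cluster out-scores the natives whenever 600p + 199(1 - p) > 200, i.e. whenever p exceeds about 0.25 per cent; this is why even a very small cluster can get co-operation started.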

Axelrod then shows how these theoretical arguments can be applied to various social and biological settings. One example he describes is the Live-And-Let-Live system of trench warfare during World War I. This is a particularly illuminating example, since here we had co-operation between groups who were most definitely not supposed to co-operate: they were at war with each other! Axelrod explains how the conditions of trench warfare met the technical conditions of the Prisoner's Dilemma.

What made trench warfare different from most other combat was that the same small units faced each other in immobile sectors for extended periods of time. It was this persistence of interaction that made a Tit for Tat strategy viable. Axelrod describes how both sides did indeed follow such a strategy: both tended to co-operate, but would respond to a defection by the other side. This behaviour persisted despite the best efforts of the opposing high commands to prevent it.

Axelrod provides four suggestions for doing well in an iterated Prisoner's Dilemma.

(1) Don't be envious of the other player.

(2) Don't be the first to defect.

(3) Reciprocate both co-operation and defection.

(4) Don't be too clever.

With both conscious foresight and the ability to shape our environment, human beings can actively promote co-operation. Axelrod gives five broad ways in which we can do this.

(1) Enlarge the shadow of the future. In other words, arrange things so that possible future interactions are sufficiently important relative to the present one (a worked version of this condition follows the list).

(2) Change the pay-offs. This is what laws do: consequences such as fines or imprisonment make the pay-offs for not co-operating less attractive than they would be if the laws were absent. Thus co-operation is induced.

(3) Teach people to care about each other.

(4) Teach reciprocity.

(5) Improve recognition abilities. An example of poor recognition abilities hampering co-operation in a human social environment is that of superpower arms negotiations. In this case, the difficulty lies more in recognising what the other player has actually done than in failing to recognise the other player.
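
Suggestion (1) can be made quantitative. Axelrod measures the importance of the future by a discount parameter w, the weight of each move relative to the previous one, and shows that Tit for Tat cannot be invaded by any other strategy when w is at least max((T - R)/(T - P), (T - R)/(R - S)). A quick check with the pay-off values used earlier (illustrative code only, not from the paper):

    # Axelrod's collective-stability condition for Tit for Tat:
    # w >= max((T - R) / (T - P), (T - R) / (R - S))
    T, R, P, S = 5, 3, 1, 0
    threshold = max((T - R) / (T - P), (T - R) / (R - S))
    print(threshold)  # 0.666..., i.e. the next move must matter at
                      # least two-thirds as much as the current one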

