a blog about things that I've been thinking hard about

There is (Almost) No Such Thing as the "Common Good"

22 July, 2005
biological success is always relative

All living organisms live in a world of finite resources. They always have.

So long-term reproductive success is always relative. Whoever wins, others must lose.

Therefore, for humanity, there is no "Common Good".

Other than the continued survival of the human race as a species.

Unless, perhaps, we can avoid the finiteness by expanding into outer space.


The Definition of "Good"

What is "good"? It's a word that we use all the time – together with its opposite "bad" – but perhaps without thinking about what it really means.

We know that goodness is a subjective notion, because what is good for me may not be good for you. We can also think of "good" in relation to specific goals, like "this is a good saucepan", which implies that it helps one to achieve whatever it is that a saucepan achieves, e.g. cooking food.

These two properties, relativity to goals and subjectivity, can be reconciled if we realise that what is "good" for me is whatever helps to achieve my goals.

Which leads to the question of what my goals are. But first, a question of whether good can be "common".

Agreement about "Good"

If I am driving my car along the road, I generally consider it to be a "good" thing if I don't crash. So that is my subjective notion of good, and presumably relates to goals I have which involve not getting killed or injured or having to repair my car.

If there happen to be four other people in the car with me, quite likely they also consider it to be "good" if the car doesn't crash. So we could consider that to be a "common good", since we all agree on the goodness of not crashing.

If we consider all the cars that ever travel along that road, and if we also consider some property of the road that influences the probability of a crash occurring, like a deceptive bend, then all the people who travel along that road in cars might agree that it would be "good" to fix the road up to reduce the chance of anyone crashing.

So now we have a common good that is common across quite a large group of people.

By considering decisions and choices that affect the circumstances of larger and larger groups of people, it seems that there is no limit to the number of people who might agree about what is good and what is not good.

In which case, there must be some things which could be judged to be "good" by everyone (or almost everyone) in the whole world. Anything that was judged to be good by everyone would count as being judged according to "the common good".

Common Good Implies Common Goals

If "good" is a word that only has meaning in relation to particular goals, then there can only be a common good for all of humanity if there is some common goal for all of humanity.

So what are our goals, and is there any goal that applies to everyone in the world?

We all have some idea what our goals are, things like:

It seems that we should be able to define some kind of common goal for humanity which involves everyone (or as many people as possible) getting these things.

Biology

Unfortunately, theoretical biology tells us something about all these goals which implies that it is not possible to define a common goal for humanity. This has to do with the ultimate goal which lies behind all these goals, and that goal is long-term reproductive success.

We can measure long-term reproductive success in absolute terms, i.e. by counting how many descendants we have. But in the long run, the population of any species that the Earth can support is finite. Also, the carrying capacity for any particular species may vary from one time to another. So the final result of pursuing long-term reproductive success is not some particular level of continual increase, since continual increase is not possible; rather, it is either domination of the population, or extinction.

This means that the goal of long-term reproductive success is specifically relative success. And relative success cannot be a common goal, because I can only be relatively more successful if other people are relatively less successful.

By this criterion, anything that is deemed to be "good for humanity" can actually only be good for some people, and bad for others. It is only good for those people who gain more benefit from it than average.
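To make the relativity concrete, here is a minimal numerical sketch (my own illustration, not part of the original argument): two hypothetical lineages reproduce at slightly different rates inside a population capped at a fixed carrying capacity, and the slower-growing lineage's share of the future population shrinks towards zero even though its absolute growth rate is positive. The growth rates and the capacity are arbitrary assumptions, chosen only to show the effect.

```python
# Toy illustration (assumed numbers, not from the article): with a fixed
# carrying capacity, reproductive success is purely relative. Two lineages
# grow at different rates, but the total population is capped, so the
# slower-growing lineage's share of the future population shrinks towards
# zero even though its own growth rate is positive.

CARRYING_CAPACITY = 1_000_000  # assumed fixed ceiling on total population

def simulate(share_a=0.5, growth_a=1.02, growth_b=1.01, generations=2000):
    """Return lineage A's share of the population after each generation."""
    pop_a = share_a * CARRYING_CAPACITY
    pop_b = (1 - share_a) * CARRYING_CAPACITY
    shares = []
    for _ in range(generations):
        # Each lineage reproduces at its own rate...
        pop_a *= growth_a
        pop_b *= growth_b
        # ...but the environment culls the total back to the carrying
        # capacity, hitting both lineages proportionally.
        scale = CARRYING_CAPACITY / (pop_a + pop_b)
        pop_a *= scale
        pop_b *= scale
        shares.append(pop_a / CARRYING_CAPACITY)
    return shares

if __name__ == "__main__":
    shares = simulate()
    for gen in (10, 100, 500, 2000):
        print(f"generation {gen:5d}: lineage A holds "
              f"{shares[gen - 1]:.1%} of the population")
```

The same logic holds for any positive difference in rates: under a fixed ceiling, only the relative rate matters for who ends up dominating the population.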

Two Exceptions

There are only two exceptions to this dismal view of common goals of humanity:

Survival of the Species

If the continued existence of the human race is in doubt, then the distinction between relative success and relative failure becomes irrelevant, because extinction is an absolute thing, and we will all be failures, with no descendants at all.

For most of prehistory, there was very little that any individual could do to affect the continued existence of their whole species, and as a result, we do not see much evolution of adaptations that serve the whole species rather than the individual. If competition between individuals leads to some circumstance where the whole species goes extinct, then so be it, and that is the end of that species.

Modern human civilisation adds some new possibilities, because:

  1. Individual humans armed with powerful technologies can make decisions that may affect the future survival of the whole human race.
  2. We can imagine the possibility of extinction (whether by our own efforts or due to some external cause), and we can agree to work together to prevent such an eventuality.

Of course, even while we work on a common goal of preserving the species, we will still all be competing to maintain a larger share of descendants within the future population, and this may still result in technological developments that threaten the extinction of everyone. Whether one goal (survival of the species) can win out against the other goal (relative reproductive success of the individual) is not a foregone conclusion.

Expansion into Space

Humans are a naturally expanding species, and some of our basic instincts seem to have to do with expanding into new territories (although often this involves wiping out the existing inhabitants first).

If the Earth that we lived on were not finite, then the relative nature of biological success would not be inevitable. For example, my descendants and I could expand at a rate of 1% per century in one direction, and you and your descendants could expand at a rate of 2% per century in a different direction, and your greater success would not necessarily imply my extinction (although a bit of geometrical thinking suggests that a faster-expanding population will eventually encircle a more slowly expanding one, and encirclement is then followed by extinction due to competition at the boundary).
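For comparison with the capped-population sketch above, here is a rough calculation (again my own illustration, taking the 1% and 2% per century figures from the previous paragraph as assumed rates) of what unbounded expansion looks like: both lineages keep growing in absolute terms, so the slower one is never driven to extinction, even though the faster one comes to dwarf it.

```python
# Rough sketch (assumed starting sizes and rates) of the 1% vs 2% per century
# example: in an unbounded world both lineages grow without limit, so neither
# goes extinct, even though their relative sizes diverge.

def expansion(centuries, start=1_000, slow_rate=0.01, fast_rate=0.02):
    """Population sizes after compounding growth for the given number of centuries."""
    slow = start * (1 + slow_rate) ** centuries
    fast = start * (1 + fast_rate) ** centuries
    return slow, fast

for centuries in (100, 500, 1000):
    slow, fast = expansion(centuries)
    print(f"after {centuries:4d} centuries: "
          f"slow lineage ~{slow:.3g}, fast lineage ~{fast:.3g}, "
          f"ratio ~{fast / slow:.3g}")
```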

Of course, planet Earth is finite. The Universe is infinite, or at least very large. But expansion into space is far more difficult. It may exist as a common human goal in the abstract, but the technological barriers even to human interplanetary space travel are enormous, and true expansion will really require interstellar space travel.

For example, George Bush's Mars plan might seem like a start towards some common human goal of colonising space. But carried out too soon, it may simply result in an exhaustion of Earth's local resources whose only consequence is a very limited exploration of one planet by a tiny number of individuals, which falls so far short of actual expansion into space as to be biologically irrelevant.

If the colonisation of the Solar System or other solar systems is to be achievable on a scale sufficient that we can consider all of us as contributing to the future of humanity, it has to be much less resource-intensive than it currently appears to be, and this means that we need to develop a whole lot of basic technology first, before we start planning a launch date for a suicidal mission to Mars by a couple of astronauts.

Defining the "Common Good"

Neither species survival nor expansion into space are biologically programmed into our genes as instinctive goals. Any acceptance by humanity of these goals as common goals, and therefore criteria for "the common good", requires acceptance by that same humanity of the somewhat abstract argument that I have just given (and which might need to be restated by those better able to explain it).

Given that both possible extinction and expansion into space are likely to happen in some distant future, if they ever happen at all, it becomes rather difficult to determine the exact consequence of any decision that we make now. If I invent some new technology, is that going to make self-destruction more likely? Will it eventually help with space colonisation, perhaps in some very indirect manner? And even if I decide that a technology is "bad", and decide to suppress it, what if someone else develops it anyway?

Skepticism about the Common Good: There are Always Losers

If we simply assume that both human extinction and human colonisation of space are very unlikely, then we return to the conclusion that there is no "common good". Whenever someone proposes that something is done for the common good, the reality is that it benefits some people more than it benefits other people, and, given the relative nature of reproductive success, those who benefit less than average actually lose from it.

If we assume that the development of world organisations and international agreements does proceed based on this shaky notion of "common good", then in each case we must ask who are the winners, and who are the losers? And why do the losers put up with it?

In many cases, the likely answer to this last question is that the losers put up with it because they lack the clout to do anything about it. Either they are too poor or too disorganised, or they may simply form too small a minority. Or they may just fail to realise that they are the losers. The "improvement" of the world by mutual agreement may actually be a constant "tyranny of the majority", where perhaps 95% of the population agrees on a measure which ensures the eventual extinction of the remaining 5%.

What would be silly of course would be for 95% of the world to accept a measure that only benefits the other 5%. For example, some people say that there are "no winners" in a nuclear war which destroys 95% of humanity. But, as I have been arguing, success is always relative, and as long as a nuclear war does not render the Earth uninhabitable, the "winners" of a nuclear war consist of the survivors, who have what is left of the world all to themselves.

The survivors may, for example, consist of those with access to special government-funded bunkers. When Country A and Country B wage nuclear war and destroy the world, it turns out that the real war will be between those inside the bunkers and those not inside the bunkers. And if the people outside the bunkers are smart, they won't let the people inside the bunkers have the last say on whether or not there is a nuclear war.
