This is the final part of a four-part series which began with “What If?  What If?  Why Shouldn’t?” on Friday, 03-07-2014.

According to an old joke, there are two kinds of people.  One of them says there are two kinds of people ....

I am tempted to say that there are two kinds of attitudes toward transhumanism, the increasingly influential ideology of transcending the limits of human nature.

The first kind have never heard of it, or think it is only science fiction.  The other kind are true believers.  They think science fiction is future history.

But of course other views can be taken.  Unlike those in the second group, I don’t think the transhumanist dream will come true.  I don’t think it can.  But unlike those in the first, I take it seriously.  I do so because a surprising number of people in the military, in government, and in industry take it seriously, and they can do a lot of harm by trying to make it come true.  Though human beings cannot transcend their natural limits, they can damage each other badly in the attempt.

In the last post I gave the example of altering soldiers so that they no longer need to sleep and can fight for days on end.  Yes, military planners are already looking forward to the prospect.  No more to repose in that sweet slumber which knits up the raveled sleave of care:  Can you think of a more crippling, debasing “enhancement”?

Other people look forward to living forever.  It takes but a moment of thought to realize that even if this were possible, it would require putting an end to the sweet cycle of the generations, of bearing and begetting and raising families.  Could a world made empty of the laughter of children be endured?

Still others dream of a world in which no one needs to be constrained by actual reality, because we will spend all our time in virtual reality where we can have whatever we want.  Never mind for the moment whether what isn’t real could be better than what is.  Just ask yourself which experiences in life have made you better.  Were they the ones in which you got whatever you wanted?

But suppose you could transcend your nature.  How would you even choose what to be?  What we find attractive is determined by what we already are; so whatever we chose, we would not be transcending ourselves after all.  Socrates tells a story of the gods of the underworld offering those who have died free choice of what lives they will have next.  Their choices are determined mostly by the afflictions and unfulfilled desires of the lives they have just experienced.  Orpheus chooses to be a swan next time around, because, having been murdered by women, he is unwilling to be conceived and born of one.  Ajax chooses to be a lion, because, having suffered injustice from the superior might of another, he is determined to be strong.  Thersites, the buffoon, chooses the life of an ape.  One former citizen of a well-ordered state chooses the life of the tyrant Tereus, expecting that by power and cruelty he will possess every object of his lust, never dreaming that just because of his cruelties and lusts, his wife will prepare him a dinner from the corpse of his son, whom she has murdered in revenge.

I am with John Donne:  “Affliction is a treasure, and scarce any man hath enough of it.”  But Donne adds that to be matured and ripened by it, one must be made fit for it by God.  This will not be a welcome reflection for those who desire to be God themselves.

One young person with whom I spoke remarked that for him the attraction of transhumanism was not that we might live forever or acquire other-than-human abilities, but that we might be reprogrammed to be perfectly good, so that henceforth we would never choose what is wrong and would always choose what is right.  There are several problems with this proposal.  One is that our military, political, and industrial reprogrammers would not work from such motives.  The most likely outcomes are that the great majority of people would be made tools of the few in control, or that all people would be made tools of a process no longer in anyone’s control.

The other problem is that we cannot be programmed to discern good and evil, because good judgment is not captured by algorithms.  Not even good mathematical judgment can be.  Ever since the revolutionary proofs of the young Kurt Gödel, it has been known that any consistent system of axioms rich enough to express arithmetic is formally incomplete.  No matter how many axioms we have, there will always be true statements whose truth cannot be decided from the axioms alone.  Although the mathematician may add new axioms to settle such questions, the system of axioms itself cannot tell him which axioms to add.  Among other things this shows that the mind of the mathematician is exercising powers we do not understand, which transcend any possible computational process.

If programming falls short even in the discernment of mathematical truths, how much more will it fall short in the discernment of ethical truths!  We cannot be programmed to be good; we can only learn, with God’s help, to be good.  Since the foundation of moral judgment is not a program, it follows that if we try to convert it to a program, we will not improve it, but only destroy it.

We cannot transcend the limits of human nature, though we might maim our nature trying.  Whether the maimed beings would be able to recognize themselves as maimed is another question.

There might be no coming back.