
Our Premature Jump to the New Millennium

Most people took the transition from 1999 to 2000 as the transition to the new millennium.
Most people took the transition from 1899 to 1900 as the transition to the new century.
Most people took the transition from 2009 to 2010 as the transition to the new decade.
They were wrong!

—————
There are many topics which provoke debate and even hostility, such as politics and religion. Wise barbers and taxi drivers avoid such topics for fear of annoying or disturbing their customers. Better to stay with talk about the weather or sports, though even football or baseball can be a sensitive subject. In this blog I don't avoid topics which can be minefields of controversy, such as political thought, economic theory, religion, and philosophy. Hopefully readers will find much to stimulate thought and even find some discussions to be educational. Readers will also find much that will disturb their ordinary ideas and even provoke angry responses. That is fine as long as we keep the discussion civil; I welcome disagreement. It challenges me to clarify and improve my philosophical expression.

The Puzzle of the New Millennium, New Century, and New Decade:

Contrary to political and religious topics, which are important and controversial, there are some which, although somewhat trivial, provoke much dispute. One of these was the question that some of us posed before the turn of the century: When does the new millennium really begin? As most of you recall, the world in general celebrated the start of the new millennium on January 1, 2000. But this bothered a few of us. We took on the role of spoilsports and pointed out that the world was premature in its celebration by one year. The new millennium did not start until January 1, 2001, we argued. People were annoyed, even irritated, by this reasonable dissent from the popular opinion. Personally I annoyed and irritated a few friends and colleagues by arguing that the transition to the new millennium was the transition from the year 2000 to 2001, not the transition from 1999 to 2000, as most people thought. Along with other nitpickers like myself, I argued that when we count a decade of years (or any ten countable items, e.g., coins in my pocket, beans in a bag, people in a room) we start with the first item and count "one," add one for each item that follows, and finish with the tenth item, counting "ten." In other words, the decade runs from year 1 through year 10, the century runs from 1 through 100, and the millennium runs from 1 through 1000. So 10, 100, and 1000 are the numbers that mark the end of the decade, century, and millennium respectively (not 9, 99, and 999). The logical conclusion is that years 11, 101, and 1001 mark the start of the succeeding decade, century, or millennium. From this it follows that the year 2001, not the year 2000, marked the start of the new millennium ten years ago. The new millennium should have been celebrated on January 1, 2001!
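For readers who like to see the counting made mechanical, here is a minimal sketch in Python (my own illustration; the function name and the use of ceiling division are mine, not part of the original argument). It assigns each year, counted from year 1, to its ordinal decade, century, and millennium:

import math

def ordinal_block(year, block_size):
    # Counting years 1, 2, 3, ..., the Nth block of block_size years
    # runs from (N - 1) * block_size + 1 through N * block_size.
    return math.ceil(year / block_size)

for y in (1999, 2000, 2001):
    print(y, ordinal_block(y, 10), ordinal_block(y, 100), ordinal_block(y, 1000))
# 1999 -> 200th decade, 20th century, 2nd millennium
# 2000 -> 200th decade, 20th century, 2nd millennium
# 2001 -> 201st decade, 21st century, 3rd millennium

On this reckoning the year 2000 closes out the second millennium and 2001 opens the third, which is exactly the point at issue.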

But notwithstanding the logical validity of these arguments, the world celebrated the new millennium on January 1, 2000. My analytical, logical friends and I marveled at what appeared to us to be a nearly universal confusion. Was this general confusion just a phenomenon connected with the excitement of the new millennium? Added to the great anticipation of the imminent new millennium was the general apprehension, even fear, about how our computer systems would handle the change of year designation from '1999' to '2000.' People feared that computer systems would crash and that many vital functions would be disrupted. But, as most will recall, computer specialists prepared early for the transition and things went smoothly for most companies and government agencies. The world as we know it did not end on January 1, 2000! But getting back to the premature observance and celebration of the new millennium, let's ask again: Was this error one that was unique to the transition to the new millennium?

After some reflection and brief study, I found that this general error and confusion was not limited to the transition to a new millennium at years 1999-2000-2001. It has occurred also with respect to transitions between centuries and decades. Newspaper and history book accounts from the year 1899 indicated that folks back then also celebrated the transition to the new century prematurely, on January 1, 1900. They did not wait for the correct start of the twentieth century, January 1, 1901. Were they simply too impatient? Likewise, most people think that the current year of 2010 marks the start of the second decade of the twenty-first century (as indicated by my informal, unscientific polling of relatives, friends, and neighbors). But when we apply the logical arguments stated above, we see that the second decade does not start until the year 2011. Our high school graduating class of 1960 recently held a reunion. Most of my old classmates held that our graduating class was the first of the 1960s decade. But, again, simple logic shows that 1960 was the final year of the preceding decade (1951-1960), not the start of the 1960s (1961-1970).

Grouping numbers with other numbers that look the same:

So what is happening here? Surely people are not that illogical. Are they simply too impatient, prematurely marking the transition from one decade, century, or millennium to the next and mistaking the transition from 1999 to 2000 for the correct transition from 2000 to 2001? My initial diagnosis relies on simple folk psychology: people tend to group numbers with sets of numbers that look the same, rather than consider what those numbers signify. For example, the year 1960 looks like the years 1961 through 1969. These are the sixties, after all! The year 2010 looks like the years 2011 through 2019, and therefore belongs with that group. The year 1900 looks like the years 1901 through 1999, and belongs with that group. Finally, anyone can see that the year 2000 resembles the years 2001 through 2999, and therefore belongs with the new millennium! Most people are prone to this natural way of thinking and are not much impressed by logical arguments demonstrating that our inclination to group numbers by appearance leads to error. Most of the friends and colleagues whom I approached on this question of the correct transition time either laughed at me or expressed annoyance that I spent any time on such a trivial topic. The transition to the new millennium happens when the world agrees that it happens, and we don't have time for the technical dissent (on a triviality) of a few mathematicians and logicians!

Yes, they're surely correct. This is not a crucial issue which can affect what happens in our world. People observe and celebrate transitions when they think it appropriate. "Let us move on to more interesting and important issues," seems to be the prevailing attitude. I agree; there are more interesting and important issues to discuss. But before wrapping this one up, let me see whether we can find a significant philosophical point behind all this disputation about the start of the next decade, century, or millennium. Hopefully the reader will exercise a little more forbearance and stay with me for a few more paragraphs.

Numbers as Labels Versus Numbers as Counting Numbers:

A curious fact about our use of numbers is that we use them in two very different ways, often without noticing the difference. A number can function as a label or name for something; and a number can function as a member of a series of counting numbers. Examples of the use of numbers as labels or names are easy to find: the street number '15' may be used to identify a particular city street; the number '2502' may identify a specific property; or the number '18' a particular floor in a high-rise building. In each case, the fact that the street may not really be the fifteenth street in the city (there are only fourteen streets), or that there aren't 2,501 properties lined up prior to property 2502, or that the high-rise building omits the thirteenth floor, going from floor 12 to floor 14, does not affect the identification of 15th Street, or of address 2502, or of floor 18 in our high-rise. Here the numbers function as labels or names; they are simply identifiers. They could just as well be names using only alphabetical characters. The number '15' for our street functions the same way the name "Elm" does in identifying a specific avenue. The number '2502' functions the same way a name (e.g., the James place) would work to identify a specific property. This use of numbers (as labels or identifiers only) applies to our telephone or cell phone numbers, to our Social Security numbers, and to other numbers assigned to us at various stages of our lives. In earlier years of the telephone, telephone identifiers included words or letters along with numbers. The numbers were not counting numbers.
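To make the label-versus-count distinction concrete, here is a small illustrative sketch in Python (the eighteen-story building and its skipped '13' are hypothetical, as in the example above). It lists the floor labels of a high-rise that omits '13' and then recovers, by straight counting, the physical position that each label actually names:

# Floor labels in a building that skips "13", versus the count of physical
# floors from the ground up. Labels are names; counts are ordinals.
labels = [str(n) for n in range(1, 19) if n != 13]        # '1'..'12', '14'..'18'
physical_position = {label: i + 1 for i, label in enumerate(labels)}

print(physical_position["14"])   # 13: the floor labeled "14" is the 13th floor you reach
print(physical_position["18"])   # 17: the "18th floor" label names the 17th physical floor

The label does its identifying job perfectly well; it just cannot be read off as a count.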

With the second use of numbers, the function of the number is not limited to identifying the item at issue. If I tell you correctly that the new Chevrolet parked outside my house is the eighth car I have owned, the number ‘8’ could identify the car as car number 8. But the number ‘8’ also tells you that if you count the number of cars I have owned starting with my first as ‘1’, the Chevrolet would be number ‘8’; in other words ‘8’ is part of a series of numbers (counting numbers) representing a series of cars that I have owned.
Along the same line, if I tell you that I was the fourth child born to my parents, I am not simply labeling myself as the "fourth child," although this label would be accurate enough. I would also be telling you that, counting each child from the first to me and assigning a number to each, you would arrive at the number '4'. "Child number 4" indicates '4' as a usable label, but more importantly, it indicates that my birth occurred fourth in line. In short, '4' functions as a member of a counting series of numbers. Likewise, if I tell you correctly that this year marked my 41st wedding anniversary, I'm not simply naming or labeling this year as anniversary number '41'; I'm telling you that 41 years have passed since my wife and I were wed. Count them, one by one, starting with the year 1969, and you get '41'. The same is true for the use of numbers to indicate our age. Saying that this year I am 50 identifies this year as year '50' for me; but it also tells you that if you count the years from my birth, starting with '1' and adding one for each year until the current year, you will arrive at my age. The number '50' is part of a series of counting numbers, 1 through 50.

Now let us apply this distinction between numbers as labels and numbers as counting numbers to the question of new millennia, centuries, and decades. The number 2010 is not just a label for the current year; it also functions as a counting number: count the years of the decade starting at 2001, adding one for each year, and after ten years you arrive at 2010. Hence 2010 marks the end of the first decade of the new century, not the start of the second. The number 1900 is not just a label assigned to a specific year. It represents a number in a counting series; in principle, were you to count the years from 1 through 1900, adding one for each year, after 1900 steps you would arrive at the year 1900. And 1900 divided by 100 yields 19. So 1900 marked the end of nineteen centuries, with the year 1901 marking the start of the twentieth century. Likewise, the year 2000 was not just a label for that year ten years ago. It also indicated that 2000 years had passed since the conventional start of our calendar era at year 1. When we divide 2000 by 100 we get 20; so 2000 marked the end of twenty centuries, with the year 2001 the start of the twenty-first century. The case for correctly identifying the new millennium is even easier. Divide 2000 by 1000 and you get 2 with no remainder. So the year 2000 marked the end of the second millennium, with the year 2001 marking the start of the new millennium. Hence, when we take into account the use of the numbers '2000' and '2001' as counting numbers, and not simply as labels, it follows that January 1, 2001 was the start of the new millennium, and the general observance of the new millennium on January 1, 2000 was an error. Observing the new millennium at the start of the year 2000 betrayed a general failure to distinguish between two distinct functions of numbers. This is moderately offensive to anyone who likes to see things done rationally and cleanly.

Does any of this matter to anyone besides people like me who dwell on these oddities? Maybe not, or maybe it could. One could imagine the confusion that might arise when a person on floor fifteen needs emergency attention and the paramedics are sent to floor fourteen instead, because someone did not see the difference between '15' used as a name or label and its use as part of a counting series, a straight count of floors from 1 to 15. Or imagine a stranded soldier who is told that the third platoon will rescue him, counts only two platoons passing his position, fails to signal his presence, and is not rescued; the second platoon to pass by was in fact the platoon labeled "third." Does this ever happen? I imagine that it does and has in the past.

Chopra’s Deep Confusion: The Brain & Doubts about the External World

In an article titled “A conversation: consciousness and the connection to the universe” Deepak Chopra recounts an interview (March 27, 2010)** that he held with Dr. Stuart Hameroff of the Center for Consciousness Studies of the University of Arizona.

The interview is interesting on a number of points, e.g., Hameroff's attempt to explain perceptual consciousness in terms of quantum physics. This is an ambitious project that cries out for scrutiny and critique. But presently I shall focus on another aspect of the interview. The interview discloses some fundamental misconceptions and fallacies committed by both men. Let us look briefly at a few excerpts from that interview and see where they fell into old traps and confusions.

The interview starts with some statements and a question directed to Hameroff by Chopra:

“You’re an anesthesiologist as well as an expert in consciousness. Here’s my question: Our brain inside our skull has no experience of the external world. The brain only responds to internal states like, pH, electrolytes, hormones, ionic exchanges across cell membranes and electrical impulses. So, how does the brain see an external world?”

Right from the start, the good doctors Chopra and Hameroff fall into some basic misconceptions. To recap the main points:
First, they note (Chopra states and Hameroff agrees) that the brain resides inside the skull (obviously!).
Then we have the inference that the brain has no direct experience of the external world: “The brain only responds to internal states.”
From this Chopra raises the profound question: “[H]ow does the brain see an external world?”

The very notion that the "brain sees anything" is suspect. (More on this later.) But for now let's look at how Hameroff replies to Chopra's heartfelt question as to the mystery of how the "brain sees the external world."

“Well that question goes back at least thousands of years, and the Greeks said that the world outside is nothing but a representation in our head. Then of course Descartes recognized the same thing. That the only thing of which he could be sure was that he is, that he is conscious. I think therefore I am. So, we’re not really sure the outside world is as we perceive it. Some people would say it’s a construction, an illusion, some people would say it’s an accurate representation. It’s kind of a mix of views. And then when you add quantum properties to it, it’s really uncertain if the world we perceive is the actual world out there.”

Chopra then brings up the example of seeing a rose:

“So, Dr. Hameroff lets just take an example. I’m looking at a rose, my retinal cells are not actually looking at the rose they’re responding to photons aren’t they?”

This gives the good Dr. Hameroff the opportunity to expound on the processes that go into our “looking at a rose”:

“Yes. It’s also possible that quantum information is transduced in the retina in the cilia between the inner and outer segments before the photon even gets to the rhodopsin in the very back of the eye. So it’s possible that there’s additional quantum information being extracted from photons as they enter your eye through the retina. They might somehow more directly convey the actual essential quality or properties of the rose and the redness of the rose. . . .”

I don't know about all this extracting of quantum information, but I doubt that there's anything approaching consensus among physicists and neurologists on these speculations. However, the points I wish to focus on are conceptual: the identification of the subject who 'sees' or doesn't 'see' the external world with the brain, and the inference that all this leads to the age-old skeptical problems about our knowledge of the external world.

Hameroff seems to think that the Greeks (which ones?) held that the “world outside is nothing but a representation in our head” and that Descartes recognized the same thing. In short, we cannot know for certain that the world is anything like what we perceive.

Of course, none of this follows from the initial premise that the brain is located inside the skull and that the brain processes our perception of features of the world external to it.

The first gross confusion is to hold that the brain is the subject which sees anything. Let us grant that the appropriate sciences can describe and analyze the processes by which the nervous system (sense faculties, brain) enables the animal to perceive and negotiate its environment. But this is an analysis of how the animal (e.g., humans, apes, monkeys, dogs, etc.) perceives the world. The brain is a vital element of this process, as are the sense faculties; but the brain is not the subject who sees X (the object of perception) and then faces the problem of connecting 'X' to the external world. Nor does the skeptical issue (that we face the problem of connecting 'X' to the external world) follow.

Nor are we rationally compelled to affirm that "the world ... is just a representation in the head." Which of the ancient Greeks held this view? Likewise, there isn't any cogent argument for inferring the dualistic Cartesian picture (that the mental subject is distinct and apart from the material world). Moreover, for Descartes the brain, being a physical organ, belongs to the 'external,' material world. The isolated brain, encased in the skull and separated from the object perceived, which so worried Chopra, has nothing to do with Cartesian skepticism about the external world.

At any rate, the skeptical problems outlined by Hameroff have at best a loose connection with Chopra’s initiating question: How does the brain see the external world? Furthermore, any putative skepticism about the external world is in order only if we fall into the initial trap of taking some entity inside the head (the brain?) as the subject who perceives the world. But of course, the animal acting and reacting in its natural, social environment (e.g., the small ape on the tree) is the subject who perceives features of that environment. Hameroff has simply fallen into some basic misconceptions here, misconceptions set up by an even more confused Chopra.

The title Chopra gives this dialogue with Hameroff, "...consciousness and the connection to the universe," suggests another fundamental confusion at work here: the confused idea that 'consciousness' is a mysterious 'thing' of sorts, which may or may not be "connected with the universe." Chopra's assumption, like that of many who talk this way, is that consciousness involves more than a commitment to the fact that certain animals (including humans in a social setting, or small apes sitting on a tree branch) are capable of taking in or being aware of features of their environment. But there aren't any good reasons for asserting that we're thereby committed to something called "consciousness." (Imagine someone proclaiming that in addition to the small ape on the tree, the ape's consciousness sits there as well.) As some philosophers (e.g., Gilbert Ryle, Richard Rorty, D. W. Hamlyn) have argued, one can dispense altogether with the idea of consciousness as an entity or as a mental state and still give adequate accounts of all the mental, perceptual capabilities of such complex, evolved animals as humans. Science can account for my seeing the rose or being aware of the cool temperature in my environment without anyone having to posit my state of consciousness or an actor called "consciousness." That I see things and am aware of things is beyond dispute. But this does not commit us to the reality of some mysterious state or entity called "consciousness."

When we speak of a person being in a state of consciousness, or perception, or awareness – we simply resort to a way of talking. We don’t make an ontological commitment. The same may be said for a statement like: “There was an awareness that we were in trouble.” None of these require that we posit a mysterious state or entity called “consciousness” or another called “awareness,” which may or may not be connected to the external world. Chopra is just falling victim to an age-old confusion here.

All the ensuing talk by Hameroff concerning the “fine structure of the universe,” and “quantum information extracted from photons” is at best questionable speculation, at worst, a bit of New-Age, post-modernistic “mumbo-jumbo.”
————————-

** The full interview can be found at

http://articles.sfgate.com/2010-04-07/news/20840306_1_quantum-information-interview-brain

Charles Rulon: God and the problem of evil

Evil: Immoral, corrupt, sinful, wicked, depraved, harmful, malignant, malevolent, misery, suffering, disaster, ruinous, disease, catastrophe, calamity; anything causing injury, harm and pain.

The Christian god is described as an all-good, all-loving, all-merciful, all-just, all-compassionate, all-knowing, all-powerful, interventionist god. Of course! Who wants to worship a hateful, vengeful, ignorant, absentee god? But if this god really exists, then why has there been so much heinous evil throughout all of human history — endless wars, genocides, famines, tsunamis, hurricanes, earthquakes, floods, droughts, cancers, heart disease, strokes, crippling arthritis, horrible birth defects, and parasites — parasites which make up the majority of species on Earth and which spread flesh-eating diseases, bubonic plague, smallpox, malaria, cholera, TB, and on and on.[i] In response, one hears only God's maddening silence. Where is His Goodness and Compassion, His Omnipotence? The presence of such horrendous levels of "evil" has been a potent reason for many to turn away from God.
Still, over the centuries Judaeo-Christianity has steadfastly held to the conviction that the universe is good — that it is the creation of a good God for a good purpose. Thus, over the centuries men of the cloth have agonized over all this evil and have attempted to explain away why such horrific levels of pain and suffering have been visited on the innocent. Surely, this wasn’t the best God could do?

Religious explanations

In the face of all this evil, religions struggle to continue to validate and glorify God’s goodness. So in comes Satan. In comes God’s punishment for sinning (“God sent Hurricane Katrina [or 9/11, or the tsunami, or the earthquake, or…] because God is mad at us for allowing pre-born babies to be butchered and homosexuals to run rampant.”). In comes the argument from Free Will (“God gave us free will so that instead of being robots we can freely choose to love Him. But the price paid is that we are now also free to do evil.”). In addition, “We are all depraved and deserve punishment from the very beginning.” Or, “suffering occurs because God’s creation is unfinished. As the universe continues toward perfection, diseases, natural disasters and other forms of evil will disappear.” And let’s not forget that “Suffering is good for us. God uses suffering because it is remedial and medicinal. Pain is the means by which we become motivated to finally surrender to God and to seek the cure in Christ. Suffering is necessary to forge high-quality souls for the afterlife. The point of our lives in this world isn’t comfort, but training and preparation for eternity.”

Skeptical responses

Non-theists respond that the obscene levels of excruciating pain, monstrous suffering and horrible deaths throughout history seem out of all proportion to what one might expect from any kind of god worth worshiping. They respond that “creating Satan” to explain away all evil begs the question of why an all-powerful, all-good God would permit Satan to exist in the first place—a rival who has inflicted so much harm on the good and innocent. And what about all of the scientific, medical and social advances which interfere with these “God ordained” punishments for sin? Are these blasphemous?

God’s inability to eliminate evil

Non-theists also point out that God’s finest creation was filled with so much evil several thousand years ago that this God drowned everyone except for one good family (Genesis flood). But as this family multiplied, evil once again returned with its endless wars, genocides, tortures, inquisitions, witch hunts, hatreds, greed, thefts and so on. Thus, skeptics observe, either this biblical god can’t create evil-free humans, or won’t.
Believers are quick to respond that God gave us free will so that we can freely choose to love Him, which also means that we are free to sin. But so much evil?! So many wars and genocides?! So much cruelty!? Besides, what does free will have to do with cancer, parasites and earthquakes?

Evil & the Old Testament God

Skeptics also emphasize that the Old Testament, the foundation for the world's three major monotheistic religions, has been described as "… one of the most brutal war mythologies of all time with the enemies of the Hebrews' tribal god consistently treated as sub-human things."[ii] If we judge evil by today's standards in civilized societies, then this tribal god of the Hebrews, observes Richard Dawkins, is "a misogynistic, homophobic, racist, infanticidal, genocidal, filicidal, megalomaniacal, sado-masochistic, capriciously malevolent bully."[iii]

The existence of evil doesn’t disprove God

Of course, all of these horrendously obscene levels of evil don’t disprove the existence of a god or gods. If “God” exists, perhaps He’s an evil capricious god. Or possibly she’s a loving god with very limited powers. Or perhaps there is some kind of cosmic battle on Earth between the forces of good and evil. Or perhaps the Creator of the Universe doesn’t intervene in human affairs and/or has more important things to do than to worry about human suffering. Insight: Once we introduce non-testable supernatural explanations for the existence of evil, we’re only limited by our very fertile imaginations.

We must move beyond mental gymnastics

All of the above attempts to explain the existence of evil in a good universe created by a good God, plus all the skeptical responses, are simply armchair debates. Though satisfying and convincing for many, such mental gymnastics merely gloss over our fundamental ignorance. They don't move us any closer to empirically verifiable explanations as to why all this evil — diseases, earthquakes, human cruelties — exists in the first place, much less how to reduce its occurrence.

Science and God

Regarding how an all-good God could co-exist with all this evil, most scientists respond that it’s because God is an imaginary being. The last several hundred years of scientific discoveries—from astrophysics, evolutionary biology and biochemistry, to the lack of any solid evidence for the existence of paranormal and supernatural events—all this evidence has reached a critical mass which strongly supports the powerful thesis that there never were any gods in the first place, at least in any kind of manifestation that is of interest to the overwhelming majority of Christians, Muslims, Jews and other religious folk. As scientific knowledge continued to advance over the last 400 years, supernatural explanations for events continued to retreat …and retreat. Many scientists faced with such a consistent trend have extrapolated to what seems an obvious conclusion: All of our earthly gods are non-existent.

Scientific explanations for evil

Scientists (through incredibly hard work over centuries) have been able to arrive at natural explanations for the existence of most of the world’s evils, from diseases, to natural disasters, to the evolution of parasites, to our xenophobic human nature. They have also discovered that our universe and Earth are even more dangerous (evil) than ever imagined.

Some educated Christians who accept modern cosmology and the fact of our biological evolution have responded that since evil can now be explained naturally, there's no reason to blame God for it. But do they really grok the depth to which scientific discoveries have seriously devastated their core religious dogmas?

The universe and Earth: Our universe is far from being a safe and peaceful backdrop for God’s finest creation, man. Black holes suck in entire star systems. Gigantic explosions at the center of galaxies destroy millions of worlds, many possibly populated with sentient beings. And Earth, itself, is hardly a peaceful setting for God’s favorite species. Catastrophic events — meteor impacts, gigantic volcanic eruptions, ice sheets covering much of Earth, plate tectonic movements tearing apart entire continents — have repeatedly devastated Earth’s surface, resulting in horrifically high death rates, pain and suffering, even numerous mass extinctions over the past 600 million years. Skeptics ask why an all-powerful, loving god would place his favorite creations on a planet destined to experience catastrophic disasters, in such a violent universe.

Natural selection: Natural selection has been the primary mechanism driving evolution for billions of years. With natural selection comes an 'infinity' of dead ends, starvation, disease, plagues, cruelty, flawed designs, violent deaths, and a prodigious waste of life. Shortsighted selfishness usually wins out, no matter how much pain and loss it produces in the long term. Parasites are a major outcome, outnumbering all other species. Very few biologists see evidence of some deity's hand anywhere in natural selection — particularly evidence that humans were "planned" ahead of time.

Extinctions: Over the past hundreds of millions of years, natural selection, plus the contingencies of history, plus cataclysmic events have resulted in the extinction of over 99% of all the species to have ever evolved on our planet. Such horrendous levels of extinction leave many religious people quite upset. If essentially all of God’s creatures eventually go extinct, doesn’t that imply a God that’s inept, wasteful, careless, cruel, and/or unconcerned with the welfare of His creations?

________________________________________
Notes:
[i] There are over 10,000 species of tapeworms, 2000 species of biting lice and untold numbers of species of harmful bacteria, viruses and protozoa. Parasites have blinded millions of children. The Black Death of 1348 wiped out half the population of Europe. The influenza pandemic of 1918 killed 50 million people. Over 300 million people every year become deathly sick with malaria.

[ii] Campbell, J., 1988. Myths to Live By, pp. 181-183.

[iii] Dawkins, R., 2006. The God Delusion. Dawkins based his description on the fact that in the Old Testament this god instructs his followers to kill those who work on the Sabbath (Exodus 31:15; 35:2), to kill children who curse their parents (Exodus 21:17; Lev. 20:9; Deut. 21:18-21), and to stone to death brides found not to be virgins on their wedding night (Deut. 22:13-21). Virgin girls could be offered to an angry mob to protect male guests from harm. The Hebrews' god also commanded his followers, after they defeated enemy cities, to slaughter all of the men, the elderly, the crippled, the women, the children — everyone (Deut. 20:16-18; Deut. 7:1-6; Joshua 6:21-24; 10:40).

Did the Twentieth Century mark the failure of Secular Governance?

My colleague, Pablo, started the exchange with some questions posed to another philosophical colleague, Spanos, who had made some earlier claims about the failure of secular politics.

Pablo:

You make an interesting claim: "The claim that naturalism offers better chances for political reconciliation has been falsified by twentieth century experience with naturalism." Though I'm not quite sure what you mean by 'political reconciliation,' I'll take it to mean that naturalistic systems of thought have done no better in the political realm than non-naturalistic systems. I'm not sure, either, what naturalistic political systems of the twentieth century you have in mind, but I hope you don't mean to include fascistic and Marxist systems.

——————————-

Spanos replied:

I did have fascist and Marxist systems in mind. But it’s pretty hard to distinguish between naturalist and supernaturalistic systems of government. Ever since the late middle ages, the Church and the state have been vying to see who is in control. Generally, the state has won that battle. Should we say, then, that all secular systems of government are naturalist systems of government? If not, how would we distinguish naturalist systems of government from other secular systems? Is the United States a naturalist society with a naturalist system of government? Or is it supernaturalist with a supernaturalistic system of government? Or is it a secular, non-naturalistic system of government? Capitalism, certainly, is not a religious system of economic organization. Therefore it must be a secular system. But if it is secular, is it naturalistic? If not, why not?

I would say that Pol Pot and the killing fields of Cambodia illustrate a secular or naturalistic system of government in operation. The Dalai Lama and his response to Chinese oppression illustrate a religious system of government in operation. I do not accept the idea that the Taliban and al Qaeda are typical of Islamic political systems. These, along with Hamas and Hezbollah, are motivated not by religion, but by resentment. They use religion, but religion tries to discourage resentment, anger, hatred, violence, etc. Religion holds up compassion, love, and forgiveness as primary virtues. Religious institutions are the only institutions devoted to the spread of these virtues.

The issue we recently discussed concerning a sanction for moral behavior, and whether naturalism can provide an adequate sanction, probably underlies our different views on this issue of naturalistic vs supernaturalistic systems of government. I believe that morality, deprived of a supernatural base, erodes away. This belief is synergetic with my idea of how Marxism failed. The only goodness left in Russian society today is a remnant of its previous religious culture. It certainly doesn’t stem from Marxism.

————————————

I jumped into the discussion.
Moi: Earlier I had asked Spanos the same question that Pablo asked; namely: what do you mean by naturalism in political or governmental affairs? Now we have some idea as to what he had in mind. (Why couldn’t he use the more familiar term, “secular governance”?)

Of course, with careful “cherry picking” of history and political events, anyone can argue that government independent of religious authority will result in evil and disaster. Likewise, one can argue the same for theocracies or governments closely tied to religious authority. They too can be seen as eventually leading to evil and disaster. (I say to Spanos: two or more can play the same game and get very different results.)

However, I’m amused that someone characterizes “Pol Pot and the killing fields of Cambodia” as illustrative of secular or naturalistic government in action. And of course, the peaceful Dalai Lama illustrates how a religiously imbued authority can work so beautifully. Wow, I didn’t know that! Of course, these claims could just be a couple of HOWLERS that Spanos throws our way to get our reaction. Surely he cannot seriously propose that the actions of the murderous Pol Pot and his boys resulted from the application of science and reason to politics, which is what I understand by “naturalism” in politics. Or does he seriously propose this?

This all strikes me as so simplistic as to be risible. If your government is a religious system in action, it is good. If your government is secular, it is evil. (This probably is not what you’re saying, Spanos. But what you do argue suggests something like this.)

Marxism failed for a number of reasons, only a few of which might be correctly related to the attempt to suppress religious expression and spirituality in the Soviet Union. It oversimplifies history and politics terribly to suggest that religious expression in pre-Soviet Russia resulted in good for the people, and that any governmental action that did not promote such spiritual expression resulted in evil and suffering. It is far too easy, and it distorts history, to suggest that the failure of Marxism shows the triumph of religion. It is much like arguing that the industrial revolution, the rise of science, and the Enlightenment proved the failure of religious spirituality. "Yes," I would say to anyone asserting this proposition contra religion. "It sounds good, but let us see the rest of the argument." I say the same to anyone who tells me that the fall of Marxism somehow vindicates religion and religious authority.

————————————
Spanos replied:

It seems that I need to clarify the claim that I made about naturalism and government. I did not claim that government independent of religious authority will result in disaster. Neither did I claim that governments tied to religious authority will not lead to disaster. Neither did I say that the failure of Marxism shows the triumph of religion. These misunderstandings distort my claim.

I said, “The claim that naturalism offers better chances for political reconciliation has been falsified by twentieth century experience with naturalism.”

In order to clarify this claim, let us first consider the idea that naturalism offers better chances for political reconciliation. This idea stems from the Enlightenment, the "Age of Reason." It regards religious ideas as out of place in the constitutions of modern states. It holds that reason and science provide sufficient foundation for the institutions of government. It has no esteem for tradition, especially insofar as traditions involve supernaturalistic assumptions. In its early phase, it believed that the Enlightenment marked a decisive turning point in the history of the human race. It believed that, on the new foundation of reason and science, barbarism was behind us and the human race would march triumphantly into a bright future. It expected political and social harmony such as had never been seen before.

Now let’s consider the impact of the twentieth century on this idea. The twentieth century showed that barbarism is not behind us after all. Barbarism reappeared in new and more devastating forms. The persistence of social injustice and the horrors of war on a world-wide scale plunged European intellectuals and artists into a blue funk called modernism. Modernism as embodied in literature and the visual arts was the collapse of Enlightenment naivete. Not only has the expected bright future not materialized, but the failures of reason and science as instruments for managing our relations with one another and our natural environment make our future look darker than ever.

I hope that this explanation helps you to a more accurate understanding of my claim.

————————————

Moi: Yes, we're aware that the optimism and expectations of a number of Enlightenment writers, philosophers, and scientists proved too high, and that much more needs to be done, beyond merely removing supernaturalism and superstition from governance, before we can eradicate the barbarism and inhumanity that plague humanity. Admittedly the twentieth century showed that much, as likely will events in the twenty-first century.

Of course we cannot yet join Auguste Comte in his belief that reason and science would spell the end of the barbarism and inhumanity associated with ignorance and superstition, nor can we yet celebrate the advent of a scientifically and rationally grounded humane society.

But nothing in all this shows that we would achieve a better society and better government by reverting to religious tradition and ecclesiastical authority in any form. None of this shows that we would be wise in any respect to jettison science, reason, logic, and technology in the attempt to establish workable societies and humane value schemes. Finally, none of this shows what you seem to think it shows, namely, "the failure of science and reason as instruments for managing our relations with one another and with our natural environment." Your signaling the end of the game (the scientific, rational experiment) is far premature, and you have misinterpreted past failures as being due to the use of science and reason. Those failures are due to a variety of causes, none of which can reasonably be portrayed as too much reliance on science and reason in managing human affairs. In fact, many of us would argue the contrary.

In short, you make two basic errors in your assessment: a misinterpretation of past events and a hasty rush to judgment contra science and reason. Mostly what you do is construct a 'strawman' version of the ideals of the Enlightenment, reason, and science, and then proceed to knock down that strawman. But I suppose this is an acceptable tactic, given your post-modernistic judgment that modernism has failed, a judgment which is as faulty as it is bizarre.

Some confusion on Justice and the Utilitarian Principle

Not too long ago, one of my philosophical correspondents (“Pablo”) took up the question of justice (What is its source?) and utilitarianism. The ensuing discussion brought out some utilitarian claims on this issue and conceptual problems with the Utilitarian solution.

Pablo raises the problem of the source of ‘justice’ and offers his solution, which refers us to the cleansing work of science and to the utilitarian principle as a guarantor of justice:

“If justice is some kind of ideal, from where does it emanate? Is it already in our heads in some evolutionary way? Does a god place it there? Is it self-evident? Where?

My claim is simple (though I’m sure some would consider it simple-minded) but I think profound. Once science has erased all the obvious prejudices we humans have of one another, including the religious ones, then we can be sure what’s good for ourselves is also good for other members of society and this is the basis for justice as I see it. Once prejudices against African-Americans, women, Native Americans, Jews, et. al., have been undermined, then we can expect for others what we want for ourselves. It’s the Golden and/or Silver Rule writ large from the bosom of the utilitarian principle. We don’t need an ideal of any sort to follow; we just need to use a principle similar to Rawls with a view toward consequences that benefit all of society.”

—————————

Some background for this exchange: Originally another of our correspondents, Spanos, had cited John Rawls' theory based on the "original position" as one attempt to give the source of 'justice.' But apparently Pablo does not see Rawls as providing what he, Pablo, seeks. Presently I shall set aside his reasons for discounting Rawls. Instead I will focus on Pablo's claims: that science serves to prepare the way for justice, and that the principle of utility provides a basis for justice.

Pablo makes the surprising claim that science erases the prejudices that humans have about one another. This is interesting, since there's much reason for doubting that science does any such thing. Science may provide us with certain knowledge and understanding about human beings and human society. Eventually this knowledge might contribute to a more tolerant, accepting attitude toward those different from ourselves, and humans might realize some degree of moral progress and an improvement in their treatment of others. But the claim that science will erase all bias and prejudice strikes me as a great overstatement. However, for the sake of argument, let's allow the possibility that science erases all prejudices toward others. Following this cleansing, Pablo tells us, we would work to realize for others the good that we desire for ourselves.

Apparently the putative cleansing applied by science (the "erasing of all prejudices") would release a sense of fairness toward all fellow humans. We would desire that others be treated as we're treated. But where did this 'sense of fairness' come from? From the scientific cleansing that Pablo postulates, of course. So science, not utilitarianism, is the foundation for justice. The principle of utility is a principle of justice only when the people conceiving and applying it already have a notion of fairness. The principle of utility by itself does not guarantee justice or fairness; it only aims at the greatest good for the greatest number, which can leave many out in the cold.

Rawls presents a 'theory' of justice which is in some respects a consequentialist theory and in some respects a social contract theory. But it does not beg the question. As Spanos noted, Rawls imagines that a group of rational, self-interested persons would come up with rules for a prospective society in which none of them knows what position he or she would occupy. So each one has to come up with rules which treat every member of society fairly. But they do this only because they wish to design a society in which each of them, thinking primarily of his or her own self-interest, would get a fair shake. They don't do this because they already think that fairness should be evenly applied. The resulting 'fairness' follows from the rules that are created as each rational, self-interested person calculates the arrangement that would ensure that he or she would come out all right. This qualifies as a legitimate attempt to explain justice; it does not presuppose (beg the question on) justice.

Contrary to the Rawlsian theory, Pablo’s position appears to beg the question. For what he seems to say is that the principle of utility (greatest good for the greatest number) fairly applied (i.e., so that nobody is sacrificed for the good of the majority) is the foundation for justice. He now argues that a sense of fairness is built into the principle of utility. (He seems to have forgotten that he had found the source of justice in the “cleansing work” of science.) The principle of utility cannot explain justice if it presupposes justice.

—————————————
Pablo replied:

Juan asks: Doesn’t utilitarianism beg the question? Does it assume fairness in order to work?

My view is not question begging. Utilitarianism, essentially, has a built-in fairness since it advocates the greatest good for all. It is only fair that all should benefit from the rules and laws of society (each citizen counts as one under the principle). Yes, it advocates minority rights, but only because we are all minorities of some sort and because giving rights to all these minorities is for the greater good of society as a whole. However, this does not guarantee the same individual rights for all members of society, for obvious reasons. Some are criminals, some insane, some have beliefs which harm others (the KKK, for example), some are guilty of hate crimes, some are in the country illegally, et al. Since the rights of the above groups cause more harm than good, generally, we separate them from society; in other words, we punish them until or unless they can be rehabilitated (or become citizens). If an activity harms no one, then, clearly, it is to be allowed as an individual right. The principle of utility, however, is not always easy to apply, and it could be used unjustly. We know it is unjust when we can determine that the rule or law does more harm than good for the persons involved. This is precisely what we do in practice, and how new laws are made and applied. So how does this principle beg the question?

One problem with utilitarianism, brought up by a friend many years ago and by others as well, was that of scapegoating. In the days of the Wild West, the law-enforcing authorities would often find some disreputable person and accuse him of a crime which he did not commit. They did this so the people in the city could feel safer (clearly for the greater good of all). Even though there may have been little evidence that the disreputable person was guilty, he was accused nonetheless. At the same time, the authorities also got rid of someone they didn't like. Now, was that justice? How is utilitarianism to be applied in such a case? (And such cases still exist today: the case of the four innocent college students accused of raping a girl was just such an instance. They were presumed guilty because of a prejudiced prosecuting attorney, as you may recall.)

There are a number of reasons for claiming an injustice here, even from a utilitarian point of view. For one, the real criminal in the case of scapegoating is still out there and may well strike again. The problem of safety has not really been resolved, and clearly that's not for the greater good. Second, if (or when?) the knowledge about the real criminal comes out, respect for law enforcement agencies will be quickly undermined (and could lead to people taking the law into their own hands: vigilantism). That's certainly not for the greater good. Also, even if neither of the two considerations just mentioned turns out to be applicable, the authorities may well think it a good idea to do the same thing in other cases where they have trouble finding the real criminal. This habit would certainly lead to the problems mentioned in the first two cases sooner or later. So, in the long run, such practices would be unfair since they would, ultimately, lead to more harm than good. (There is also the psychological problem of the authorities having to live with a guilty conscience if they knowingly accuse and prosecute an innocent person.)

————————————–

Pablo has to decide whether he believes that the principle of utility can be the basis for justice or whether it already incorporates justice in its formulation. These are different positions, which Pablo states at different times.
He tells us first that his "view ... is not question begging. Utilitarianism, essentially, has a built-in fairness since it advocates the greatest good for all. It is only fair that all should benefit from the rules and laws of society (each citizen counts as one under the principle)."

This means that the principle of utilitarianism incorporates in its statement a statement of justice, “a built-in fairness” in that it “advocates the greatest good for all.” To me this asserts that the principle will always be applied fairly, so that everyone benefits. So stated it is just another way of stating the idea of justice, not a basis for justice. Pablo even suggests that this “built-in fairness” is an essential element of utilitarianism. This obviously states that utility includes a justice-feature from the beginning. Utilitarianism explains justice only in the sense that utility is qualified by a justice principle.

On the other hand, Pablo admits, while considering the problem of a convenient scapegoat, that the principle of utility "could be used unjustly." This happens when someone's application of the principle of the "greater good" requires that some individual or minority group be scapegoated in order to maximize benefit for the rest of society. But how could this happen if the principle essentially has the "built-in fairness" which Pablo claims above? Can a rule which essentially includes a built-in fairness element be applied unfairly? I would not think so.

My recollection is that the predicate of the principle of utility (as stated by most utilitarians) is the "greatest good for the greatest number." So the utilitarian action or rule is labeled a morally good act or policy because it results in the greatest good for the greatest number of those affected by that action or policy. We can imagine this working as a principle applicable in the real world; and applied in the real world it would sometimes result in benefit for the majority gained at the cost of others not benefiting, or even suffering, as a result. For example, a war necessary to defend the nation can presumably be justified on utilitarian grounds. The nation as a whole benefits in not being destroyed by the enemy; but a good number of people (soldiers, victims of the war) must suffer as a result. To state the principle as one which realizes the "greatest good for all" is to lay down a rule which is unrealistic and inapplicable. How could we ever say that a specific action or rule results in benefit for all affected by it? A natural, even inevitable element of social-political life is that there are conflicts of interest between individuals or groups of people. Someone doing the best he can to treat everyone fairly will still do things that do not maximize the interests of some people, and that even work contrary to some people's interests. In other words, we cannot ensure that a principle of action always results in 'fairness' for all concerned. This is why the principle of utility is normally stated as that action or policy which brings about the greatest good for the greatest number, not the good of all.

Pablo goes on to explain that unjust applications of the utilitarian principle would inevitably result in bad consequences, which could be ferreted out by utilitarian principles. Yes, initially it might appear that unjust scape-goating of innocent persons will result in maximum benefit for the rest of society; but eventually this injustice would prove to have negative consequences for the general state of society (an example might be the treatment of Japanese Americans in the 1940s). So, on utilitarian grounds we could show that initially the utilitarian principle was not observed. I understand this is the gist of Pablo’s argument.

The problem here is that utilitarianism, as a principle that preserves justice, can only be defended by "just-so stories." Yes, of course, if the victimization of innocents were always exposed and always led to bad consequences for the perpetrators and for society, then we could say that utilitarianism always proves just. But it takes a very naive and trusting soul to buy all this; the real world does not work this way, and often great injustice is the road to greater benefit and profit for those who bought into the utilitarian policy in the first place. Those willing to scapegoat the defenseless are not always, maybe hardly ever, exposed for their unjust acts. These "just-so stories" are really a flimsy ground on which to rest the argument for the justice of the utilitarian principle, given that that justice must be argued for and is not a "built-in fairness" included as an essential element. But Pablo was not consistent on this point.

A murky "Moral Landscape" on the horizon?

Sam Harris, one of the "new atheist" writers, apparently has a new book coming, The Moral Landscape: Thinking about human values in universal terms. Someone sent me the text of a recent interview in which he answers a few questions about the way in which science provides answers to moral questions. Below are the first three questions and Harris's reply to each. I will show that his replies are as perplexing as they are problematic and seem to discount the really hard questions of moral situations. To anyone (like myself) who holds out hope that the work of the sciences is relevant to moral philosophy, Harris's perspective on these issues leaves a lot to be desired, to put it as generously as I can.

1. Are there right and wrong answers to moral questions?

Harris: Morality must relate, at some level, to the well-being of conscious creatures. If there are more and less effective ways for us to seek happiness and to avoid misery in this world — and there clearly are — then there are right and wrong answers to questions of morality.

2. Are you saying that science can answer such questions?

Harris: Yes, in principle. Human well-being is not a random phenomenon. It depends on many factors — ranging from genetics and neurobiology to sociology and economics. But, clearly, there are scientific truths to be known about how we can flourish in this world. Wherever we can have an impact on the well-being of others, questions of morality apply.

3. But can’t moral claims be in conflict? Aren’t there many situations in which one person’s happiness means another is suffering?

Harris: There are some circumstances like this, and we call these contests “zero-sum.” Generally speaking, however, the most important moral occasions are not like this. If we could eliminate war, nuclear proliferation, malaria, chronic hunger, child abuse, etc. — these changes would be good, on balance, for everyone. There are surely neurobiological, psychological, and sociological reasons why this is so — which is to say that science could potentially tell us exactly why a phenomenon like child abuse diminishes human well-being.
But we don’t have to wait for science to do this. We already have very good reasons to believe that mistreating children is bad for everyone. I think it is important for us to admit that this is not a claim about our personal preferences, or merely something our culture has conditioned us to believe. It is a claim about the architecture of our minds and the social architecture of our world. Moral truths of this kind must find their place in any scientific understanding of human experience.
————————————–

Now let’s consider Harris’s reply to each question in turn.

#1: What is a question of morality, and in what sense are there right and wrong answers to moral questions? Harris and the interviewer seem to limit questions of morality to questions about the well-being of conscious creatures (e.g. humans), and seem to further delimit these to the problem of maintaining or increasing the well-being of such creatures and decreasing or eliminating suffering. As a colleague pointed out, this is the perspective of utilitarianism. On this perspective, given that we can outline effective ways of maintaining and increasing human well-being and effective ways of reducing suffering, we have the utilitarian’s proposal for resolving those moral issues which lend themselves to this ‘weighing of benefits against costs’ of our policies and actions. In principle (on utilitarian principles), there are right and wrong answers to this limited set of moral questions. But aren’t there other types of moral questions? Don’t we sometimes have to act out of respect for the rights of an individual and not in terms of the purported consequences of our action, as when we have to keep a promise we made to that person? Don’t we sometimes have to speak honestly although the consequences may not result in more benefit than harm?

#2: In what sense can science answer such questions? Harris assumes that human well-being can be scientifically defined, and also that the means for achieving human well-being rest on “scientific truths.” Somehow science can tell us much about how we “flourish in the world” and how our actions affect others. Given these ‘facts,’ he suggests that science can answer moral questions.

Taken by themselves, these remarks are far too simplistic to offer much help to those who wish to connect the sciences with morality. Science might be able to explain what kind of creature we are (evolved biological creatures with big brains) and even explain the social culture necessary for understanding the kind of social animals that we are. But how does this explain human flourishing? A full explanation of that requires that we explain values and purposes, and explain individual decisions as to what values or purposes to realize in one’s existence. Does science explain these? Science might be able to provide a menu of choices that a person may face; but it does not determine which of those choices the person should select. Regarding the issue of how our actions affect others, science can offer some explanation, but not a full explanation. The consequences of our actions can be traced only in simple cases; there are many consequences of our actions which cannot be foreseen, even by the best that science can offer. Insofar as answers to moral questions rest on our ability to foresee the consequences of our actions, those moral questions cannot be answered fully, with or without the aid of science. Whenever our actions have an impact on others, moral questions do apply. But this does not mean that such questions can be answered. (An example of what seemed to be good moral policy but had unforeseen bad consequences is the establishment of huge public housing projects back in the 1960s, housing which soon became a base for socially dysfunctional families, criminal activity, juvenile crime, the illicit drug trade, and prostitution. The moral choice to offer public housing to poverty-stricken families seemed to be a good choice; the best of the social sciences supported that policy; but nobody foresaw the bad consequences resulting from it.)

#3: This question relates to moral conflict, and Harris simply dismisses it as inapplicable to most cases of moral questioning or moral choice. He assumes that the significant moral situations are those in which it is clear what we should do in order to bring about “what is good, on balance, for everyone.” But this is too fast and not at all a case of dealing seriously with the issue of moral conflict. Harris dismisses such cases as “zero-sum” contests which differ from the “most important moral occasions.” This might be a convenient position to take when one is trying to show that morality is reducible to utilitarian considerations amenable to scientific treatment; but it hardly strikes me as an honest effort to deal with real cases of moral conflict, cases in which the best reasoning we can apply does not give us an answer as to what is the right thing to do and in which any prediction as to the consequences of our decision is questionable at best. It is all well and good to assert that we all know that child abuse is bad for everyone, and that this can be explained in terms of the “architecture of our minds” and “the social architecture of our world.” This sounds good and may impress the unwary reader, but often the hard choice does not divide neatly into an option that leads to child abuse and one that avoids it. (E.g., is it better to remove a child from dysfunctional parents when the child welfare system often subjects that child to abusive foster parents? Someone has to make the hard choice without any guarantee, scientific or otherwise, that the action will be the one most beneficial for the family and child directly affected by it.)

Moral conflict is more significant in the area of practical morality than Harris assumes. Many aspects of our individual and social circumstances, including those that Harris lists (war, nuclear proliferation, malaria, chronic hunger, child abuse), involve moral conflict: hard choices between alternatives in which it is not at all clear which is the best choice in terms of the beneficial or harmful consequences of the alternative actions. War is generally a very bad thing for most humans affected by it; but there are many cases in which it is far from clear whether war or the avoidance of war is the morally good choice in terms of beneficial consequences. Nuclear proliferation is another very bad thing to impose on the world; but is it obvious that, for any nation facing the choice between developing as a nuclear power and forgoing such development, the correct moral choice is to forgo it? Is it possible that the consequences of such a choice might turn out to be most harmful for that nation? Surely neither science nor rational analysis can give a definitive answer to that question. The prevalence of malaria, chronic hunger, and child abuse in large parts of the world is surely a moral failing on a grand scale. But do any of the experts, scientific or otherwise, know exactly which national and international policies would ensure beneficial consequences in resolving these scourges? That we should work to eliminate these as much as possible is without dispute. What we must do and what actions individual societies must take is subject to trial and error. The best intentions of individuals and governments often result in bad consequences. Science can help to reduce these tragic mistakes; but neither science nor the most enlightened thinking will ensure that the hard decisions that humans must make will have the best consequences.

Any “scientific understanding of human experience” will divulge that humans often face tragic situations, moral dilemmas, in which there is no guarantee as to what is the right choice in terms of beneficial or harmful consequences. Jean-Paul Sartre offers the example of a young French man during the Nazi occupation of France who had to decide between joining the underground resistance to fight the Nazi invaders and staying with his aged mother, who needed him to care for her. This is not a choice that could be resolved by tracing the consequences of each alternative, weighing the benefits and costs, and making a morally correct decision. All the science in the world would not help him; knowing the “architecture of the mind” would not help him. The young man had to make a choice without the comfort of knowing that he made the correct moral choice. He had to make a choice and then live with it, never sure that he had done the right thing. Many moral situations are like this, contrary to Harris’s facile dismissal of such situations as not being “important moral situations.” Consider any situation in which there are more potential recipients of some benefit than there are benefits to dispense: when there is a shortage of food or water and a decision must be made as to who gets fed and watered and who must do without; when some medical procedure (a vital organ transplant) is available to only a few patients out of a long waiting list; when a decision must be made as to who (of a limited number) gets the position at a company or admission to the university and there are many qualified applicants; or when we try to decide where to direct our donated dollars when many worthy charitable organizations are making honest appeals. In all these cases, the moral choice is not one that can be made in terms of a scientific understanding of human experience or anything close to an ability to make a cost-benefit analysis of the consequences of our action.

In summary, based on Sam Harris’s replies to the first three questions from the interviewer, I have little or no confidence that his latest literary effort, The Moral Landscape, will offer much that is helpful to those who look to the sciences for some help in dealing with moral issues.