Wednesday, September 30, 2009

Ethics and Atheism

In recent decades, attendance at traditional Christian churches in the US has been in decline. This has translated into an increase in the number of people practicing (or at least identifying with) Neo-paganism, Wicca, or several other varieties of spirituality. As spiritual practices with more or less defined, but always present, ethical and moral guidelines, these can be dismissed from the concerns of this essay. The decline has also, however, translated into a growing population of those who identify themselves as atheists. In an increasingly secular age, taken at face value, atheism does not seem problematic. It can be argued that Science, History, Philosophy, Art, Literature and Music can all continue to be produced without any particular belief in God. The only place where atheism is problematic is in the arena of Ethics. Some popular perceptions or misperceptions are that atheists cannot be expected to act morally without the presence of divine mandates, that their ethics will be innately inferior to those of religious origin, and that the only morality left open to an atheist is moral relativism.

It is absurd to think that because someone is not required to act in a certain way by divine mandate they are precluded from acting morally. It is a stock either/or fallacy. It is just as absurd to assume that because someone professes to be of a given faith they can be expected to act morally. If a person happens to be of a given faith, they may be more inclined to trust the moral character of another member of that faith, but it is not, nor can it be, a given. In the everyday world, the origin of morality has little to do with our expectations of moral behavior. Our expectations of moral behavior are derived from our observations of moral behavior. We tend to trust those we see behaving morally and to distrust those we see (or have reason to believe are) behaving immorally or amorally. Moreover, our trust may be lesser or greater depending on the particular area of behavior under discussion. For example, a given man may be perfectly willing to let a lawyer who never violates attorney-client privilege, but who also engages in affairs, represent him in a lawsuit without introducing the lawyer to his wife.

This misperception stems from the categorical/hypothetical divide. Moral theory tends to be (with the exception of morally relativistic systems) defined by the categorical, or that which is universal. Moral actions are by nature hypothetical, or that which happens situationally. Moral imperatives, Christian or otherwise, tend to be very broad: don’t steal, don’t kill, and don’t lie, to name a few that appear in both religious and non-religious moral systems. While not true categorical imperatives, they’re as close as one can come without becoming vague to the point of irrelevancy. These rules are not contextual. They are meant to define behavior in all instances at all times. Moral behavior, on the other hand, is entirely contextual. Human fallibility, in combination with stressors (be they personal or professional), will lead to failures to live up to the universal rules. Here is the crux of the matter: because religious morality is (theoretically) divinely mandated, those who accept it make the mistake of assuming that, since the rules are divine and universal, the behaviors they require will be equally universal, and that any rule not both divine and universal will lead to suspect behavior. It is a conclusion that ignores the differentiation between the categorical and the hypothetical. Human beings act in the hypothetical, and the rules are broken, whether they are divinely mandated or not. The assumption that those who lack divinely mandated rules are more prone to immoral behavior because of that lack is insupportable. Moral behavior occurs in context and so does moral failure, regardless of the source of the rules. To assert that atheists are more prone to moral failure than those who embrace religious moral systems is at best an error and at worst an ugly prejudice.

It is equally spurious to assert that an atheist’s ethic is innately inferior to a divinely mandated ethic. Such claims find their source in an understandable, but nonetheless flawed, line of reasoning. It is a variant on the Cartesian Ontological Proof for God. The argument goes something like this:

1. God is perfect.

2. God has given mankind rules of conduct.

3. As God is perfect, the rules God has given to mankind are equally perfect.

The argument seems sound enough from a logical standpoint; unfortunately, the entire argument rests on three significant assumptions. The first assumption is that there is a God. The second assumption is that God is perfect. The final assumption, which actually hinges on accepting the first two, is that such a God is sufficiently interested in human beings to give us rules. If these assumptions are accepted as irrefutably true (which true believers do accept), then the argument works. Nevertheless, to accept the first, let alone all, of these assumptions is an act of faith and not reason. There is no viable way to demonstrate the existence, nature, or actions of God. As such, there is no way to assert the superiority of divinely mandated moral systems, because such judgments rest on the believed, but ultimately non-demonstrable, nature of God’s perfection.

As to the notion that the only morality available to an atheist is moral relativism, there is nothing to support such a claim. The most significant principle in the various incarnations of moral relativism is that moral positions are, by nature, not absolute positions. As such, those professing contrary moral thinking cannot be condemned for their moral reasoning. Setting aside the numerous examples of how this perception of morality fails, how can an atheist avoid being or being labeled a moral relativist?

The simple answer is that being an atheist does not preclude moral absolutism. It simply isn’t divinely mandated moral absolutism. While the existence of God cannot be demonstrated a priori, there are principles which have been more or less accepted as a priori true: logical principles. How does one move from logic to morality? A very simple example would be the application of the law of identity and its corollary, the law of non-contradiction. If we have a procedure for identifying, for example, a human being, then the law of identity dictates that a human being is a human being, and the law of non-contradiction dictates that a human being cannot be other than a human being. Since most moral systems concern themselves primarily with human interaction, this is a fairly important place to begin.

Let’s say that our procedure for identifying a human being is a genetic examination. Anyone who has a genetic code within a certain tolerance qualifies for status as a human being, regardless of their racial heritage. In one fell swoop, an atheist who accepts the laws of identity and non-contradiction has eliminated all forms of racism, as any morality worth discussing will include all human beings beneath its umbrella, and the genetic differences between the various racial groups on this planet are minuscule. Granted, this is a simplistic example and subject to certain criticisms, but it does demonstrate the possibility of establishing an absolute moral position without recourse to a non-demonstrable deity.
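
To make the toy procedure concrete, here is a minimal sketch of what such a categorical test might look like. The similarity score, the tolerance value, and the function name are my own hypothetical stand-ins, not anything proposed in the essay; the only point is that the test is categorical and takes no account of ancestry or group membership.

```python
# A purely illustrative sketch of the "procedure for identifying a human being"
# discussed above. The similarity score and the 0.99 tolerance are hypothetical
# stand-ins; the point is only that the test is categorical.

HUMAN_TOLERANCE = 0.99  # assumed minimum genetic similarity to a reference genome

def is_human(genetic_similarity: float) -> bool:
    """Law of identity: whatever passes the test is a human being.
    Law of non-contradiction: nothing can both pass and fail it."""
    return genetic_similarity >= HUMAN_TOLERANCE

# The procedure ignores ancestry, appearance, and group membership, so any
# moral rule framed in terms of "human beings" applies to everyone who passes,
# and, given how small human genetic variation is, everyone passes.
print(is_human(0.998))                          # True
print(is_human(0.998) and not is_human(0.998))  # False, by non-contradiction
```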

It is outside of the scope and purpose of this essay to follow the line of reasoning out to a full-blown moral theory. Nonetheless, it can be said that the assertions that atheists cannot be expected to act morally without divine mandate, that an atheist’s ethic is innately inferior, and that an atheist is condemned to moral relativism are untenable positions. An atheist is subject to the same moral successes and failures as a theist, because both have their moral victories and defeats in the hypothetical. The claim that an atheist ethic is inferior rests on flawed logic or ill-considered suppositions. An absolute morality without recourse to God is entirely conceivable. What is more interesting, and perhaps more worth the time and effort of exploration, is just why it is that some of those who embrace a divinely mandated morality feel the need to undercut and denigrate the moral grounding of those who do not.

Monday, September 28, 2009

Intelligent Life in the Universe

The question of whether or not there is intelligent life elsewhere in the universe is one that preoccupies the human mind. Rightly so, I think. The vast expanse of the universe almost compels the consideration of the possibility. The idea that in all of that uncharted room we, human beings, alone have the capacity for memory, reasoning, history, art, and relationships (to name but a few) is not only disquieting; it is borderline repulsive. It seems unimaginable that beings so prone to destruction, planetary mutilation, and behaviors more appropriate to an unmonitored playground than a civilization stand triumphant at the very pinnacle of the evolutionary mountain. So we speculate on the possibility of life elsewhere. Some of these visions are dystopian: H.G. Wells imagined extraterrestrial life as genocidal. Some are more hopeful: Star Trek and Star Wars. However, the possibility of such intelligent life in the universe is a topic of considerable debate.
The most commonly referenced means of determining the possibility of intelligent life in the universe is the Drake Equation:

N = R* × fp × ne × fl × fi × fc × L

For an explanation of the equation see setileague. Drake estimates that there are 10,000 communicative civilizations in the Milky Way Galaxy. Very exciting stuff, until you realize that basically every number Drake, or anyone else using the formula, employs is, in the best-case scenario, a best guess. 10,000 is not as compelling when it's a guess. Essentially, it puts you back at square one in terms of the possibility.
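For the sake of concreteness, here is a minimal sketch of the calculation in Python. The parameter values are my own illustrative guesses, not Drake's published figures or anything from this post, and they are deliberately tuned so the product lands on 10,000; nudge any one of them and the answer swings by orders of magnitude, which is exactly the problem described above.

```python
# A minimal sketch of the Drake Equation: N = R* * fp * ne * fl * fi * fc * L.
# Every value below is an illustrative guess (my assumption, not Drake's data),
# chosen only so the product comes out near the often-quoted 10,000.

def drake_equation(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Return N, the estimated number of communicative civilizations."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

N = drake_equation(
    r_star=10,        # rate of star formation in the galaxy (stars per year)
    f_p=0.5,          # fraction of stars with planetary systems
    n_e=2,            # habitable planets per system with planets
    f_l=1.0,          # fraction of habitable planets that develop life
    f_i=0.5,          # fraction of those that develop intelligent life
    f_c=0.2,          # fraction of those that become detectably communicative
    lifetime=10_000,  # years a communicative civilization remains detectable
)
print(N)  # 10000.0 -- but change any guess and the estimate moves wildly
```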
So, let us dismiss hard math for the time being and think in terms of reason. The universe is, literally, unimaginably vast. We have only just begun to breach the outer reaches of our own solar system in the last few decades. That's nothing in terms of the lifespan of the universe. We are a very primitive species when you get right down to brass tacks. Our technology is supremely limited. The most brilliant physicians in the world don't fully, or in many cases even partially, understand what happens in our bodies. Think about this for a moment. We live in our bodies every day. The body is probably the most examined, poked, tested, prodded, and experimented-on object in the world, and we don't even really understand it yet. We cannot manage our toxic output, our waste output, or our population. We don't have a decent theory for the origin of life on earth without descent into wild speculation or religion, neither of which is a good place to begin when discussing the possibility of life elsewhere in the universe. In point of fact, about the only thing at which human beings excel, as a group, is killing other human beings. This is the height, the great vantage point, from which experts look out into the universe and declare that it's unlikely that there is intelligent life anywhere else. There is a term to describe such a statement from such a group: Hubris.
It is an arrogance of the most unsavory sort that leads people to declare themselves the only intelligent life in the universe. It is the statement of children who believe themselves to be more special than they truly are. It is a reflection, in particular, of western culture's belief in its own superiority. We are all subject to our biases, be you a scientist or a blogger, but when you look up at the sky at night it should be clear that the limits of our self-knowledge and of our knowledge of the universe should preclude a belief that we are alone.

Saturday, September 26, 2009

Do Ethics and Politics Mix?

One of the recurring issues in politics, whether it be at the local, state, or national level, is the question or accusation of ethics violations. In simpler times, most of the offenses now called “ethics violations” were called something a little more appropriate: lawbreaking. Nonetheless, times change and so do the terms we employ. I’m willing to grant that some of these ethics violations probably stem from a simple lack of knowledge or a lack of understanding of the relevant rules or laws involved. Let’s face it: the legal code in the US has become so complicated that even lawyers need to specialize to understand a particular area of it. So, even though you still hear the phrase “ignorance of the law is no excuse” thrown around from time to time, I’m not going to outright accuse every person in politics charged with an ethics violation of malicious intent. That said, the sheer magnitude and frequency of these ethics violations does raise the question: Do ethics and politics mix?
Before attacking that question directly, it might help to recall the old saying about politics being the art of compromise. I bring it up because it goes to the heart of the question. Ethics, as most people conceive of it, is about the possession and execution of principles. Whatever those principles may be, we more or less expect people to stand by them. That everyday people utterly fail in this challenge as often as politicians (though with less fanfare) might make for an interesting piece down the road. Compromise, though, is exactly the opposite of our general conception of ethics. Compromise is all about surrendering our (one hopes) deeply held convictions to greater or lesser degrees in the hope of achieving a functional outcome. This is what politicians do, all the time.
To achieve anything in politics is to agree to things that go against your stated beliefs in order to achieve some of your stated goals. This is what riders on bills are all about. A whole bunch of people who, individually, would probably never vote for these things agree to do so if something they want gets tacked onto the bill.
So, to the question “Do ethics and politics mix,” I’m forced to say, “No, probably not.” There are undoubtedly some politicians out there who courageously voted their consciences for one glorious term, accomplished none of their stated goals, and were soundly ushered out of office in favor of more pragmatic and effective representatives. However, when push comes to shove, politicians are morally compromised by the very nature of their jobs. So while we may shake our collective heads when politicians come under fire for ethics violations, we should never be surprised. The step from an ethical compromise required to get what you want legally to one that is illegal is a very short one. Moreover, the line between them, when you compromise all the time, has to become blurry.
Maybe, when candidates are interviewed or fielding questions, instead of asking them what they stand for we should ask them, “How much of this are you willing to throw to the side?” At least then we’d know what to expect.

Thursday, September 24, 2009

Absence and Freedom

I thought this might be a good time to discuss some of the roles absence plays in philosophy. Surprisingly enough, absence plays a profound role in philosophical discourse. In Existentialism, for example, the presence or absence of another person plays a significant role in how we operate as human beings. In argumentation, negative arguments rely on the opposing position having an absence of proof to support itself. Arguments against the existence of God use this strategy of absence extensively. In social and political philosophy absence is critical to defining freedom. Given the siege on civil liberties in the last few years, this seems to be a valuable topic to explore.

There are, generally speaking, three types of freedom that are discussed in social and political philosophy: absolute freedom, positive freedom, and negative freedom. Absolute freedom is exactly what it sounds like, the complete absence of constraint or, conversely, the complete ability to act. However, as a term, it has very little social or political currency. A social or political structure, no matter how democratic, dictatorial, or totalitarian, effectively begins from the notion that the behavior of the individual members of the group is constrained in some fashion, usually by laws.

Positive freedom is defined as the potential to act or pursue goals within constraints. This is particularly relevant to societies. Every society has laws, legal systems, or methods of constraint. These are not absolute constraints. While an individual or corporation is forbidden from acting in particular ways in the pursuit of making money in business, neither is forbidden from pursuing profit or creating a successful business. They can pursue goals, can act, to achieve their ends within the confines of the constraints established by society or government.
Negative freedom is another matter. It is defined as the absence of constraint. You can walk, sleep, eat, etc., as and when you wish, assuming there is nothing external to forbid these things (medical conditions, financial burdens, and so on). While seemingly similar, positive and negative freedom are characterized by one fundamental difference: the locus of control.

In positive freedom, the locus of control is external: laws, customs, mores, and taboos. In negative freedom, the locus of control is internal: your own wants and desires.

Now, to participate in a society, we are all more or less forced to accept that a large part of what we experience is positive freedom: the ability to act within constraints. However, in recent years, the right-swinging government has been looking to diminish what negative freedoms we do have as well as to increase the constraints on our positive freedom. Freedoms such as privacy, in particular, have suffered under the rallying cry of security. While we must indeed accept constraints, we need not accept a government's stance that all freedom is for sale on the auction block of security. Positive freedom is only as good as the reciprocal restraints on government.

Tuesday, September 22, 2009

The Fiction of Reality TV

Reality is one of those things that philosophers spend a great deal of time thinking about. We’ve got an entire branch of the discipline devoted to the question of the principles of reality: Metaphysics. So, when confronted with something calling itself Reality TV, I find myself raising an eyebrow. In the first place, after two thousand plus years of work, philosophers have yet to find a way to demonstrate, definitively, the existence of a material world (which is what most people consider to be reality). Every attempt to prove or assert the absolute existence of a material world has met with failure. So, on that score, I find myself discounting the notion of reality in connection with Reality TV. However, let us make the assumption that such a material existence is the fact of the matter.
With the world as a given, there’s a deeper problem with the concept of Reality TV. Television is, by nature, an artificial construct. It breaks down images, which themselves exist only on film or as digital data, and transmits them through the atmosphere, or bounces them off of satellites, and then another device takes those bits and pieces and reassembles them onto a screen. So, what you see is, at best, a second- or third-hand rendering of a recording of something that may or may not exist in the material world. There’s nothing very real about a picture of a picture that’s been disassembled and reassembled a couple of times.
Let’s set that aside for the moment and just talk about the experience of being in the world itself. The physical experience in the world is, not including sleep and barring some physiological disorder, a continuous one with long and boring stretches where nothing interesting happens. People go to work, do their jobs, pick up their dry-cleaning and so on. Life is a linear experience, morning to night, day to day, always connected. This is not the experience of Reality TV. It is sequential, as such, but it is not a continuous presentation of anything. It is presented piecemeal, like a work of fiction. A director, or an editor, or someone else has chosen to present you with fixed points in the lives of those who have been recorded.
Beyond that, a question that recurs in philosophy of film and philosophy of aesthetics is the question of how authentic something is that has been framed and recorded by a device. When you see a photograph, or watch a film, decisions have been made by one or more other people. Someone has decided what speed film to employ. Someone has decided how to frame the shot, thereby deciding for you what constitutes the most important space of the recorded environment. Someone has decided how to focus the shot. A vast set of factors that make up the recorded space has been reduced with no choices on your part. This is not the case in the world outside of television: there, you get to make those choices yourself.
Perhaps the thing that most condemns Reality TV as being without any reality at all is the subjects of the shows themselves. As amusing, interesting, or pathetic as one may find the Hogans, the Osbournes, the cast of Survivor, or the enigmatic Gene Simmons, these people are anomalous. Be it due to their careers (Hogan, Simmons) or their environments (Survivor), they exist in what one might call extraordinary circumstances. They are what are referred to in statistics as outliers. Outliers are invariably interesting, but they are fundamentally non-representative. The reality that most people inhabit does not include multi-million dollar homes, record contracts, or living on an island competing for a huge payoff by being the most effective con artist. The reality of the vast majority of people is marked by the far more mundane repetition of a job or jobs, their kids’ activities, dishes, taking out the garbage, and doing the laundry. That is the reality that rarely, if ever, appears on so-called Reality TV. The sad fact of the matter is this: reality, as strange as it sometimes is, is by and large populated by people who live routine lives.
Those selfsame average people don’t watch Reality TV to see reality. They watch it for the same reason that they watch movies and other fiction-based visual media: because it isn’t reality. It’s entertainment. It’s escape from reality. It’s far too edited, too distilled, and probably far too choreographed to be considered reality. So let’s just stop calling it Reality TV and start calling it what it really is: Fiction.

Sunday, September 20, 2009

Questions and Philosophy

There is a sort of haunting question people ask that plagues philosophy: “What’s the point?” This is followed immediately by the statement: “It doesn’t give you any answers.” Both the question and the statement are frustrating in the extreme for those who pursue the study of philosophy, in large part because the question and the statement are only tangentially related. Moreover, the pair demonstrates the same lack of understanding that condemns all lawyers to suspicion when the vast majority of them do their jobs diligently and ethically. To say that philosophy does not provide answers is an equivocation on a grand scale. What people really mean when they say that is that philosophy does not provide the same sort of moral certainty as religion. Answers, however, come in many forms.
Logic, one of the foundational branches of philosophy, is entirely about answers. It is logic that says that an apple is an apple and not a parakeet. Mathematics, science, psychology, and medicine were all contained beneath the umbrella of philosophy at one point or another. It’s a much harder sell to say that none of those things provide answers. That said, what such statements about answers demonstrate is a confusion about the nature of the discipline. Philosophy is not about answers, as such. Answers are a hoped-for end or, for some, merely a byproduct. Philosophy is driven, first and foremost, by the act of questioning. It is a discipline of curiosity and the pursuit of those questions.
Unfortunately, many of the questions philosophers ask are the same questions that religion, theoretically, answers. What is the good? What is the right? What is the nature of love, of friendship, or of freedom? People hear these questions asked and assume that philosophers and theologians are doing the same thing. They are not. You can tell the difference because the theologian will assert the existence of God, revelation, holy texts, or enlightened beings, etc., as a foundational truth that lends the answer unquestionable authority, and leave it at that. The philosopher will ask those questions, try to formulate an answer, or prompt one from another, and the very next thing out of his or her mouth (at least if he or she is doing honest philosophy) will be the question, “Why?”
There is a childlike curiosity that prevails in Philosophy. Philosophers want to know just what makes something what it is, be it the mind, the soul, or the origin of the world. They do this through very intensive intellectual exercises, but without the curiosity, it wouldn’t happen at all. The point is the exploration. Philosophers are great explorers, as adventurous in their own ways as a Magellan, Drake, or Verrazzano. They test the world of the abstract, the unknown, the world of ideas with the same ambition and, sometimes, at the mercy of the same derision as their adventurer progenitors. They stand at the very edge of knowledge and ask, brave souls that they are, “Why is this knowledge?” “What makes this knowledge?”
In the final analysis, though, perhaps the most fundamental and potent questions at the disposal of the philosopher are the ones that begin with, “What if…?” Imagination and curiosity are the hallmarks of the active mind and lead to the most interesting answers. So I challenge you, look at the world, at something that doesn’t make sense to you and ask yourself a question. Make it one that begins with “why” or, better yet, “what if,” and see where it takes you. Talk to you soon.

Saturday, September 19, 2009

Intersubjectivity and Relationships

Relationships are one of those topics that receive a great deal of attention. Everyone from psychologists to newspaper columnists talks about them. Women’s magazines always feature at least one or two articles about better understanding relationships. Books on every aspect of relationships, from better listening to better sex, are readily available from every imaginable perspective. Some of the literature is great and some of it is shallow, useless crap. However, as a philosopher, the element of interpersonal relationships that I find the most interesting comes out of the existentialist schools of thought: intersubjectivity.
Intersubjectivity, in brief, is the idea that when dealing with another person you grant them, or at least should grant them, the status of a being-for-themselves rather than that of an object of your perception. That seems simple enough in theory. Now think about any interpersonal relationship that you have. Have you ever stopped listening to the other person and thought about what you are going to say next? Of course you have. We all have at some point. That’s one of the ways to turn someone into an object, rather than granting them a legitimate status as a being-for-themselves. To treat someone as a being-for-themselves is to acknowledge that they have an existence, a mind, and thoughts independent of you and that this entitles them to not be treated as an object by you. If you stop listening and think about what you’re going to say next, you have chosen to make that person an object whose purpose is to be the recipient of your ideas.
Most of us have people in our lives whom we pretty consistently treat as beings-for-themselves: significant others, family members, and close friends. Those represent deeply personal or highly valuable relationships with complex and profoundly charged emotional content. There is a matrix of expectations, obligations, and cultural norms that predisposes us to offer a heightened state of attention and being-for-themselves status to those with whom we share these relationships. In point of fact, to fail to offer them being-for-themselves status can not only have powerful social consequences, but can lead to painful cognitive dissonance. Being dismissive of those we care about, whose opinions we value, is a difficult and maybe even an unnatural act. Yet those relationships represent a very small percentage of the interpersonal relationships that we engage in on a regular basis. What about the relationships that we are less invested in? The casual acquaintance? The co-worker? The register-jockey at the grocery store? Do the same rules apply equally there? Should they?
The answer is, of course, that we don’t deal with the randomly met person, casual acquaintance, or counter clerk with the same level of attention that we do when dealing with the more personally valuable relationships in our lives. The same set of expectations, obligations, and cultural norms don’t apply in those situations. We don’t experience the same emotional content. In simple terms, the interpersonal context has changed in dramatic ways. It is those changes which make it so easy to dismiss the pain of the person we just met as not our problem, to accept the firing of a casual acquaintance with aplomb, or to treat a clerk as a moron. They don’t exist as beings-for-themselves to us. They exist as objects and we give them the same regard we would an object. This is the juncture at which the concept of intersubjectivity becomes problematic.
The argument can be made that we should be offering the same being-for-themselves status to any person that we encounter. The more realistic view of this is that we cannot offer that status to everyone and do anything else. Consider the amount of time and energy and attention that needs to be devoted to maintaining close personal relationships. That is essentially the expenditure that would have to be made on every person you encounter in a day to treat everyone as beings-for-themselves. It would not only be exhausting, it would consume every moment of your day. It is probably only realistic to maintain between six and twelve of these genuinely intersubjective relationships at a time.
That is not, however, to say that we can’t apply some of the lessons of intersubjectivity to the more casual relationships in our lives. While it is not practical to try to treat everyone as a being-for-themselves the way we do family, friends, and lovers, we can certainly work to remember that the people we encounter do have thoughts and feelings independent of our own. We may not respect those thoughts or know what those feelings are, but we can remember that our behavior will have more impact on another person than it would on the object we try to treat that person as being.

God is Dead

"God is dead" is one of the most often quoted and misunderstood quotes from Nietzsche's body of work. The stock misinterpretation is that Nietzsche was referring to a literal being "God" as having died. It is profoundly unlikely that he intended that as his meaning, in large part, because he was an atheist. He never believed in the existence of a literal God in the first place. So it is a clear non-sequitur to think that he would concede that such a being ever existed at all.
What Nietzsche was trying to say is that the man-made concept of God is dead. In other words, the moral framework that man placed onto the concept of God the creator couldn't do what it was supposed to do any longer. Rather than being what it was supposed to be (a moral guide, an encouragement), the concept of God had become an inhibition to the development of mankind, or at least of the small subset of mankind that interested Nietzsche.
Nietzsche, like many philosophers, was fascinated by the possibility of an ideal man. In the early Nietzsche this was represented by his Master Type or noble man, and it eventually developed into the more deeply misunderstood concept of the Übermensch. Without going into a lengthy explanation, the conflict nascent in the phrase "God is dead," and on which Nietzsche would spend the bulk of his writing, goes something like this. Mankind is, consciously or not, aiming to create the Übermensch. Along the way, mankind stalled with the concept of God: the external authority figure. The Übermensch cannot accept the concept of God because the Übermensch is a self-ruling being: he gives laws to himself and enforces them on himself, to paraphrase Nietzsche. It is, ultimately, the Übermensch who will and must overcome the concept of God and expose it for the dead idea it is, if only to himself.

5 Minute Philosophy

One of my great loves is philosophy, but philosophy can be tough reading. So, I have decided to offer up what I call 5 Minute Philosophy. I'll write posts about philosophical topics, with the aim of making them accessible, and try to keep the reading time under five minutes. This blog is an outgrowth of a different blog that I felt was going in the wrong direction. I've cherry-picked the best posts from that blog; those will be appearing here over the next few weeks, and then it will be on to original material. A few of these early posts may take a little over five minutes to read or seem a little dated. I imported them virtually as is, including typos, so they have their flaws. But I love them anyway. I hope you enjoy reading them as much as I enjoy writing them.