DiaBlogue: Universal Utilitarianism

[Updated 10/09 @ 1 PM PST with Alan’s Comments]
[Updated 10/11 @ 7 AM PST with Ernie’s Comments, at end]
I am grateful for Alan’s follow-up post On Carrots and Sticks, as I was quite unsure about how to respond to his Clarifications, Hopefully. I appreciated the necessity and relevance of his request for me to clarify my position, but given our recent failure to find (solid) common ground I wasn’t quite sure where to start.
Fortunately, his reference to Daylight Atheism‘s post on The Ineffable Carrot and the Infinite Stick (I love the title 🙂) appears to provide exactly the starting point I was looking for, with its definition of Universal Utilitarianism:
Always minimize both actual and potential suffering; always maximize both actual and potential happiness.
For purposes of this discussion, let us both stipulate that this moral imperative is at least “proximately true” — that is, any ethical theory we propose has to either incorporate or address this truth to be considered valid.
Fair enough, Alan? In that case, [Read More] to see whether the “Ebon Muse” has managed to shed “Daylight” on a non-deistic theory of morality…

First of all, let me confess that (due to time constraints and, well, impatience) I did not read all of Ebonmuse’s presumably excellent article. Rather, I focused my short attention span on his (?) articulation and defense of Universal Utilitarianism (Alan, if I’ve missed anything crucial in this or future posts of his, please let me know). In particular, I was very intrigued by his arguments about What Makes Morality Universal? — as well as his answer to the charge of “Stealing” Morality from Theism.
Overall, I was very impressed by the breadth of his research, the willingness to ask tough questions, and his articulate defense of his proposition. I readily appreciate Alan’s wisdom in invoking him, perhaps analogous to my invocation of Brian McLaren.
For the most part, I find Ebon Muse’s arguments reasonably sound and convincing. I do agree that virtue is a subject that can be rationally investigated, and I agree with his choice of starting from (a) happiness and (b) empathy:
A. Happiness
“If this ethical system is neither independent of human beings nor established by decree of a higher power, what makes it universal? The answer lies partly in the fact that it is founded on something that is universal among sentient beings, namely the desire for happiness… I claim that universal utilitarianism is the set of rules that, if consistently followed, maximizes both actual and potential happiness for everyone – it offers the greatest probability for maximal happiness. “
B. Empathy
“Universal utilitarianism is not in any way derived from theistic morality, because it is based on the fundamentally human trait of empathy. It proposes that we should help others … because we all know what it is like to be happy and to suffer, and we should want to increase the happiness and decrease the suffering of others just as we want that for ourselves. “
However, I find it troubling that he does not seem to recognize that he is making an explicit choice to elevate empathy to the status of a “fundamentally human trait.” Yes, I completely agree that we all want to live in a world based on empathy. Alas, empathy is not the only “fundamental trait” we are dealing with; as far as I can tell, “selfishness” is an equally fundamental trait, one that exists in constant tension with empathy — a fact he appears to gloss over.
“I claim that universal utilitarianism is the set of rules that, if consistently followed, maximizes both actual and potential happiness for everyone – it offers the greatest probability for maximal happiness. Therefore, the most rational course for all human beings is to follow it – and this conclusion holds regardless of what other factors are brought into consideration. That is what makes this moral system universal.”
Um, no. He is making a subtle error, but it is a fairly profound one; one as ancient as Plato’s Republic and as modern as the Prisoner’s Dilemma, so I’m a bit dismayed that he seems to have completely missed it.
What is the error? That he fails to see the value of hypocrisy. Yes, I absolutely want to live in a world where everyone else seeks to maximize the happiness of the whole. But, rationally speaking, it is better for me if [within that context] I can find a way to maximize my own happiness [at the expense of others] — as long as I don’t get caught! That is, as long as I can maintain the appearance of civic-mindedness, I can enjoy the benefits of such a society without having to pay the price.
This is hardly a theoretical concern. Economists know it as the problem of moral hazard and free riders. In fact, one of the primary purposes of government is precisely to prevent such “local optimization” at the expense of the whole, by enforcing compliance with community-minded behavior. Unfortunately, that in turn leads to the problem of Quis custodiet ipsos custodes? (“Who watches the watchmen?”) — which is why most societies obsess about government corruption, and why the most immoral people in ancient civilizations tended to be kings, emperors — and popes!
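The free-rider logic here is just the classic Prisoner’s Dilemma, and it can be made concrete with a few lines of code. This is only an illustrative sketch — the payoff numbers are the standard textbook values, not anything from Ebon Muse’s essay:

```python
# Classic Prisoner's Dilemma payoffs (illustrative textbook values).
# Each entry maps (my move, other's move) to (my payoff, other's payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual civic-mindedness
    ("cooperate", "defect"):    (0, 5),  # I pay the price, the hypocrite benefits
    ("defect",    "cooperate"): (5, 0),  # I free-ride without getting caught
    ("defect",    "defect"):    (1, 1),  # a society of hypocrites
}

def best_response(other_move):
    """My individually rational move, given what the other player does."""
    return max(("cooperate", "defect"),
               key=lambda me: PAYOFFS[(me, other_move)][0])

# Whatever the other player does, defection pays better for me...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...even though mutual cooperation beats mutual defection for everyone.
assert PAYOFFS[("cooperate", "cooperate")][0] > PAYOFFS[("defect", "defect")][0]
```

In other words, each individual’s “rational course” is to defect, even though universal cooperation would make everyone happier — which is precisely why the appeal to individual rationality alone does not get you to universal compliance.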
Alan, if you don’t acknowledge this problem, then I can see why you consider Christianity (and deism) wholly superfluous. However, I would claim that any theory of ethics that fails to recognize this problem is demonstrably incomplete, and wholly useless in the real world.
Now, I suppose you might reply (as Ebon Muse seems to have done at one point) that people should always value each other’s happiness as if it were their own “just because it is the right thing to do.” Even if it means great sacrifice on my part, with no earthly reason to think I’ll get rewarded for it. Or forgoing the opportunity for great personal gain, even if there is no chance of ever getting caught.
Is that true? If so, then let me ask again: Why?
If your answer is simply “Because!”, I’m okay with that — as long as you accept that this makes it a non-contingent (“religious”) belief on your part.
For my part, I consider the “imperative towards virtue” a consequence of deeper non-contingent beliefs I have, which makes it possible for me to rationally investigate the source and character of that imperative. And I would welcome the chance to (finally 🙂 compare the relative predictive power of our respective non-contingent beliefs.
Love,
Ernie
—–
On Oct 9, 2006, at 11:21 AM, Alan Lund wrote:
I’ll obviously write more later, but in your critique of Universal Utilitarianism, I think it is you who missed something. When Ebon Muse said “I claim that universal utilitarianism is the set of rules that, if consistently followed, maximizes both actual and potential happiness for everyone…”, I am pretty sure that the consistent following is not just by one person over time, but by all people. When you describe the value of hypocrisy to one person, that value disappears if it is consistently practiced by everyone. Thus, UU succeeds in describing why this kind of hypocritical practice is unethical.
So, yes, I acknowledge the problem but it is not a problem with Universal Utilitarianism. It is a problem with hypocrisy.
On Oct 11, 2006, at 6:47 AM, Ernie Prabhakar wrote:
Hi Alan,
Okay, let us concede that UU is the ideal state if “consistently practiced” by everyone. But that merely raises two further questions:
a. Do you have any rational basis for believing that a large group of humans could “consistently practice” those principles?
b. Do you have any empirical data regarding the actions necessary to achieve such a state?
Thanks,
— Ernie P.