DiaBlogue: Rationalizing Virtue

Once again, I am pleased that Alan agrees that we can treat Universal Utilitarianism (UU) as Metric, Not Imperial. That is, we both agree it is a useful Metric, rather than a System of ethics as Ebon Muse would construe it. Alan ended with a challenge for me “to tell us a little more about [my] solution to [the] [ethical] trilemma.”

Which I’m happy to do, but that requires me to go back to the roots of Reason, Morality and Evolution. [Read more] to see just how far back I’m willing to go…

As Alan notes while responding to my “reasoned conversation” about Christianity and atheism:

Evolution does not operate by pure chance on inanimate objects, but on replicating lifeforms under selective pressure.

I (unlike my disputants on FoRK) agree, and I hope that Alan will allow me to rephrase this as:

“Evolution acts by selectively rewarding organisms that best conform to the Laws of Survival.”

since he already concedes that he:

“find[s] nothing particularly surprising or noteworthy in the idea that inorganic and organic processes are governed by the same general laws.”

Let me go one step further and assert that evolution:

I. Selects for cognitive systems that are capable [in principle] of apprehending Truth
II. Selects for emotional systems that reward conformance with the Laws of Survival

In case it isn’t obvious, (II) simply implies that we “naturally” feel good — i.e., happy — when we:

a. achieve personal utility (e.g., eating, mating)
b. fulfill antipathy towards those we perceive as threats (e.g., punishing our enemies)
c. manifest empathy towards those we identify with (e.g., helping the weak)

and that all these behaviors are essential for survival. Well, (a) and (b) are essential for all animals, though (c) is only relevant for social animals.

I hope that all these seem equally “unsurprising” and obvious to Alan, though I should point out that many post-modernists disbelieve (I) while objectivists categorically deny (II). I believe this also addresses the “incompleteness” Alan noted in Ebon Muse’s formulation of UU, in that it recognizes and assigns a constructive role to (b) antipathy, not just (c) empathy.

Given that, let me assert that the “Laws of Survival” describe “General Systems” in the same way that the “Laws of Physics” characterize “Natural Systems.” Since we’ve defined “happiness” as the goal of ethics, I believe we can say that:

A. Ethics are meaningful in the context of Systems
B. Virtue is acting to promote the health and happiness of the System
C. Virtue is rational when the System supports my own health and happiness

Again, I hope this is all fairly obvious and non-controversial, but nonetheless illuminating. It is neither rational nor virtuous for, say, a prisoner of war to devote himself to the happiness of his captors; conversely, it is both foolish and vicious for a criminal to weaken the society he parasitically depends upon (and wise and just to punish him for it). Perhaps more impressively, this works equally well to explain non-human “morality” among birds and mammals.

Still with me, Alan? If not, let me know where/how I lost you.

If so, though, this raises the following practical and theoretical questions:

i. Is our sense of morality “just” an artifact of our evolutionary past?
ii. When does rational inquiry support rather than subvert the System?
iii. Is it ever justified to protect the System by suppressing truth?
iv. Am I justified in hating my enemies? Conversely, is it ever rational to forgive them?
v. Is it ever valid to use the System to serve me, rather than vice versa?
vi. On what basis can I trust leaders to defend the System rather than exploit it?

I would argue that the simplest and most powerful way to answer these is to assert that:

1. There exists a meaningful higher-order System encompassing social, natural, and logical systems
2. It is always Virtuous to act in accordance with this System
3. Support for this System is entirely, necessarily, and always consistent with belief in Truth
4. The most trustworthy and virtuous people are those who believe (1-3), and devote themselves to both understanding and supporting that System

As best I understand it, this is compatible with most forms of humanism I have seen, so I suspect Alan doesn’t fundamentally disagree (even if he is new to this formulation). However, I would argue that believing in (1-4) is tantamount to believing:

5. The present System exists as the result of a benevolent Purpose

By “benevolent” I mean “sympathetic to human happiness”, which makes (5) identical to what I call “strong deism” or a “moral purpose behind the universe” — which in turn satisfies my first goalpost.

I realize this is not the only possible assumption, but I assert that it is the simplest and most comprehensive explanation for everything above, and that to deny (5) leaves a complex welter of unjustified beliefs in its place.

Agree? Disagree? If you disagree, where and why? And if you reject (5), what superior axiom would you propose in its place to justify (1-4)?

Over to you, Alan.
