I must say, I think we are dangerously close to actually making forward progress! Alas, I’ve often felt this way before, only to discover that it was merely a superficial agreement masking a deeper misunderstanding. 😦
I think part of the challenge is that we each have underlying assumptions that are not merely unspoken, but actually unconscious. That is, it is precisely our own unexamined assumptions that lead us to misinterpret what the other is saying. I believe the only way to get past that is both to work harder at clarifying our own reasoning and to make an extra effort to deduce why the other person isn’t getting our point (or appears to be making a non sequitur).
Case in point: I believe the reason I was uncomfortable with The Utility of Universal Utilitarianism is that I didn’t understand how you intended to use it, or what exactly you meant by a “theory of ethics.” In fact, I see three different (but overlapping) roles that “UU” might play, and I suspect I was critiquing (III) while you were defending (I). [Read more] to see what those three are, including (hopefully) one that matches your vision of UU.
Ethical behaviors minimize potential and actual suffering and maximize potential and actual happiness
Now, I can think of at least three ways to utilize that definition, as a:
As I implied earlier, you (Alan) appear to be referring to (I), since you say:
Fair enough. If we are merely using UU as a metric to provide a common frame of reference for our ethical discussions, then I withdraw my objections. However, in that case I’d like to propose a few additional assertions, if only to see whether or not you agree:
A. In practice, we need some system of ethics to make effective decisions
By a “system”, I mean something that answers, proposes a mechanism for answering, or at least denies the need to answer questions such as:
I’m not saying every system can (or even must) provide complete answers to all those questions, but we need at least some answers in order to make any sort of meaningful decision. Especially because:
B. Most ethical systems would claim to optimally fulfill UU as a metric
I’m not saying they do fulfill UU, but the proponents of every system I’ve studied — from Libertarians to fundamentalists — all claim that their system provides the optimal outcome for humanity, given various subsidiary assumptions about what is known, knowable, and doable.
Of course, you might claim that any such assumptions are a priori disallowed if we assert UU, but that would lead us to:
C. UU can itself also be formulated (and critiqued) as just such a system
In particular, that is how I read The Ineffable Carrot and the Infinite Stick, as asserting a dogmatic heuristic regarding ethical knowledge and motivation.
If you agree — or at any rate believe that UU is sufficient to provide answers to (a–g) — then I would love to see how you (or Ebon Muse) address them.
Conversely, if you at least agree that the goal is to find a system of ethics that satisfies the UU constraint, and want to hear my attempt, please let me know and I’ll take the next “at bat.”
If, on the third hand, you deny (A-C) — or think them ill-defined — then I would ask you to help me figure out where (and why) our underlying assumptions are diverging so dramatically.
Yours in hope,
— Ernie P.