Though I was mostly being ironic in my comment upon your “non-angry” post, I have been a little frustrated by my inability to integrate your various statements and positions into a coherent model that I could respond to. However, I’ve come to realize that I was not being sufficiently explicit in my own writing (or even my thinking), which may have led to a compounding snowball of confusion!
Therefore, at the risk of being pedantic, let me try to explain exactly what I am trying to accomplish through what might be called a Shared Paradigm of Morality. Which hopefully won’t just feel like more philosophical SPOM…
To start with, I’d like to go back to the phrase “meaningful social inquiry” from my first goalpost. By parallel with “meaningful scientific inquiry”, I believe this requires a shared paradigm. That is, in order to approach truth we need a coherent community operating under agreed-upon (if implicit) rules for what and how they are studying, so that they can reliably learn from and critique each other’s results. For example, in the physical sciences we might articulate this as:
I. Model: There exist mathematical laws governing the physical universe
II. Means: The ongoing interplay of theory, experiment, and peer-review enables us to better understand the truth of the universe
III. Metric: The best theory is one which most elegantly explains all past results while enabling precise predictions for the widest range of future experiments
This doesn’t mean it is impossible to question these assumptions, but it does mean that by doing so you are going outside “normal science” into a realm where the usual rules of dispute resolution (and thus “scientific consensus”) no longer apply — at least until a new paradigm is formed. In other words, a paradigm ensures convergence of discussion — something we ourselves have historically lacked!
In particular, my hope is that the two of us can agree upon at least a minimal Shared Paradigm Of Morality (SPOM), in order to meaningfully ask questions about and within the paradigm. In parallel with the above, I’d like to propose the following as SPOM, draft 1:
I. Model: There exist objective (but not absolute) moral principles governing human behavior
II. Means: By employing the full range of our cognitive, perceptive [including emotional], and evaluative faculties we can realistically hope to improve our understanding of those principles
III. Metric: The best moral system is that which maximizes current and future happiness while minimizing current and future suffering.
Would you agree, Alan? If not, how would you improve upon it? More importantly, do you understand why I think we need to agree on such a paradigm?
Starting from this common ground, I hope that we can meaningfully ask (and answer) some of the questions that have long been floating around.
Let me start with your comment about “sufficiency”, which is now part of (II). My focus was not so much on elucidating an exhaustive list as on asserting that some such list would suffice. In particular, there are alternate paradigms that deny that moral progress is possible for “unaided” humans, which would force us to rely on unexamined tradition or “divine revelation.” Thus, our paradigm must include the belief that such inquiry is — at least in principle — viable; else, why bother?
In other words, I think our paradigm needs to capture the essential shared beliefs necessary for “meaningful social inquiry.” Make sense?
That said, our paradigm — by design! — raises as many questions as it answers. For example:
a. If moral principles are not absolute, then what are they relative to? If they are relative to something which differs among potential observers, then in what sense are they objective?
b. Do we accept that emotions are a valid way to perceive reality? If so, do we only include “positive” emotional states like empathy, or also “negative” emotions like anger or hatred?
c. Why do people fail to act morally? Is moral failure primarily intellectual, emotional, or volitional? And how can it be prevented/corrected?
d. Is there a unique solution which globally maximizes happiness and minimizes suffering? Or are there multiple local maxima which maximize the happiness of one particular population at the expense of others?
e. Is it our moral duty to choose an Operative Depiction of Reality that maximizes our motivation to do good, even if that conflicts with an ODoR that better fits the available evidence? Or is it possible — within our existing paradigm — to prove that no such conflict exists?
To be clear, I am not saying these open issues are “flaws” in our current SPOM. However, I am asserting — and hope you agree — that any viable paradigm must provide a way to meaningfully address these types of questions. In particular I suspect our individual “moral theories” would give different answers to many of these questions. If so, then either the SPOM must enable us to determine which one is “better”, or it is necessarily incomplete.
Would you agree?
If so, then perhaps you can take a stab at answering (a-e) to help me better understand your position — even if it is simply “I don’t know” or “I don’t care.”
If not, and you don’t see the relevance of a SPOM, then perhaps you can elucidate what you do think is necessary for “meaningful social inquiry.”