*I want to emphasize that my understanding of (II) is as a meta-definition of knowledge. I use “meta” here because specific definitions of knowledge will depend on the context and the paradigm. This is bothersome. It means that one person can say “I know X is true” and another can say “I know X is false”, and both can be accurate so long as either their contexts or their paradigms differ.*

In particular, nothing can be known to be *universally true* to *infinite* *precision*. That is, we only have ‘contextual knowledge’ rather than ‘absolute knowledge’.

This is a hard truth for ‘certaintists’ to swallow, but I believe it is essential to take it as our starting point (even if I’m not 100% sure about it :-).

Having given up the possibility of completely accurate knowledge, I am instead arguing that knowledge claims can (and in fact must) be evaluated along two key dimensions:

- Is it **valid**, i.e., properly justified according to its (implicit or explicit) paradigm?

- Is it **accurate** to its (implicit or explicit) level of precision within its (implicit or explicit) domain?

In turn, these items — paradigms, precisions, domains — can themselves be known and evaluated along similar lines, resulting in similar judgments about *their* accuracy and validity. This allows us to directly test knowledge (by measuring its accuracy) and indirectly test paradigms (by measuring the accuracy of the knowledge they justify).

To return to your “hypothetical” example where we have contradictory knowledge:

In fact, this kind of disagreement happens all the time (as anyone who’s been married knows :-). And further (though my wife may not agree 😉), this model suggests a great way to resolve such disagreements. Because, according to this model, what “A” and “B” are really (usually implicitly) saying is:

Resolving those disagreements thus requires, in general:

Roughly speaking, it is usually straightforward to resolve a disagreement if the paradigms are the same (E = H); otherwise it requires creatively constructing a new shared paradigm — which is not easy. In fact, the lack of such shared paradigms (or meta-paradigms, if you prefer) is arguably at the root of most human disagreement, since (according to the ‘honest Bayesian’ theorem) we can easily interpret a different paradigm as dishonesty!

You may not like such an apparently *ad hoc* approach to knowledge, and to be sure it does lead to some weird results. However, I assert that this is the only kind of knowledge accessible to humans, and therefore we might as well make the best of it. If you want to build an epistemology on something else, you’re welcome to try, but the track record of previous attempts is not encouraging.

Let me give you two strange but real-world examples of why context is essential to evaluate knowledge claims, even if it gives counter-intuitive results.

Let’s start with number theory (integer arithmetic), arguably the most rigorous belief system we know about. Even here, knowledge is only valid within a particular context. For example, “1 + 1” doesn’t always equal “2” — in binary it would equal “10”, and in modulo 2 arithmetic it would be “0”. There are even different paradigms for what constitutes a ‘valid’ proof — intuitionism, for example, rejects ‘*reductio ad absurdum*’ arguments of the sort classical mathematicians and logicians make all the time. That doesn’t make intuitionist proofs less valid — if anything, they’re far more rigorous, since they make fewer assumptions. Though intuitionist mathematics as a whole has much less explanatory power, its practitioners arguably have greater confidence in their results. Thus, which paradigm to use depends largely on what you’re trying to do, i.e., context.
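To make the “1 + 1” point concrete, here’s the same sum evaluated under three contexts (a trivial Python sketch; the language itself isn’t paradigm-specific, it just lets us display each convention):

```python
# The same expression, read in three different contexts:
print(1 + 1)        # ordinary integer arithmetic -> 2
print(bin(1 + 1))   # the same value written as a binary numeral -> '0b10'
print((1 + 1) % 2)  # arithmetic modulo 2 -> 0
```

Strictly speaking, the binary case changes only the numeral, not the number, while the modulo 2 case changes the arithmetic itself — both are instances of the context-dependence described above.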

Here’s an even more disturbing example: consider two weather forecasters, both honest Bayesians with the same perfect data, but different priors. Say that on Monday, one of them predicted a 70% chance of rain, the other predicted 90%. It rained. Is one more reliable than the other? Well, we can’t tell from one data point. But, amazingly, if we go back and check all their past forecasts, we might find that they are *both* perfectly reliable — that is, each forecaster’s predicted frequencies match the actual outcomes (within sampling error), even though the two sets of predictions differ from each other. It’s a bit freaky, but it is an inevitable consequence of Bayes’ theorem (which is why non-Bayesian frequentists literally refuse to make predictions, to avoid such ambiguity).
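A minimal simulation sketch (Python, with invented probabilities) shows how this can happen: forecaster A conditions on less information than forecaster B, so their daily predictions differ, yet within each forecaster’s own forecast buckets the observed rain frequency matches the forecast — both are perfectly calibrated:

```python
import random

random.seed(42)
N = 200_000

def simulate():
    """Each day has two independent coin-flip features; rain probability
    depends on both (illustrative numbers, not real meteorology)."""
    days = []
    for _ in range(N):
        f1 = random.random() < 0.5
        f2 = random.random() < 0.5
        p_rain = 0.2 + 0.3 * f1 + 0.4 * f2
        rain = random.random() < p_rain
        # Forecaster A sees only f1, so forecasts E[p_rain | f1];
        # forecaster B sees both features and forecasts p_rain itself.
        forecast_a = 0.4 + 0.3 * f1          # = 0.2 + 0.3*f1 + 0.4*E[f2]
        forecast_b = p_rain
        days.append((forecast_a, forecast_b, rain))
    return days

def calibration(days, which):
    """Observed rain frequency for each distinct forecast value."""
    buckets = {}
    for day in days:
        forecast, rain = day[which], day[2]
        hits, total = buckets.get(forecast, (0, 0))
        buckets[forecast] = (hits + rain, total + 1)
    return {f: hits / total for f, (hits, total) in buckets.items()}

days = simulate()
for name, which in [("A", 0), ("B", 1)]:
    for forecast, freq in sorted(calibration(days, which).items()):
        print(f"{name}: forecast {forecast:.1f} -> observed {freq:.3f}")
```

On any given day A and B disagree, but every bucket’s observed frequency tracks its forecast — neither forecaster is “wrong”, they just occupy different (information) contexts.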

To be sure, one forecaster might be more useful than the other if he or she consistently makes *stronger* predictions (e.g., only 10% or 90%) yet maintains the same accuracy rate as the other. But, again, that “usefulness” depends on the context in which it’s being used.
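One standard way to score that trade-off (my choice of illustration, not something the model above mandates) is the Brier score — the mean squared error between forecast probabilities and outcomes. In the sketch below, a world has two equally likely kinds of day (10% vs. 90% rain chance, numbers invented); a sharp forecaster reports the true daily probability while a vague one always says 50%. Both are perfectly calibrated (the vague forecaster’s 50% matches the overall rain rate), yet the sharp one earns a much better (lower) score:

```python
import random

random.seed(0)
N = 100_000

brier_sharp = brier_vague = 0.0
for _ in range(N):
    p = random.choice([0.1, 0.9])        # two kinds of day, equally likely
    rain = random.random() < p
    brier_sharp += (p - rain) ** 2       # sharp forecaster reports p itself
    brier_vague += (0.5 - rain) ** 2     # vague forecaster always says 50%

print(f"sharp: {brier_sharp / N:.3f}")   # ~0.09 (lower is better)
print(f"vague: {brier_vague / N:.3f}")   # exactly 0.25: (0.5 - rain)**2 == 0.25 either way
```

Calibration alone cannot separate the two; sharpness can — but only a score chosen for the context of use tells you how much that sharpness is worth.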

I realize this is all a bit esoteric, but since we’ve decided to make predictions a foundational aspect of our epistemology we’re stuck with the consequences, so we might as well be aware of them. Anybody want to place odds on whether or not Alan will be confused by this? 🙂