Here’s a quick (and probably pretty bad) argument that it is rational to believe contradictions (or at least why it's not irrational to believe contradictions) inspired by conversation with Dan.
1) One ought not believe contradictions (assume for reductio)
2) Ought implies can (ask Pete about this)
3) Possibly, one does not believe contradictions (from 1 and 2)
4) Necessarily, one believes contradictions.
5) It is not the case that one ought not believe contradictions (from 3 and 4, by reductio).
A few things to note at the outset: First, by ‘believe contradictions’ I don’t necessarily mean believing that p and not-p; believing that p and believing that not-p (in "separate compartments," if you’d like) would do fine. Second, the ‘cans’ and ‘possibles’ of this argument should be read along the lines of nomic possibility. Third, this is a proof by reductio in favor of believing contradictions, so you might think the argument is self-defeating for that reason. You might also think that ought implies can is a bad principle (or at least shouldn’t be applied to this kind of case). But I think premise (4) stands in the most need of justification.
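For what it's worth, here's a rough regimentation of the reductio; the symbols are just my own shorthand (O for the relevant epistemic ought, \Box and \Diamond read as nomic necessity and possibility, and C for 'one believes contradictions'), not anything essential to the argument:

\begin{align*}
&(1)\ O(\neg C) && \text{assumption for reductio}\\
&(2)\ O(\varphi) \rightarrow \Diamond\varphi && \text{ought implies can}\\
&(3)\ \Diamond\neg C && \text{from (1), (2)}\\
&(4)\ \Box C && \text{the premise in need of justification}\\
&(5)\ \neg\Diamond\neg C && \text{from (4), by the duality of } \Box \text{ and } \Diamond\\
&(6)\ \neg O(\neg C) && \text{from (3), (5), discharging the assumption}
\end{align*}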
I'm not entirely sure how to justify (4). Maybe we could say that, for some complex propositions P, we might fail to believe that P, fail to believe that not-P, but believe that P or not-P; this isn't a contradiction by itself, but maybe we could draw one out. Or maybe we could say that believing the premises of an argument but not its conclusion commits us to a contradiction (perhaps by way of a possible-worlds analysis of content: all of my belief-worlds are worlds where the premises hold, and hence where the conclusion holds, yet by hypothesis I don't accept the conclusion)... I'm not entirely sure where to go, but I still think there's an argument in the neighborhood. Thoughts?
[Note: I've taken back my original justification for (4).]
9 comments:
Where does the fourth premise come from? Is it supposed to follow from the other ones?
-Kelly
Oops... I get it now. If you can find a justification for 4, then you get the conclusion that it's rational to believe contradictions. I guess I just don't see how to motivate 4.
-Kelly
Here's the kind of thing I had in mind about (4), but was having some second thoughts about:
Our minds are finite (whatever that means, exactly), and thus are bound to exhibit some failures of ideal rationality. For example, there will be propositions that we won't accept because they are too complicated to grasp, even though they follow from other propositions that we accept.
Now, this in itself isn't a contradiction, but maybe we can get one from here. Imagine that P and Q entail R, that the agent accepts P and Q, but not R (on account of its complexity). Maybe we can say that the agent accepts R, by virtue of all of his belief-worlds being R-worlds, but doesn't accept R, since the agent will not be inclined to assert R.
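Spelling that out a bit (this is just a sketch of the Lewis-Stalnaker side of the idea, with Bel(a) for the set of the agent's belief-worlds and [[P]] for the set of worlds where P is true):

\begin{align*}
&\textit{Acceptance:}\quad a \text{ accepts } P \iff Bel(a) \subseteq [\![P]\!]\\
&\textit{Suppose:}\quad Bel(a) \subseteq [\![P]\!],\quad Bel(a) \subseteq [\![Q]\!],\quad [\![P]\!] \cap [\![Q]\!] \subseteq [\![R]\!]\\
&\textit{Then:}\quad Bel(a) \subseteq [\![P]\!] \cap [\![Q]\!] \subseteq [\![R]\!],\ \text{so on this account } a \text{ accepts } R\\
&\textit{But:}\quad \text{on the dispositional account, } a \text{ is not inclined to assert } R,\ \text{so } a \text{ does not accept } R
\end{align*}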
Of course, here I'm running a Lewis-Stalnaker account of belief together with a dispositional account, so maybe that's the real culprit. I dunno, I need to think about it some more.
One potential problem with your proposal is that it may not work for *every* nomologically possible subject. Is there not a nomologically possible subject that, say, lives in a box and has very few theoretical beliefs, so there isn't really an issue about the subject not tracing out the consequences of her complex beliefs?
Barak,
I don't get it. (4) seems plain false. Isn't it possible that someone only has time enough (say he dies really young) to have three consistent thoughts? Or isn't it possible that someone only believes consistent things? You don't want to say that ideal rationality is simply impossible, do you?
-Einar.
No, I don't think it's possible for someone who, say, is brought up in a box to have only a couple of beliefs along the lines of 'this wall is black.' This is because beliefs are far richer and more varied than that. A simple agent with a brain like ours will also have dispositional beliefs, beliefs about their own mental states, higher-order beliefs about those beliefs, a priori beliefs of various kinds that they reason from, and so on. As soon as we appreciate the complexity of an agent's belief system, we'll pretty quickly generate a problem where their belief-worlds turn out contradictory. Hell, if we allow the agent to have any a priori beliefs at all, we'll get there, since they won't follow up on all of those entailments.
I also don't think there is such a thing as the first three beliefs an agent has. I just don't know what that would be like.
Barak,
I'm sympathetic with your argument; however, I doubt whether a restriction to nomic possibility will protect you from the problems raised by Einar and Kelly. I would accept that it is nomically possible for one to have very few beliefs and for these to be suitably transparent, so that one avoids any sort of contradictory belief.
(Perhaps I am disagreeing with you here ... If I am, I'd like to hear more about why you think I'm wrong.)
But you can run the same argument while restricting the epistemic agents in question to "normal and mature" human believers, where to be a normal human believer, one must be fallible, and to be a mature believer, one must have a rich stock of beliefs.
[... continuing from above ...]
I think it is quite plausible that, as a matter of nomic necessity, every normal and mature human believer holds contradictory beliefs. Hence, a variant of premise (4), one restricted to normal and mature believers, IS justified.
What do you think?
-Ed
To make things perfectly clear, here's the revised Barak argument I have in mind:
1) [A normal and mature human believer] ought not believe contradictions.
2) Ought implies can.
3) Possibly, [a normal and mature human believer] does not believe contradictions (from 1 and 2).
4) Necessarily, [a normal and mature human believer] believes contradictions.
Make sense?