Tuesday, May 1, 2007

Truth and Contradictions

This may be more of a request for something to read than anything else, but here goes:

Imagine this dialogue between a proponent of dialetheism and a proponent of classical logic:

A: You think that my logic is hopelessly strange, but I can give you a pretty good idea of what I have in mind. There are several paraconsistent logics out there -- consider FDE. Instead of a valuation function assigning 1 or 0 to propositions, imagine that it assigns one of {1}, {0}, {1, 0}, or the empty set. A proposition is true, intuitively, if 1 is a member of its valuation. The logical connectives then work in the expected way (for instance, A&B is true -- valued {1} or {1, 0} -- just when 1 is a member of the valuations of both A and B). We can then see exactly which inferences are valid and which aren't, and we should both agree that things aren't totally crazy -- plenty of things will be false (and only false) in this system, and we can have meaningful discussions about contradictions.

B: I understand your system, but you've just changed the subject. For me, truth and falsity are exhaustive and exclusive. This is constitutive of my notion of truth and falsity. You've provided a model of something else entirely -- and maybe your model could do some interesting work, but to give your gloss on truth in this model is just plain disingenuous.

Does this sound right? What should A's response be? I'm sure people have talked about this before in phil logic, but I haven't seen anything particularly interesting said at this point in the dialectic (Stalnaker's Impossibilities paper being the only thing I can think of)...
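(For concreteness, A's four-valued semantics can be sketched in a few lines -- a minimal illustration of the FDE truth conditions described above, where a valuation is a subset of {1, 0} and a sentence counts as true just when its valuation contains 1. The names `T`, `F`, `B`, `N` are my own labels, not any standard library:)

```python
# FDE valuations are subsets of {1, 0}: {1} (true only), {0} (false only),
# {1, 0} (both true and false), and the empty set (neither).
T, F, B, N = frozenset({1}), frozenset({0}), frozenset({1, 0}), frozenset()

def neg(v):
    # Negation swaps 1 and 0; "both" and "neither" are fixed points.
    return frozenset(1 - x for x in v)

def conj(a, b):
    # A&B gets 1 iff both conjuncts get 1; it gets 0 iff either gets 0.
    out = set()
    if 1 in a and 1 in b:
        out.add(1)
    if 0 in a or 0 in b:
        out.add(0)
    return frozenset(out)

def true(v):
    # "True" in A's intuitive sense: 1 is a member of the valuation.
    return 1 in v

# A glutty sentence and its negation are both true...
assert true(conj(B, neg(B)))
# ...yet this valuation leaves an arbitrary falsehood untrue, so it is a
# countermodel to explosion (contradiction in, arbitrary conclusion out).
assert not true(F)
```

This is the sense in which, as A says, plenty of things remain false-and-only-false even though some contradictions come out true.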

19 comments:

@ said...

Barak,

as a friend of mine once said: remember, contradictions are CONTRADICTORY!

-Einar

Anonymous said...

I'm with B, but I also think that this blog is the sparse ground for being. So perhaps you should go with A.

-Kelly

Barak said...

I'm starting to think that B has a good point. But it's also not the case that I'm starting to think that B has a good point. C'mon, Einar, liberate yourself!!

(I love how everyone's post looks like it's coming from Edward. Damn you, Ed!)

Anonymous said...

Barak,

you're starting to sound like Borat...

-Einar

Anonymous said...

Barak,

Giving a paraconsistent logical system is mathematically unproblematic. The question is whether it is more NATURAL than a standard logical system when it comes to representing the laws of TRUTH. Which one better carves nature at its joints? Let's not take human psychology at face value and think that there are true contradictions ('true contradictions' doesn't even make any sense) just because in some instances one might need a paraconsistent logical system to REPRESENT a human mind. That is giving too much to anti-realism, isn't it?

-Einar

PS: liberate myself from what?!

Barak said...

Let's say that our project is to provide some structure that can represent contradictory beliefs (which are pretty common, but let's leave that aside). You don't have to call these structures 'worlds' if you don't want to, and -- at least on the face of it -- we don't get committed to anything that should make a classical logician uncomfortable. But, of course, we still want a paraconsistent logic to tell us how to revise beliefs, what we should expect to follow from contradictory beliefs, etc.

Now, when we give a value of '1' to a proposition in this system, we don't need to think we're saying that it's true -- maybe we're not modelling truth. I guess that's what B wants to say. And maybe that's what you want to say, but I'm not entirely sure. But then, what are we modelling? Believed-as-true? I guess that'd work in some contexts, but for the belief revision stuff to work the way it should, we should assume a kind of realism (the agent's beliefs aim at truth, we ought to revise beliefs when we have reliable evidence/justification/whatever)...

Anonymous said...

Barak,

I guess the project now seems uninteresting to me. Why would we want to model contradictory beliefs if they don't aim at truth? As I said, giving a paraconsistent logical system seems mathematically uncontroversial, so if we want to simply model paraconsistent beliefs that don't aim at truth I guess I have no problem with that. But why is that philosophically interesting? I think of belief as truth-directed. I don't really believe anything unless I think it's true. So a paraconsistent model that doesn't aim at truth doesn't seem able to tell me anything about what I should infer from my beliefs, or how I should revise my beliefs, etc.

I should revise my inconsistent beliefs into true beliefs. What do we need paraconsistent logics for if they don't aim at truth?

This is really not my topic, so I might be missing something here...

-Einar

Barak said...

Einar,

I think you maybe misunderstood what I was trying to say there. I *do* think that belief aims at truth. If it didn't, then -- well, I dunno, but things would get really strange really quickly.

What I was trying to say was that if you do admit that we have contradictory beliefs (which is maybe something we could argue some other time), simply giving some model isn't enough -- we should think that the beliefs have some interesting connection to truth. So truth in the model should be either our austere notion of truth, or at least something really close to it. But then, we'll have a model of dialetheic truth, and we're open to B's criticism that this just plain isn't truth. And that would put us back to the beginning -- what good is our model if it doesn't line up with anything interesting in the world?

Timmo said...

Barak,

It sounds like the objection to dialetheism being raised here is Quine's old objection to deviant logics: classical logic is true by definition, and deviant logicians are simply changing the subject. The dialetheist negation is not really a negation operator because it is not exhaustive and exclusive.

I think this completely misses the point of debates about logical systems. Our systems of logic are theories about what arguments are valid and invalid. Classical logic is one such theory, and paraconsistent logic is a rival theory. So, it makes no sense to say that paraconsistent logicians have changed the subject -- there is a substantial disagreement about our pre-theoretical, intuitive notions of negation, truth, and falsity.

Einar,

I argue here that believing contradictions may be rational. In fact, I think that the paradox of the preface gives us a contradiction we are rationally obligated to believe.

Cheers!

Barak said...

Timmo!

It might be that the debate would switch over to our pretheoretic concepts of truth, falsity, and negation, but I don't want to get hung up in philosophy of language. What people would assent to with respect to how negation works need not inform how we construe our rules of inference.

As for the paradox of the preface -- I don't intend to rest my arguments there, because the Bayesian kind of stuff you mention is pretty congenial to classical logic. If by believing P and not-P, I just mean that I believe P to degree .7 and ~P to degree .3, I won't have enough to build a paraconsistent logic around. Bayesian patterns of reasoning are quite classical; more to the point, Bayesians have nothing to say about people whose beliefs in a proposition and its negation *don't* sum to 1, and that's more of what I'd be interested in giving some model for. Think of the liar sentence -- I want to imagine someone whose degree of belief in "'This sentence is false' is true" is very close to 1, and whose degree of belief in "'This sentence is false' is false" is also very close to 1.

Timmo said...

Barak,

What people would assent to with respect to how negation works need not inform how we construe our rules of inference.

I am not sure what you mean by this. When doing logic, we are trying to formulate a general theory of argumentation, trying to give a systematic account of what argument forms count as valid. We have an intuitive grasp of validity; so, we look to find patterns which characterize the principles underlying those intuitions. Which arguments intuitively seem valid seems to be the only thing which informs how we (should) understand inference.

I take the paradox of the preface to show this: if S's rational commitments are logically closed under whatever logic S accepts, then, given reasonable beliefs about S's own fallibility -- an assurance of error -- S will be rationally committed to a contradiction.

The point of introducing the probabilistic considerations was to articulate an alternative view of rational commitment which avoids a commitment to a contradiction. However, I think this undercuts the force of a logical argument. Here's what I mean: if you succeed in a reductio of my views, then, on the probabilistic picture of rationality, I need not give up my views. My commitments are not closed under logical consequence. This seems wrong to me, and I want to stick with the idea that my rational commitments are closed under whatever logic I accept.

It's the next step that motivates paraconsistent logic. If I can reasonably hold that I have some false beliefs and I am committed to the logical consequences of my commitments, then I can reasonably hold a contradiction. If classical logic is correct, then I will end up being committed to everything -- which is irrational! So, we should either: (1) give up classical logic; (2) give up the idea that rational commitment is closed under logical consequence; or (3) give up the view that I can reasonably hold that I have false beliefs. The most promising thing to do is (1), I think.

Barak said...

Timmo,

Maybe the way I phrased how we understand negation wasn't (at all) good. I'm not sure what I was getting at there :) But let me say a bit more about probabilistic reasoning:

Reductio -- or at least something pretty analogous -- is a valid form of argument by probabilistic reasoning. This is because your credence in a contradiction (P & ~P) ought to be 0, whatever credences you assign to P and ~P. If you know that A entails P & ~P, you won't believe A. Of course, the wrinkle is that A need not entail P & ~P to degree one -- maybe the likelihood of P & ~P given A is .8 -- but if we can prove this, then we have proved that you ought to believe ~A to degree .8 (absent other considerations, of course).

All the paradox of the preface shows is that a suitably long conjunction of beliefs that are each held to some degree of just below 1 will collectively dip below .5. And that seems pretty intuitive...
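(The arithmetic behind that point is easy to check. As a rough sketch -- assuming, purely for illustration, that the individual claims are treated as probabilistically independent, which real historical claims needn't be -- a product of credences each just below 1 drops under .5 surprisingly fast:)

```python
# Credence in each individual claim: high, but short of certainty.
credence = 0.99

# Under the simplifying (and admittedly unrealistic) assumption that the
# claims are independent, the credence in the conjunction is just the
# product of the individual credences.
def conjunction_credence(per_claim, n_claims):
    return per_claim ** n_claims

# Around 70 claims at credence .99 already dip below .5...
assert conjunction_credence(credence, 70) < 0.5
# ...and a book of a few hundred claims leaves almost no credence at all.
assert conjunction_credence(credence, 300) < 0.05
```

So even a very confident author can coherently doubt the conjunction of everything in her book.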

Barak said...

(As a quick addendum to my previous post: at least in the system I'm familiar with, Pr(A&B) is not equal to Pr(A) * Pr(B). Rather, it is equal to Pr(A) * Pr(B|A). And since the probability of ~P given P is 0, Pr(P&~P) is 0.

Maybe this wasn't necessary, but I just thought I'd mention it since I wasn't sure what you meant by the usefulness of reductio arguments)
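(The chain-rule point can be made concrete with a toy calculation -- illustrative numbers only, nothing here depends on them:)

```python
# Chain rule for conjunction: Pr(A & B) = Pr(A) * Pr(B | A).
def pr_conj(pr_a, pr_b_given_a):
    return pr_a * pr_b_given_a

# When A and B are independent, Pr(B | A) = Pr(B), so the naive product
# rule Pr(A) * Pr(B) falls out as a special case.
assert pr_conj(0.5, 0.4) == 0.5 * 0.4

# For any P, classical probability gives Pr(~P | P) = 0, so the credence
# in the contradiction P & ~P is 0 however confident we are in P itself.
assert pr_conj(0.7, 0.0) == 0.0
```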

Timmo said...

Barak,

All the paradox of the preface shows is that a suitably long conjunction of beliefs that are each held to some degree of just below 1 will collectively dip below .5.

Perhaps you won't mind if I stick to my guns: I think that the paradox of the preface shows quite a bit more than this. :-P

Here is the way I see the paradox of the preface:

(1) The historian claims C1, C2, ..., Cn.

(2) The historian believes C1, C2, ..., Cn are all of the claims made in her book.

(3) The historian is rationally committed to the logical consequences of her beliefs.

(4) By (1), (2), and (3), the historian is committed to the claim: "For every proposition p expressed in my book, p".

(5) The historian believes: "For every book b, there is a proposition p expressed in b, such that ~p."

(6) By (3) and (5), the historian is committed to the claim: "There is a proposition p expressed in my book, such that ~p".

(7) By (3), (4), and (6), the historian is committed to the claim: "For every proposition p expressed in my book, p, and there is a proposition p expressed in my book, such that ~p"

What is doing all of the work here is (3). If you want to reject the idea that the historian rationally believes the contradiction enumerated in (7), then you have to reject (3), the claim that the historian's rational commitments are closed under logical consequence. I am wary about giving that up.

If you argue that my view V has some untenable consequence V*, then, if my commitments are closed under logical consequence, I am rationally obligated to give up V. But, without closure under logical consequence, there will be possible circumstances where your argument has no rational force at all! Switching to a probabilistic picture of rational commitment means that we have to weaken the rational force of logic -- is that something we really want to do?

I hope I've got this right. :-P

Yes, you're right about P(A & B). P(A & B) = P(A) * P(B) iff A and B are independent (statistically uncorrelated) events. [In that case, P(B|A) = P(B).] I was assuming that simple events, described by atomic formulae, are uncorrelated. In "classical" probability theory, A and ~A are as anti-correlated as one can get! Of course, for a dialetheist, A and ~A may, in general, be independent events, making it possible that P(~A|A) > 0.

Barak said...

We shouldn't accept the conclusion of the preface paradox, insofar as we shouldn't accept contradictions. I flirt with accepting contradictions more than most in the department, but I think that we should still leave that off the table.

Premises 1 and 2 are unassailable, and I think we can even get a form of (6) from the setup of the problem. That really only leaves 3.

I don't think anyone believes in 3, completely unrestricted. Then she'd be committed to every truth of math and logic, but she's hardly irrational for failing to be perfect. [Well, maybe you just want to keep the bar of rationality as some unattainable ideal...] But if our rational beliefs aren't closed under *some kind of* logical consequence, then it's unclear what we're talking about when we talk about rationality. To evaluate (3), we'd need to determine what kind of logical consequence you have in mind.

First order logic is one way to model beliefs, and often a fruitful one. But the paradox of the preface is the kind of case that demands a more fine-grained treatment. Subjective probability is another way to model beliefs, and it seems to have a natural application in these kinds of cases. Rolling its theorems into a kind of many-valued logic is a pretty intuitive way of capturing our reasoning as it relates to claims we're not certain about. And if you want our beliefs to be closed under that, then I think your argument is stopped at line 4. If you mean for logical consequence to be classical logical consequence, then I'd stop it at 3, and offer some other interesting relation of closure for our beliefs.

Timmo said...

Barak,

We shouldn't accept the conclusion of the preface paradox, insofar as we shouldn't accept contradictions.

That may be true, but I do not think it can be assumed without begging the question. We shouldn't accept the conclusion of the preface paradox only if we have good philosophical reasons to think that something like (3) is wrong (which may be the case). As it happens, I am quite happy to accept the conclusion of the preface paradox. :-)

To be sure, there are definitely more careful ways to express (3). I would go for:

(3*) For every agent S, if S accepts the set of propositions X and logic L, then, when presented with a L-valid argument from X to some proposition p, S ought to commit herself to p.

Then, given some minimal assumptions about the logic that the historian accepts, the argument should still go through.

What do you think is wrong with the inference from (1), (2), and (3) to (4) if she accepts a logic like, say, LP? Certainly, if she accepts classical logic, then (4) will be right.

Barak said...

Timmo,

If by (3), we mean that our beliefs are closed under classical implication, then (4) follows and we'll have our contradiction. I just don't think this is the right thing to say -- it's far too strong.

If, by (3), you mean closed under LP or FDE or relevance logic or something, then we'll likewise get (4). All of these systems are closed under conjunction (at least to my knowledge -- there are some strange variants of relevance logic that I don't know much about). (4) will follow, but you can accept the contradictory conclusion without accepting every other proposition. But still, I don't think this is the intuitively right thing to say, even though I think LP and the like do have their uses. Again, intuitively, I simply *don't* accept the conjunction of my beliefs, full stop; I certainly don't accept all the implications of my beliefs, since that would mean that I'm omniscient about math and logic. I accept each belief by itself (perhaps to some degree), but I don't accept their conjunction. This means that, whatever system of implication is at stake in (3), it shouldn't be one that is closed under conjunction. And, again, since a many-valued logic based on probability theory accomplishes this, I think that it's a useful way to think of this closure.

Timmo said...

Barak,

...intuitively, I simply *don't* accept the conjunction of my beliefs, full stop...

Really? I'm afraid I don't share that intuition. It seems to me that if I believe A and I believe B, then I ought to believe (A & B). An agent with minimal rationality who believes A and B should not be agnostic about or deny (A & B) -- she should accept it.

I certainly don't accept all the implications of my beliefs, since that would mean that I'm omniscient about math and logic.

Right: an agent's beliefs are not closed under logical consequence relations. But, what about (3) and (3*)? I only mean to suggest that our rational commitments are closed under logical consequence relations (that we accept as correct). There are no omniscience problems here: I can be unaware that my view V commits me to holding some other view V* as well.

Barak said...

I should have been a bit more specific: I don't accept the conjunction of all my beliefs. I don't even accept arbitrary conjunctions of two "atomic" beliefs, since I'm pretty sure that I have lurking contradictions (and not of the kind that I want to countenance). I'll grant that an agent can be rebuked for not accepting an 'easy' conjunction of beliefs, but that agent shouldn't be rebuked for failing to believe *all* conjunctions of those beliefs. The paradox of the preface is just one reason why.

You're right about the closure of beliefs and rational commitments being different, and that bit was pretty sloppy of me to run them together. But even still, I have to rest on my earlier assertion that the bar of rationality is not so high that we can be rebuked for failing to be omniscient about math and logic. I don't have anything super-convincing to say here -- maybe you want rationality to be an unattainable ideal, in which case I think we just expect different things out of any study of rationality. And figuring out just what we *should* be committed to -- well, obviously, that's the whole project.