LINGUISTIX&LOGIK, Tony Marmo's blog
Sunday, 11 July 2004

Purver and Ginzburg shed some light on the Semantics of Noun Phrases, from the perspective of the HPSG school, which I both respect and dissent from:

Clarifying Noun Phrase Semantics

Matthew Purver and Jonathan Ginzburg

Reprise questions are a common dialogue device allowing a conversational participant to request clarification of the meaning intended by a speaker when uttering a word or phrase. As such they can act as semantic probes, providing us with information about what meaning can be associated with word and phrase types and thus helping to sharpen the principle of compositionality. This paper discusses the evidence provided by reprise questions concerning the meaning of nouns, noun phrases and determiners. Our central claim is that reprise questions strongly suggest that quantified noun phrases denote (situation-dependent) individuals-or sets of individuals-rather than sets of sets, or properties of properties. We outline a resulting analysis within the HPSG framework, and discuss its extension to such phenomena as quantifier scope, anaphora and monotone decreasing quantifiers.

Download link

Posted by Tony Marmo at 07:19 BST
Updated: Monday, 9 August 2004 08:12 BST
Saturday, 10 July 2004


Yoad Winter on Choice Functions

Winter's page hosts many papers, and his interests include computational linguistics; it is worth checking out. One of his recent works, Choice Functions and the Semantics of Indefinites, is a sort of advanced introduction to the issue.

Methinks that choice functions can be used for almost anything in semantics. Hamblin approaches, according to what more experienced folks have told me, began with questions. Since then, Kratzer and many others have applied them to the semantics of scope. But, for me, the obvious application of the Hamblin approach would first be binding/linking theory. It seems that there have already been some attempts to do so. (Anyone correct me if I'm wrong, please.)

To my dismay, however, people still insist on separating binding from control. Though I love syntax, I dislike a syntactic-configuration solution for binding and control. A choice function solution is more agreeable to my intuitions.
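For readers new to the device: a choice function is simply a function f that returns a member of every non-empty set it is applied to, i.e. f(P) ∈ P, and Winter-style analyses read indefinites as such a function applied to a predicate's extension under existential closure. A minimal sketch (the sets and names below are invented for illustration):

```python
# A choice function f returns a member of every non-empty set it is
# applied to: f(P) in P. Indefinites like "a woman" can then be read as
# f(WOMAN) under existential closure of f. All names here are invented.

def is_choice_function(f, families):
    """Check that f picks a member of every non-empty set in `families`."""
    return all(f(p) in p for p in families if p)

choice = min  # any fixed selection rule serves as a concrete choice function

woman = {"Mary", "Sue"}
book = {"Ulysses", "Emma"}
assert is_choice_function(choice, [woman, book])

# "A woman arrived" ~ arrived(f(WOMAN)) for some choice function f:
arrived = {"Mary"}
some_reading = any(f(woman) in arrived for f in (min, max))
print(some_reading)  # True: on at least one choice of f, f(WOMAN) arrived
```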


One friend from This is not the name of the blog has a crucial question:


by Chris Tillman

I'm probably overlooking something obvious, but I was wondering if someone could help me out with this.

Uses of 'without' sometimes help express conjunctions with a negated conjunct, as in 'Al is going to the store without Mary going'. This should be symbolized as A & ~ M. Sometimes it is used to express a conditional, as in 'Without going to the store, John will have nothing to eat for dinner.' Here is the sentence that is troubling me:

(S) Bill drinks without Harry drinking.

Should (S) be read as a conjunction, a conditional or neither? And if neither, then what?
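Chris's two candidate readings can be compared mechanically. Below, the conjunctive reading of (S) is B & ~H and, by analogy with the 'John' example, one candidate conditional reading is ~H -> B; this formalization of the conditional reading is my assumption for illustration, not Chris's:

```python
from itertools import product

def conjunctive(b, h):
    # B & ~H: Bill drinks and Harry does not drink
    return b and not h

def conditional(b, h):
    # ~H -> B: if Harry does not drink, then Bill drinks (equivalent to H or B)
    return h or b

# Compare the two readings over all truth-value assignments to (B, H).
rows = list(product([True, False], repeat=2))
diff = [(b, h) for b, h in rows if conjunctive(b, h) != conditional(b, h)]
print(diff)  # [(True, True), (False, True)]: the readings differ exactly when Harry drinks
```

So the two readings come apart precisely in the rows where Harry drinks, which is one way to probe which reading (S) actually has.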

See it

Posted by Tony Marmo at 14:34 BST
Updated: Monday, 9 August 2004 08:13 BST
Friday, 9 July 2004

On contradictions

By Walter Carnielli
(Source: The Paraconsistency Webgroup)
Dear Friends,

Please see below some comments on Dick's views expressed in
"On contradictions".
I would like to encourage you all to participate in the
Paraconsistency discussion list of the WCP'2000, by subscribing
or just sending copies of our discussions to the list:


I agree with (what I think was) the conclusion of Fred's talk that we don't have any good arguments for the law of non-contradiction. It's too basic--either we accept it or we don't. Any argument for it we've seen or can imagine uses that law either explicitly or implicitly.

OK, but is not the situation the same for many other laws too? For basic laws concerning natural numbers? After all, when you start enumerating any kind of arguments about numbers, you are already using numbers. Or the grammarians using established grammar to explain grammatical rules.

However, I now think that we do accept some contradictions as true in our daily lives.


Posted by Tony Marmo at 14:05 BST
Updated: Monday, 9 August 2004 08:14 BST



16th European Summer School in
Logic, Language and Information

Universite Henri Poincare
Nancy, France
9-20 August, 2004

Semantic approaches to binding theory

Binding Theory, which is concerned with sentence-internal constraints on anaphora, was originally (Chomsky 1981) conceived in syntactic terms as conditions on the distribution of indices:

Condition A: Anaphors are locally bound.
*John_i thinks that himself_i is clever.

Condition B: Pronominals are locally free.
*He_i likes him_i.

Condition C: R-expressions are free.
*He_i thinks that John_i is clever.

But other researchers have attempted to derive these constraints from lexical semantics or the interpretative procedure rather than the syntax. Some (e.g. Reinhart 1983, Heim 1993, Fox 2000, Büring 2002) add a semantic component to a syntactic core, but others are more radically semantic (e.g. works by Jacobson, Keenan, and more recently Barker & Shan and Butler, among others). The workshop will provide a forum to compare and assess these diverse proposals as well as to present the results of recent linguistic work to non-linguists.

Note: ESSLLI is the annual summer school of FoLLI, the European Association for Logic, Language and Information.

Posted by Tony Marmo at 06:25 BST
Updated: Monday, 9 August 2004 08:16 BST
Tuesday, 6 July 2004


Jackendoff talk: semantics must be generative

by Nick

On Friday (4th) I heard Ray Jackendoff give the keynote lecture at a conference organised by the UCL Centre for Human Communication which my department (UCL Phonetics and Linguistics) is part of (in some way I don't understand).

What he said may not be news to anyone else, but I hadn't heard it, not having read any of his recent stuff, except the bits about music.

Broadly, he thinks that mainstream (i.e. Chomskyan) linguistics is on the wrong track in supposing that syntax is the only generative component needed in the grammar, so that phonology and semantics need only interpret the output from syntax.


Posted by Tony Marmo at 02:26 BST
Updated: Monday, 9 August 2004 08:17 BST
Wednesday, 30 June 2004

Topic: Cognition & Epistemology

Can Justification Just Fall Short of Knowledge?

By Matt Weiner

From the Certain Doubts blog

We all know that justified true belief can fail to be knowledge when funny stuff happens (or at least most of us think this). What I want to ask is whether a JTB can fail to be knowledge for a more mundane reason: because the belief is justified, but it isn't justified enough to count as knowledge.

Another way, perhaps, to put this is to question a line from section 6 of Ralph's paper "The Aim of Belief": "[T]here is no way for a rational thinker to pursue the truth except in a way that, if it succeeds, will result in knowledge." Is this so?

Here's a case I'd like to survey you on. Charlie Brown, a baseball general manager, is trying to decide who to pick in the amateur draft. He looks at the prospects and comes to believe, based on his high school performance, that Joe Shlabotnik will be a good major league player someday. Indeed, Joe does turn out to be a good major leaguer. So Charlie had a true belief; it also seems as though it may have been justified, because it was based on performance. Yet I would think that it falls short of knowledge, because predicting someone's eventual major league performance on the basis of his high school performance is too uncertain.

(Apologies to non-baseball fans; the argument probably transfers to any sport, though baseball performance is notoriously difficult to predict.)

Indeed, I'd argue that Charlie is much better off knowing that his pursuit of the truth about Joe's future performance will not result in knowledge. I'm convinced by Tim Williamson's argument that one of the advantages of knowledge over JTB is that it is less likely to be abandoned in the face of counterevidence. Yet Charlie should be ready to abandon his belief in Joe's future in the face of counterevidence. Given the chancy nature of baseball prospects, a general manager has to be prepared to abandon someone who looked promising but who isn't panning out, or he may damage his team by keeping on an underperforming player. Players who you know to be good will be kept in the lineup after a poor start (I remember Barry Bonds batting under .200 one May when he was in Pittsburgh and going on to win the MVP; er, sorry again to non-baseball fans); players who you think to be good won't.

Does this case convince you? Do you think Charlie is only justified in believing that Joe will probably be good? Do you think it casts any sort of light on the kind of justification that's necessary for knowledge?


Posted by Tony Marmo at 17:25 BST
Updated: Monday, 9 August 2004 08:19 BST
Saturday, 26 June 2004
I have found this interesting article at the Musings from the Lehigh Valley log:

Knowledge and Stability

by Joe Shieber
June 08, 2004

Marc Moffett has been considering some interesting questions concerning knowledge and stable belief and justification at Close Range. In response to some probing questions, he submitted a follow-up post, including the following example:

The other day I was going out of town and was supposed to call some friends when I got into the airport. My wife wrote their number down and I glanced over it. As I was leaving, she reminded me to take the number. I said, 'I know it' and proceeded to recite it from memory. Knowing that the number was still fresh in my mind her response was, 'Do you really know it?'

Marc suggests that the example shows that knowledge sometimes requires not simply reliably-produced true belief (let's grant that the short-term memorial faculty allowing Marc to rattle off the number correctly is reliable), but stable belief, or stably justified belief. Marc claims that we have an intuitive grasp of stability and instability to which he can appeal in making this suggestion. However, and without meaning to be difficult, I still don't know what stability is; nevertheless, let's leave this problem aside.

What I want to do here is suggest an alternate diagnosis for Marc's example. To do so, let me first present one of my own:

The other day I was sitting in a restaurant with my wife, planning our summer vacation while perusing the menu. My wife wanted to go to Germany to visit her family, while I wanted to spend most of the trip visiting Denmark and Norway. Finally, I acquiesced to her wishes just as the waitress was coming to take our order. Right before the waitress interrupted our discussion, I told my wife, 'Okay, we'll go to Germany this summer.' Then the waitress took our orders -- first my wife's, then mine. I ordered the duck breast and broccoli rabe. After the waitress left, my wife simply said, 'Are you sure?' Wishing to tease her, I answered, 'Yes, I'm in the mood for some duck.' She smiled and then repeated, 'Are you really sure?' At which point I reassured her that I'm happy to go to Germany.

This (fictional!) conversation seems to me perfectly possible. The question, 'Are you really sure?' doesn't indicate that my wife thought me unsure about the duck and broccoli rabe; rather, it indicates that I should return to the question at issue -- that of our summer vacation plans.

Similarly, in the case that Marc presents, his wife's question, 'Do you really know it?' doesn't deny that he now knows the number; rather, it indicates that his rattling off the number is an attempt to change the subject. The real question at issue is whether his knowledge involves the sort of reliable process that would underwrite his knowing the number once he reaches his destination. So Marc is correct when he notes that his wife's question was perfectly proper; she needn't have asked, 'Will you know it when you arrive?' However, he is incorrect, I would offer, in suggesting that the interpretation of his wife's question involves the introduction of the notion -- as yet unexplained -- of stability. Rather, in asking the question his wife was asking, 'Do you have the sort of knowledge (i.e., knowledge produced by a faculty reliable over the course of your trip) at issue in our discussion thus far?'


The case you offer is quite complex (more complex, I think, than the original). As I see it, there are two ways of construing it, neither of which threatens my position.

On the first way of construing it, your assertion that you are sure that you want to order the duck is to be understood literally (though playfully). In this case, my initial reaction is that the follow-up question of whether or not you are really sure is not appropriate. So I guess I don't believe that "really" has the revert-to-conversational-thread use that you suggest.

Why then does your example read reasonably well? Because on the second reading, your assertion that you are sure that you want the duck is used to conversationally implicate that you are happy with the Germany decision and that you have already moved on. In this case, the use of "really" is apt and functions as I suggested in the original example (Are you sure or merely feigning?).

Posted by: marc | June 9, 2004 07:36 AM

What I say in the previous comment doesn't do justice to your case. Even if you grant me the discourse function of "really", the general point is just that my wife is asking whether or not I have the right sort of knowledge.

The picture then is that knowledge simpliciter defines a genus of knowledge relations which are further individuated by the type of faculty which produces/sustains the associated belief. So in this case, though it is true that I have knowledge-1 (i.e., the sort of knowledge produced and sustained by perception-cum-short term memory), what is required is that I have knowledge-2 (i.e., the sort of knowledge produced and sustained by perception-cum-medium term memory). So my wife is asking whether I really know-2 the number or if I am just faking it (by relying on my knowledge-1).

Now, unless there is a principled way of restricting the determination relations, the cost of this view is a very great deal of ambiguity in the word "knows". I'm not sure why the resulting view is preferable. I suspect, however, that what is bugging you is the contextualist component (since the stability view is consistent with reliabilism). The idea is that, on the stability view, whether or not I know that the number is such-and-such depends on the context. On your alternative, however, there is an upward necessitation from knowing-n to knowing simpliciter. As a result, you will get to say (context-independently) that I know-simpliciter the number.

Is that the crux of the disagreement?

Posted by: marc | June 10, 2004 02:45 PM

Your second post precisely captures the crux of our disagreement, Marc. Thanks for revisiting the question, and for taking the time to spell out the disagreement so clearly. On a related note, thanks for posting your paper on these issues at your website. As soon as I've had a chance to go through it carefully (in the next week or so), I'm sure I'll be posting some further thoughts on the very interesting issues you address there.

Posted by: j.s. | June 11, 2004 11:34 AM

Posted by Tony Marmo at 01:41 BST
Friday, 25 June 2004

The debate on Understanding and Knowledge goes on.

Post your comments if you will.

Posted by Tony Marmo at 17:24 BST

Not Every Truth Can Be Known:

at least, not all at once

According to the knowability thesis, every truth is knowable. Fitch's paradox refutes the knowability thesis by showing that if we are not omniscient, then not only are some truths not known, but there are some truths that are not knowable. In this paper, I propose a weakening of the knowability thesis (which I call the "conjunctive knowability thesis") to the effect that for every truth p there is a collection of truths such that
(i) each of them is knowable and
(ii) their conjunction is equivalent to p.

I show that the conjunctive knowability thesis avoids triviality arguments against it, and that it fares very differently depending on one other issue connecting knowledge and possibility. If some things are knowable but false, then the conjunctive knowability thesis is trivially true. On the other hand, if knowability entails truth, the conjunctive knowability thesis is coherent, but only if the logic of possibility is quite weak.

Greg Restall
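A sketch of how the two theses in the abstract might be set down formally. The notation is my reconstruction from the abstract, not quoted from the paper:

```latex
% Knowability thesis (KP): every truth is knowable.
\forall p\,(p \rightarrow \Diamond K p)

% Conjunctive knowability thesis (CKT): every truth p is equivalent to a
% conjunction of truths, each of which is knowable on its own.
\forall p\,\Bigl(p \rightarrow \exists q_1 \dots \exists q_n\,
  \bigl(\textstyle\bigwedge_{i=1}^{n} \Diamond K q_i \;\wedge\;
  \bigl((q_1 \wedge \dots \wedge q_n) \leftrightarrow p\bigr)\bigr)\Bigr)
```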


I am glad to be among the first persons to see it online. Now, let me make some comments about the manner the paradox is presented at the online encyclopedia:

Fitch's (1963) paradox challenges `common sense', as any paradox does. Still, there are ways to begin explicating or describing a paradox adequately. Surely, it is better to start from ideas that everyone may intuitively agree to.

Everyone's intuition is that what is knowable is not necessarily known. In the same manner, not everything that is visible has been seen.

The way Fitch's paradox is presented in the Stanford Encyclopedia is misleading for larger audiences. It says that the principle of knowability claims all truths are knowable ((KP) p → ◇Kp). Then it concludes: "If one accepts the knowability principle, she must deny that there are unknown truths. (...) In sum, if all truths are knowable, then all truths are known."
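For readers who have not seen the argument behind that conclusion, the standard derivation runs roughly as follows. This is a reconstruction using the usual assumptions that K is factive and distributes over conjunction, not a quotation from the Encyclopedia:

```latex
% 1. Suppose some truth is unknown:   p \wedge \neg Kp
% 2. Apply (KP) to this truth:        \Diamond K(p \wedge \neg Kp)
% 3. K distributes over conjunction:  K(p \wedge \neg Kp) \vdash Kp \wedge K\neg Kp
% 4. K is factive:                    K\neg Kp \vdash \neg Kp
% 5. From 3 and 4:                    K(p \wedge \neg Kp) \vdash Kp \wedge \neg Kp,
%    a contradiction; hence           \neg\Diamond K(p \wedge \neg Kp)
% 6. This contradicts 2. So, given (KP), there is no unknown truth:
%    if all truths are knowable, all truths are known.
```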

Well, if I accept (1) below:

(1) a. All Humans are mortal.
b. Greg Restall is human.
c. Thus, Greg Restall is mortal.

Should I deny that there are humans who are not dead? Should I conclude that Greg Restall is dead just because he is mortal? Should (1a) and (1c) be understood as (2) and (3)?

(2) All humans are dead.
(3) Greg Restall is dead.

What may happen does not necessarily happen. One must distinguish between what is `x`-able and what is already `x`-ed.

I prefer the way you start presenting the paradox in your paper. The text at the Online Encyclopedia gives the wrong idea from the start, although it re-presents the paradox in the proper manner further on.

Tony Marmo at April 24, 2004 06:07 PM

Bravo! I love this paper. I'll still be thinking about it for a long time I'm sure, but initially, two comments. (i) In the middle block of paragraph 23 there are some p's which should be q's. Were you not taught as a lad to mind your p's and q's? Sorry, couldn't resist 8-) (ii) There's something weird about formalising `P is knowable' as \Diamond KP where \Diamond is an ordinary alethic modal operator. It's knowable that there is milk in the fridge (M)---go and look. It's not knowable that the colour of this banana is killer yellow---look at it and die. I would have thought it is not knowable that there is no milk in the fridge---because there is milk in the fridge! But \Diamond K\neg M is true---there's an (alethically) accessible world in which you drink all the milk, and in which you (then) know there is no milk in the fridge. So to my ears, if P is not true, then not only is P not known, but P cannot be known, i.e. P is not knowable. In fact "You cannot know P unless P is true" is just the truth condition on knowledge (not "You do not know P unless P is true"). So if \Diamond KP is to mean that P is knowable, then the following must be true (at every world in every model): \Diamond KP --> P. This will then make (3) on p.7 not a truism, but a falsism! (For (\exists q)(\Diamond Kq \wedge \Diamond K\neg q) to be true, q \wedge \neg q (i.e. "q and not q" for the tex-illiterate) would have to be true.) But then your version of the knowability thesis does not follow from a truism (at least not this one, because it is a falsism) and so is not (yet shown to be) almost trivially true.

Nick Smith
at April 24, 2004 09:57 PM

In Mathematics one works with constants, variables and incognitae, not only propositions. In Formal Semantics we know that we cannot account for linguistic phenomena without the notions of `constant' and `variable'. A Semantics without variables could yield many paradoxes and/or miss important data respecting human languages.

Now, I ask both of you this: hasn't the notion of incognita a place within Logic? You have [[x]] but you do not know the denotation of [[x]]. You can only know it after, let us say, you solve an equation or inequation, or a problem. I guess that, for instance, the correct Semantics of questions like:

(1) Who gave Cinderella a poisoned apple?

would imply that x in (2) is better understood as an incognita:

(2) Who is x, such that x gave Cinderella a poisoned apple?

In Cinderella's traditional story the answer is [[x]]=none, for none gave her a poisoned apple. You find the denotation of the incognita in the same manner as you find the solution of an equation.

Now, my question to both of you:

What if one tried to see the knowability paradox as an argument in favour of the notion of incognita within a Logic system?

Tony Marmo at April 25, 2004 05:57 PM

A question: Can "Every truth is conjunctively knowable" be read as a form of logical pluralism? That is, if there is "conjunctively knowable", might there also be "foo knowable"?

If there is more than one way of knowing, then we might enquire whether Fitch meant K as all ways of knowing, or just one way. Does the paradox remain if K meant all ways?

RdR at April 30, 2004 06:30 AM

I am not a Logician though, I think I can answer your question:

If there is more than one sort of mental process that you call `to know', then you actually have more than one predicate.

Suppose you have two ways of `knowing', represented by K and G. Then if there is a proposition p and we do not know it, you can write:

~Kp

To write that we know we do not know p, it could be:

G(~Kp)
In such case, I guess it is more difficult to build a similar paradox, but one could try it. I hope the others correct me if I said anything wrong.

Tony Marmo at April 30, 2004 07:53 AM

Yes, if we allow multiple ways of knowing, we might know in one way that we (don't) know in another way [your G(notK(p))].

But my question was (twofold) whether Fitch distinguished such multiple ways of knowing (which in Greg's paper would mean multiple ways of deriving logical consequences), and whether the paradox would hold if Fitch's interpretation of knowing was "for all ways of knowing".

That is, should we read the knowability thesis as (p)(K)(p -> <>Kp)? And can we derive from that the paradox (p)(K)(<>K(p & -Kp)), given that that derivation also uses a particular K?

RdR at April 30, 2004 09:58 AM

As far as I understand Fitch's paradox, it assumes that there is only one kind of `known' predicate (K).

But one has to take Philosophy into account and, yeah, to know that something exists may be considered one way of knowing. To know what such thing is, to be able to define or describe it is another way or part of knowing it.

As I said before, the issue with assuming that there is a p such that we do not know p is that we actually know or assume that it exists, but we cannot say what its denotation is, or describe it, etc.

In many sciences it is often the case that in order to explain certain facts one has to relate them to some unknown or yet-to-be-discovered entity, whose existence is already accepted by scientists, but which seems to lack a denotatum or cannot be defined or characterised.

Tony Marmo at April 30, 2004 11:01 AM

I think multiple ways of knowing might be a way around the paradox. Kp & K-Gp is not necessarily a paradox. For example, let's say K is "knowing theoretically" and G is "knowing empirically". I may theoretically know p and theoretically know that I don't empirically know p.

However, are multiple ways of knowing necessary?

Let's say that there is only one way of knowing. According to <>K(p & -Kp) such knowing is allowed to be higher order. But then other paradoxes seem possible. If numerals are names of sentences, K operates on sentences or names of sentences, and 1 is the name of (-K1), then 1 seems paradoxical.

So, maybe we don't allow K to be higher-order. But that would imply that the first K in <>K(p & -Kp) is different (in extension at least) from the second K. So, multiple ways of knowing seem necessary.

RdR at April 30, 2004 02:43 PM

By the way, in Romance Languages we have two verbs for the English `to know':

[1] Portuguese `saber', French `savoir'


[2] Pt `conhecer', Fr `connaitre'.

The first has to do with the word `wisdom' in English and also with the kind of knowledge that enables someone to do something, i.e., the idea of `know-how'.

The second has to do with the idea of `knowledge' in English and also refers to a kind of experience that is part of a collective, i.e., to get a share of a common knowledge or to share knowledge.

Some people feel that the first implies `deeper knowing'. On the other hand, you may use the second to express the idea that you met someone or found something.

Tony Marmo at April 30, 2004 03:38 PM


Some French examples to illustrate the case:

(1) Je sais qu'il y a des choses inconnues. I know that there are unknown things.

(2) Je sais qu'il y a des choses que je ne connais pas. I know there are things I do not know. (The first verb is `savoir', the second is `connaitre').

Tony Marmo at April 30, 2004 04:08 PM

I'm loving the comments! What fun to have such interested and interesting readers.

A few responses. Clearly Nick is right, there's a reading of knowable according to which what is knowable is true. I'm tempted to say that this reading is a confused scope issue, resulting from the following reasoning: necessarily, what is known is true; therefore, if you can know something, it is true. But that doesn't follow at all. It just follows that if you can know something it can be true.

Anyway, I'll add a section to the paper indicating what one can say about conjunctive knowability in the case where we say that only truths are knowable.

As to incognita , I'm not sure about what to think about this. I s'pose I'm not sure what is the best way to talk about objects of intentional attitudes in the case of complete ignorance.

Different kinds of knowledge? I think that Fitch's paradox is restatable if we talk about being known-in-any-which-way. Let's say that something is known-in-any-which-way if it's known in way-1 or way-2 or way-3, etc. Or so I suspect, anyway.

Greg Restall at April 30, 2004 04:49 PM

I left a message to Kai von Fintel and friends in his weblog, sort of inviting them all to come and speak their minds. I hope he read all of this and may say something.

In one of Kai von Fintel's courses in Intensional Semantics, he explains that `to know' in English has to do with a reflexive world accessibility relation:

(0) wRw

It is the only `opaque' verb in English that has this character. Thus, this should explain why a sentence like (1) is odd, but (2) is ok:

(1) #Bob knows that John Howard is the Prime Minister of France.

(2) Bob assumes that John Howard is the Prime Minister of France.

You cannot fix (1) as (1'), although (2') is an option:

(1') #*Bob WRONGLY knows that JH is the PM of France.

(2') Bob WRONGLY assumes that JH is the PM of France.
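The reflexivity point can be made concrete with a toy Kripke model. This is only an illustrative sketch of my own (the worlds, valuation, and helper names are invented), showing that when R is reflexive, K is factive, which is why (1) cannot be repaired by WRONGLY the way (2') repairs (2):

```python
# A toy Kripke model with a reflexive accessibility relation R.
# With wRw for every world w, K is factive (Kp entails p).
# All names here are invented for illustration.

worlds = {"w1", "w2"}
R = {("w1", "w1"), ("w2", "w2"), ("w1", "w2")}  # reflexive: wRw for all w
val = {"w1": {"p"}, "w2": {"p"}}                # atomic facts at each world

def K(world, prop):
    """K(prop) holds at `world` iff prop holds at every R-accessible world."""
    return all(prop in val[v] for (u, v) in R if u == world)

# Factivity check: at every world where K('p') holds, 'p' itself holds.
assert all((not K(w, "p")) or ("p" in val[w]) for w in worlds)
print(K("w1", "p"))  # True: p holds at w1 and w2, both accessible from w1
```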

But, I wished Kai von Fintel could give us more details of such analysis in the way he uses it.

Tony Marmo at April 30, 2004 05:10 PM

About knowing and linguistics: I think Germanic languages make similar distinctions. In Dutch there is "weten" and "kennen". I can say, "Ik ken Jan" (I know John) and "Ik weet logica" (I know logic), but not "Ik weet Jan". Similarly, "Ik ken logica" would sound provincial. To me "kennen" is more like "knowing by acquaintance" and "weten" is more like "knowing by learning" (some skill or fact).

About knowing-any-which-way: Don't we still run into reflexive paradoxes? It would mean the second K in <>K(p & -Kp) would include the same knowing as the first K. That is, reflexive knowing about the sentence that asserts the knowledge. So, paradoxes like "I know I don't know this sentence" would be possible.

RdR at May 2, 2004 11:35 AM

Richard, on knowing-any-which-way, I think I misunderstood the original motivation for the distinction. I'm happy with having knowledge tout court as a disjunction of knowing1, knowing2, knowing3, etc., because I think that the knowability paradox isn't a real problem. I reckon that we non-omniscient knowers should just bite the bullet and say that some truths are not knowable in any way at all.

If you want to distinguish between different kinds of knowing as a way of saving yourself from Fitch's paradox, then you'll need to resist that conclusion, of course.

Greg Restall at May 3, 2004 04:14 PM

Oh, and some news for everyone. I've managed to get a bit further with the paper on the basis of Nick's comment that on some views of knowability, to be knowable is to be true. (So <>Kp entails p.)

It turns out that if to be knowable is to be true, then (modulo some sane choices about how propositional quantification works) conjunctive knowability is inconsistent with the S4 axiom for possibility: that <><>p entails <>p. That came out of the blue for me: I didn't see that coming at all, so thanks, Nick, for pointing me in the direction of thinking about this.

If we're happy for possibility to be weak (below S4) and knowledge to be weak (also below S4), then it turns out that there are models in which at every point every proposition is conjunctively knowable, and at every point not all truths are known, so we don't have a collapse into omniscience.

I'll write this up in gory detail soon.

Greg Restall at May 3, 2004 04:23 PM

Thanks Greg, that's agreeable: I would refute Fitch's paradox on the grounds that K can't be reflexively higher-order (on pain of a version of the liar's paradox). However, each kind of knowing might be known by a different kind of knowing (higher level), which allows us to talk about knowability but escapes Fitch's paradox.

RdR at May 3, 2004 05:04 PM

The paper's been updated. The current file is version 0.85, and it contains the gory detail I mentioned above. Check paragraphs 3.11 and 5.1 to 5.8 for the really new bits.

The proof of the failure of transitivity is not as nice as I'd like it to be, but it will do for now. If I can come up with anything nicer, I'll update it again.

It needs some nice pictures, too. But that will come later. I need some sleep.

Greg Restall at May 4, 2004 12:40 AM

Yeah, version 0.85 has more pages and is clearer.

Do you intend to compare your approach with Beall (2000)?

Tony Marmo at May 4, 2004 04:07 AM

Tony: I don't intend to compare my approach with JC's (2000) account, except for my throwaway lines in paragraph 1.4. Do you think I should do more than that?

My reasons for not are that I'm presupposing that the reasoning to Fitch's conclusion is valid; I'm then reflecting on what that should tell us about knowability. It has struck me that in the case of p & ~Kp, the verificationist should say that both p and ~Kp are knowable, and that might be enough to be getting on with, even if the conjunction isn't knowable. The paper is an exploration of what one can do with that move.

Therefore, the kinds of moves that try to contract the propositional logic of the situation -- to make it possible that (p & ~Kp) is known -- don't appeal to me.

Of course, JC is an extremely bright guy, and one with whom I enjoy collaborating. This isn't to say that we agree on everything.

Greg Restall at May 5, 2004 10:10 AM

Well, to be honest with you, in Linguistics people always ask you to write an overview of what others have done before you advance your own thoughts. I think it is a chore to do it in every paper. I am not doing it anymore, because journals always impose page limits.

I asked only because I had begun to search for Beall's paper on the internet.

Tony Marmo at May 5, 2004 09:08 PM


As I like to work with models/situations, I think that the following has a good solution to the paradox:

Edgington, (1985) "The Paradox of Knowability" Mind 94, 557-568.

Tony Marmo at May 6, 2004 06:16 PM

By the way, a new paper by John MacFarlane on knowledge attributions might be of interest.

I haven't read it yet. I found it through Kai von Fintel's blog.

Tony Marmo at May 12, 2004 09:16 PM

Yes, I've downloaded MacFarlane's paper. I have skimmed through the earlier draft, and I very much like the line (on the assessment relativity of knowledge ascriptions). I'm not sure whether admitting this would change the picture markedly, or keep it pretty much the same. I should have a think about this, but I'm inclined to say that a real consideration of that is for another time and place.

Greg Restall at May 12, 2004 09:44 PM

This sentence by Gere, who also won the Foot in Mouth award by the Plain English Campaign in 2002, is of much greater philosophical interest:

`I know who I am. No one else knows who I am. If I was a giraffe and somebody said I was a snake, I'd think "No, actually I am a giraffe."'

In his post Can Derrida ever be wrong? (September 29, 2003), Mark Liberman makes the claim that Derrida's sentences are nonsensical.

He describes a game someone played with people who took Derrida seriously. The game consisted of picking one of the long sentences from any document by Derrida and substituting antonyms for its words in order to produce variants of that sentence. The original sentence and its variants were then presented to people who were supposed to know Derrida's works, with the question: `which sentence is the original?' He claims that Derrida's admirers were unable to establish which one was the original sentence. According to Liberman, this suggests that Derrida's rhetoric is `a sophisticated form of White Noise'.

I confess that Derrida often makes no sense to me either, regardless of whether I read him in English or French. Maybe a Plain French Campaign is in order, and French intellectuals like him should try to express themselves in plain French. But I think the nonsensical aspects of their discourse are not mere instances of odd usage of natural languages. Would Derrida make sense if he wrote his thoughts in simple and direct French?

Tony Marmo at May 16, 2004 02:36 AM


Assume that the answer is no: even if Derrida wrote in plain French his sentences would still mean nothing.

Now we have the following situation: Derrida's admirers think they know that Derrida's sentences mean something. After the game, they come to know that they do not know what Derrida's sentences mean. But they still assume that Derrida's sentences mean something, whereby they claim to know that the sentences mean something while not knowing what that meaning is.

This is the kind of situation where they claim that

(1) they know they do not know p.

But (1) is false, and yet the falsity of (1) does not entail that they know p.

Moreover, while (1) is false, in natural languages (2) is not the negation of (1):

(2) They do not know they do not know p.
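In modal notation the point is that the classical negation of (1) is exactly (2), which is what makes the natural-language divergence noteworthy (K here stands for the admirers' collective attitude):

```latex
\begin{align*}
\text{(1)}\quad & K \neg K p && \text{they know they do not know } p\\
\neg\text{(1)}\quad & \neg K \neg K p && \text{classically, this is just (2)}
\end{align*}
```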

So we move to the stage after the aforesaid game was played, when Derrida presents his sentences in plain French and everyone finds out that they mean nothing. Now they know that (1) is false. How would one express this `change of knowledge' in a natural language? Would it be (3)?

(3) Now they know that they do not know that they do not know p.

I do not think so. I think that at the stage where the former admirers of Derrida reach the same conclusion as Liberman, i.e., they come to know that Derrida's sentences mean nothing, the report that would accurately depict this fact is (4):

(4) They know that not p.

Or, if one wants to be more accurate, we have three stages:

(0) They assume they know p. When Derrida's sentences seem to make sense.

(1') They assume they know they do not know p. When, after the game, they find out they do not understand Derrida's sentences.

(4') They assume they know that not p. When they find out that in plain French Derrida's sentences mean nothing.

[1] Of course, here I adopt a three-valued system, where one who does not understand a sentence judges it undecidable, not false.

[2] In all of the aforesaid cases we have to consider that there are two propositional attitudes: `to know' and `to assume'.


The second possible answer is yes:

Yes, if Derrida re-writes his sentences in plain French, they make sense.

Now we get these stages:

(0") They assume that they know p.

When they initially think they understand Derrida's sentences.

(1") They assume they know that they do not know p.

After the game has shown them they cannot understand Derrida's sentences.

(4"a) They assume they know that p.

When Derrida finally re-writes his sentences in plain French and shows what they mean. But, assuming the KK thesis, (4"b) is the other possibility:

(4"b) They assume they know that they know p.

Tony Marmo at May 16, 2004 11:52 AM

OK, version 0.95 is uploaded now. This one has pictures. (And a nice new argument, too.)

Greg Restall at May 20, 2004 01:30 AM

Dear Greg,

I have two extra questions for you.

First, I am intrigued by the fact that nobody who insists on defining opacity as failure of Leibniz's substitution of identicals has tried to relate it to Fitch's paradox. I wonder if you could tell me why.

Second, is it accurate that Leibniz's Substitutivity of Identicals principle is from his work `Discourse on Metaphysics'? I could not find it there.

Tony Marmo at May 28, 2004 07:28 PM

On Tony's two questions:

I am intrigued by the fact that nobody who insists on defining opacity as failure of Leibniz's substitution of identicals has tried to relate it to Fitch's paradox. I wonder if you could tell me why.

I'm not really sure why, but here's a conjecture. Most people who are interested in formal treatments of Fitch's paradox think that the formalism of modal logic is the right way to capture the inferential properties of knowledge claims. And here, even though we have opacity, we do have substitutivity of logical equivalents in these kinds of epistemic contexts. I think that this is an idealisation, but an OK idealisation here. (In the paper I talk about reading Kp as `p is a consequence of what is known', and this reading satisfies the substitutivity of logical equivalents.)

Second, is it accurate that Leibniz's Substitutivity of Identicals principle is from his work `Discourse on Metaphysics'? I could not find it there.

I have no idea! Does anyone around here know?

Greg Restall at May 29, 2004 10:46 PM

Thank you, Greg.

I was thinking of sentences like:

(1) Jimmy knows that Superman can fly.
(2) Superman is Clark Kent.

(3) (?)Jimmy knows that Clark Kent can fly.

If Fitch's paradox is considered:

(4) The Hulk does not know that Bruce Banner is smart.
(5) Bruce Banner is the Hulk.

(6) (?) Bruce Banner knows that the Hulk does not know he is smart.
(7) (??) Bruce Banner knows that he does not know that he is smart.
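Schematically, writing b for Bruce Banner, h for the Hulk, and S for `is smart' (the notation is only a rough gloss of the sentences above), the substitution steps are:

```latex
\begin{align*}
\text{(4)}\quad & \neg K_h\, S(b) && \text{the Hulk does not know Bruce Banner is smart}\\
\text{(5)}\quad & b = h && \text{identity}\\
\text{(6)}\quad & K_b\, \neg K_h\, S(b) && \text{Bruce Banner knows (4)}\\
\text{(7)}\quad & K_b\, \neg K_b\, S(b) && \text{substituting } b \text{ for } h \text{ inside the } K\text{-context, via (5)}
\end{align*}
```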

What do you think?

Tony Marmo at May 30, 2004 12:17 AM

Greg and all,

I'm a little slow in commenting, but thought it might be worthwhile even at this late date. I've just completed a ms. on the paradox, following up on my '95 piece on it, and have a further piece, in progress, on my website. It argues that no one can be complacent about Fitch's paradox, simply taking Fitch's proof as presenting a somewhat surprising result. I need to make some changes to it, but here's one way to put the point. "All truths are knowable" is, if true, necessarily true. "All truths are known" is false, at least when the domain in question is finite minds, but seemingly contingently false. And yet if Fitch's proof is sound, the two claims are logically equivalent. That ought to bother everybody...

jon kvanvig at June 16, 2004 08:03 AM

On Jon Kvanvig's comment:

Yes, that bothers me. I look forward to seeing the ms. to see how the bother dissipates. It should be fun.

Greg Restall at June 17, 2004 01:04 PM

Posted by Tony Marmo at 01:32 BST
Updated: Friday, 25 June 2004 17:48 BST
Wednesday, 23 June 2004

Some Thoughts About the Relationship Between Information and Understanding

Michael O. Luke
Paper to be presented at the American Society for Information Science Conference, San Diego, CA, May 20-22, 1996

That there is a relationship between information and understanding seems intuitively obvious. If we try to express this relationship mathematically, however, it soon becomes clear that the relationship is complex and mysterious. Knowing more about the connection, however, is important, not least because we need more understanding as our world becomes faster paced and increasingly complex. The influence of increasing the amount of information, of increasing the effectiveness of information-mining tools, and of ways of organizing information to aid the cognitive process is briefly discussed.

Introduction: Why the Relationship Matters

Those of you who are expecting to learn something definitive about the relationship between information and understanding, or to find out the results of some project investigating it, will, I hope, be disappointed with this talk. My subject is indeed the relationship between information and understanding, but this is not something I can tackle in any standard way. I am afraid that you are going to have to do some of the work. What you will see from me is a great deal of ignorance - great dark stretches in the map of understanding the relationship - lit up faintly, here and there, with some gleams of insight (I hope you will agree that there are some gleams).

So then what is the relationship between information and understanding? And why even pose such a question? Is there really any value in considering the relationship in any detail? What am I doing even putting the two terms on the table and drawing an arrow from one to the other with a question mark after it? After all, I know only a little about information and understand not much about understanding. What can possibly justify my temerity in raising such an issue at this conference and taking up a half hour of your time dealing with it?

In my own defense, let me suggest that this is one of the most important questions that an organization like ours, contemplating at this conference, as we are, the digital future, possibly can deal with. We practice information science and call ourselves information scientists. As scientists we seek to understand - the thirst to comprehend, to know how things work is, after all, the passion that drives science, for it certainly can only be the thirst to know and not money or fame! We seek to know and we have been puzzling at it for a long time. In this century in particular we seem to have made enormous progress in understanding as our stockpiles of information have grown at a dizzying rate. As information scientists we are interested in information. How does it work?

Surely, then, the question of the relationship between these two things, information and understanding, should be no stranger to us, no alien skulking unobtrusively in our midst, but a constant companion. So maybe I am the only person in this room who doesn't understand the relationship fully. I guess, if so, that would eminently explain why I am up here squirming. I'll tell you a little story before I embark upon the major theme. When I actually came to write this paper, having been e-mailed that the deadline was March 1, it was -40 degrees Celsius at the time, and the sun was coming through the window of my office flat, straight across the farm land, the black soil that grows the world's best durum wheat invisible beneath a heavy mantle of glaring, gleaming white. It was a cryogenic Manitoba winter morning, bright and brittle, and we consoled ourselves by saying, "but it's a dry cold" and thinking of the mosquitoes, black fly, and deer fly we didn't have right now. So putting together the presentation proposal seemed a good idea at the time!

And in fact I still believe it is. So what is the justification for raising the question in this forum? I think it is that we are not really interested in information just for its own sake, revelling merely in piling it up and moving it around. We recognize information as a means to an end. It is what it can do for us, what it has done for us, what we might do to make it do even more, that drives us. And what it can do is promote understanding, help us acquire knowledge, and give us the basis for action, for decisions, for planning and doing. Information is useful: it helps us understand, and when we understand we can do useful things, like invent things and develop better strategies for business success, and we even feel better. I am richer, not poorer, in the face of the rising sun for understanding something about how it may have formed, how it creates the heat and light that enable life, and how long the light has been travelling before streaming through my window, and I am enriched for knowing its relative insignificance in the overall scheme of things in the universe.

This particular knowledge isn't useful in any economic sense, but it helps clear up some of the mystery around me, if not the brooding stuff in the background. Information above all is useful, helping education and commerce, powering art and science, driving technology and innovation before it, and commerce and industry. Knowing more about the relationship should help us to exploit it more effectively.

And that's not all! My final argument is that right now, at this of all times, when we seem to stand poised at the edge of a node of almost cataclysmic change, with not much hope of controlling it, maybe thinking ourselves lucky if we can just survive as the storm of change breaks around us, we shall need understanding if we are to have any hope at all of avoiding the perils and steering as best we can for safer ground. Understanding, then, is a prescription above all for managing, if that's the word, or perhaps more realistically coping with, the future, the next millennium and beyond.

I should say that some people have higher aspirations and little patience with lowly old understanding. They have loftier things in mind, as the titles of their books attest: "The Wisdom of Teams", "Working Wisdom" and "The Wisdom of Science", the latter considerably older than the others and not a bad book. Well, I have no quarrel with wisdom. If occasionally one can stumble across it, recognize it for what it is, and use it, so much the better! I just think that realistically there is more pay dirt in the more prosaic relationship we will explore in this session.

Considering an Equation

Now one of the things that scientists do when contemplating relationships is to seek a law, typically a mathematical expression that links the phenomena in some way, a notational shorthand for the force hiding in the action. E = mc² and that sort of thing. Rarely has something so potent been expressed so economically.


Posted by Tony Marmo at 20:02 BST
Updated: Wednesday, 23 June 2004 20:10 BST
Tuesday, 22 June 2004

There is also a post from the Desert Landscapes blog on the same issue:

The Relationship Between Knowledge and Understanding

by Michelle Jenkins

I've been thinking a lot lately about the relationship between knowledge and understanding. Knowledge and understanding, I think, are quite different sorts of things. My general grasp of the nature of understanding is influenced largely by the Ancients. One understands something if she 1) is able to provide a comprehensive explanation, 2) has a systematic grasp of all of the information, and 3) can defend her explanation against any questions or criticisms.

First, in order to understand something I must be able to provide a comprehensive explanation of it. A physicist, for example, who understands the theory of relativity must be able to provide an explanation of why the theory of relativity is as it is, how it works, how it affects a variety of other physical laws and observations, and so forth.

Second, to understand something, one must be able to `see' the relationship between different bits of information in the whole of the field to which the bit of information belongs. You must have a systematic grasp of the information relating to the matter at hand, such that you see that the information, and the relationships that the different bits of information have with one another, form an almost organic whole. Thus, a car mechanic who understands why a part of the car is making the sound that it is has this understanding because he has a systematic grasp of the whole of the vehicle. He knows how the different parts relate to each other and how and in what ways certain conditions will affect both the different parts of the vehicle and the vehicle as a whole. This ties closely into the need for a comprehensive explanation. The physicist (or car mechanic) is able to provide a comprehensive explanation of the thing that she understands because she understands and `sees' the thing as a whole, as part of a complete system.

Finally, in order to understand something, one must be able to defend her claim against any criticisms that are leveled against it. This defense must itself be explanatory. One cannot defend her view by pointing to the words of another, but must defend it by demonstrating an ability to look at the issue in a variety of ways and as part of a systematic whole. She is not proving her certainty with regard to an issue, but is demonstrating her understanding of the issue. In defending her view successfully, she demonstrates a reliability and stability within her account.

Apparent in this account of understanding (I hope!) is that one must have a rather large web of information about the matter which one claims to understand. In order to develop and defend a suitably comprehensive explanation, one must be able to employ a huge number (and variety) of facts and bits of knowledge that relate to the thing she claims to understand. And, as the systematicity requirement shows, that web of information must be structured in a systematic manner.

Knowledge (at least on most accounts of knowledge) requires none of this. I need not offer an explanation in order to know something. To demonstrate justification I must only point to a trustworthy source of that information. Further, knowledge need not be systematic. I can know that an electron has a negative charge without being able to place that bit of knowledge in a larger systematic account of particle physics. Finally, while knowledge may involve (but need not, on some views) the ability to defend one's belief, the standard to which one must defend his belief is not as high as that of understanding. One must only prove, for knowledge, that he is certain of the belief (and has a right to be certain), not why the belief is so or how that belief ties into a larger system of beliefs.

Despite the differences between the two states, however, it seems to me that there is at least one very important connection between understanding and knowledge, but the issue has vexed me to the extent that I'm not certain what to think. Basically, I think that understanding, or more particularly our desire to understand, guides us in our quest for knowledge. We don't typically seek to know things for their own sake (we don't aim to be trivia mavens, we don't think that phonebook memorizers are ideal epistemic agents) but rather we seek to know things so that we might come to have a deeper understanding of something. (*See note at bottom of post*) If I desire to understand how my car works, then that desire to understand is going to direct me in what sorts of things I'll attempt to learn, in what things I wish to have knowledge of. But in order to ever obtain understanding, I must first come to know a lot of things about the subject that I hope to understand. I can't understand something merely by having appropriately connected beliefs. Those beliefs must have the proper sort of justification and truth-connectedness that makes them knowledge. It would seem weird, I think, to claim to understand something while not knowing the information that makes up the requisite web of information. Thus, I think, understanding and knowledge do have a very definite relationship. Understanding is a more fundamental epistemic state than knowledge and is the epistemic state at which we ultimately aim. Knowledge is merely a step (not the destination) in our epistemic progression. However, we cannot understand something if we do not first have knowledge of a whole web of information that allows us to develop comprehensive accounts, defenses of those accounts, and of which we have some sort of systematic grasp. Our understanding of something depends on our first having knowledge about the information surrounding that issue.
Put another way, understanding is knowledge plus something else...and I take that something else to be some sort of systematic grasp of the bits of knowledge that allows one to posit and defend explanations regarding the thing one claims to understand.

But there are questions with this account. Perhaps most prominently: must my web of information be comprised only of things that I know? Can I understand something even if my web of information is comprised partially of beliefs that fail to be knowledge? I'm thinking here of two cases. First is the physicist who understands particle physics (at least inasmuch as one can understand particle physics) and has firm beliefs, but not knowledge, about some of the information making up his web of information. Were someone to ask him about a specific claim, he may say something like, "well, recent experiments indicate this, although we can't know for certain until...". Does the physicist, who relies on these beliefs (that yet fail to be knowledge) to support his explanations, have understanding of the matter at hand? My second case is that of the skeptic who disavows any claims of knowledge. Can we say that this person, who does not think that he knows anything, still understands a great many things? Here I suppose there are (at least) two related questions. First, if we must know the information that comprises our understanding, then how do we handle the different standards for knowledge that different folks have? Surely the person with a lax standard for knowledge does not understand more than the individual with a very high standard for knowledge. And second (though related), if one must know the bits of information that comprise his understanding, must he acknowledge that he knows the information before he can claim to understand the matter which depends upon that information?

I don't know what to think in the face of these, and other, questions. On the one hand, it seems to me quite apparent that understanding must be comprised of different bits of knowledge. Understanding implies a deep grasp of the material at hand, and how can one have such a grasp of the material if he does not know the information that makes up that web of information? But on the other hand I don't want to take away the physicist's claim to understanding, nor do I want to say that the skeptic, even though he may disavow any claims of knowledge, does not understand anything. So I find myself at an impasse. I don't know which of the two claims ought to be rejected or whether they're really more compatible than they, at first glance, appear to be.

**Note: I should note that understanding does not guide all of (or probably even most of) the things we seek to know. I quite often want to know what time it is, where I left my car keys, or when the movie is playing, without any deeper desire for understanding. This is something I've been struggling to account for for a while, as of yet quite unsuccessfully. Right now I want to recognize this and offer the rather flimsy response that while we do seek after this purely instrumental sort of knowledge all the time, the knowledge that is guided by understanding seems to be a more important sort of knowledge, the sort of knowledge that defines us as rational, inquisitive, and thinking creatures.

Reactions to Jenkins' Post

Fascinating issue. My view is that understanding is a specific kind of knowledge - *explanatory* knowledge. Some beliefs have a content of the form "A because B." I'm an explanatory realist, in Kim's sense, so I think that such a belief has a truthmaker, namely, a state of affairs consisting in a real-world explanatory relation holding between A and B. I like to call such beliefs "explanatory beliefs." When they are true, justified, and Gettier-proof, these beliefs constitute *explanatory knowledge*. And states of understanding are states of explanatory knowledge. The basic idea is that to understand x is to know *why* x (or perhaps why x is the case).

I'm not sure this account would withstand critical scrutiny. In any case, it does not conflict with Michelle's account of the relation between understanding and knowledge. In fact, I have the feeling that they might illuminate each other further.

Comment by uriah -- 6/17/2004 @ 1:26 am

I agree that understanding is about having a comprehensive, interrelated system, but I think that the system can consist of various things besides knowledge. For instance, consider someone who understands Freudian psychology (case 1). He can explain the relationships between the id, ego, and superego, the importance of dreams and family relationships, and the rest of the system. He can come up with explanations for people's behavior based on this theory, and ways to correct their problems, even in contexts that earlier Freudians never imagined. However, let's suppose that Freudian psychology is basically nonsense and has little or nothing to do with how people actually are. This Freudian does not understand human behavior, but it sure seems like he understands Freudian psychology. And, his understanding does not merely consist of knowledge of what his predecessors believed, since he can create novel explanations for novel cases.

We could further imagine that this expert in Freudian psychology knows that Freudian psychology is nonsense (case 1b). He's just as adept at it, but he doesn't actually believe that people have ids or oedipal complexes or that dreams are a window to the unconscious. I would say that his understanding is a system of ideas, not knowledge or beliefs. When dealing with things like scientific or philosophical theories, some would say that you have to have this kind of understanding of the theory before you can decide whether to believe it.

Case 2: Consider an expert juggler. He can juggle most anything you give to him. He knows just how to toss and catch a knife so that he grabs the handle. He knows how to toss and catch an egg so that it doesn't break. He can adjust his juggling if he's moving around, or if there are obstacles around him. If he juggles with a partner, he can recognize the partner's particular abilities and preferences and make things easier on the partner. I would say that he understands juggling. But he doesn't really have beliefs or ideas, and he can't explain what he's doing very well, let alone defend it. But his abilities seem to represent a kind of implicit, functional knowledge or understanding.

You could say that these cases represent understanding in a different sense of the word, and you'd be right, but in order to understand understanding it seems important to identify how cases like these are similar to and different from the kind of understanding that you're trying to focus on.

Comment by Dan Keys -- 6/17/2004 @ 1:51 am

Michelle, I like conditions 1) and 2) of your account of understanding. I'm not so sure about 3). In "On Certainty" Wittgenstein doesn't think Moore can confirm his knowledge merely by asserting it (e.g. 1969: 6, 13, 91, 178, 179). W. seems to think that the prefix "I know..." is a move that entitles others to demand proof. I think proof is a form of defence.
Comment by RdR -- 6/17/2004 @ 1:36 pm

Uriah pointed me to this discussion over at Certain Doubts, after lamenting the lack of epistemology on Arizona's blog. It's great to see such high-level posts and comments here.

Just a few quick points. I don't think, and have argued in print (most recently in my new book on knowledge and understanding), that understanding is a species of knowledge. Suppose you have systematic information about Comanche dominance of the southern plains from 1775-1875. You can meet all of Michelle's conditions (her account is amazingly similar to my own) and yet your information may be gettiered (say, the books from which you learned had dates transposed, names switched, etc., and in your reading you auspiciously switched them back and came up with the right dates). Your beliefs would be classic examples of gettiered beliefs, but you'd still be able to meet the conditions for understanding. (More discussion of such cases is needed to make them persuasive, but I won't go into that here.) And, of course, there are always Swampman cases, which quite a few theories of knowledge cannot explain in terms of knowledge (here I'm thinking of Foley's use of such cases to show that knowledge is nothing more than broad and comprehensive true belief; his argument for his view isn't as good, though, as his use of Swampman against other views). Even if Swampman doesn't have knowledge, understanding can be present.

The second point is about the cognitive goal that Michelle speaks of. There's a new Blackwell's Debate volume coming out, in which Marian David and I take opposing sides on this issue. Marian defends the view that truth is the goal, and I argue for a greater plurality depending on the construal of the question; but I also think that if the question concerns the goal of inquiry or investigation, understanding is the goal. There's a prepub link on my website, if anyone is interested.

Third, I worry a bit about explanation playing the strong role it plays in this discussion. I'm thinking of Kim's question as to why being given an explanation makes anything intelligible at all, in the way that understanding involves intelligibility.

Comment by jon kvanvig -- 6/20/2004 @ 6:28 am

It's great to see people talking about understanding! I agree with Jon Kvanvig that understanding is not a species of knowledge, in large part because of Jon's arguments! That, of course, leaves open the question of how the two are related. I have a pretty strong view about that, but it depends on a pretty non-standard view of knowledge. I take knowledge to be fairly unimportant epistemically (can I even say that?). I think our cognitive goal is best expressed as something akin to obtaining as comprehensive and intelligible a picture of the world and our place in it as we can. This, of course, implies having something very like understanding. Knowledge, on my view, is simply true belief that we acquire in a way that makes the acquisition of that truth creditable to us. Knowing that p is simply believing truly that p in such a way that we get credit for that accomplishment. What is valuable is the true belief. We value knowledge beyond mere true belief in the same way we value a moral deed that we perform more than we value merely the good outcome of that deed. We like for valuable states of affairs to add to our total worth as an agent, whether epistemic or moral.

This is by no means a detailed theory of knowledge, but it's enough to see why I take knowledge to be peripheral to our cognitive lives, and how I think it relates to understanding. We value true beliefs, but only insofar as they contribute to our understanding of the world. Thus, as Ernie Sosa, among others, has pointed out, we don't particularly care whether we have a true belief about the number of grains of sand on a stretch of beach. This will not inform our understanding of the world in any interesting way. So, our desire for understanding both guides our pursuit of truth (to some extent) and renders truths valuable to us. We pick up knowledge along the way as we try to figure out the world (if we do it right). It is an interesting question whether any knowledge is even necessary for understanding. I am open to the possibility (though I am not prepared to argue for it) that one could have a comprehensive understanding of the world without having any beliefs that meet the criteria of contemporary theories of knowledge.

Comment by Wayne Riggs -- 6/21/2004 @ 7:59 am

As a linguist, I think one part of the problem is that true knowledge presupposes understanding.
The argument above revolves around the idea that

(1) x knows that p, but x cannot explain p.

But if you assume that p itself is the explanation for q, you could affirm (2):

(2) x knows that q and knows that p, whereby x can explain q.

On the other hand, if r is the explanation for p, then you may claim

(3) x cannot explain p, because x does not know r.

But one can also argue the other way round. Suppose that x does explain a certain phenomenon z. If s is the explanation for z, then

(4) x knows s.
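These schemata can be compacted in a toy epistemic notation (mine, purely illustrative): write $K_x\,p$ for "x knows that p", $E_x\,p$ for "x can explain p", and $p \rhd q$ for "p is the explanation for q". Then (2), (3) and (4) come out as:

```latex
\begin{align*}
(2')\quad & (p \rhd q) \land K_x\,p \land K_x\,q \;\rightarrow\; E_x\,q \\
(3')\quad & (r \rhd p) \land \lnot K_x\,r \;\rightarrow\; \lnot E_x\,p \\
(4')\quad & (s \rhd z) \land E_x\,z \;\rightarrow\; K_x\,s
\end{align*}
```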

But we do not even need to go into this kind of argumentation. To understand something is a little less than being able to explain it: to understand something is simply to know what it means or might mean. Take the sentence below:

(5) George Sand was not a male.

Even if you cannot explain (5), you understand it, because it has a meaning.

Of related interest.

Posted by Tony Marmo at 06:28 BST
Updated: Wednesday, 23 June 2004 07:53 BST
Monday, 21 June 2004
Here is an interesting topic for discussion: The Relationship between Understanding and Knowledge. This issue has been and will certainly continue to be approached from several perspectives.

In the website The Examined Life there is a more general debate What is Knowledge by Paul Rezendes, Mitch Hodge and Graham Dennis.

On this same issue I have also found three other interesting documents. Firstly there are the papers below:


Knowledge Discourses and Interaction Technology

by Carsten Sørensen & Masao Kakihara

Research within knowledge management tends to either overemphasize or underestimate the role of Information and Communication Technology (ICT). Furthermore, much of the ICT support debate has been shaped by the data-information-knowledge trichotomy and too focused on repository-based approaches.
We wish to engage in a principled debate concerning the character and role of knowledge technologies in contemporary organizational settings. The aim of this paper is to apply four perspectives on the management of knowledge in order to highlight four perspectives on technological options. Based on four knowledge discourses (four interrelated perspectives on the management of knowledge), the paper presents four perspectives on ICT support for the management of knowledge, each reviewing relevant literature and revealing a facet of how we can conceptualize the role of technology for knowledge management.
The four technology discourses focus on: the production and distribution of information; the interpretation and navigation of information; the codification and embedding of collaboration; and the establishment and maintenance of connections.


Innovation through Knowledge Codification

by Carsten Sørensen and Ulrika Snis

Academics and business professionals are currently showing a significant interest in understanding the management of knowledge and the roles to be played herein by information and communication technology (ICT). In this paper we take a closer look at one of the primary issues raised when supporting the management of knowledge: how to understand the role of knowledge classification and codification as a means for furthering organisational learning and innovation. Two manufacturing cases are analysed using particular perspectives from current theories on classification, the management of knowledge and organisational innovation.
It is concluded that a more complex understanding of the interplay between cognitive and community models for knowledge management, as informed by research on social processes of classification, can inform our understanding of both the role of the classification of knowledge for organisational innovation and the viability of providing ICT support based on codified knowledge.

There is also a post from the Desert Landscapes blog in the link above.

Posted by Tony Marmo at 20:08 BST
Updated: Tuesday, 22 June 2004 06:35 BST
Monday, 21 June 2004


There's a fair bit of natural language work in paraconsistent logics, much along the lines Kai suggests (presupposition by e.g. Schoter and belief by e.g. Konolige). Some suggested refs follow.



Andreas Schoter (1995) The Computational Application of Bilattice Logic to Natural Reasoning, PhD Dissertation, University of Edinburgh

Andreas Schoter, Evidential Bilattice Logic and Lexical Inference (1994), Center for Cognitive Science Tech Report EUCCS RP-64, University of Edinburgh

Andreas Schoter and Carl Vogel, editors, Edinburgh Working Papers in Cognitive Science: Nonclassical Feature Systems (1995)

Vogel, C., & Cooper, R. (1995). Robust Chart Parsing with Mildly Inconsistent Feature Structures. In A. Schoter and C. Vogel (Eds.) RANLT 137 Edinburgh Working Papers in Cognitive Science: Nonclassical Feature Systems, Volume 10 (pp. 197-216).

(Don't remember if the following actually uses paraconsistent logics in the technical sense, but I suspect so.)
Konolige, K., Belief and incompleteness. in: J. R. Hobbs and R. C. Moore (eds.) Formal Theories of the Commonsense World, Ablex Publishing Company, 1985.

I believe there's also lots of AI work in default and nonmonotonic logic that references or builds on work in paraconsistent logic, and much of this has been applied to natural language. You should ask someone who really knows this stuff, like Rich Thomason, Michael Morreau or Nic Asher.

Posted by: David Beaver at January 28, 2004 12:01 AM
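For readers unfamiliar with the bilattice setting that Schoter's work builds on, here is a minimal sketch (mine, not taken from any of the papers above) of Belnap's four-valued logic FOUR, using the standard encoding of each value as a pair of evidence bits (told-true, told-false):

```python
# Belnap's four values, encoded as (told_true, told_false) pairs:
# T = told only true, F = told only false,
# N = told neither (no information), B = told both (contradiction).
T, F, N, B = (True, False), (False, True), (False, False), (True, True)

def NOT(a):
    """Negation swaps the evidence for truth and falsity (fixes N and B)."""
    t, f = a
    return (f, t)

def AND(a, b):
    """Meet in the truth ordering: true needs both, false needs either."""
    return (a[0] and b[0], a[1] or b[1])

def OR(a, b):
    """Join in the truth ordering: dual of AND."""
    return (a[0] or b[0], a[1] and b[1])

def MERGE(a, b):
    """Join in the knowledge ordering: pool all evidence from both sources."""
    return (a[0] or b[0], a[1] or b[1])
```

The paraconsistent point shows up in MERGE: a claim asserted by one source and denied by another merges to B (glutted) rather than trivializing the whole system, which is what makes this family of logics usable for inconsistent feature structures and presupposition.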

Posted by Tony Marmo at 19:53 BST
Updated: Tuesday, 22 June 2004 00:18 BST
Tuesday, 1 January 2002

On the Distinction between Relational and Functional Type Theory

By Paul E. Oppenheimer & Edward N. Zalta
It is commonly believed that it makes no difference whether one starts with relational types or functional types in formulating type theory, since one can either start with relations as primitive and represent functions as relations or start with functions as primitive and represent relations as functions. It is also commonly believed that the formula-based logic of relational type theory is equivalent to the term-based logic of functional type theory. However, in this paper, the authors argue that there are systems with logics that can be properly characterized in relational type theory, but not in functional type theory.
Source: Online Papers in Philosophy 
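As a toy illustration (mine, not the authors') of the two standard encodings whose alleged interchangeability the paper disputes: a relation can be re-represented as its curried characteristic function (the functional-type-theory move, type e -> e -> t), and a function as its graph (the relational move). The names and the toy domain are invented for the example:

```python
# Relational primitive: a binary relation as a set of ordered pairs.
admires = {("abelard", "heloise"), ("heloise", "abelard")}

def as_char_fn(rel):
    """Re-represent a relation as its curried characteristic function
    (type e -> e -> t), as functional type theory would."""
    return lambda x: lambda y: (x, y) in rel

def graph(f, domain):
    """Re-represent a function as a relation: its graph over a domain."""
    return {(x, f(x)) for x in domain}

# The successor function on a toy finite domain, as a relation.
successor_rel = graph(lambda n: n + 1, range(5))
```

Both directions are definable, which is why the two formulations are commonly believed to be equivalent; the authors' claim is that the formula-based *logic* over relational types is nonetheless not fully captured by the term-based logic over functional types.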




Posted by Tony Marmo at 01:00 GMT

Newer | Latest | Older