LINGUISTIX&LOGIK, Tony Marmo's blog
Sunday, 25 July 2004


Seminar on Plurality

By MarkSteen
Source: Orange Philosophy, July 24, 2004

Tom McKay approved of my idea of posting his announcement about his seminar on plural quantification, along with related topics (such as non-distributive predication). Tom has a new book on this subject which you can check out by clicking on the departmental webpage on the links list, then clicking on faculty, then McKay [sorry, for some reason my linking feature isn't working now].

I think some local-ish non-Syracusan (e.g., Cornell, Rochester) folk might be interested in attending. Here's the announcement [note that Tom will not have computer access until the end of the month and so you should wait a bit to email him or post questions here for him until August]:

Seminar, Fall 2004, on "Plurality"

There are lots of topics, and I want students' own interests to determine some of what we do.

My fundamental project (in a book I have just finished) has been to explore the issue of expanding first-order predicate logic to allow non-distributive predication. A predicate F is distributive iff whenever some things are F, each of them is F. Consider:

(1) They are students. They are young.
(2) They are classmates. They are surrounding the building.
The predications in (1) are distributive, but the predications in (2) are non-distributive. Non-distributive plural predication is irreducibly plural. In ordinary first-order logic, only distributive predicates are employed.
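The distinction can be stated as a sketch in plural-logic notation (my notation, not necessarily McKay's: `xx` is a plural variable and `y ≺ xx` reads "y is one of xx"):

```latex
% F is distributive iff: whenever some things are F, each of them is F.
\forall xx\,\bigl(F(xx) \rightarrow \forall y\,(y \prec xx \rightarrow F(y))\bigr)
```

The predicates in (2) fail the embedded conditional: some people can surround a building, or be classmates, without any one of them surrounding the building or being a classmate on his own.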

The incorporation of irreducibly plural predicates is related to a wide range of issues in metaphysics, philosophy of language, foundations of mathematics, logic, and natural language semantics. Some of the issues that we might consider:

What is the nature of plurality? How should we think of the relations among sets, mereological sums, pluralities and individuals? What (if anything) are these different ontological kinds, and how are they related? Can one thing be many? (Is one deck identical to the 52 cards? Or is this not an identity relation?)

Singular and plural predication; singular and plural quantification; singular and plural reference. How do those fit together? When we consider the full range of determiners in English and try to incorporate quantifiers to represent that, there are many interesting semantic issues to resolve.

How does the semantics of plurality relate to the semantics of mass terms?

In the foundations of mathematics, how far can plurals take us without set theory? What is the relationship of second-order logic to plurals and to the foundations of mathematics?

What is the nature of ontological commitment? What does semantics commit the semanticist to? What does it say speakers are committed to? (For example, if I say that the analysis of adverbs requires an event semantics, does that mean that an ordinary user of adverbs is committed to the existence of events? This kind of issue becomes interesting when we look at the semantics of plurals.)

Can we talk about everything without paradox? Are plurals a special resource to enable us to do so?

A large number of issues about the relationship of semantics and pragmatics come together when we consider definite descriptions. Usually discussions focus on singular definite descriptions, but we can see what difference (if any) it makes when we insist that the account be general enough for plural and mass definite descriptions. This then also relates to the consideration of pronominal cross-reference and demonstrative reference.

Some have argued that an event semantics is important for getting plurals right. It will be interesting to look at event semantics and how that relates to plurals.

I will meet with each enrolled student early on in the semester to identify some areas of interest and get started on developing the student's presentation and paper on a topic of the student's choice.

If people are interested in looking into this before the semester begins, my book is available on the department's website:
(Click on my name in the list of faculty.) Also, Oystein Linnebo has posted a draft of his forthcoming Stanford Encyclopedia article, and it is a good introduction: Scroll down to "Plural Quantification."

We will not presume any greater familiarity with logic than you would acquire by being alive and awake through most of PHI 651.

Please get in touch with me if you have questions.

Posted by Tony Marmo at 23:11 BST
Updated: Monday, 9 August 2004 08:02 BST

Knowing That and Knowing How

By David Bzdak
(Source Orange Philosophy 07/17/2004)

I've begun delving into the literature recently on the difference between knowing that and knowing how (re-delving, actually, but that's neither here nor there). I've been quite surprised to find that almost all of it jumps off from Ryle's discussion of the topic in The Concept of Mind, which I believe was published in 1949.

I see hardly any mention of this topic other than in response to Ryle, and not much on the topic pre-Ryle. This strikes me as odd for such an important epistemological distinction (I realize that the distinction was recognized, pre-Ryle -- I'm wondering if it was philosophically analyzed). Am I missing a mountain of books/journals/articles out there (perhaps in the non-analytic tradition)? Or did Ryle really essentially begin this discussion?


I've wondered about that myself. I wish I could offer some help. The only thing that comes to mind is Cook Wilson, but I don't think he published much.

Posted by: chuck at July 17, 2004 09:36 AM

Here are some notes from a fellow in Edinburgh, with a nice little bibliography:

Also, I saw:
Snowdon, Paul. "Knowing How and Knowing That: A Distinction Reconsidered" Proceedings of the Aristotelian Society 104:1 Sep 2003.
Posted by: chuck at July 18, 2004 09:02 AM

Arguably one of the great discussions in ancient Chinese thought was precisely about knowing how and knowing that, with the Daoists perhaps holding that knowing that is useless without knowing how. If you want to read more I would suggest Chad Hansen's insightful book, A Daoist Theory of Chinese Thought.

Cheers David
Posted by: David Hunter at July 20, 2004 07:55 PM

While it doesn't fit exactly into Ryle's distinction, one could also bring up Heidegger and his distinction between present-at-hand and ready-to-hand. Present-at-hand was closer to propositional knowledge, and he inverted the usual relationship, saying that ready-to-hand, or utility, was more fundamental. One could, I suppose, move to the Ryle distinction with Heidegger in his middle phase arguing that knowledge-how is more fundamental than knowledge-that. I think one would have to be careful though.
Posted by: clarkgoble at July 20, 2004 08:36 PM

Gosh, that was truly terrible spelling. The Daoist story which is supposed to illustrate this distinction is from (I think) Zhuangzi, and is the story of Wheelwright Slab. Basically, the story goes that a Duke is wandering around studying one of the books of the Sages when this lowly wheelwright laughs at him. Being a Duke, he basically says: tell me what's so funny, and you had better have a good reason for laughing, or off with your head. The wheelwright says: pardon me, Duke, but I don't understand why you are wasting your time with the leavings of the Sages. "What!" goes the Duke. The wheelwright says: well, I am growing old, and I am trying to teach my son how to bend wood to make wheels. I have taught him everything I know about what to do when making a wheel, and still he cannot do it. When I die, all I will leave behind is him, and as a wheelwright he will be no good. What I can't teach him is the skill I have. It is the same thing with reading the Sages: what made them Sages cannot be found in their leavings.

There is a fair bit throughout Zhuangzi (also known as Chuang-Tzu) to do with this distinction, usually making the point that skill is more important than knowledge.
Posted by: David Hunter at July 20, 2004 08:45 PM

Posted by Tony Marmo at 14:36 BST
Updated: Monday, 9 August 2004 08:04 BST

Knowing How v. Knowing That: Some Heterodox Idea-Sketches

by Uriah Kriegel

(Source: Desert Landscapes 7/19/2004)

Since Ryle, orthodoxy had it that knowledge-how is categorically different from knowledge-that. The latter is a form of propositional representation of the way things are, whereas the former is just a capacity. The occurrence of the word "knowledge" in both expressions should not mislead us to think that they have something significant in common. There is no way to reduce Agent's knowledge *how* to ride a bike to some knowledge *that* certain things have to be done.

I have a somewhat different view. On my view, knowledge-how consists in *non-conceptual conative representations*. (...)

By "conative representations" I mean representations with a telic, or world-to-mind, direction of fit (wishing that p, wanting that p, hoping that p, intending that p, etc). These are to be distinguished from cognitive representations, which have a thetic, or mind-to-world, direction of fit (believing that p, expecting that p, suspecting that p, etc.).

The above parenthesized examples of conative representations all have propositional, and therefore (presumably) conceptual, content. But just as there are non-conceptual thetic representations, so we should expect there to be non-conceptual telic representations.


If my suggestion is on the right track, then although we cannot say that knowledge-how reduces to knowledge-that, since knowledge-that is propositional (because of the `that'-clause), we *can* say that it reduces to some sort of representational knowledge.


Hi Uriah,

In what sense are your non-conceptual conative representations actually representational? One specific worry: what makes it the case that every step in the causal chain leading from my desire to ride the bike to my actual riding of the bike doesn't count as a non-conceptual conative representation of the next step along the chain?


Comment by Brad Weslake -- 7/20/2004 @ 12:54 am

Ok, I'm only half getting this. Suppose I know how to pedal, and the way to pedal is to use M16 or M17. Intuitively, it's both possible and quite likely that I don't know that the way to pedal is to use M16 or M17 - I don't even have concepts for M16 or M17. But if I know how to pedal, I *must* have a concept for pedaling.

You say, knowledge-how consists in *non-conceptual conative representations*. But there must be at least a little more, right? Because I have to at least have the concept of the thing I know how to do. And it seems that it's optional for me to have the concepts of M16 and M17 - if I'm a physiologist with a detailed understanding of the mechanics of pedaling, I still (can) know how to pedal.

I'm not quite sure I'm interpreting you right, so I'm going to stop here. Does this sound correct?

Comment by Jonathan Ichikawa -- 7/20/2004 @ 6:41 am

Woo-hoo, my favorite kind of response: everybody's right (except me?).

Christian, I think your formulation of the conclusion in terms of ability rather than knowledge is better than my original formulation. Here's why I was thinking of knowledge nonetheless. The common account of knowledge - as a belief that is true, justified, and Gettier-proof - seems to be tailored to knowledge-that. But once we agree that there is something fundamental in common to knowledge-that and knowledge-how, then we need some wider understanding of knowledge simpliciter. My view of knowledge-how isolates only one fundamental commonality with knowledge-that, namely, that the respective states are representational. However, with this may come other commonalities. Representations have conditions of satisfaction (truth conditions in the case of knowledge-that and, on my view, fulfilment conditions in the case of knowledge-how) and they are answerable to certain standards of representation formation (compliance with which gives "justification," "warrant," or something in the vicinity). So maybe a case could be made for talk of knowledge rather than mere ability. But talk of ability is certainly more cautious.

Brad: I think you're also right that talk of conative states being representational is problematic. Conative states are certainly intentional; they are about something. But do they represent something? Only in a somewhat technical sense. In the regular sense of the word, as I hear it, to represent something is to represent it to be the case, or just to be. In that sense, conative intentional states are not normally representational, because they don't represent the way the world is, but rather the way one would want the world to be. In using the term "representation" I had in mind the more technical sense used in discussions of the representational theory of mind etc. It is common in these discussions to take desires and other conative states to be representational, in the minimal sense that they have conditions of satisfaction - indeed, in the minimal sense that they are intentional or have aboutness.

Comment by Uriah Kriegel -- 7/20/2004 @ 9:21 am

Uriah Kriegel,

Even with representation limited to the standard philosophical usage, I am worried about how much representation there is in your idea of non-conceptual conative representations. If I intend to ride a bike, but fall off, it is clear that my intention has failed to be satisfied; moreover, this normative component of intention seems integral to its being an intention in the first place. Similarly, if I see in my schooner (as we say here in Sydney) a particular hue of yellow beer, it seems I could come to believe that this seeming was mistaken, and that the beer is actually some other hue (even if I couldn't articulate the difference beyond saying "it seems different"). What is the analog for your non-conceptual conative representations? My question about the causal chain was designed to get at this - some criterion is needed for sorting out causal links that count as representational from those that do not; it seems that this criterion needs to be normative; and it doesn't seem that the processes that underlie know-how are normative in the right way.


Comment by Brad Weslake -- 7/21/2004 @ 10:53 pm

Right, I forgot to address Brad's problem of fixing the content of those "non-conceptual conative representations." The problem described by Brad is parallel, however, to the *horizontal problem of disjunction* for naturalist theories of cognitive representations. There are two disjunction problems that arise for cognitive representations. The first is: what makes it the case that my cat-thought is a representation of a cat and not of a cat-or-small-dog-on-a-moonless-night? Call this the vertical problem of disjunction. The second is: what makes it the case that my cat-thought is a representation of a cat and not of a cat-or-big-bang? Call this the horizontal problem of disjunction. Similarly, we may ask what makes it the case that a desire to bike is a (conative) representation of biking and not of biking-or-getting-to-the-office? I don't have a good answer to this problem, or any answer really, but I'm comforted by the partners in innocence I have in naturalist theories of cognitive representations. In particular, Fodor, Dretske, and Gillet had some interesting things to say about this problem at the end of the 80s.

Comment by Uriah Kriegel -- 7/22/2004 @ 3:15 pm


Seems I am just more sceptical than you of causal theories of content in general (I don't think any of the replies to these sorts of problems ended up succeeding). To expose my own biases, I think the prospects of an explanation of know-that in terms of know-how (via functionalist or pragmatist approaches to content) are far better than those going in the other direction...

Comment by Brad Weslake -- 7/22/2004 @ 9:32 pm

Posted by Tony Marmo at 13:58 BST
Updated: Monday, 9 August 2004 08:05 BST
Wednesday, 21 July 2004

On Opacity

Continuing from...

B. Language and Interpretative Techniques

Here I assume that truth-conditions alone do not automatically trigger any process of verifying sentences or reviewing beliefs or other propositional attitudes. Rather, it is the users of natural languages who play a proactive role in the comprehension and evaluation of sentences. The proactive role of speakers is made evident by the fact that the normal usage of such languages does not support the famous deflationist claim that adding the predicate 'is true' to a sentence φ adds no content to it.
The fact that a string like 'it is true that' contributes to the meaning of a natural language sentence is made evident in cases where the same sentence is evaluated differently by language users. In other words, a sentence may be deemed true by the person who says it and false by those who hear it, regardless of the conditions obtaining in a certain world or situation. This means that the claim that a sentence like (2a) means (2b) can only be made from the perspective of the person who utters it, and only if he is sincere:
(2) a. Tom Jobim composed a new song.
b. It is true that Tom Jobim composed a new song.

It cannot be made from the perspective of the hearer, who may doubt (2a). And, if the utterer is a conscious liar, it cannot be made from his perspective either. Notice that this discrepancy of opinions may occur independently of whether Tom Jobim has or has not composed a new song in a certain world or situation. In part this observation shows a reactive role played by hearers, when they accept or doubt a sentence. But the possibility that even the utterer of a sentence may deem it false evidences the proactive role he plays. And if one takes into account that interlocutors in a conversation also have their own intentions and communicate them, then their evaluation of any sentence may also reflect a proactive role, more than a reactive one.
Thus, a sentence φ is not inherently construed as true (Tφ), false (Fφ), undecidable (Uφ), possible (◊φ) or necessary (□φ). Those are judgements made by the language users when they handle sentences.
But the language users' proactive role is not limited to their capacity of merely ascribing truth-values to sentences. It is often the case that language users are able to construe gibberish utterances in a manner that makes sense, without running into paradoxes or into some explosive inconsistency à la Pseudo-Scotus (99). And they usually do so, unless they choose not to.
Let us illustrate this idea with examples like the sentences in (3). While their equivalents in any artificial logical language are absurdities or paradoxes that require sophisticated philosophical engineering to solve, users of natural languages somehow manage to extract non-paradoxical, coherent meanings from them:
(3) a. This sentence means the converse of whatever it means.
b. There's no such thing as legacies. At least, there is a legacy, but I'll never see it.
c. The ambitious are more likely to succeed with success, which is the opposite of failure.

There is an evident self-referential paradox in (3a), a clear contradiction in (3b) and a redundant or circular thought in (3c). And they can be interpreted in this manner, if the language users who read them proactively choose to see the possible paradoxes, contradictions and redundancies. But, for reasons that I shall go into further on, that is not what they frequently do in the everyday common usage of a natural language. Accordingly, (3a) is usually interpreted either as referring to another sentence or as a potential metaphor for something, while (3b) may be taken as a revision of statements and (3c) as an attempt to stress something. This evidences that hearers/readers are able to recover the intended messages behind clumsily constructed sentences. Methinks that, in the exercise of this capacity, language users proactively employ certain techniques. But these techniques are not just any techniques: they are not merely ad hoc inventions, nor are they chosen at random.

C. Paraconsistency

Schöter (1994), among others, claims that the semantics of natural languages exhibits four basic properties that are already acknowledged and have been investigated in non-classical logic theories: paraconsistency (76); defeasibility, which contrasts with monotonicity, defined in (98); partiality, which contrasts with totality, defined in (102); and relevance (101). Though paraconsistent approaches in Logic have been developed since the seminal works of Jaśkowski (1948) and da Costa (1963, 1997), and though they have important consequences for Linguistics, similar approaches in natural language semantics are only beginning.
As Priest (2002) explains, much of Paraconsistent Logic consists of proposing and applying strategies for handling inconsistency (cf. 93). There are several possible techniques to avoid or contain explosion within a logic system or a semantics for artificial or natural languages, such as propositional filtration, non-adjunction, a non-truth-functional approach to negation, de Morgan algebras, etc. But those are all techniques invented by logicians for artificial languages. Should we accept the idea that in construing sentences the users of natural languages also use techniques to control explosions, then such techniques must be available as inherent interpretative devices of the human linguistic systems. In other words, as Logicians have their paraconsistent techniques for artificial languages, so the users of natural languages have techniques of their own, which are made possible by the fundamental properties of such languages.
Accordingly, the capacity of humans to ascribe values to sentences independently of the actual conditions obtaining in a certain world or situation, which underlies the phenomenon called opacity in human languages, has to do with at least one of the techniques humans naturally possess: the contextualisation of sentences. I shall explore and unfold this matter in what follows.
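To make the notion of containing explosion concrete, here is a minimal executable sketch, an illustration of my own in the four-valued logic FDE of Belnap and Dunn (one standard paraconsistent system; it is not drawn from Schöter or Priest). In it a contradiction p ∧ ¬p can be accepted without every sentence q following from it:

```python
from itertools import product

# Belnap/Dunn four-valued logic (FDE): values are 'T' (true only),
# 'F' (false only), 'B' (both true and false), 'N' (neither).
DESIGNATED = {'T', 'B'}          # values on which a sentence is accepted
NEG = {'T': 'F', 'F': 'T', 'B': 'B', 'N': 'N'}

def conj(a, b):
    """Meet in the truth order F < B,N < T (B and N incomparable)."""
    if 'F' in (a, b):
        return 'F'
    if a == 'T':
        return b
    if b == 'T':
        return a
    return a if a == b else 'F'   # B&B=B, N&N=N, B&N=F

def ev(formula, v):
    """Evaluate a formula built from ('atom', name), ('not', f), ('and', f, g)."""
    op = formula[0]
    if op == 'atom':
        return v[formula[1]]
    if op == 'not':
        return NEG[ev(formula[1], v)]
    return conj(ev(formula[1], v), ev(formula[2], v))

def entails(premises, conclusion, atoms):
    """Entailment: every valuation designating all premises designates the conclusion."""
    for values in product('TFBN', repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if (all(ev(f, v) in DESIGNATED for f in premises)
                and ev(conclusion, v) not in DESIGNATED):
            return False          # counterexample found
    return True

p, q = ('atom', 'p'), ('atom', 'q')
# Explosion fails: p & ~p does not entail an arbitrary q
# (counterexample: v(p) = 'B', v(q) = 'F'), yet ordinary inferences survive.
print(entails([('and', p, ('not', p))], q, ['p', 'q']))   # False
print(entails([('and', p, q)], p, ['p', 'q']))            # True
```

The point of the sketch is only that inconsistency-tolerance is a property of the valuation scheme, not a repair applied sentence by sentence, which is the sense in which such techniques could be "inherent interpretative devices".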


Posted by Tony Marmo at 14:16 BST
Updated: Wednesday, 21 July 2004 14:20 BST
Here I share some pieces of the current draft version of one of my articles on opacity in natural languages. I shall post one excerpt or two a day. I hope you like it. Comments are welcome.


1. Prelude

1.1 General Considerations

A. The Issue

In this work I shall examine some aspects of the semantic phenomenon called opacity from the perspective of human languages in their common usage (rather than artificial languages or usages created by Logicians), relating it to the manner in which such languages, as computational systems, equip their users with the tools and techniques to handle (pseudo-)paradoxes and the explosions caused by contradictions. Although I resort to the work of Logicians and Philosophers, the principles and theoretic notions herein proposed to formalise such phenomena are primarily hypotheses respecting the inherent machinery of human languages, rather than merely invented solutions to the issues in question.
The lato sensu notion of opacity can initially be characterised figuratively as the phenomenon of a sentential context not allowing the light of a semantic/logical principle to pass through, i.e., a certain context is opaque because a certain (mode of) inference is not visibly valid therein.
There have been some more specific and/or stronger hypotheses trying to define when actual instantiations of this notion occur and/or to predict when and explain why they occur. Indeed, as far as I know, there have been at least two basic approaches to opacity.
The first basic approach to opacity, which is herein called classic or traditional, and which will be questioned in Section 2, assumes the definition of opaque context given in (1) below, or variants thereof. Although (1) has never been accepted by important academic factions, like the Russellian philosophers among others, it is still the most widespread conception in the literature:

(1) Opaque context (classic version)
A sentential context C containing an occurrence of a term t is opaque if the substitution of co-referential terms is an invalid mode of inference with respect to this occurrence. (See McKinsey 1998, Quine 1956)
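Schematically (my own compact restatement of (1), with the classic Quinean illustration):

```latex
% Substitutivity of identicals, valid in transparent contexts:
t_1 = t_2,\quad C(t_1) \;\vdash\; C(t_2)
% In an opaque context this mode of inference fails. E.g.:
% ``Tom believes that Cicero denounced Catiline'' and ``Cicero = Tully''
% do not jointly license ``Tom believes that Tully denounced Catiline''.
```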

The second basic view, which will be approached in Section 4, revolves around the idea of non-symmetry of accessibility relations. This second view has more adherents among linguists.
The alternative view I shall sketch here attempts to determine how fundamental opacity is and to relate it to issues of consistency and non-paradoxical interpretation in the common usage of natural languages.


Posted by Tony Marmo at 14:04 BST
Updated: Wednesday, 21 July 2004 14:19 BST
Tuesday, 20 July 2004


I am very happy to announce that my dear friend and colleague Martin Honcoop's work, Dynamic Excursions on Weak Islands, has been re-edited and published via the Semantics Archive. Martin, a proud disciple of both Groenendijk and Szabolcsi, was a very competent linguist, a true expert in many things, and I had the privilege to meet him during his short lifetime and call him a friend. Together with Marcel den Dikken, Eddy Ruys and Rene Mulder, he made up an outstanding group of young formalists unmatched by their country-fellows (either of their age or younger). Martin was exceptionally patient when he had to explain what formal linguistics is about to fanatic and intolerant empiricists. After Mulder had moved to the publishing business and Marcel had gone to the States, it is no exaggeration to say that when Martin died, one quarter of the future of formal Linguistics in the Netherlands perished too. We all miss him a lot and I congratulate the blessed soul who put his paper in the Semantics Archive.

Posted by Tony Marmo at 17:31 BST
Updated: Monday, 9 August 2004 08:08 BST
Monday, 19 July 2004

Boolean networks with variable number of inputs (K)

Metod Skarja, Barbara Remic, and Igor Jerman

We studied a random Boolean network model with a variable number of inputs K per element. An interesting feature of this model, compared to the well-known fixed-K networks, is its higher orderliness. It seems that the distribution of connectivity alone contributes a certain amount of order. In the present research, we tried to disentangle some of the reasons for this unexpected order. We also studied the influence of different numbers of source elements (elements with no inputs) on the network's dynamics. An analysis carried out on networks with an average value of K = 2 revealed a correlation between the number of source elements and the dynamic diversity of the network. As a diversity measure we used the number of attractors, their lengths and their similarity. As a quantitative measure of the attractors' similarity, we developed two methods, one taking into account the size and the overlapping of the frozen areas, and the other in which active elements are also taken into account. As the number of source elements increases, the dynamic diversity of the networks does likewise: the number of attractors increases exponentially, while their similarity diminishes linearly. The length of attractors remains approximately the same, which indicates that the orderliness of the networks remains the same. We also determined the level of order that originates from the canalizing properties of Boolean functions and the propagation of this influence through the network. This source of order can account for only one-half of the frozen elements; the other half presumably freezes due to the complex dynamics of the network. Our work also demonstrates that different ways of assigning and redirecting connections between elements may influence the results significantly. Studying such systems can also help with modeling and understanding complex organization and self-ordering in biological systems, especially genetic ones.

Keywords: Boolean networks, biological systems, connectivity distribution, variable K, sources of order, canalization, frozen elements, no-input elements (source elements), attractor similarity, effective distribution, genetic networks, high orderliness.
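The model described in the abstract can be sketched in a few lines. This is my own minimal illustration, not the authors' code: the in-degree K is drawn uniformly from {1, 2, 3} (an assumption that happens to give the paper's mean K = 2), each element gets a random Boolean function as a lookup table, updating is synchronous, and the attractor length is found by iterating until a state repeats:

```python
import random
from itertools import product

def make_network(n, rng=None):
    """Random Boolean network; each node gets a variable number of inputs K."""
    rng = rng or random.Random(0)
    net = []
    for _ in range(n):
        k = rng.choice([1, 2, 3])                 # variable in-degree, mean 2
        inputs = rng.sample(range(n), k)          # which nodes feed this one
        table = {bits: rng.randint(0, 1)          # a random Boolean function
                 for bits in product((0, 1), repeat=k)}
        net.append((inputs, table))
    return net

def step(net, state):
    """One synchronous update of the whole network."""
    return tuple(table[tuple(state[i] for i in inputs)]
                 for inputs, table in net)

def attractor_length(net, state):
    """Iterate until a state repeats; the cycle it closes is the attractor."""
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(net, state)
        t += 1
    return t - seen[state]

net = make_network(8)
period = attractor_length(net, (0,) * 8)
print(period)   # length of the attractor reached from the all-zero state
```

Counting the attractors reached from many random initial states, and comparing their frozen areas, would then give the diversity and similarity measures the abstract describes; modelling source elements would simply mean allowing k = 0 with a fixed value.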


Posted by Tony Marmo at 17:57 BST
Updated: Monday, 9 August 2004 08:09 BST
Friday, 16 July 2004

We have often heard the same complaint when an influential linguist, such as Chomsky or Kayne, releases a new paper: Oh no! He changed everything again!

A lot of non-theoretic linguists, who are inquisitors sunk in the darkness of 19th-century empiricist dogmas, are very reactionary in this sense: they hate changes in the theoretic framework, they do not want any of them, and they deem it absurd to change things all the time. But if the concepts of some form of thought never change, then it is not real science.

Now, newspapers around the world give us the good example of what solid science really means:

Hawking finds hole in his theory

Source: Associated Press, The Globe and Mail

After almost 30 years of arguing that a black hole swallows up everything that falls into it, astrophysicist Stephen Hawking did a scientific back-flip Thursday.

The world-famous author of A Brief History of Time said he and other scientists had it wrong -- the galactic traps may in fact allow information to escape.

The findings, which Dr. Hawking is due to present at the 17th International Conference on General Relativity and Gravitation in Dublin on July 21, could help solve the "black hole information paradox," which is a crucial puzzle of modern physics.

Current theory holds that Hawking radiation contains no information about the matter inside a black hole and once the black hole has evaporated, all the information within it is lost.

However this conflicts with a central tenet of quantum physics, which says such information can never be completely wiped out.

Congratulations Professor Hawking! You are a truly wise man!

Read more:

Het Volk
Los Andes
The Australian
Corriere della Sera
La Crónica de Hoy
The Houston Chronicle
The Guardian
The Globe and Mail
The Independent
El Mundo
No Olhar
El Periodico de Catalunya
RP Online
The Telegraph
Ziua Magazin

Posted by Tony Marmo at 10:32 BST
Updated: Monday, 9 August 2004 08:10 BST

Roumyana Pancheva has a paper on the present perfect tense puzzle, a linguistic phenomenon:

Another Perfect Puzzle

The interaction of the perfect with temporal adverbials is the domain of the well-known present perfect puzzle (Klein 1992): the fact that certain adverbials are prohibited with the present perfect in English (though not in some other languages) while acceptable with non-present perfects. As is generally agreed, the prohibition is against past specific adverbials (cf. Heny 1982, Klein 1992, Giorgi and Pianesi 1998, a.o.).

This paper adds yet another puzzle to the area of perfect-adverbial interactions. It establishes a new generalization regarding the modification of perfects by both past and non-past specific temporal adverbials. The puzzling facts are illustrated in (1).

(1) a. ?? We saw John last night. He had arrived yesterday...
b. We saw John this morning. He had arrived yesterday...
c. We saw John last night. He had arrived the same day...

Adverbials like yesterday are allowed in past perfects, and they may specify the time of the event, as in (1b). However, their presence is restricted, depending on what the reference time of the past perfect is. The reference time is the interval which tenses relate to the speech time, and relative to which the event time is situated. In the case of the past perfects in (1), the reference time is a past interval anaphoric to the reference time of the preceding past sentence: last night in (1a) vs. this morning in (1b). The choice of a reference time contained in the interval denoted by the adverbial modifying the perfect results in degraded acceptability, as in (1a). When the reference time in the past perfect is not contained in the denotation of the adverbial modifying the perfect, the result is an acceptable sentence, as in (1b).
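The generalization can be sketched as a simple interval-containment check (my own toy illustration, not code from the paper; the hour values standing in for the adverbial denotations are hypothetical):

```python
def contained(inner, outer):
    """True iff the interval inner = (start, end) lies within outer."""
    return outer[0] <= inner[0] and inner[1] <= outer[1]

def past_perfect_ok(reference_time, adverbial):
    """The generalization: the past perfect is degraded when the
    reference time is contained in the adverbial's denotation."""
    return not contained(reference_time, adverbial)

# Toy timeline in hours, with 0 = the midnight before speech time.
yesterday = (-24, 0)     # denotation of 'yesterday'
last_night = (-6, -1)    # yesterday evening, within 'yesterday'
this_morning = (6, 10)   # today, outside 'yesterday'

print(past_perfect_ok(last_night, yesterday))    # (1a): False, degraded
print(past_perfect_ok(this_morning, yesterday))  # (1b): True, acceptable
```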

Read it

Posted by Tony Marmo at 07:24 BST
Updated: Monday, 9 August 2004 08:11 BST
Thursday, 15 July 2004

Brian Leiter has a very important and interesting post on expertise and knowledge. It is both a defence of scientists against ignorance and a starting point for discussing what kind of attitude really counts as 'arrogance', and whether it is something natural to academic life. Although I disagree with one line of his text, where he says that 'science is not a democracy', his piece as a whole seems correct and highly relevant to other issues of this blog:

Arrogance and Knowledge

by Brian Leiter, July the 13th, 2004

Andrea Lafferty, executive director of the Traditional Values Coalition, a conservative religious organization, delivers what could be the signature line for our backwards times in America:

There's an arrogance in the scientific community that they know better than the average American. [Source: the NYT]

In fact, of course, scientists do know quite a bit better than the average American about the matters for which their scientific expertise equips them. Those with knowledge, surprisingly, know more than those who are ignorant. Is that arrogance?

As Chris Mooney remarked, science is not a democracy [sic], and in a democratic culture, that inevitably becomes a cause of resentment, as Ms. Lafferty's comment attests. This resentment of competence was first made vivid to me when I appeared on CNN more than a year ago to discuss the textbook selection process in Texas. When I dismissed the argument that the textbook selection process should be democratic (which it isn't, though it pretends to be) on the grounds that competent educators, not political and religious groups, should vet textbooks, the CNN host, Anderson Cooper, cut me rather short: that reply clearly made him uncomfortable, and he changed the topic to how the selection process wasn't really democratic anyway.

Resentment of competence was also a motif suggested by my exchange with Professor Eastman--one of the ignorant law professors shilling for teaching creationist lies to schoolchildren--who used that favorite rhetorical device of the anti-Darwin crowd, referring to its "tyrannical orthodoxy". Unfortunately, as I noted on that occasion, views that are correct ought to be orthodox, and they ought to exercise the tyranny appropriate to truth, namely, a tyranny over falsehood and dishonesty.

But when truth and knowledge clash with deep-seated prejudices--especially those reinforced from the pulpit and in the public culture--resentment towards the arrogance of those with knowledge and competence grows.

Unfortunately, I don't see much room for compromise in this domain. Knowledge and competence can not become meek and abashed merely to avoid offending the vanity of the undereducated, the parochial, and the unworldly. The Enlightenment dream was to extend the blessings of reason and knowledge as widely as possible. In the United States, that Enlightenment project has been stymied: at the highest echelons of the culture, the material and institutional support for the pursuit of knowledge and competence is unparalleled, yet the fruits of these labors are often either regarded with suspicion and resentment in the public culture at large--or simply go unrecognized and unnoted altogether.

Could there be a greater failure of the Enlightenment project than that a huge majority of U.S. citizens actually believe there is an intellectual competition between Darwin's theory of evolution by natural selection and intelligent design creationism? Or that the President of the country publicly affirms their skepticism, without being held up for ridicule in the media and the public culture?

These are, for various reasons, scary times in [the United States of] America, but the increasingly brazen haughtiness of the purveyors of ignorance and lies--who cloak their backwardness in the judgmental rhetoric of "arrogance" and a none-too-subtle appeal to the "ordinary" person's sense of democratic equality--may be the most worrisome development of all. The spreading domain of the empire of ignorance portends calamities from which it could take centuries to heal.

Permanent link

Posted by Tony Marmo at 00:35 BST
Updated: Thursday, 15 July 2004 01:00 BST
Sunday, 11 July 2004

Purver and Ginzburg shed some light on the semantics of noun phrases from the perspective of the HPSG school, which I both respect and dissent from:

Clarifying Noun Phrase Semantics

Matthew Purver and Jonathan Ginzburg

Reprise questions are a common dialogue device allowing a conversational participant to request clarification of the meaning intended by a speaker when uttering a word or phrase. As such they can act as semantic probes, providing us with information about what meaning can be associated with word and phrase types and thus helping to sharpen the principle of compositionality. This paper discusses the evidence provided by reprise questions concerning the meaning of nouns, noun phrases and determiners. Our central claim is that reprise questions strongly suggest that quantified noun phrases denote (situation-dependent) individuals, or sets of individuals, rather than sets of sets or properties of properties. We outline a resulting analysis within the HPSG framework, and discuss its extension to such phenomena as quantifier scope, anaphora and monotone decreasing quantifiers.

Download link

Posted by Tony Marmo at 07:19 BST
Updated: Monday, 9 August 2004 08:12 BST
Saturday, 10 July 2004


Yoad Winter on Choice Functions

Winter's page has many papers, and his concerns include computational linguistics. It is worth checking. One of his recent works, Choice Functions and the Semantics of Indefinites, is a sort of advanced introduction to the issue.
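For readers new to the device: a choice function maps every nonempty set to one of its members, so an indefinite like 'a student' can be analysed as f(student) for some choice function f. A minimal sketch, my own illustration (using min as one concrete choice function):

```python
def f(s):
    """One concrete choice function: defined on every nonempty set of
    comparable items, it returns one of its members (here, the minimum)."""
    assert len(s) > 0, "choice functions apply only to nonempty sets"
    return min(s)

students = {'Ann', 'Bo', 'Kim'}
# 'A student smiled', wide-scope reading: smiled(f(students))
witness = f(students)
print(witness in students)  # True: the chosen witness is always a member
print(f({'Kim'}))           # 'Kim': on singletons all choice functions agree
```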

Methinks that choice functions can be used for almost anything in semantics. Hamblin approaches, according to what the more experienced folks told me, began with questions. Thenceforth, Kratzer and many others have applied them to the semantics of scope. But, for me, the obvious application of the Hamblin approach would firstly be binding/linking theory. It seems that there have already been some attempts to do so. (Anyone correct me if I'm wrong, please.)

To my dismay, however, people still insist on separating binding from control. Though I love syntax, I dislike a syntactic-configuration solution for binding and control. A choice-function solution is more agreeable to my intuitions.


One friend from This is not the name of the blog has a crucial question:


by Chris Tillman

I'm probably overlooking something obvious, but I was wondering if someone could help me out with this.

Uses of 'without' sometimes help express conjunctions with a negated conjunct, as in 'Al is going to the store without Mary going'. This should be symbolized as A & ~M. Sometimes it is used to express a conditional, as in 'Without going to the store, John will have nothing to eat for dinner.' Here is the sentence that is troubling me:

(S) Bill drinks without Harry drinking.

Should (S) be read as a conjunction, a conditional or neither? And if neither, then what?
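One way to see how the two candidate readings come apart is a toy truth table (my own illustration; the propositional letters are of course only a first approximation to whatever 'without' really means):

```python
from itertools import product

# B = 'Bill drinks', H = 'Harry drinks'
conjunction = lambda B, H: B and not H   # reading 1: B & ~H
conditional = lambda B, H: H or B        # reading 2: ~H -> B

print("B      H      B&~H   ~H->B")
for B, H in product([True, False], repeat=2):
    print(B, H, conjunction(B, H), conditional(B, H))
# The readings disagree exactly when Harry drinks: if both drink,
# the conjunction is false but the conditional is true.
```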

See it

Posted by Tony Marmo at 14:34 BST
Updated: Monday, 9 August 2004 08:13 BST
Friday, 9 July 2004

On contradictions

By Walter Carnielli
(Source: The Paraconsistency Webgroup)
Dear Friends,

Please see below some comments on Dick's views expressed in
"On contradictions".
I would like to encourage you all to participate in the
Paraconsistency discussion list of the WCP'2000, by subscribing
or just sending copies of our discussions to the list:


I agree with (what I think was) the conclusion of Fred's talk that we don't have any good arguments for the law of non-contradiction. It's too basic--either we accept it or we don't. Any argument for it we've seen or can imagine uses that law either explicitly or implicitly.

OK, but isn't the situation the same for many other laws too? For basic laws concerning natural numbers, for example? After all, when you start enumerating any kind of arguments about numbers, you are already using numbers. Or consider grammarians using established grammar to explain grammatical rules.

However, I now think that we do accept some contradictions as true in our daily lives.
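One formal home for that thought is Priest's Logic of Paradox (LP), where a sentence can be both true and false without the logic trivializing. A minimal sketch of LP's truth tables (my own illustration, not from Carnielli's message):

```python
# LP has three values, ordered f < b < t; both t and b are designated
# (they count as 'true enough' to be accepted).
ORDER = {'f': 0, 'b': 1, 't': 2}

def neg(a):
    return {'t': 'f', 'b': 'b', 'f': 't'}[a]

def conj(a, b):
    return min(a, b, key=ORDER.get)

def designated(a):
    return a in ('t', 'b')

# A 'glutty' sentence A, valued both true and false:
A = 'b'
print(designated(conj(A, neg(A))))      # True: A & ~A is accepted
# whereas a plainly true A yields an unacceptable contradiction:
print(designated(conj('t', neg('t'))))  # False
```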


Posted by Tony Marmo at 14:05 BST
Updated: Monday, 9 August 2004 08:14 BST



16th European Summer School in
Logic, Language and Information

Universite Henri Poincare
Nancy, France
9-20 August, 2004

Semantic approaches to binding theory

Binding Theory, which is concerned with sentence-internal constraints on anaphora, was originally (Chomsky 1981) conceived in syntactic terms as conditions on the distribution of indices:
Condition A: Anaphors are locally bound.
*John_i thinks that himself_i is clever.
Condition B: Pronominals are locally free.
*He_i likes him_i.
Condition C: R-expressions are free.
*He_i thinks that John_i is clever.

But other researchers have attempted to derive these constraints from lexical semantics or the interpretative procedure rather than the syntax. Some (e.g. Reinhart 1983, Heim 1993, Fox 2000, Buring 2002) add a semantic component to a syntactic core, but others are more radically semantic (e.g. works by Jacobson, Keenan, and more recently Barker & Shan and Butler, among others). The workshop will provide a forum to compare and assess these diverse proposals, as well as to present the results of recent linguistic work to non-linguists.
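For concreteness, the three conditions can be sketched as a toy checker over index annotations (my own simplification, not from the announcement: 'local' is approximated as clause-mate, and c-command as linear precedence, both of which the real theory refines):

```python
def check(nps):
    """Toy Binding Theory checker. Each NP is (kind, index, clause), with
    kind in {'anaphor', 'pronoun', 'r-expr'}. Locality is approximated
    as same-clause membership; c-command as linear precedence."""
    for i, (kind, idx, clause) in enumerate(nps):
        mates = [n for j, n in enumerate(nps) if j != i and n[2] == clause]
        locally_bound = any(n[1] == idx for n in mates)
        preceded = any(n[1] == idx for n in nps[:i])
        if kind == 'anaphor' and not locally_bound:
            return False  # Condition A: anaphors must be locally bound
        if kind == 'pronoun' and locally_bound:
            return False  # Condition B: pronominals must be locally free
        if kind == 'r-expr' and preceded:
            return False  # Condition C: R-expressions must be free
    return True

# *John_i thinks that himself_i is clever (anaphor not locally bound)
print(check([('r-expr', 'i', 0), ('anaphor', 'i', 1)]))  # False
# John_i likes himself_i (anaphor locally bound)
print(check([('r-expr', 'i', 0), ('anaphor', 'i', 0)]))  # True
```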

Note: ESSLLI is the annual summer school of FoLLI, the European Association for Logic, Language and Information.

Posted by Tony Marmo at 06:25 BST
Updated: Monday, 9 August 2004 08:16 BST
Tuesday, 6 July 2004


Jackendoff talk: semantics must be generative

by Nick

On Friday (4th) I heard Ray Jackendoff give the keynote lecture at a conference organised by the UCL Centre for Human Communication which my department (UCL Phonetics and Linguistics) is part of (in some way I don't understand).

What he said may not be news to anyone else, but I hadn't heard it, not having read any of his recent stuff, except the bits about music.

Broadly, he thinks that mainstream (i.e. Chomskyan) linguistics is on the wrong track in supposing that syntax is the only generative component needed in the grammar, so that phonology and semantics need only interpret the output of syntax.


Posted by Tony Marmo at 02:26 BST
Updated: Monday, 9 August 2004 08:17 BST
