LINGUISTIX&LOGIK, Tony Marmo's blog
Friday, 1 October 2004


A Logic of Interrogation Should Be Internalized in a Modal Logic for Knowledge

by Rani Nelken and Chung-chieh Shan
Source: Semantics Archive

We propose a new, modal interpretation of questions. The idea of interpreting questions via modal logic goes back to Hintikka (1976) and Åqvist (1965), who interpret a question as a request for knowledge: 'bring it about that I know whether . . .'. Such a request is composed of an imperative part and an epistemic part. Focusing on the latter, we interpret a question as the knowledge condition required in order to answer it completely. We will reduce the epistemic part of the meaning of both yes-no questions and wh-questions to statements of the form 'it is known whether . . .' or 'it is in the common ground that . . .'. For instance, for a yes-no question such as Is Alice quitting?, the meaning is 'it is known that Alice is quitting or it is known that Alice is not quitting'. Several different approaches have been suggested in linguistic semantics for modeling questions.

1. It is popular to follow Hamblin (1973) and Karttunen (1977) (hereafter HK)
in taking a question to denote its set of partial answers, or partial true answers. For instance, for the wh-question Who's quitting?, these would be answers of the form: Alice is quitting, or Alice and Bob are quitting.

2. Groenendijk and Stokhof (1984, 1996; hereafter GS) propose a more parsimonious approach in which the answers in the set are required to be complete and mutually exclusive, in other words, a partition of possible worlds in the space of logical possibilities. For the same question, these answers would be Nobody is quitting, Just Alice is quitting, etc.

3. In contrast to these firmly intensional question denotations, Nelken and Francez (2000, 2002; hereafter NF) propose an extensional interpretation. The meaning of the same question is r ('resolved') if it is known for every person in the domain whether he or she is quitting. Otherwise, it is ur ('unresolved').

In this paper, we bridge these theories and combine their advantages. We begin by presenting the basic approach in Section 2. In Section 3 we delve deeper into the denotation of questions. In particular, we address what has been the main criticism against similar approaches: how to deal with embedded questions. Our theory captures GS's prized entailment relations among questions and assertions (Section 4), while also enjoying an extensional semantics like NF's (Section 5) and NF's increased expressive power for complex questions (Section 6).
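The epistemic part of the proposal can be pictured with a small Kripke model. The sketch below is my own toy illustration, not the authors' formal system: `known_whether` encodes "it is known that p or it is known that not p" for the yes-no question *Is Alice quitting?*, and the model, valuation, and world names are invented for the example.

```python
worlds = {'w1', 'w2', 'w3'}
# valuation: is "Alice is quitting" true at each world?
quit_at = {'w1': True, 'w2': True, 'w3': False}
# epistemic accessibility: the worlds the agent cannot tell apart from w
R = {'w1': {'w1', 'w2'}, 'w2': {'w1', 'w2'}, 'w3': {'w2', 'w3'}}

def K(p, w):
    # K p holds at w iff p is true at every world accessible from w
    return all(p(v) for v in R[w])

def known_whether(p, w):
    # "it is known whether p": K p or K (not p)
    return K(p, w) or K(lambda v: not p(v), w)

p = lambda w: quit_at[w]
print([w for w in sorted(worlds) if known_whether(p, w)])
# → ['w1', 'w2']: at w3 the agent cannot distinguish a quitting world
# from a non-quitting one, so the question is not yet answered there
```

On this reading the question's meaning is precisely the knowledge condition that `known_whether` tests, which is what makes it reducible to epistemic statements.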


Posted by Tony Marmo at 01:01 BST
Updated: Friday, 1 October 2004 04:42 BST
Thursday, 30 September 2004


On the Form of Chains: Criterial Positions and ECP Effects

By Luigi Rizzi

It is widely recognized that natural language syntax makes an extensive use of movement: elements are typically pronounced in positions different from those in which they receive some of their interpretive properties.

Rizzi's paper starts with a discussion of the functional motivations of movement, and then connects this level of analysis to the study of the form of chains, with special reference to the A'-system and the formal principles that constrain possible chain configurations. The last part of the paper addresses another traditional research topic of A'-syntax: the subject-object asymmetries arising in A'-extraction. The system of principles proposed in the first part is shown to provide an alternative to the classical analysis in terms of the Empty Category Principle.

The first section addresses the issue of movement as "last resort" and discusses the implementation of the operating mechanisms. The second section proposes a characterisation of A'-chains as connecting an s-selection position (for arguments, a thematic position) to a criterial position, a position dedicated to the expression of some scope-discourse property (Chomsky 2001a-b) through a Criterion, in the sense of Rizzi (1991) and related work. These two positions are relevant for the interface with semantics and form the backbone of A'-chains. Sections 3-6 try to determine if and under what conditions other positions are allowed to occur in well-formed chains, in addition to the two interpretively relevant positions. In particular, empirical evidence is provided for a principle according to which criterial positions terminate chains:

a phrase meeting a criterion is frozen in place, and its chain cannot extend further (Criterial Freezing).

This principle makes sure that the chain will be assigned a unique scope-discourse property, basically in parallel with the assignment of a unique theta role, thus contributing to a parsimonious definition of chains as constituted by unique occurrences of the elementary ingredients.


Posted by Tony Marmo at 01:01 BST
Wednesday, 29 September 2004


What is a Correspondence Theory of Truth?

By Douglas Patterson
Source: Synthese

It is often thought that instances of the T-schema such as `` `snow is white' is true if and only if snow is white'' state correspondences between sentences and the world, and that therefore such sentences play a crucial role in correspondence theories of truth. I argue that this assumption trivializes the correspondence theory: even a disquotational theory of truth would be a correspondence theory on this conception. This discussion allows one to get clearer about what a correspondence theory does claim, and toward the end of the paper I discuss what a true correspondence theory of truth would involve.
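Patterson's triviality point can be made concrete with a toy disquotational truth predicate. The example below is my own, not the paper's: truth defined by pure disquotation validates every T-schema instance while invoking no substantive correspondence between sentence structure and worldly structure.

```python
# The "world" represented as a bare record of facts; truth for a quoted
# sentence is then defined by disquotation alone.
facts = {'snow is white': True, 'grass is red': False}

def true(quoted):
    # disquotation: "'S' is true iff S"
    return facts[quoted]

# Every T-schema instance holds, yet nothing resembling a correspondence
# relation between sentences and the world has been used.
assert true('snow is white') == facts['snow is white']
assert not true('grass is red')
```

If even this predicate counts as stating "correspondences", then the correspondence theory is trivialized, which is exactly the worry the abstract raises.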


Posted by Tony Marmo at 01:01 BST
Updated: Monday, 27 September 2004 18:11 BST


Internally-Headed Relatives Instantiate Situation Subordination

by Min-Joo Kim
Source: Semantics Archive

Korean is one of the languages that have the Internally Headed Relative Clause (IHRC) construction, in addition to the more familiar Externally Headed Relative Clause (EHRC) construction. IHRCs in Korean are gapless, as the semantic head noun is contained inside, and they are always followed by the grammatical element kes, which is
best analyzed as a pronoun (see C. Chan and J. Kim 2003, M. Kim to appear, among others). (...)

The IHRC construction in Korean provides us with a unique opportunity to investigate the principles that govern the mapping between syntax and semantics, as there appear to be discrepancies between its form and meaning (Ohara 1993, Y. Kim 2002).
First, although an IHRC is located inside a DP, it is interpreted like an independent sentence, as the English translation for (2) suggests. Second, the semantic head is buried inside the IHRC, but it is interpreted in such a way that it seems to serve as an argument of the embedding predicate; for example, in (2), what John caught was a thief.

In this paper, I propose a way of resolving these syntax-semantics mismatches. I account for the mismatch exhibited by an IHRC by motivating an LF movement of the IHRC: I propose that the RC is a generalized quantifier that operates in the eventuality domain and hence is interpreted in a position higher than its surface position by combining with an event-level denotation of the embedding clause. To solve the other
mismatch problem, I propose that the semantic head appears to function as an argument of the embedding predicate because it is indirectly but formally linked to the pronoun kes.


Posted by Tony Marmo at 01:01 BST
Updated: Saturday, 25 September 2004 07:27 BST
Tuesday, 28 September 2004


Context-dependence and quantifier domains

by Orin Percus
Source: Semantics Archive

These are my lecture notes (i.e. revised handouts) for the EGG 2004 class Covert Variables at LF, co-taught with Luisa Martí. Their general concern is with the phenomenon of context-dependence. Frequently, when a speaker utters a sentence, the pronounced words and the way they are put together do not fully determine our intuitions of when what the speaker said would be true. We see this clearly when, on different occasions on which the same sentence is uttered, we have different intuitions about what it would mean for the speaker to have said something true. The issue is how best to treat context-dependence within a theory of how sentences get interpreted. These notes specifically address a kind of context-dependence that surfaces in sentences with quantifiers: the pronounced words do not fully determine what the domain of quantification is.
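The quantifier case can be sketched with a covert domain variable in the style the notes discuss. The toy model below is my own (the sets, names, and scenario are invented): the pronounced words fix the quantifier and its restrictor, while context supplies an extra domain C that further restricts it.

```python
# "Every bottle is empty" uttered in two different contexts.
bottles = {'b1', 'b2', 'b3', 'b4'}
empty = {'b1', 'b2'}

def every(restrictor, domain, scope):
    # "Every [restrictor ∩ C] is [scope]", with C the contextual domain
    return all(x in scope for x in restrictor & domain)

context_A = {'b1', 'b2'}   # e.g. only the bottles from tonight's party
context_B = bottles        # all bottles whatsoever

print(every(bottles, context_A, empty))  # True: the salient bottles are empty
print(every(bottles, context_B, empty))  # False: b3 and b4 are not
```

The same pronounced sentence comes out true in one context and false in the other, which is the intuition the notes set out to explain.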

The notes rely heavily on Jason Stanley and Zoltán Szabó's 2000 Mind and Language article "On Quantifier Domain Restriction" and on Kai von Fintel's 1998 web-accessible notes on the same topic, and to some degree (Section 1 particularly) on François Recanati's 2003 book Literal Meaning. They are NOTES, and not all examples are attributed to their proper sources, so please check with me before citing anything as mine. Also, please let me know of any serious mistakes that you find. I will try to revise the notes accordingly.


Posted by Tony Marmo at 22:27 BST
Updated: Wednesday, 29 September 2004 10:24 BST


See related previous post

Why Knowledge is Unnecessary for Understanding Language

Dean Pettit

It is a natural thought that understanding language consists in possessing knowledge: to understand a word is to know what it means. It is also natural to suppose that this knowledge is propositional knowledge: to know what a word means is to know that it means such-and-such. Thus it is prima facie plausible to suppose that understanding a bit of language consists in possessing propositional knowledge of its meaning. I refer to this as the epistemic view of understanding language. The theoretical appeal of this view for the philosophy of language is that it provides for an attractive account of the project of the theory of meaning. If understanding language consists in possessing propositional knowledge of the meanings of expressions, then a meaning theory amounts to a theory of what speakers know in virtue of understanding language. In this paper I argue that, despite its intuitive and theoretical appeal, the epistemic view is false. Propositional knowledge is not necessary for understanding language, not even tacit knowledge. Unlike knowledge, I argue, linguistic understanding does not fail in Gettier cases, does not require epistemic warrant, and does not even require belief. The intuitions about knowledge that have been central to epistemology do not seem to hold for linguistic understanding. So unless epistemologists have been radically mistaken about what knowledge requires, knowledge is unnecessary for understanding language.

Download for Mind subscribers

Posted by Tony Marmo at 01:01 BST
Updated: Tuesday, 28 September 2004 01:22 BST


Three Types of Kes-Nominalization in Korean

by Min-Joo Kim
Source: Semantics Archive

In Korean, the Internally Headed Relative Clause Construction (IHRC), illustrated in (1), the Direct Perception Construction (DPC), illustrated in (2), and the factive Propositional Attitude Construction (PAC), illustrated in (3), appear to have an identical syntactic structure: the complements of the verbs consist of clausal material and the grammatical element kes (Kim 1984, Jhang 1994, Chung 1999, Chung and Kim 2003).
Though these constructions look alike, they differ fundamentally in their interpretations. In the IHRC, the complement denotes an entity, in the DPC it denotes an eventuality, and in the factive PAC it denotes a fact. These differences trace back to the semantics of the embedding predicates, which we can therefore isolate as a defining property of each construction. In this paper, I investigate how these three constructions are similar and how they are dissimilar. I seek to establish that the factive PAC differs sharply from the other two kes-constructions and that there is also a subtle difference between the latter two. I propose that the three constructions behave differently because they describe different semantic relations: the factive PAC describes a part-whole relation between two sets of worlds, whereas the IHRC and the DPC describe relations between two sets of eventualities. But the IHRC and the DPC also differ in that while the former describes an intersection relation, the latter describes a part-whole relation between two sets of eventualities.
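The two relations the abstract contrasts can be pictured set-theoretically. The sketch below is my own illustration with invented toy sets, not Kim's formalism: a part-whole relation is subsethood, while an intersection relation only requires a non-empty overlap.

```python
def part_whole(A, B):
    # every member of A is in B (A is "part" of the "whole" B)
    return A <= B

def intersects(A, B):
    # A and B share at least one member
    return bool(A & B)

# Factive PAC (on this picture): described worlds ⊆ presupposed worlds.
described = {1, 2}
presupposed = {1, 2, 3}
print(part_whole(described, presupposed))   # True

# IHRC (on this picture): the two eventuality sets merely overlap.
ev_relative = {10, 11}
ev_matrix = {11, 12}
print(intersects(ev_relative, ev_matrix))   # True
print(part_whole(ev_relative, ev_matrix))   # False: overlap without subsethood
```

Subsethood entails overlap (for non-empty sets) but not conversely, which is what makes the two relations genuinely distinct diagnostics.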


Posted by Tony Marmo at 01:01 BST
Updated: Saturday, 25 September 2004 06:24 BST
Monday, 27 September 2004

Topic: Cognition & Epistemology

Tracking the Real: Through Thick and Thin

by Stathis Psillos

In this paper, I examine Azzouni's tracking requirement and its use as a normative constraint on theories about objects which we take as real. I focus on what he calls 'thick epistemic access' and argue that there is a logical-conceptual sense in which thick access to the real presupposes thin access to it. Then, I move on to advance an alternative-Sellarsian-way to ontic commitment and show that (a) it is better than Azzouni's, and (b) it can accommodate thick epistemic access as a bonus. Finally, I try to defend the Quinean theoretical virtues against some of Azzouni's objections.

Download doc file

Posted by Tony Marmo at 01:01 BST
Updated: Saturday, 25 September 2004 17:04 BST

Given the sentence {1} below:
{1} During refections a llama dawdles less than an aardvark.

A user of English should be able to understand it, at least in part, even if he does not know what a llama or an aardvark is, or what 'to dawdle' and 'refection' mean. Or shouldn't he?

See related post

Linguistic Understanding and Belief

Discussion of Dean Pettit's Why Knowledge is Unnecessary for Understanding Language, Mind 111

by Steven Gross

In a recent paper, Dean Pettit argues against the view that understanding a bit of language consists in the possession of propositional knowledge of its meaning, what he labels the epistemic view of linguistic understanding. His objection to the epistemic view is that it entails that it is necessary, for understanding a bit of language, that one possess propositional knowledge of its meaning, but this necessity claim is false: linguistic understanding, unlike knowledge, does not fail in Gettier cases, does not require epistemic warrant, and does not require belief; in supplying cases demonstrating this, one supplies cases of linguistic understanding without such knowledge. Pettit's arguments, if successful, thus establish the falsity of a variety of weaker claims in addition to that of the epistemic view. They would establish, for example, the falsity of necessity claims as weak as the weakest modality allowing the possibility of his cases. Further, the third argument, that linguistic understanding does not require belief that the bit of language means such-and-such, establishes, if successful, the falsity of parallel claims concerning belief. Given the case Pettit appeals to in his third argument, this would include the falsity of the claim that belief about an expression's meaning is nomologically necessary for a human speaker to possess linguistic understanding of it. Pettit's case thus poses a challenge to prominent empirical accounts of semantic competence that advert to states with such propositional content.


Posted by Tony Marmo at 01:01 BST
Updated: Tuesday, 28 September 2004 01:21 BST
Sunday, 26 September 2004


On the Notion of Substitution

Marcel Crabbe

We consider a concept of substitutive structure, called "logos", in order to study simple substitution, independently of formal or programming languages. We provide a definition of simultaneous substitution in an arbitrary logos and use it to prove a completeness theorem expressing that the equational properties of the usual substitution can be proved from the logos axioms only.
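For concreteness, here is an illustrative implementation of simultaneous substitution on first-order terms. It is my own sketch under invented representations; the paper itself works abstractly with logos axioms rather than concrete terms. The classic contrast with sequential substitution shows why simultaneity matters:

```python
# A term is a variable name (str) or a pair (function_symbol, [subterms]).

def subst(term, sigma):
    """Apply the substitution sigma (a dict var -> term) simultaneously."""
    if isinstance(term, str):                  # a variable
        return sigma.get(term, term)
    f, args = term
    return (f, [subst(a, sigma) for a in args])

t = ('f', ['x', ('g', ['y'])])
# Simultaneous {x -> y, y -> x} swaps the variables in one pass...
print(subst(t, {'x': 'y', 'y': 'x'}))            # ('f', ['y', ('g', ['x'])])
# ...whereas applying x -> y and then y -> x collapses both variables to x.
print(subst(subst(t, {'x': 'y'}), {'y': 'x'}))   # ('f', ['x', ('g', ['x'])])
```

Equational properties of exactly this kind of operation are what the completeness theorem shows to be derivable from the logos axioms alone.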


Posted by Tony Marmo at 01:01 BST
Updated: Saturday, 25 September 2004 07:25 BST


Logics of Imperfect Information

by Gabriel Sandu
Source: CLE

The paper contains a survey of results and interpretations of incomplete information in predicate and modal logics.


Posted by Tony Marmo at 01:01 BST
Updated: Friday, 24 September 2004 03:57 BST
Saturday, 25 September 2004


Phase structure, Phrase structure, and Quantification

by Jonny Butler
Source: Semantics Archive

I combine two areas of investigation in current syntactic literature:

(1) the structure of the clause contains layers of hierarchically ordered quantificational heads, situated above the temporal and verbal fields (Beghelli & Stowell 1997);
(2) the clause is built in phases -- subclausal building blocks with parallel properties.

I claim these two threads should be tied together, in that phases should be defined in terms of such quantificational layers. Precisely, I claim a phase consists of some property denoting head H, topped by some associated `little' head(s) h (as v over V) -- this, the phase domain -- surmounted by a CP layer -- the phase edge -- that encodes quantificational information to close off domain-internal variables. This derives a V phase, C > v > V (corresponding to the standard vP phase); a T phase, C > t > T, (the standard CP phase); also an N phase (DP/QP), C > n > N, where C = D. Treating phases in these terms derives the major facets of orthodox phase theory, but in a much more elegant, less stipulative way.

Cyclicity, for example, is captured as an expected property of the system, rather than by the stipulated Phase Impenetrability Condition of Chomsky (2000). Evidence for this reinterpretation of phase theory is adduced from:

(1) The interpretation of QPs, treated uniformly like Heim (1982) indefinites: any QP introduces a restricted variable, subject to closure by intra-clausal quantificational heads analogous to Heim's ∃-closure operator.
(2) The structure and interpretation of temporal predicates, T/Perf/Prog, treated as embedding situation-denoting phases as internal argument, and introducing a situation variable, closed off by their own CP, as external argument.
(3) The interpretation and scope behaviour of modality, defined as quantification over possible situations: syntactically expressed as CP-/edge-level quantificational
heads operating over domain-level situation variables.


Posted by Tony Marmo at 17:24 BST



by Hana Filip
Source: Semantics Archive

In this paper I explore the function of prefixes as verb-internal operators that have distinct semantic effects on the interpretation of nominal arguments. I will focus on the Russian prefix na-, used in its cumulative sense of approximately a {relatively/sufficiently/exceedingly} large quantity (of), and to a lesser extent on its converse, namely, the delimitative/attenuative po-. Such prefixes have one notable and neglected property: namely, they systematically require that nominal arguments targeted by them have a non-specific indefinite interpretation, regardless of whether the verb they form is perfective or imperfective. I will argue that the semantics of such prefixes is to be assimilated to that of measure phrases, and I propose an additional novel role for them: namely, as morphological markers of a particular mode of composition that is available for semantically incomplete nominal arguments with a non-specific indefinite interpretation. If this analysis is correct, then it precludes measure prefixes in Slavic languages from being analyzed as overt morphological exponents of the perfective operator, contrary to the majority of current analyses, which take this to be the main or the only function of Slavic prefixes as a whole class. Instead, this analysis enforces the view that measure prefixes function as modifiers of eventuality types expressed by `aspectless' verbal predicates.


Posted by Tony Marmo at 07:24 BST
Updated: Saturday, 25 September 2004 07:26 BST
Friday, 24 September 2004



by Min-Joo Kim
Source: Semantics Archive

This dissertation investigates how syntactic, semantic, and pragmatic factors interact to produce the Internally-Headed Relative Clause (IHRC) construction in Korean and Japanese. The IHRC construction differs from the more familiar Externally-Headed Relative Clause (EHRC) construction in several ways. First, unlike an EHRC, an IHRC's content restricts the content of the matrix clause rather than that of the semantic head. Second, its interpretation is heavily influenced by the discourse context in ways not seen with the EHRC. Third, unlike the head of an EHRC, the head of an IHRC does not correspond to any overt syntactic phrase and hence needs to be determined by language users based on the relative clause's content, the matrix predicate's semantics, and the discourse context.

The literature offers an abundance of sensitive analyses of the IHRC construction, but it leaves two central questions unanswered: what determines the interpretation of the construction? And, if pragmatic principles play a role, how do they interact with the morphosyntax and the semantics? I answer these questions with an event-based semantic analysis. I show that the construction's interpretation is determined partly by grammatical factors (e.g., the embedded clause's aspect and the matrix predicate's semantics) and partly by pragmatic factors (the discourse context and the discourse participants' world knowledge). In particular, I isolate two sources of the semantic variability of the construction.

First, the matrix clause contains a pronominal definite description, whose denotation contains a free relation variable. The value of this variable is determined by the embedded clause's event structure, the matrix predicate's semantics, and the discourse context.

Second, the relative operator that occurs in this construction connects the content of the embedded clause with that of the matrix clause, establishing either a temporal or a causal relation between them, depending on whether the embedded clause describes a temporary state or a permanent state.

This study establishes important connections between the semantics of a definite description and event structure, thereby solving a particularly challenging formal-linking problem, one that afflicts existing E-type pronoun analyses of the IHRC construction. In addition, it provides a constrained but flexible interpretive mechanism for the construction, eliminating the need for many of the extra-grammatical constraints that characterize existing treatments.


Posted by Tony Marmo at 22:52 BST
Updated: Friday, 24 September 2004 22:56 BST
Thursday, 23 September 2004


Combining possibility and knowledge

By Alexandre Costa-Leite
Source: CLE

This paper is an attempt to define a new modality with philosophical interest by combining the basic modal ingredients of possibility and knowledge. This combination is realized via product of modal frames so as to construct a knowability modality, which is a bidimensional constructor of arity one defined in a two-dimensional modal frame. A semantical interpretation for the operator is proposed, as well as an axiomatic system able to account for inferences related to this new modality. The resulting logic for knowability LK is shown to be sound and complete with respect to its class of modal-epistemic product models.
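The frame-product construction can be pictured in miniature. The sketch below is my own toy rendering, not Costa-Leite's system LK, and all the frame data are invented: points of the product are pairs (u, v), the alethic relation moves the first coordinate, the epistemic relation the second, and "p is knowable" is read as possibly-known p.

```python
U = {'u1', 'u2'}                       # alethic dimension
V = {'v1', 'v2'}                       # epistemic dimension
R_poss = {('u1', 'u2'), ('u2', 'u2')}  # possibility relation on U
R_epis = {('v1', 'v1'), ('v2', 'v2')}  # knowledge relation on V
points = {(u, v) for u in U for v in V}

p = {('u2', 'v1'), ('u2', 'v2')}       # the proposition p as a set of points

def K(prop, point):
    # K prop: prop holds at every epistemic alternative (second coordinate)
    u, v = point
    return all((u, b) in prop for (a, b) in R_epis if a == v)

def Poss(prop, point):
    # possibly prop: prop holds at some alethic alternative (first coordinate)
    u, v = point
    return any((b, v) in prop for (a, b) in R_poss if a == u)

def knowable(prop, point):
    # the composite knowability modality: possibly (known prop)
    return Poss({q for q in points if K(prop, q)}, point)

print(knowable(p, ('u1', 'v1')))   # True: from u1 one can reach u2, where p is known
```

Each coordinate keeps its own accessibility relation, which is what makes the knowability operator a genuinely two-dimensional constructor rather than a stack of two one-dimensional modalities over a single frame.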


Posted by Tony Marmo at 01:01 BST
Updated: Thursday, 23 September 2004 05:12 BST
