LINGUISTIX&LOGIK, Tony Marmo's blog
Monday, 2 August 2004

Topic: HUMAN SEMANTICS

Constructions and Formal Semantics


by Marc Moffett
Source: Close Range June 27, 2004


I have been arguing, for instance in my dissertation, that the correctness of Construction Grammar is pretty much uncontroversial. The point, basically, is that no one has ever proposed a semantic theory for even a simple language that doesn't assume the existence of at least one linguistic construction, usually the subject-predicate construction. So in my view those guys over in Berkeley (and their followers) are on pretty solid ground. (The only way I can see to avoid this conclusion is to argue that predication, or function-application, isn't part of the semantics, but something extra.)

Unfortunately, in virtue of not taking explicit account of the role of constructions in their philosophical semantics, philosophers of language (and linguistic semanticists) have been led to what are, in my estimation, very implausible linguistic theses. My personal bugbear is the doctrine of logical forms, construed as a linguistic thesis. I want to be clear here that, although I am not convinced of the need for a level LF in syntax, that notion of logical form is far too weak to do the sort of work required by the sorts of robust semantic analyses posited these days. (Think, for instance, of the neo-Davidsonian analysis of eventive sentences!) In order to accommodate these robust semantic analyses, the underlying logical forms would have to be vastly more complex than can reasonably be motivated on purely syntactic grounds.
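To see what is at stake, recall the standard neo-Davidsonian treatment of a simple eventive sentence (a textbook illustration, not Moffett's own example): "Brutus stabbed Caesar violently" is analysed as

∃e [stabbing(e) ∧ Agent(e, Brutus) ∧ Patient(e, Caesar) ∧ violent(e) ∧ past(e)]

where the event variable, the thematic-role predicates and the adverbial conjunct all have to be supplied by the analysis, even though none of them corresponds to an overt constituent of the sentence.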

So why have so many philosophers been suckered into accepting them? I'm not sure, but I wonder if it doesn't arise in part from an implicit acceptance of the Fregean view of the language-proposition relation. According to Frege (or, at least, Dummett's Frege), our only cognitive access to propositions is via the linguistic structure of the sentences that express them. If Frege's Thesis is correct, then the need for a robust semantics will require a correspondingly complex underlying linguistic structure.

[It is also worth considering, in this quasi-historical context, whether or not Russell's notion of contextual definition and the associated doctrine of "incomplete symbols" doesn't mark out an inchoate construction-based theory of language.]


Comments


Jason Stanley

Your question should be -- why have so many *linguists* been suckered into accepting logical forms with rich covert syntactic structures? Once the point is put in this more adequate manner, it becomes clear you're being more than a little dogmatic.
Those philosophers who do accept rich logical forms do so because, in taking syntax classes for many years, we've been introduced to the notion of a rich logical form with lots of covert structure (is Richard Larson in a philosophy department? Is Chomsky in a philosophy department? Pesetsky?). Robert May's book on logical form in the 1980s had a big impact on syntax and semantics, and many of us who started doing linguistics then were doing GB, and read that book. Minimalist syntax makes different assumptions than GB, and seeks to explain different evidence. But, if anything, it postulates much more covert structure.
In my experience, it's *philosophers* who are reluctant to buy linguistic arguments for covert structures.

Part of the problem has to do with what's meant by "purely syntactic grounds". If what you mean is, on the basis of judgements of grammaticality and ungrammaticality alone, then that is simply an overly simplistic conception of "purely syntactic grounds". For example, we distinguish bound vs. free readings of pronouns not on the grounds of grammaticality, but on the grounds that they give rise to different readings. We appeal to different potential attachment sites of modifiers as arguments for underlying constituent structures. And so on -- so your post assumes some conception of "purely syntactic grounds" that is overly philosophical in nature.

Posted by Tony Marmo at 17:15 BST
Updated: Monday, 9 August 2004 07:38 BST
Wednesday, 28 July 2004
TIME-SPACE AND SYNTAX-SEMANTICS
Topic: HUMAN SEMANTICS

RECOMMENDED BOOKS:



BOOK #1

TIME, TENSE AND REFERENCE


Edited by Aleksandar Jokic & Quentin Smith


Among the many branches of philosophy, the philosophy of time and the philosophy of language are more intimately interconnected than most, yet their practitioners have long pursued independent paths. This book helps to bridge the gap between the two groups. As it makes clear, it is increasingly difficult to do philosophy of language without any metaphysical commitments as to the nature of time, and it is equally difficult to resolve the metaphysical question of whether time is tensed or tenseless independently of the philosophy of language. Indeed, one is tempted to see philosophy of language and metaphysics as a continuum with no sharp boundary.

The essays, which were written expressly for this book by leading philosophers of language and philosophers of time, discuss the philosophy of language and its implications for the philosophy of time and vice versa. The intention is not only to further dialogue between philosophers of language and of time but also to present new theories to advance the state of knowledge in the two fields. The essays are organized in two sections -- one on the philosophy of tensed language, the other on the metaphysics of time.


ALSO RECOMMENDED: VARIETIES OF MEANING
The 2002 Jean Nicod Lectures
By Ruth Garrett Millikan


Many different things are said to have meaning: people mean to do various things; tools and other artifacts are meant for various things; people mean various things by using words and sentences; natural signs mean things; representations in people's minds also presumably mean things. In Varieties of Meaning, Ruth Garrett Millikan argues that these different kinds of meaning can be understood only in relation to each other.

What does meaning in the sense of purpose (when something is said to be meant for something) have to do with meaning in the sense of representing or signifying? Millikan argues that explicit human purposes, explicit human intentions, are represented purposes. They do not merely represent purposes; they possess the purposes that they represent. She argues further that things that signify, intentional signs such as sentences, are distinguished from natural signs by having purpose essentially; therefore, unlike natural signs, intentional signs can misrepresent or be false.

Part I discusses "Purposes and Cross-Purposes" -- what purposes are, the purposes of people, of their behaviors, of their body parts, of their artifacts, and of the signs they use. Part II then describes a previously unrecognized kind of natural sign, "locally recurrent" natural signs, and several varieties of intentional signs, and discusses the ways in which representations themselves are represented. Part III offers a novel interpretation of the way language is understood and of the relation between semantics and pragmatics. Part IV discusses perception and thought, exploring stages in the development of inner representations, from the simplest organisms whose behavior is governed by perception-action cycles to the perceptions and intentional attitudes of humans.


link

BOOK #2

THE SYNTAX OF TIME


Edited by Jacqueline Gueron and Jacqueline Lecarme


Any analysis of the syntax of time is based on a paradox: it must include a syntax-based theory of both tense construal and event construal. Yet while time is unidimensional, events have a complex spatiotemporal structure that reflects their human participants. How can an event be flattened to fit into the linear time axis?
Chomsky's The Minimalist Program, published in 1995, offers a way to address this problem. The studies collected in The Syntax of Time investigate whether problems concerning the construal of tense and aspect can be reduced to syntactic problems for which the basic mechanism and principles of generative grammar already provide solutions.

These studies, recent work by leading international scholars in the field, offer varied perspectives on the syntax of tense and the temporal construal of events: models of tense interpretation, construal of verbal forms, temporal aspect versus lexical aspect, the relation between the event and its argument structure, and the interaction of case with aktionsart or tense construal. Advances in the theory of temporal interpretation in the sentence are also applied to the temporal interpretation of nominals.


link

Posted by Tony Marmo at 13:01 BST
Updated: Monday, 9 August 2004 07:44 BST

Topic: Cognition & Epistemology

Contrastivism and Hawthorne's principle of practical reasoning


by Jon Kvanvig
Source: Certain Doubts 7/26/2004


Contrastivism holds that the truth makers for knowledge attributions always involve a contrast, and Hawthorne thinks that if you know something, you are entitled to use it in practical reasoning. So one way to test what is known is to see what kinds of practical reasoning we'll allow as acceptable.

Depending on what the contrast is, contrastive knowledge may be easy or hard to have. So, it is easier to know "the train will be on time rather than a day late" than it is to know "the train will be on time rather than 2 minutes late." One way to put the difference is that one is presupposing more in knowing the first claim than one is in knowing the second.
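In the notation often used in the contrastivist literature (my gloss, not Kvanvig's), knowledge is a ternary relation, K(s, p, q), read "s knows that p rather than q"; widening or narrowing the contrast proposition q is what makes the knowledge easier or harder to have, whereas Hawthorne's principle is naturally stated for the binary K(s, p).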

Consider then a piece of practical reasoning using the following conditional: if you are pointing a gun at me, and if your gun is loaded and if you intend to shoot me, I should shoot you first. Suppose I know that you are pointing a gun at me rather than a twig, and that I know that your gun is loaded rather than having just been disassembled for cleaning, and suppose I know that you intend to shoot me rather than give me a million bucks. Should I shoot you? Maybe this is an anti-gun sentiment coming out, but I think it is far from obvious that I should.

Compare this case to another. In this case, I know that you are pointing a gun at me rather than any non-lethal item, and I know that your gun is loaded rather than merely having the appearance of being loaded from where I stand, and I know that you intend to shoot me rather than anyone else in the universe. Now I think I should shoot you first.

Why the difference? In contrastivist language, I'm presupposing too much in the first case, I think. The knowledge I have is easy knowledge because it presupposes so much. I'm presupposing that the thing in your hand is either a gun or a twig, that it's either loaded or disassembled, that you intend to shoot me or make me rich. In the context of these assumptions, it's too easy to come to the conclusion that I should shoot first and ask questions later. In the second case, however, my presuppositions are much broader, broad enough that my knowledge is no longer easy. And since my knowledge is not easy, I doubt I could be faulted on grounds of rationality for shooting first.

It appears, then, that contrastivists will have to deny Hawthorne's principle. Moreover, I don't see any obvious way to qualify the principle, for the following reason. If the action is relatively inconsequential, then easy knowledge may be enough to warrant performing the action. But if the action is immensely significant, as it is in the case of taking a life, then easy knowledge doesn't seem to be enough.

One way to think about such cases is that they may provide a reason for including pragmatic issues in one's account, either indirectly as contextualists typically do or directly as we find in the invariantist camp. Or maybe a reason for rejecting Hawthorne's principle?

Posted by Tony Marmo at 12:28 BST
Updated: Monday, 9 August 2004 07:45 BST

Topic: SCIENCE & NEWS

John Passmore


1914 – 2004
Source: Philosophy Program of the
Research School of Social Sciences
of the Australian National University


John Passmore was Reader in Philosophy at the Research School of Social Sciences, ANU, from 1955 until 1957, and Professor of Philosophy from 1958 until his retirement in 1979. He was Head of the Philosophy Program from 1962 until 1976. Passmore's book A Hundred Years of Philosophy was recognised as a major feat of philosophical scholarship throughout the international philosophical community.

It was followed by influential books on a whole range of issues, including Man's Responsibility for Nature, one of the first books on the philosophical issues raised by the environmental movement. Passmore was one of the very first to give shape to what is now, under his influence, called 'applied philosophy.' His many books have been translated into a wide variety of languages. He remains a major figure in the history of ideas. In recognition of his service to education, Passmore was made a Companion in the General Division of the Order of Australia in 1992. The first volume of his autobiography, Memoirs of a Semi-detached Australian, was published by Melbourne University Press in 1997.

He was Emeritus Professor of Philosophy and Visiting Fellow in the History Program at RSSS, and died in Canberra on Sunday.


Obituary


Posted by Tony Marmo at 05:48 BST
Updated: Monday, 9 August 2004 07:46 BST
Tuesday, 27 July 2004
A NEW VERSION OF RADICAL MENTALISM?
Topic: SCIENCE & NEWS

Be warned, this could be the matrix


Source: The Sydney Morning Herald, July 22, 2004

The multiverse theory has spawned another - that our universe is a simulation, writes Paul Davies.


If you've ever thought life was actually a dream, take comfort. Some pretty distinguished scientists may agree with you. Philosophers have long questioned whether there is in fact a real world out there, or whether "reality" is just a figment of our imagination.

Then along came the quantum physicists, who unveiled an Alice-in-Wonderland realm of atomic uncertainty, where particles can be waves and solid objects dissolve away into ghostly patterns of quantum energy.

Now cosmologists have got in on the act, suggesting that what we perceive as the universe might in fact be nothing more than a gigantic simulation.

The story behind this bizarre suggestion began with a vexatious question: why is the universe so bio-friendly? Cosmologists have long been perplexed by the fact that the laws of nature seem to be cunningly concocted to enable life to emerge. Take the element carbon, the vital stuff that is the basis of all life. It wasn't made in the big bang that gave birth to the universe. Instead, carbon has been cooked in the innards of giant stars, which then exploded and spewed soot around the universe.

The process that generates carbon is a delicate nuclear reaction. It turns out that the whole chain of events is a damned close run thing, to paraphrase Lord Wellington. If the force that holds atomic nuclei together were just a tiny bit stronger or a tiny bit weaker, the reaction wouldn't work properly and life may never have happened.

The late British astronomer Fred Hoyle was so struck by the coincidence that the nuclear force possessed just the right strength to make beings like Fred Hoyle, he proclaimed the universe to be "a put-up job". Since this sounds a bit too much like divine providence, cosmologists have been scrambling to find a scientific answer to the conundrum of cosmic bio-friendliness.

The one they have come up with is multiple universes, or "the multiverse". This theory says that what we have been calling "the universe" is nothing of the sort. Rather, it is an infinitesimal fragment of a much grander and more elaborate system in which our cosmic region, vast though it is, represents but a single bubble of space amid a countless number of other bubbles, or pocket universes.

Things get interesting when the multiverse theory is combined with ideas from sub-atomic particle physics. Evidence is mounting that what physicists took to be God-given unshakeable laws may be more like local by-laws, valid in our particular cosmic patch, but different in other pocket universes. Travel a trillion light years beyond the Andromeda galaxy, and you might find yourself in a universe where gravity is a bit stronger or electrons a bit heavier.

The vast majority of these other universes will not have the necessary fine-tuned coincidences needed for life to emerge; they are sterile and so go unseen. Only in Goldilocks universes like ours where things have fallen out just right, purely by accident, will sentient beings arise to be amazed at how ingeniously bio-friendly their universe is.

It's a pretty neat idea, and very popular with scientists. But it carries a bizarre implication. Because the total number of pocket universes is unlimited, there are bound to be at least some that are not only inhabited, but populated by advanced civilisations - technological communities with enough computer power to create artificial consciousness. Indeed, some computer scientists think our technology may be on the verge of achieving thinking machines.

It is but a small step from creating artificial minds in a machine, to simulating entire virtual worlds for the simulated beings to inhabit. This scenario has become familiar since it was popularised in The Matrix movies.

Now some scientists are suggesting it should be taken seriously. "We may be a simulation ... creations of some supreme, or super-being," muses Britain's astronomer royal, Sir Martin Rees, a staunch advocate of the multiverse theory. He wonders whether the entire physical universe might be an exercise in virtual reality, so that "we're in the matrix rather than the physics itself".

Is there any justification for believing this wacky idea? You bet, says Nick Bostrom, a philosopher at Oxford University, who even has a website devoted to the topic ( http://www.simulation-argument.com ). "Because their computers are so powerful, they could run a great many simulations," he writes in The Philosophical Quarterly .

So if there exist civilisations with cosmic simulating ability, then the fake universes they create would rapidly proliferate to outnumber the real ones. After all, virtual reality is a lot cheaper than the real thing. So by simple statistics, a random observer like you or me is most probably a simulated being in a fake world. And viewed from inside the matrix, we could never tell the difference.

Or could we? John Barrow, a colleague of Martin Rees at Cambridge University, wonders whether the simulators would go to the trouble and expense of making the virtual reality foolproof. Perhaps if we look closely enough we might catch the scenery wobbling.

He even suggests that a glitch in our simulated cosmic history may have already been discovered, by John Webb at the University of NSW. Webb has analysed the light from distant quasars, and found that something funny happened about 6 billion years ago - a minute shift in the speed of light. Could this be the simulators taking their eye off the ball?

I have to confess to being partly responsible for this mischief. Last year I wrote an item for The New York Times, saying that once the multiverse genie was let out of the bottle, Matrix-like scenarios inexorably follow. My conclusion was that perhaps we should retain a healthy scepticism for the multiverse concept until this was sorted out. But far from being a dampener on the theory, it only served to boost enthusiasm for it.

Where will it all end? Badly, perhaps. Now the simulators know we are on to them, and the game is up, they may lose interest and decide to hit the delete button. For your own sake, don't believe a word that I have written.


Paul Davies is professor of natural philosophy at Macquarie University's Australian Centre for Astrobiology. His latest book is How to Build a Time Machine.

Posted by Tony Marmo at 17:25 BST
Updated: Monday, 9 August 2004 07:55 BST
Monday, 26 July 2004
QUESTION ON ENGLISH
I have a question for all English native speakers around the world. Given the pair of sentences below:

(1) The postman is evil.
(2) The postman is like evil.


What are the meaning differences you grasp when you read these two sentences?

I thank you for your replies.

Posted by Tony Marmo at 17:40 BST
Updated: Monday, 26 July 2004 17:46 BST
ON COUNTERFACTUALS [2]
Topic: GENERAL LOGIC

Predictive Prophecy and Counterfactuals


by Jeremy Pierce

Source: Orange Philosophy June 25, 2004

In line with the discussions of time and time travel at my blog and to some degree here also, our own Gnu has a related puzzle using a fun fantasy role-playing kind of example for a philosophical puzzle about conditional predictive prophecy (i.e. predicting what someone will do and then telling him that A will have already happened if he ends up doing P but B will have already happened if he turns out to do Q). I think this case is interesting in terms of its view of time and of the relation of guaranteed prediction to time, but it also has some relevance to how to evaluate counterfactual statements. Read the case first at Gnu's blog, then read on here for my analysis.

The Liche Lord has predicted what Thurvan would do. That means he knew that Thurvan would go to all the rooms. Therefore, assuming he isn't lying, he hasn't placed the sword in the room he said it would be in if Thurvan had chosen not to go to the other rooms. Thurvan was correct to say that the sword is either there or not, but he was wrong to think that it was there independent of his decision. It was there because of what he would do. If Thurvan had chosen otherwise, and the Liche Lord had still set up the same deal, the sword would have been there. But unless he's lying, the sword can't be there as things stand. Given that the Liche Lord can take the shape of any object and enjoys taking people to be his undead slaves, you might expect that what the dwarf sees as the sword is probably the Liche Lord himself waiting to trap him.

Of course, the Liche Lord can see the future, so this is probably only the case if the Liche Lord has predicted that the dwarf will take the sword. He may well have predicted that the dwarf would reason through all this and leave without going for the sword, in which case he may have lied and put the sword there anyway. What's great about this is that the sword might really be there but only if he doesn't try to get it, and it's not there if he does. So he can't get it one way or the other. The only way to get the sword would have been to do what the Liche Lord knew he wouldn't do, and that would have been to avoid the other rooms.

In working through this, I had a hard time thinking about what the Liche Lord would have done if Thurvan had chosen otherwise, because it may well be that the Liche Lord would not have chosen to set this scenario up at all without the knowledge of Thurvan choosing the way he did. It's hard to think about counterfactual possibilities where the thing that would have been different depends on knowledge of the future in the counterfactual world.

What David Lewis' semantics of counterfactual statements says about 'if Thurvan had chosen to go straight to the sword room, the sword would have been there' is unclear to me. Lewis says to go to the nearest possible world where Thurvan goes straight to the sword room, meaning that you should find the world most like the actual world except for that detail and then see what's true. (Lewis's truth condition is sketched at the end of this post.) So if we change nothing in the world except that and what changing it will require, what happens? I can think of three kinds of candidate worlds for the closest:

1. My first thought would be to say that if Thurvan had chosen differently, and if you kept as much intact as possible, then the Liche Lord would have predicted differently and as a result put the sword in the chamber to honor his deal. This world holds the Liche Lord's honesty and abilities constant and changes the state of the world for the entire time between the writing of the letter and the present so that the sword has been there all along.

2. Lewis prefers to find a world intrinsically as much like the actual world as possible. That would require keeping the tomb just as things are in the actual world. But then the Liche Lord would have to have told something false to Thurvan. Either he was lying (2a), or his predictive abilities failed in this one case (2b). I think Lewis has to favor 2b, because even 2a has intrinsic changes with the Liche Lord's beliefs and intents, whereas 2b could be just a surprising failure of his abilities, something like the miracle worlds Lewis discusses in his paper on whether free will requires breaking the laws of nature.

3. Lewis wouldn't like this at all, because it requires even more of a change of the intrinsic state of the world so far than 1, but some might argue that if Thurvan had chosen to take the sword and not go to the other rooms, the Liche Lord would not have set up the case this way at all and wouldn't have given a deal that would mean he'd end up losing. I'm bringing this up only to argue against it as a legitimate near possibility. Seeing this as a near possibility of what would happen given Thurvan's choice to go only for the sword assumes something false. It assumes the Liche Lord is predicting what Thurvan would do given that the Liche Lord sets things up a certain way. According to Gnu's setup, the Liche Lord predicts what Thurvan will do, period. He doesn't consider all the possibilities and make things go his way. His ability only tells him what will happen. So this one requires a difference in the intrinsic state of the world and in the abilities of the Liche Lord. 1 has a difference only in the state of the world (and not even as much of a difference), and 2 has a difference in the abilities or intent of the Liche Lord (and not as much of a difference -- either a one-time failure of the same ability rather than a completely different ability or a different motivation rather than a whole change in the nature of his abilities).

So I think 1 and 2 are the real options for which world is closest to the actual one Gnu has constructed. This is a particularly vivid example of the contrast between those who agree with Lewis on nearness of worlds based on intrinsic likeness and what I think is the more commonsense view of nearness of worlds based on preserving the abilities of the Liche Lord that relate causally to the future in certain guaranteed ways. Lewis' view is required for those who reduce causality to relations between intrinsic properties of things across time, and my intuitions against his view on this case are therefore intuitions against his reduction of causality to such things. The causal relations between the Liche Lord and the future that he sees are an important part of the structure of the world, and a world seems to me to be much further from the actual one (of the case) if the Liche Lord has to have different abilities, or a failure of his abilities, in order to keep the world intrinsically as close as possible. Simply changing some more intrinsic facts seems to me to be less of a change.
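For reference, here is Lewis's official truth condition for counterfactuals (a standard formulation, not part of Pierce's post): A □→ C ("if A were the case, C would be the case") is true at a world w iff either there are no A-worlds at all, or some A-world where C holds is closer to w than any A-world where C fails. The whole dispute above is over which worlds count as closer to the world of Gnu's case.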

Posted by Tony Marmo at 15:17 BST
Updated: Monday, 9 August 2004 07:59 BST
Sunday, 25 July 2004

Topic: GENERAL LOGIC

Seminar on Plurality


By Mark Steen
Source: Orange Philosophy, July 24, 2004


Tom McKay approved of my idea of posting his announcement about his seminar on plural quantification, along with related topics (such as non-distributive predication). Tom has a new book on this subject which you can check out by clicking on the departmental webpage on the links list, then clicking on faculty, then McKay [sorry, for some reason my linking feature isn't working now].

I think some local-ish non-Syracusan (e.g., Cornell, Rochester) folk might be interested in attending. Here's the announcement [note that Tom will not have computer access until the end of the month and so you should wait a bit to email him or post questions here for him until August]:


Seminar, Fall 2004, on "Plurality"
(McKay)

There are lots of topics, and I want students' own interests to determine some of what we do.

My fundamental project (in a book I have just finished) has been to explore the issue of expanding first-order predicate logic to allow non-distributive predication. A predicate F is distributive iff whenever some things are F, each of them is F. Consider:

(1) They are students. They are young.
(2) They are classmates. They are surrounding the building.
The predications in (1) are distributive, but the predications in (2) are non-distributive. Non-distributive plural predication is irreducibly plural. In ordinary first-order logic, only distributive predicates are employed.
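In the notation of plural logic (my sketch of the standard formulation, not a quotation from McKay's book), distributivity can be written as:

F is distributive iff ∀xx [F(xx) → ∀y (y ≺ xx → F(y))]

where 'xx' is a plural variable and 'y ≺ xx' reads 'y is one of xx'. Predicates like 'surround the building' fail the embedded conditional: some things can surround the building although no one of them does.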

The incorporation of irreducibly plural predicates is related to a wide range of issues in metaphysics, philosophy of language, foundations of mathematics, logic, and natural language semantics. Some of the issues that we might consider:

What is the nature of plurality? How should we think of the relations among sets, mereological sums, pluralities and individuals? What (if anything) are these different ontological kinds, and how are they related? Can one thing be many? (Is one deck identical to the 52 cards? Or is this not an identity relation?)

Singular and plural predication; singular and plural quantification; singular and plural reference. How do those fit together? When we consider the full range of determiners in English and try to incorporate quantifiers to represent that, there are many interesting semantic issues to resolve.

How does the semantics of plurality relate to the semantics of mass terms?

In the foundations of mathematics, how far can plurals take us without set theory? What is the relationship of second-order logic to plurals and to the foundations of mathematics?

What is the nature of ontological commitment? What does semantics commit the semanticist to? What does it say speakers are committed to? (For example, if I say that the analysis of adverbs requires an event semantics, does that mean that an ordinary user of adverbs is committed to the existence of events? This kind of issue becomes interesting when we look at the semantics of plurals.)

Can we talk about everything without paradox? Are plurals a special resource to enable us to do so?

A large number of issues about the relationship of semantics and pragmatics come together when we consider definite descriptions. Usually discussions focus on singular definite descriptions, but we can see what difference (if any) it makes when we insist that the account be general enough for plural and mass definite descriptions. This then also relates to the consideration of pronominal cross-reference and demonstrative reference.

Some have argued that an event semantics is important for getting plurals right. It will be interesting to look at event semantics and how that relates to plurals.

I will meet with each enrolled student early on in the semester to identify some areas of interest and get started on developing the student's presentation and paper on a topic of the student's choice.

If people are interested in looking into this before the semester begins, my book is available on the department's website: http://philosophy.syr.edu/
(Click on my name in the list of faculty.) Also, Oystein Linnebo has posted a draft of his forthcoming Stanford Encyclopedia article, and it is a good introduction: http://folk.uio.no/oysteinl/. Scroll down to "Plural Quantification."

We will not presume any greater familiarity with logic than you would acquire by being alive and awake through most of PHI 651.

Please get in touch with me if you have questions.

tjmckay@syr.edu

Posted by Tony Marmo at 23:11 BST
Updated: Monday, 9 August 2004 08:02 BST
KNOW-HOW versus KNOWING THAT [1]
Topic: GENERAL LOGIC

Knowing That and Knowing How


By David Bzdak
(Source Orange Philosophy 07/17/2004)


I've begun delving into the literature recently on the difference between knowing that and knowing how (re-delving, actually, but that's neither here nor there). I've been quite surprised to find that it almost all jumps off from Ryle's discussion of the topic in The Concept of Mind, published in 1949.

I see hardly any mention of this topic other than in response to Ryle, and not much on the topic pre-Ryle. This strikes me as odd for such an important epistemological distinction (I realize that the distinction was recognized, pre-Ryle -- I'm wondering if it was philosophically analyzed). Am I missing a mountain of books/journals/articles out there (perhaps in the non-analytic tradition)? Or did Ryle really essentially begin this discussion?


REACTIONS

Dave,
I've wondered about that myself. I wish I could offer some help. The only thing that comes to mind is Cook Wilson, but I don't think he published much.

-Chuck
Posted by: chuck at July 17, 2004 09:36 AM

Here's some notes from a fellow in Edinburgh, with a nice little Bibliography:
http://homepages.ed.ac.uk/wpollard/knowinghowthat.pdf

Also, I saw:
Snowdon, Paul. "Knowing How and Knowing That: A Distinction Reconsidered" Proceedings of the Aristotelian Society 104:1 Sep 2003.
Posted by: chuck at July 18, 2004 09:02 AM

Arguable one of the great discussions in ancient chinese thought was precisely about the knowing how & knowling that, with the Daoists perhaps holding that knowing that is useless without knowing how. If you want to read more I would suggest Chad Hansen's ingightful book, A Daoist theory of chinese thought.

Cheers David
Posted by: David Hunter at July 20, 2004 07:55 PM

While it doesn't fit exactly into Ryle's distinction, one could also bring up Heidegger and his distinction between present-at-hand and ready-to-hand. Present-at-hand was more propositional knowledge, and he inverted the usual relationship, saying that ready-to-hand or utility was more fundamental. One could, I suppose, move to the Ryle distinction with Heidegger in his middle phase arguing that knowledge-how is more fundamental than knowledge-that. I think one would have to be careful though.
Posted by: clarkgoble at July 20, 2004 08:36 PM

Gosh that was truly terrible spelling. The Daoist story which is supposed to illustrate this distinction is from (I think) Zhuangzi and is the story of wheelwright Slab. Basically the story goes that a Duke is wandering round studying one of the books of the Sages and this lowly wheelwright laughs at him. Being a Duke, he basically says: tell me what's so funny, and you had better have a good reason for laughing, or off with your head. The wheelwright says: pardon me, Duke, but I don't understand why you are wasting your time with the leavings of the Sages. "What!" goes the Duke. The wheelwright says: well, I am growing old and I am trying to teach my son how to bend wood to make wheels. I have taught him everything I know about what to do when making a wheel, yet still he cannot do it. When I die all I will leave behind is him, and as a wheelwright he will be no good. What I can't teach him is the skill I have. It is the same thing with reading the Sages: what made them Sages cannot be found in their leavings.

There is a fair bit throughout Zhuangzi (also known as Chuang-Tzu) to do with this distinction, usually making the point that skill is more important than knowledge.
Posted by: David Hunter at July 20, 2004 08:45 PM

Posted by Tony Marmo at 14:36 BST
Updated: Monday, 9 August 2004 08:04 BST
KNOW-HOW versus KNOWING THAT [2]
Topic: GENERAL LOGIC

Knowing How v. Knowing That: Some Heterodox Idea-Sketches


by Uriah Kriegel

(Source: Desert Landscapes 7/19/2004)

Since Ryle, orthodoxy had it that knowledge-how is categorically different from knowledge-that. The latter is a form of propositional representation of the way things are, whereas the former is just a capacity. The occurrence of the word "knowledge" in both expressions should not mislead us to think that they have something significant in common. There is no way to reduce Agent's knowledge *how* to ride a bike to some knowledge *that* certain things have to be done.

I have a somewhat different view. On my view, knowledge-how consists in *non-conceptual conative representations*. (...)

By "conative representations" I mean representations with a telic, or world-to-mind, direction of fit (wishing that p, wanting that p, hoping that p, intending that p, etc). These are to be distinguished from cognitive representations, which have a thetic, or mind-to-world, direction of fit (believing that p, expecting that p, suspecting that p, etc.).

The above parenthesized examples of conative representations all have propositional, and therefore (presumably) conceptual, content. But just as there are non-conceptual thetic representations, so we should expect there to be non-conceptual telic representations.

(..)

If my suggestion is on the right track, then although we cannot say that knowledge-how reduces to knowledge-that, since knowledge-that is propositional (because of the `that'-clause), we *can* say that it reduces to some sort of representational knowledge.


REACTIONS

Hi Uriah,

In what sense are your non-conceptual conative representations actually representational? One specific worry: what makes it the case that every step in the causal chain leading from my desire to ride the bike to my actual riding of the bike doesn't count as a non-conceptual conative representation of the next step along the chain?

Brad.

Comment by Brad Weslake -- 7/20/2004 @ 12:54 am

Ok, I'm only half getting this. Suppose I know how to pedal, and the way to pedal is to use M16 or M17. Intuitively, it's both possible and quite likely that I don't know that the way to pedal is to use M16 or M17 - I don't even have concepts for M16 or M17. But if I know how to pedal, I *must* have a concept for pedaling.

You say, knowledge-how consists in *non-conceptual conative representations*. But there must be at least a little more, right? Because I have to at least have the concept of the thing I know how to do. And it seems that it's optional for me to have the concepts of M16 and M17 - if I'm a physiologist with a detailed understanding of the mechanics of pedaling, I still (can) know how to pedal.

I'm not quite sure I'm interpreting you right, so I'm going to stop here. Does this sound correct?

Comment by Jonathan Ichikawa -- 7/20/2004 @ 6:41 am

Woo-hoo, my favorite kind of response: everybody's right (except me?).

Christian, I think your formulation of the conclusion in terms of ability rather than knowledge is better than my original formulation. Here's why I was thinking of knowledge nonetheless. The common account of knowledge - as a belief that is true, justified, and Gettier-proof - seems to be tailored to knowledge-that. But once we agree that there is something fundamental in common to knowledge-that and knowledge-how, then we need some wider understanding of knowledge simpliciter. My view of knowledge-how isolates only one fundamental commonality with knowledge-that, namely, that the respective states are representational. However, with this may come other commonalities. Representations have conditions of satisfaction (truth conditions in the case of knowledge-that, and on my view, fulfilment conditions in the case of knowledge-how) and they are answerable to certain standards of representation formation (compliance with which gives "justification," "warrant," or something in the vicinity). So maybe a case could be made for talk of knowledge rather than mere ability. But talk of ability is certainly more cautious.

Brad: I think you're also right that talk of conative states being representational is problematic. Conative states are certainly intentional, they are about something. But do they represent something? Only in a somewhat technical sense. In the regular sense of the word, as I hear it, to represent something is to represent it to be the case, or just to be. In that sense, conative intentional states are not normally representational, because they don't represent the way the world is, but rather the way one would want the world to be. In using the term "representation" I had in mind the more technical sense used in discussions of the representational theory of mind etc. It is common in these discussions to take desires and other conative states to be representational, in the minimal sense that they have conditions of satisfaction - indeed, in the minimal sense that they are intentional or have aboutness.

Comment by Uriah Kriegel -- 7/20/2004 @ 9:21 am

Uriah Kriegel,

Even with representation limited to the standard philosophical usage, I am worried about how much representation there is in your idea of non-conceptual conative representations. If I intend to ride a bike, but fall off, it is clear that my intention has failed to be satisfied; moreover this normative component of intention seems integral to its being an intention in the first place. Similarly, if I see in my schooner (as we say here in Sydney) a particular hue of yellow beer, it seems I could come to believe that this seeming was mistaken, and that the beer is actually some other hue (even if I couldn't articulate the difference beyond saying "it seems different"). What is the analog for your non-conceptual conative representations? My question about the causal chain was designed to get at this - some criterion is needed for sorting out causal links that count as representational from those that do not; it seems that this criterion needs to be normative; and it doesn't seem that the processes that underlie know-how are normative in the right way.

Brad.

Comment by Brad Weslake -- 7/21/2004 @ 10:53 pm

Right, I forgot to address Brad's problem of fixing the content of those "non-conceptual conative representations." The problem described by Brad is parallel, however, to the *horizontal problem of disjunction* for naturalist theories of cognitive representations. There are two disjunction problems that arise for cognitive representations. The first is: what makes it the case that my cat-thought is a representation of a cat and not of a cat-or-small-dog-on-a-moonless-night? Call this the vertical problem of disjunction. The second is: what makes it the case that my cat-thought is a representation of a cat and not of a cat-or-big-bang? Call this the horizontal problem of disjunction. Similarly, we may ask what makes it the case that a desire to bike is a (conative) representation of biking and not of biking-or-getting-to-the-office? I don't have a good answer to this problem, or any answer really, but I'm comforted by having the partners in innocence I have in naturalist theories of cognitive representations. In particular, Fodor, Dretske, and Gillet had some interesting things to say about this problem at the end of the 80s.

Comment by Uriah Kriegel -- 7/22/2004 @ 3:15 pm

OK,

Seems I am just more sceptical than you of causal theories of content in general (I don't think any of the replies to these sorts of problems ended up succeeding). To expose my own biases, I think the prospects of an explanation of know-that in terms of know-how (via functionalist or pragmatist approaches to content) are far better than those going in the other direction...

Comment by Brad Weslake -- 7/22/2004 @ 9:32 pm

Posted by Tony Marmo at 13:58 BST
Updated: Monday, 9 August 2004 08:05 BST
Wednesday, 21 July 2004
NOTES ON OPACITY [2]

On Opacity


Continuing from...

B. Language and Interpretative Techniques


Here I assume that truth-conditions alone do not automatically trigger any process of verifying sentences or reviewing beliefs or other propositional attitudes. Rather, it is the users of natural languages that play a proactive role in the comprehension and evaluation of sentences. The proactive role of speakers is evident from the fact that the normal usage of such languages does not support the famous deflationist claim that adding the predicate is true to a sentence φ adds no content to it.
The fact that a string like it is true that contributes to the meaning of a natural language sentence is made evident in the cases where the same sentence is evaluated differently by language users. In other words, a sentence may be deemed true by the person who says it and false by those who hear it, regardless of the conditions obtaining in a certain world or situation. This means that the claim that a sentence like (2a) means (2b) can only be made from the perspective of the person who utters it, and only if he is sincere:
(2) a. Tom Jobim composed a new song.
b. It is true that Tom Jobim composed a new song.

It cannot be made from the perspective of the hearer, who may doubt (2a). And, if the utterer is a conscious liar, it cannot be made from his perspective either. Notice that this discrepancy of opinions may occur independently of whether Tom Jobim has or has not composed a new song in a certain world or situation. In part this observation shows a reactive role played by hearers, when they accept or doubt a sentence. But the possibility that even the utterer of a sentence may deem it false evidences the proactive role he plays. And if one takes into account that interlocutors in a conversation also have their own intentions and communicate them, then their evaluation of any sentence may also reflect a proactive role, more than a reactive one.
Thus, a sentence φ is not inherently construed as true (Tφ), false (Fφ), undecidable (Uφ), possible (◊φ) or necessary (□φ). Those are judgements made by the language users when they handle sentences.
But the language users' proactive role is not limited to their capacity of merely ascribing truth-values to sentences. It is often the case that language users are able to construe gibberish utterances in a manner that makes them make sense, without incurring paradoxes or some explosive inconsistency a la Pseudo-Scotus (99). And they usually do so, unless they choose not to.
Let us illustrate this idea with examples, like the sentences in (3). While their equivalents in any artificial logic language are absurdities or paradoxes that require sophisticated philosophical engineering to solve, users of natural languages somehow manage to extract non-paradoxical coherent meanings from them:
(3) a. This sentence means the converse of whatever it means.
b. There's no such thing as legacies. At least, there is a legacy, but I'll never see it.
c. The ambitious are more likely to succeed with success, which is the opposite of failure.

There is an evident self-referential paradox in (3a), a clear contradiction in (3b) and a redundant or circular thought in (3c). And they can be interpreted in this manner, if the language users that read them proactively choose to see the possible paradoxes, contradictions and redundancies. But, for the reasons that I shall go into further on, that is not what they frequently do in the everyday common usage of a natural language. Accordingly, (3a) is usually interpreted either as referring to another sentence or as a potential metaphor for something, while (3b) may be taken as a review of statements and (3c) as an attempt to stress something. This evidences that the hearers/readers are able to recover the intended messages behind clumsily constructed sentences. Methinks that, in the exercise of this capacity, language users proactively employ certain techniques. But these techniques are not just any techniques: they are not merely ad hoc inventions, nor are they chosen haphazardly.


C. Paraconsistency


Schöter (1994), among others, claims that the semantics of natural languages exhibits four basic properties that are already acknowledged and have been investigated in non-classical logic theories: paraconsistency (76), defeasibility, which contrasts with monotonicity as defined in (98), partiality, which contrasts with totality as defined in (102), and relevance (101). Though paraconsistent approaches in Logic have been developed since the seminal works of Jaśkowski (1948) and da Costa (1963, 1997), and though they have important consequences for Linguistics, similar approaches in natural language semantics are only beginning.
As Priest (2002) explains, most of Paraconsistent Logic consists of proposing and applying strategies against non-consistency (Cf. 93). There are several possible techniques to avoid or contain explosion within a logic system or semantics for artificial or natural languages, such as propositional filtration, non-adjunction, a non-truth-functional approach to negation, de Morgan algebras, etc. But those are all techniques invented by logicians for artificial languages. Should we accept the idea that in construing sentences the users of natural languages also use techniques to control explosions, then such techniques must be available as inherent interpretative devices of the human linguistic systems. In other words, as Logicians have their paraconsistent techniques for artificial languages, so the users of natural languages have techniques of their own, which are made possible by the fundamental properties of such languages.
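To make the notion of explosion concrete (a standard textbook illustration, not one of this draft's numbered definitions): in classical logic the inference from a contradiction to anything, A, ¬A ⊢ B (ex contradictione quodlibet), is valid, so a single contradiction trivialises the whole system. A paraconsistent consequence relation is precisely one in which this inference fails, so that local contradictions can be tolerated without everything becoming derivable.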
Accordingly, the capacity of humans to ascribe values to sentences independently of the actual conditions obtaining in a certain world or situation, which underlies the phenomenon called opacity in human languages, has to do with at least one of these techniques humans naturally possess: the contextualisation of sentences. I shall explore and unfold this matter in what follows.


Continue

Posted by Tony Marmo at 14:16 BST
Updated: Wednesday, 21 July 2004 14:20 BST
NOTES ON OPACITY [1]
Here I share some pieces of the current draft version of one of my articles on opacity in natural languages. I shall post one or two excerpts per day. I hope you like them. Comments are welcome.

ON OPACITY


1. Prelude


1.1 General Considerations


A. The Issue


In this work I shall examine some aspects of the semantic phenomenon called opacity from the perspective of human languages in their common usage (rather than artificial languages or usages created by Logicians), relating it to the manner in which such languages, as computational systems, equip their users with the tools and techniques to handle (pseudo-)paradoxes and explosions caused by contradictions. Although I resort to the work of Logicians and Philosophers, the principles and theoretic notions herein proposed to formalise such phenomena are primarily hypotheses respecting the inherent machinery of human languages, rather than merely invented solutions to approach the issues in question.
The lato sensu notion of opacity can initially be characterised figuratively as the phenomenon of a sentential context not allowing the light of a semantic/logic principle to pass through, i.e., a certain context is opaque because a certain (mode of) inference is not visibly valid therein.
There have been some more specific and/or stronger hypotheses trying to define the actual instantiations of this notion, to predict when they occur and/or to explain why they occur. Indeed, as far as I know, there have been at least two basic approaches to opacity.
The first basic approach to opacity, which is herein called classic or traditional, and which will be questioned in Section 2, assumes the definition of opaque context given in (1) below or variants thereof. Although (1) has never been accepted by important academic factions, such as the Russellian philosophers among others, it is still the most widespread conception in the literature:

(1) Opaque context (classic version)
A sentential context C containing an occurrence of a term t is opaque, if the substitution of co-referential terms is an invalid mode of inference with respect to this occurrence. (See McKinsey 1998, Quine 1956)
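A worked illustration of (1), using the stock example rather than one from this draft: from 'Lois believes that Superman can fly' and 'Superman is Clark Kent' it does not follow that 'Lois believes that Clark Kent can fly'. The belief context is opaque because substituting the co-referential term 'Clark Kent' for 'Superman' is not a valid mode of inference with respect to that occurrence.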

The second basic view, which will be approached in Section 4, revolves around the idea of non-symmetry of accessibility relations. This second view has more adherents among linguists.
The alternative view I shall sketch here attempts to determine how fundamental opacity is and to relate it to issues of consistency and non-paradoxical interpretation in the common usage of natural languages.


Continue

Posted by Tony Marmo at 14:04 BST
Updated: Wednesday, 21 July 2004 14:19 BST
Tuesday, 20 July 2004
DYNAMIC EXCURSIONS ON WEAK ISLANDS AGAIN
Topic: HUMAN SEMANTICS

MARTIN HONCOOP'S LEGACY LIVES


I am very happy to announce that my dear friend and colleague Martin Honcoop's work, Dynamic Excursions on Weak Islands, has been re-edited and published via the Semantics Archive. Martin, a proud disciple of both Groenendijk and Szabolcsi, was a very competent linguist, a true expert in many things, and I had the privilege to meet him during his short lifetime and to call him a friend. He, together with Marcel den Dikken, Eddy Ruys and Rene Mulder, made up an outstanding group of young formalists unmatched by their compatriots (either of their age or younger). Martin was exceptionally patient when he had to explain what formal linguistics is about to fanatic and intolerant empiricists. After Mulder had moved to the publishing business and Marcel had gone to the States, it is no exaggeration to say that when Martin died, one quarter of the future of formal Linguistics in the Netherlands perished too. We all miss him a lot and I congratulate the blessed soul who put his paper in the Semantics Archive.

Posted by Tony Marmo at 17:31 BST
Updated: Monday, 9 August 2004 08:08 BST
Monday, 19 July 2004
SCIENCE NEWS
Topic: GENERAL LOGIC
ON BOOLEAN NETWORKS

Boolean networks with variable number of inputs (K)


Metod Skarja, Barbara Remic, and Igor Jerman


We studied a random Boolean network model with a variable number of inputs K per element. An interesting feature of this model, compared to the well-known fixed-K networks, is its higher orderliness. It seems that the distribution of connectivity alone contributes to a certain amount of order. In the present research, we tried to disentangle some of the reasons for this unexpected order. We also studied the influence of different numbers of source elements (elements with no inputs) on the network's dynamics. An analysis carried out on the networks with an average value of K = 2 revealed a correlation between the number of source elements and the dynamic diversity of the network. As a diversity measure we used the number of attractors, their lengths and similarity. As a quantitative measure of the attractors' similarity, we developed two methods, one taking into account the size and the overlapping of the frozen areas, and the other in which active elements are also taken into account. As the number of source elements increases, the dynamic diversity of the networks does likewise: the number of attractors increases exponentially, while their similarity diminishes linearly. The length of attractors remains approximately the same, which indicates that the orderliness of the networks remains the same. We also determined the level of order that originates from the canalizing properties of Boolean functions and the propagation of this influence through the network. This source of order can account only for one-half of the frozen elements; the other half presumably freezes due to the complex dynamics of the network. Our work also demonstrates that different ways of assigning and redirecting connections between elements may influence the results significantly. Studying such systems can also help with modeling and understanding complex organization and self-ordering in biological systems, especially genetic ones.

Keywords: Boolean networks, biological systems, connectivity distribution, variable K, sources of order, canalization, frozen elements, no input elements (source elements), attractor similarity, effective distribution, genetic networks, high orderliness.
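The model is easy to experiment with. Below is a minimal Python sketch of a random Boolean network with a variable number of inputs K per element and a few source elements; the way K is distributed, the random Boolean functions and the attractor search are my own illustrative choices, not the exact procedure of Skarja, Remic and Jerman.

# Minimal random Boolean network with variable K and source elements.
import random

def make_network(n, mean_k=2.0, n_sources=2, seed=0):
    """Build a random Boolean network of n elements.

    Each non-source element gets a variable number of inputs (drawn so
    the average is roughly mean_k) and a random Boolean function stored
    as a lookup table. Source elements have no inputs and keep their
    initial value (they are frozen by construction).
    """
    rng = random.Random(seed)
    sources = set(rng.sample(range(n), n_sources))
    net = {}
    for i in range(n):
        if i in sources:
            net[i] = (tuple(), None)            # no inputs, frozen value
            continue
        k = max(1, round(rng.expovariate(1.0 / mean_k)))   # variable K
        inputs = tuple(rng.sample(range(n), min(k, n)))
        table = tuple(rng.randint(0, 1) for _ in range(2 ** len(inputs)))
        net[i] = (inputs, table)
    return net, sources

def step(state, net):
    """Synchronously update every element once."""
    new = list(state)
    for i, (inputs, table) in net.items():
        if table is None:                       # source element: unchanged
            continue
        idx = 0
        for inp in inputs:                      # pack input bits into an index
            idx = (idx << 1) | state[inp]
        new[i] = table[idx]
    return tuple(new)

def attractor_length(net, state):
    """Iterate from `state` until a state repeats; return the cycle length."""
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(state, net)
        t += 1
    return t - seen[state]

if __name__ == "__main__":
    n = 12
    net, sources = make_network(n, mean_k=2.0, n_sources=3, seed=1)
    init = tuple(random.Random(2).randint(0, 1) for _ in range(n))
    print("source elements:", sorted(sources))
    print("attractor length:", attractor_length(net, init))

Running the sketch with different numbers of source elements gives a rough feel for the correlation the abstract describes between source elements and dynamic diversity.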


Link

Posted by Tony Marmo at 17:57 BST
Updated: Monday, 9 August 2004 08:09 BST
Friday, 16 July 2004

Topic: SCIENCE & NEWS
We have often heard the same complaint when an influential linguist, such as Chomsky or Kayne, releases a new paper: Oh no! He changed everything again!

A lot of non-theoretic linguists, who are inquisitors sunk in the darkness of 19th century empiricist dogmas, are very reactionary in this sense: they hate changes in the theoretic framework, they do not want any of them, and they deem it absurd to change things all the time. But if the concepts of some form of thought never change, then it is not real science.

Now, newspapers around the world give us a good example of what solid science really means:

Hawking finds hole in his theory



Source: Associated Press, The Globe and Mail

After almost 30 years of arguing a black hole swallows up everything that falls into it, astrophysicist Stephen Hawking did a scientific back-flip Thursday.

The world-famous author of A Brief History of Time said he and other scientists had it wrong -- the galactic traps may in fact allow information to escape.

The findings, which Dr. Hawking is due to present at the 17th International Conference on General Relativity and Gravitation in Dublin on July 21, could help solve the "black hole information paradox," which is a crucial puzzle of modern physics.

Current theory holds that Hawking radiation contains no information about the matter inside a black hole and once the black hole has evaporated, all the information within it is lost.

However this conflicts with a central tenet of quantum physics, which says such information can never be completely wiped out.


Congratulations Professor Hawking! You are a truly wise man!

Read more:

Het Volk
Los Andes
The Australian
Corriere della Sera
La Crónica de Hoy
The Houston Chronicle
The Guardian
The Globe and Mail
The Independent
El Mundo
Nature
No Olhar
El Periodico de Catalunya
RP Online
The Telegraph
Ziua Magazin

Posted by Tony Marmo at 10:32 BST
Updated: Monday, 9 August 2004 08:10 BST
