LINGUISTIX&LOGIK, Tony Marmo's blog
Tuesday, 27 July 2004

Be warned, this could be the matrix

Source: The Sydney Morning Herald, July 22, 2004

The multiverse theory has spawned another - that our universe is a simulation, writes Paul Davies.

If you've ever thought life was actually a dream, take comfort. Some pretty distinguished scientists may agree with you. Philosophers have long questioned whether there is in fact a real world out there, or whether "reality" is just a figment of our imagination.

Then along came the quantum physicists, who unveiled an Alice-in-Wonderland realm of atomic uncertainty, where particles can be waves and solid objects dissolve away into ghostly patterns of quantum energy.

Now cosmologists have got in on the act, suggesting that what we perceive as the universe might in fact be nothing more than a gigantic simulation.

The story behind this bizarre suggestion began with a vexatious question: why is the universe so bio-friendly? Cosmologists have long been perplexed by the fact that the laws of nature seem to be cunningly concocted to enable life to emerge. Take the element carbon, the vital stuff that is the basis of all life. It wasn't made in the big bang that gave birth to the universe. Instead, carbon has been cooked in the innards of giant stars, which then exploded and spewed soot around the universe.

The process that generates carbon is a delicate nuclear reaction. It turns out that the whole chain of events is a damned close run thing, to paraphrase Lord Wellington. If the force that holds atomic nuclei together were just a tiny bit stronger or a tiny bit weaker, the reaction wouldn't work properly and life may never have happened.

The late British astronomer Fred Hoyle was so struck by the coincidence that the nuclear force possessed just the right strength to make beings like Fred Hoyle that he proclaimed the universe to be "a put-up job". Since this sounds a bit too much like divine providence, cosmologists have been scrambling to find a scientific answer to the conundrum of cosmic bio-friendliness.

The one they have come up with is multiple universes, or "the multiverse". This theory says that what we have been calling "the universe" is nothing of the sort. Rather, it is an infinitesimal fragment of a much grander and more elaborate system in which our cosmic region, vast though it is, represents but a single bubble of space amid a countless number of other bubbles, or pocket universes.

Things get interesting when the multiverse theory is combined with ideas from sub-atomic particle physics. Evidence is mounting that what physicists took to be God-given unshakeable laws may be more like local by-laws, valid in our particular cosmic patch, but different in other pocket universes. Travel a trillion light years beyond the Andromeda galaxy, and you might find yourself in a universe where gravity is a bit stronger or electrons a bit heavier.

The vast majority of these other universes will not have the necessary fine-tuned coincidences needed for life to emerge; they are sterile and so go unseen. Only in Goldilocks universes like ours where things have fallen out just right, purely by accident, will sentient beings arise to be amazed at how ingeniously bio-friendly their universe is.

It's a pretty neat idea, and very popular with scientists. But it carries a bizarre implication. Because the total number of pocket universes is unlimited, there are bound to be at least some that are not only inhabited, but populated by advanced civilisations - technological communities with enough computer power to create artificial consciousness. Indeed, some computer scientists think our technology may be on the verge of achieving thinking machines.

It is but a small step from creating artificial minds in a machine, to simulating entire virtual worlds for the simulated beings to inhabit. This scenario has become familiar since it was popularised in The Matrix movies.

Now some scientists are suggesting it should be taken seriously. "We may be a simulation ... creations of some supreme, or super-being," muses Britain's astronomer royal, Sir Martin Rees, a staunch advocate of the multiverse theory. He wonders whether the entire physical universe might be an exercise in virtual reality, so that "we're in the matrix rather than the physics itself".

Is there any justification for believing this wacky idea? You bet, says Nick Bostrom, a philosopher at Oxford University, who even has a website devoted to the topic. "Because their computers are so powerful, they could run a great many simulations," he writes in The Philosophical Quarterly.

So if there exist civilisations with cosmic simulating ability, then the fake universes they create would rapidly proliferate to outnumber the real ones. After all, virtual reality is a lot cheaper than the real thing. So by simple statistics, a random observer like you or me is most probably a simulated being in a fake world. And viewed from inside the matrix, we could never tell the difference.
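The "simple statistics" alluded to here can be sketched in a line (the labels N_sim and N_real are mine, not the article's):

```latex
% If there are N_sim simulated observers and N_real non-simulated ones,
% the chance that a randomly chosen observer is simulated is the fraction
\[
\Pr(\text{simulated}) \;=\; \frac{N_{\mathrm{sim}}}{N_{\mathrm{sim}} + N_{\mathrm{real}}},
\]
% which tends to 1 as cheap simulations proliferate, i.e. as the ratio
% N_sim / N_real grows without bound.
```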

Or could we? John Barrow, a colleague of Martin Rees at Cambridge University, wonders whether the simulators would go to the trouble and expense of making the virtual reality foolproof. Perhaps if we look closely enough we might catch the scenery wobbling.

He even suggests that a glitch in our simulated cosmic history may have already been discovered, by John Webb at the University of NSW. Webb has analysed the light from distant quasars, and found that something funny happened about 6 billion years ago - a minute shift in the speed of light. Could this be the simulators taking their eye off the ball?

I have to confess to being partly responsible for this mischief. Last year I wrote an item for The New York Times, saying that once the multiverse genie was let out of the bottle, Matrix-like scenarios inexorably follow. My conclusion was that perhaps we should retain a healthy scepticism for the multiverse concept until this was sorted out. But far from being a dampener on the theory, it only served to boost enthusiasm for it.

Where will it all end? Badly, perhaps. Now the simulators know we are on to them, and the game is up, they may lose interest and decide to hit the delete button. For your own sake, don't believe a word that I have written.

Paul Davies is professor of natural philosophy at Macquarie University's Australian Centre for Astrobiology. His latest book is How to Build a Time Machine.

Posted by Tony Marmo at 17:25 BST
Updated: Monday, 9 August 2004 07:55 BST
Monday, 26 July 2004
I have a question to all English native speakers around the world. Given the pair of sentences below:

(1) The postman is evil.
(2) The postman is like evil.

What are the meaning differences you grasp when you read these two sentences?

I thank you for your replies.

Posted by Tony Marmo at 17:40 BST
Updated: Monday, 26 July 2004 17:46 BST

Predictive Prophecy and Counterfactuals

by Jeremy Pierce

Source: Orange Philosophy June 25, 2004

In line with the discussions of time and time travel at my blog and to some degree here also, our own Gnu has a related puzzle using a fun fantasy role-playing kind of example for a philosophical puzzle about conditional predictive prophecy (i.e. predicting what someone will do and then telling him that A will have already happened if he ends up doing P but B will have already happened if he turns out to do Q). I think this case is interesting in terms of its view of time and of the relation of guaranteed prediction to time, but it also has some relevance to how to evaluate counterfactual statements. Read the case first at Gnu's blog, then read on here for my analysis.

The Liche Lord has predicted what Thurvan would do. That means he knew that Thurvan would go to all the rooms. Therefore, assuming he isn't lying, he hasn't placed the sword in the room he said it would be in if Thurvan had chosen not to go to the other rooms. Thurvan was correct to say that the sword is either there or not, but he was wrong to think that it was there independent of his decision. It was there because of what he would do. If Thurvan had chosen otherwise, and the Liche Lord had still set up the same deal, the sword would have been there. But unless he's lying, the sword can't be there as things stand. Given that the Liche Lord can take the shape of any object and enjoys taking people to be his undead slaves, you might expect that what the dwarf sees as the sword is probably the Liche Lord himself waiting to trap him.

Of course, the Liche Lord can see the future, so this is probably only the case if the Liche Lord has predicted that the dwarf will take the sword. He may well have predicted that the dwarf would reason through all this and leave without going for the sword, in which case he may have lied and put the sword there anyway. What's great about this is that the sword might really be there but only if he doesn't try to get it, and it's not there if he does. So he can't get it one way or the other. The only way to get the sword would have been to do what the Liche Lord knew he wouldn't do, and that would have been to avoid the other rooms.

In working through this, I had a hard time thinking about what the Liche Lord would have done if Thurvan had chosen otherwise, because it may well be that the Liche Lord would not have chosen to set this scenario up at all without the knowledge of Thurvan choosing the way he did. It's hard to think about counterfactual possibilities where the thing that would have been different depends on knowledge of the future in the counterfactual world.

According to David Lewis' semantics of counterfactual statements, 'if Thurvan had chosen to go straight to the sword room, the sword would have been there' is unclear to me. Lewis says to go to the nearest possible world where Thurvan goes straight to the sword room, meaning that you should find the world most like the actual world except for that detail and then see what's true. So if we change nothing in the world except that and what changing it will require, what happens? I can think of three kinds of candidate worlds for the closest:

1. My first thought would be to say that if Thurvan had chosen differently, and if you kept as much intact as possible, then the Liche Lord would have predicted differently and as a result put the sword in the chamber to honor his deal. This world holds the Liche Lord's honesty and abilities constant and changes the state of the world for the entire time between the writing of the letter and the present so that the sword has been there all along.

2. Lewis prefers to find a world intrinsically as much like the actual world as possible. That would require keeping the tomb just as things are in the actual world. But then the Liche Lord would have to have told something false to Thurvan. Either he was lying (2a), or his predictive abilities failed in this one case (2b). I think Lewis has to favor 2b, because even 2a has intrinsic changes with the Liche Lord's beliefs and intents, whereas 2b could be just a surprising failure of his abilities, something like the miracle worlds Lewis discusses in his paper on whether free will requires breaking the laws of nature.

3. Lewis wouldn't like this at all, because it requires even more of a change of the intrinsic state of the world so far than 1, but some might argue that if Thurvan had chosen to take the sword and not go to the other rooms, the Liche Lord would not have set up the case this way at all and wouldn't have given a deal that would mean he'd end up losing. I'm bringing this up only to argue against it as a legitimate near possibility. Seeing this as a near possibility of what would happen given Thurvan's choice to go only for the sword assumes something false. It assumes the Liche Lord is predicting what Thurvan would do given that the Liche Lord sets things up a certain way. According to Gnu's setup, the Liche Lord predicts what Thurvan will do, period. He doesn't consider all the possibilities and make things go his way. His ability only tells him what will happen. So this one requires a difference in the intrinsic state of the world and in the abilities of the Liche Lord. 1 has a difference only in the state of the world (and not even as much of a difference), and 2 has a difference in the abilities or intent of the Liche Lord (and not as much of a difference -- either a one-time failure of the same ability rather than a completely different ability or a different motivation rather than a whole change in the nature of his abilities).

So I think 1 and 2 are the real options for which world is closest to the actual one Gnu has constructed. This case is a particularly vivid illustration of the contrast between those who agree with Lewis on nearness of worlds based on intrinsic likeness and what I think is the more commonsense view of nearness of worlds based on preserving the abilities of the Liche Lord that relate causally to the future in certain guaranteed ways. Lewis' view is required for those who reduce causality to relations between intrinsic properties of things across time, and my intuitions against his view on this case are therefore intuitions against his reduction of causality to such things. The causal relations between the Liche Lord and the future that he sees are an important part of the structure of the world, and a world seems to me to be much further from the actual one (of the case) if the Liche Lord has to have different abilities or a failure of his abilities in order to keep the world intrinsically as close as possible. Simply changing some more intrinsic facts seems to me to be less of a change.
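The Lewis truth condition that this whole discussion paraphrases ("go to the nearest possible world where Thurvan goes straight to the sword room, and see what's true") can be stated compactly. This is the standard textbook formulation, not Pierce's own notation:

```latex
% Lewis (1973): a counterfactual A []-> C is (non-vacuously) true at a
% world w iff some world where both A and C hold is closer to w than
% any world where A holds but C fails.
\[
w \Vdash A \,\Box\!\!\rightarrow C
\;\iff\;
\exists u\,\bigl[\, u \Vdash A \wedge C
  \;\wedge\;
  \forall v\,( v \Vdash A \wedge \neg C \;\rightarrow\; u <_w v )\,\bigr]
\]
% Here <_w orders worlds by overall similarity to w; the counterfactual
% is vacuously true if no A-world exists at all. The dispute in options
% 1-3 above is precisely over what <_w should weigh: intrinsic likeness
% of the world's history, or preservation of the Liche Lord's abilities.
```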

Posted by Tony Marmo at 15:17 BST
Updated: Monday, 9 August 2004 07:59 BST
Sunday, 25 July 2004


Seminar on Plurality

By Mark Steen
Source: Orange Philosophy, July 24, 2004

Tom McKay approved of my idea of posting his announcement about his seminar on plural quantification, along with related topics (such as non-distributive predication). Tom has a new book on this subject which you can check out by clicking on the departmental webpage on the links list, then clicking on faculty, then McKay [sorry, for some reason my linking feature isn't working now].

I think some local-ish non-Syracusan (e.g., Cornell, Rochester) folk might be interested in attending. Here's the announcement [note that Tom will not have computer access until the end of the month and so you should wait a bit to email him or post questions here for him until August]:

Seminar, Fall 2004, on "Plurality"

There are lots of topics, and I want students' own interests to determine some of what we do.

My fundamental project (in a book I have just finished) has been to explore the issue of expanding first-order predicate logic to allow non-distributive predication. A predicate F is distributive iff whenever some things are F, each of them is F. Consider:

(1) They are students. They are young.
(2) They are classmates. They are surrounding the building.
The predications in (1) are distributive, but the predications in (2) are non-distributive. Non-distributive plural predication is irreducibly plural. In ordinary first-order logic, only distributive predicates are employed.
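McKay's definition of distributivity can be rendered in the notation of plural logic (a sketch using the "is one of" symbol common in the plural-quantification literature, not a quotation from the book):

```latex
% F is distributive iff whenever some things xx are (jointly) F,
% each single thing x among them is F.
\[
\mathrm{Dist}(F) \;\leftrightarrow\;
\forall xx\,\bigl( F(xx) \;\rightarrow\; \forall x\,( x \prec xx \rightarrow F(x) )\bigr)
\]
% 'are students' satisfies this schema: if they are students, each is a
% student. 'are surrounding the building' does not: the classmates
% jointly surround the building, but no single classmate surrounds it,
% so the predication is irreducibly plural.
```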

The incorporation of irreducibly plural predicates is related to a wide range of issues in metaphysics, philosophy of language, foundations of mathematics, logic, and natural language semantics. Some of the issues that we might consider:

What is the nature of plurality? How should we think of the relations among sets, mereological sums, pluralities and individuals? What (if anything) are these different ontological kinds, and how are they related? Can one thing be many? (Is one deck identical to the 52 cards? Or is this not an identity relation?)

Singular and plural predication; singular and plural quantification; singular and plural reference. How do those fit together? When we consider the full range of determiners in English and try to incorporate quantifiers to represent that, there are many interesting semantic issues to resolve.

How does the semantics of plurality relate to the semantics of mass terms?

In the foundations of mathematics, how far can plurals take us without set theory? What is the relationship of second-order logic to plurals and to the foundations of mathematics?

What is the nature of ontological commitment? What does semantics commit the semanticist to? What does it say speakers are committed to? (For example, if I say that the analysis of adverbs requires an event semantics, does that mean that an ordinary user of adverbs is committed to the existence of events? This kind of issue becomes interesting when we look at the semantics of plurals.)

Can we talk about everything without paradox? Are plurals a special resource to enable us to do so?

A large number of issues about the relationship of semantics and pragmatics come together when we consider definite descriptions. Usually discussions focus on singular definite descriptions, but we can see what difference (if any) it makes when we insist that the account be general enough for plural and mass definite descriptions. This then also relates to the consideration of pronominal cross-reference and demonstrative reference.

Some have argued that an event semantics is important for getting plurals right. It will be interesting to look at event semantics and how that relates to plurals.

I will meet with each enrolled student early on in the semester to identify some areas of interest and get started on developing the student's presentation and paper on a topic of the student's choice.

If people are interested in looking into this before the semester begins, my book is available on the department's website:
(Click on my name in the list of faculty.) Also, Oystein Linnebo has posted a draft of his forthcoming Stanford Encyclopedia article, and it is a good introduction: Scroll down to "Plural Quantification."

We will not presume any greater familiarity with logic than you would acquire by being alive and awake through most of PHI 651.

Please get in touch with me if you have questions.

Posted by Tony Marmo at 23:11 BST
Updated: Monday, 9 August 2004 08:02 BST

Knowing That and Knowing How

By David Bzdak
(Source Orange Philosophy 07/17/2004)

I've recently begun delving into the literature on the difference between knowing that and knowing how (re-delving, actually, but that's neither here nor there). I've been quite surprised to find that it almost all jumps off from Ryle's discussion of the topic in The Concept of Mind, which I believe was published in 1950.

I see hardly any mention of this topic other than in response to Ryle, and not much on the topic pre-Ryle. This strikes me as odd for such an important epistemological distinction (I realize that the distinction was recognized, pre-Ryle -- I'm wondering if it was philosophically analyzed). Am I missing a mountain of books/journals/articles out there (perhaps in the non-analytic tradition)? Or did Ryle really essentially begin this discussion?


I've wondered about that myself. I wish I could offer some help. The only thing that comes to mind is Cook Wilson, but I don't think he published much.

Posted by: chuck at July 17, 2004 09:36 AM

Here are some notes from a fellow in Edinburgh, with a nice little bibliography:

Also, I saw:
Snowdon, Paul. "Knowing How and Knowing That: A Distinction Reconsidered" Proceedings of the Aristotelian Society 104:1 Sep 2003.
Posted by: chuck at July 18, 2004 09:02 AM

Arguable one of the great discussions in ancient chinese thought was precisely about the knowing how & knowling that, with the Daoists perhaps holding that knowing that is useless without knowing how. If you want to read more I would suggest Chad Hansen's ingightful book, A Daoist theory of chinese thought.

Cheers David
Posted by: David Hunter at July 20, 2004 07:55 PM

While it doesn't fit exactly into Ryle's distinction, one could also bring up Heidegger and his distinction between present-at-hand and ready-at-hand. Present-at-hand was more propositional knowledge and he inverted the usual relationship, saying that ready-at-hand or utility was more fundamental. One could, I suppose, move to the Ryle distinction with Heidegger in his middle phase arguing that knowledge-how is more fundamental than knowledge-that. I think one would have to be careful though.
Posted by: clarkgoble at July 20, 2004 08:36 PM

Gosh that was truly terrible spelling. The Daoist story which is supposed to to illustrate this distinction is from (I think) Zhuangzi and is the story of wheel wright Slab, basically the story goes that a Duke is wandering round studying one of the books of the Sages and this lowly Wheelwright laughs at him being a Duke he basically says tell me whats so funny and you better have a good reason for laughing or off with your head. The wheel wright says pardon me Duke but I don't understand why you are wasting your time with the leavings of the Sages. "What" goes the Duke! the wheelwright says well I am growing old and I am trying to teach my son how to bend wood to make wheels, I have taught him everything I know, what to do when making a wheel, still he cannot do it. When I die all I will leave behind is him, and as a wheelwright he will be no good, what I can't teach him is the skill I have, it is the same thing with reading the Sages, what made them Sages cannot be found in their leavings.

There is a fair bit throughout Zhuangzi (also known as Chuang-Tzu) to do with this distinction, usually making the point that skill is more important than knowledge.
Posted by: David Hunter at July 20, 2004 08:45 PM

Posted by Tony Marmo at 14:36 BST
Updated: Monday, 9 August 2004 08:04 BST

Knowing How v. Knowing That: Some Heterodox Idea-Sketches

by Uriah Kriegel

(Source: Desert Landscapes 7/19/2004)

Since Ryle, orthodoxy had it that knowledge-how is categorically different from knowledge-that. The latter is a form of propositional representation of the way things are, whereas the former is just a capacity. The occurrence of the word "knowledge" in both expressions should not mislead us to think that they have something significant in common. There is no way to reduce Agent's knowledge *how* to ride a bike to some knowledge *that* certain things have to be done.

I have a somewhat different view. On my view, knowledge-how consists in *non-conceptual conative representations*. (...)

By "conative representations" I mean representations with a telic, or world-to-mind, direction of fit (wishing that p, wanting that p, hoping that p, intending that p, etc). These are to be distinguished from cognitive representations, which have a thetic, or mind-to-world, direction of fit (believing that p, expecting that p, suspecting that p, etc.).

The above parenthesized examples of conative representations all have propositional, and therefore (presumably) conceptual, content. But just as there are non-conceptual thetic representations, so we should expect there to be non-conceptual telic representations.


If my suggestion is on the right track, then although we cannot say that knowledge-how reduces to knowledge-that, since knowledge-that is propositional (because of the `that'-clause), we *can* say that it reduces to some sort of representational knowledge.


Hi Uriah,

In what sense are your non-conceptual conative representations actually representational? One specific worry: what makes it the case that every step in the causal chain leading from my desire to ride the bike to my actual riding of the bike doesn't count as a non-conceptual conative representation of the next step along the chain?


Comment by Brad Weslake -- 7/20/2004 @ 12:54 am

Ok, I'm only half getting this. Suppose I know how to pedal, and the way to pedal is to use M16 or M17. Intuitively, it's both possible and quite likely that I don't know that the way to pedal is to use M16 or M17 - I don't even have concepts for M16 or M17. But if I know how to pedal, I *must* have a concept for pedaling.

You say, knowledge-how consists in *non-conceptual conative representations*. But there must be at least a little more, right? Because I have to at least have the concept of the thing I know how to do. And it seems that it's optional for me to have the concepts of M16 and M17 - if I'm a physiologist with a detailed understanding of the mechanics of pedaling, I still (can) know how to pedal.

I'm not quite sure I'm interpreting you right, so I'm going to stop here. Does this sound correct?

Comment by Jonathan Ichikawa -- 7/20/2004 @ 6:41 am

Woo-hoo, my favorite kind of response: everybody's right (except me?).

Christian, I think your formulation of the conclusion in terms of ability rather than knowledge is better than my original formulation. Here's why I was thinking of knowledge nonetheless. The common account of knowledge - as a belief that is true, justified, and Gettier-proof - seems to be tailored to knowledge-that. But once we agree that there is something fundamental in common to knowledge-that and knowledge-how, then we need some wider understanding of knowledge simpliciter. My view of knowledge-how isolates only one fundamental commonality with knowledge-that, namely, that the respective states are representational. However, with this may come other commonalities. Representations have conditions of satisfaction (truth conditions in the case of knowledge-that, and on my view, fulfilment conditions in the case of knowledge-how) and they are answerable to certain standards of representation formation (compliance with which gives "justification," "warrant," or something in the vicinity). So maybe a case could be made for talk of knowledge rather than mere ability. But talk of ability is certainly more cautious.

Brad: I think you're also right that talk of conative states being representational is problematic. Conative states are certainly intentional, they are about something. But do they represent something? Only in a somewhat technical sense. In the regular sense of the word, as I hear it, to represent something is to represent it to be the case, or just to be. In that sense, conative intentional states are not normally representational, because they don't represent the way the world is, but rather the way one would want the world to be. In using the term "representation" I had in mind the more technical sense used in discussions of the representational theory of mind etc. It is common in these discussions to take desires and other conative states to be representational, in the minimal sense that they have conditions of satisfaction - indeed, in the minimal sense that they are intentional or have aboutness.

Comment by Uriah Kriegel -- 7/20/2004 @ 9:21 am

Uriah Kriegel,

Even with representation limited to the standard philosophical usage, I am worried about how much representation there is in your idea of non-conceptual conative representations. If I intend to ride a bike, but fall off, it is clear that my intention has failed to be satisfied; moreover this normative component of intention seems integral to its being an intention in the first place. Similarly, if I see in my schooner (as we say here in Sydney) a particular hue of yellow beer, it seems I could come to believe that this seeming was mistaken, and that the beer is actually some other hue (even if I couldn't articulate the difference beyond saying "it seems different"). What is the analog for your non-conceptual conative representations? My question about the causal chain was designed to get at this: some criterion is needed for sorting out causal links that count as representational from those that do not; it seems that this criterion needs to be normative; and it doesn't seem that the processes that underlie know-how are normative in the right way.


Comment by Brad Weslake -- 7/21/2004 @ 10:53 pm

Right, I forgot to address Brad's problem of fixing the content of those "non-conceptual conative representations." The problem described by Brad is parallel, however, to the *horizontal problem of disjunction* for naturalist theories of cognitive representations. There are two disjunction problems that arise for cognitive representations. The first is: what makes it the case that my cat-thought is a representation of a cat and not of a cat-or-small-dog-on-a-moonless-night? Call this the vertical problem of disjunction. The second is: what makes it the case that my cat-thought is a representation of a cat and not of a cat-or-big-bang? Call this the horizontal problem of disjunction. Similarly, we may ask what makes it the case that a desire to bike is a (conative) representation of biking and not of biking-or-getting-to-the-office? I don't have a good answer to this problem, or any answer really, but I'm comforted by having the partners in innocence I have in naturalist theories of cognitive representations. In particular, Fodor, Dretske, and Gillet had some interesting things to say about this problem at the end of the 80s.

Comment by Uriah Kriegel -- 7/22/2004 @ 3:15 pm


Seems I am just more sceptical than you of causal theories of content in general (I don't think any of the replies to these sorts of problems ended up succeeding). To expose my own biases, I think the prospects of an explanation of know-that in terms of know-how (via functionalist or pragmatist approaches to content) are far better than those going in the other direction...

Comment by Brad Weslake -- 7/22/2004 @ 9:32 pm

Posted by Tony Marmo at 13:58 BST
Updated: Monday, 9 August 2004 08:05 BST
Wednesday, 21 July 2004

On Opacity

Continuing from...

B. Language and Interpretative Techniques

Here I assume that truth-conditions alone do not automatically trigger any process of verifying sentences or reviewing beliefs or other propositional attitudes. Rather it is the users of natural languages that play a proactive role in the comprehension and evaluation of sentences. The proactive role of speakers is evident from the fact that the normal usage of such languages does not support the famous deflationist claim that adding the predicate is true to a sentence φ adds no content to it.
The fact that a string like it is true that contributes to the meaning of a natural language sentence is made evident in cases where the same sentence is evaluated differently by language users. In other words, a sentence may be deemed true by the person who says it and false by those who hear it, regardless of the conditions obtaining in a certain world or situation. This means that the claim that a sentence like (2a) means (2b) can only be made from the perspective of the person who utters it, if and only if he is sincere:
(2) a. Tom Jobim composed a new song.
b. It is true that Tom Jobim composed a new song.

It cannot be made from the perspective of the hearer, who may doubt (2a). And, if the utterer is a conscious liar, it cannot be made from his perspective either. Notice that this discrepancy of opinions may occur independently of whether Tom Jobim has or has not composed a new song in a certain world or situation. In part this observation shows a reactive role played by hearers, when they accept or doubt a sentence. But the possibility that even the utterer of a sentence may deem it false evidences the proactive role he plays. And if one takes into account that interlocutors in a conversation also have their own intentions and communicate them, then their evaluation of any sentence may also reflect a proactive role, more than a reactive one.
Thus, a sentence φ is not inherently construed as true (Tφ), false (Fφ), undecidable (Uφ), possible (◊φ) or necessary (□φ). Those are judgements made by the language users when they handle sentences.
But the language users' proactive role is not limited to their capacity to merely ascribe truth-values to sentences. It is often the case that language users are able to construe gibberish utterances in a manner that makes sense, without incurring paradoxes or an explosive inconsistency à la Pseudo-Scotus (99). And they usually do so, unless they choose not to.
Let us illustrate this idea with examples like the sentences in (3). While their equivalents in any artificial logical language are absurdities or paradoxes that require sophisticated philosophical engineering to solve, users of natural languages somehow manage to extract non-paradoxical, coherent meanings from them:
(3) a. This sentence means the converse of whatever it means.
b. There's no such thing as legacies. At least, there is a legacy, but I'll never see it.
c. The ambitious are more likely to succeed with success, which is the opposite of failure.

There is an evident self-referential paradox in (3a), a clear contradiction in (3b) and a redundant or circular thought in (3c). And they can be interpreted in this manner, if the language users who read them proactively choose to see the possible paradoxes, contradictions and redundancies. But, for reasons that I shall go into further on, that is not what they frequently do in the everyday common usage of a natural language. Accordingly, (3a) is usually interpreted either as referring to another sentence or as a potential metaphor for something, while (3b) may be taken as a revision of statements and (3c) as an attempt to stress something. This evidences that hearers/readers are able to recover the intended messages behind clumsily constructed sentences. Methinks that, in the exercise of this capacity, language users proactively employ certain techniques. But these techniques are not just any techniques: they are not merely ad hoc inventions, nor are they chosen haphazardly.

C. Paraconsistency

Schöter (1994), among others, claims that the semantics of natural languages exhibits four basic properties that are already acknowledged and have been investigated in non-classical logic theories: paraconsistency (76), defeasibility, which contrasts with monotonicity as defined in (98), partiality, which contrasts with totality as defined in (102), and relevance (101). Though paraconsistent approaches in Logic have been developed since the seminal works of Jaśkowski (1948) and da Costa (1963, 1997), and though they have important consequences for Linguistics, similar approaches in natural language semantics are only beginning.
As Priest (2002) explains, much of Paraconsistent Logic consists of proposing and applying strategies against inconsistency (cf. 93). There are several possible techniques to avoid or contain explosion within a logic system or a semantics for artificial or natural languages, such as propositional filtration, non-adjunction, a non-truth-functional approach to negation, de Morgan algebras, etc. But those are all techniques invented by logicians for artificial languages. Should we accept the idea that in construing sentences the users of natural languages also use techniques to control explosions, then such techniques must be available as inherent interpretative devices of the human linguistic systems. In other words, as Logicians have their paraconsistent techniques for artificial languages, so the users of natural languages have techniques of their own, which are made possible by the fundamental properties of such languages.
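To make the idea of explosion-containment concrete for readers unfamiliar with these systems, here is a minimal sketch of Priest's Logic of Paradox (LP), which I pick purely as a stand-in illustration (the text above does not single out LP): a contradiction can be "designated" without entailing everything.

```python
# A minimal sketch of Priest's Logic of Paradox (LP), chosen here only
# as an illustration of an explosion-containing technique.
# Values: T (true only), B (both true and false), F (false only);
# the designated values are T and B.

ORDER = {'F': 0, 'B': 1, 'T': 2}  # truth order: F < B < T

def neg(v):
    return {'T': 'F', 'F': 'T', 'B': 'B'}[v]

def conj(v, w):
    # conjunction is the minimum in the order F < B < T
    return min(v, w, key=lambda x: ORDER[x])

def designated(v):
    return v in ('T', 'B')

def entails(premise, conclusion, valuations):
    # premise, conclusion: functions from a valuation dict to an LP value
    return all(designated(conclusion(val))
               for val in valuations if designated(premise(val)))

# every assignment of LP values to the atoms p and q
vals = [{'p': vp, 'q': vq} for vp in 'TBF' for vq in 'TBF']

# Explosion (p & ~p |= q) fails: with p = B the premise p & ~p gets the
# designated value B, yet q may still be F, so the entailment breaks.
explosion = entails(lambda v: conj(v['p'], neg(v['p'])),
                    lambda v: v['q'], vals)
print(explosion)  # False
```

In classical two-valued logic the same check would return True, which is precisely the explosion such paraconsistent techniques are designed to block.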
Accordingly, the capacity of humans to ascribe values to sentences independently of the actual conditions obtaining in a certain world or situation, which underlies the phenomenon called opacity in human languages, has to do with at least one of these techniques humans naturally possess: the contextualisation of sentences. I shall explore and unfold this matter in what follows.


Posted by Tony Marmo at 14:16 BST
Updated: Wednesday, 21 July 2004 14:20 BST
Here I share some pieces of the current draft version of one of my articles on opacity in natural languages. I shall post one or two excerpts per day. I hope you like them. Comments are welcome.


1. Prelude

1.1 General Considerations

A. The Issue

In this work I shall examine some aspects of the semantic phenomenon called opacity from the perspective of human languages in their common usage (rather than artificial languages or usages created by Logicians), relating it to the manner in which such languages, as computational systems, equip their users with the tools and techniques to handle (pseudo-)paradoxes and the explosions caused by contradictions. Although I resort to the work of Logicians and Philosophers, the principles and theoretic notions herein proposed to formalise such phenomena are primarily hypotheses respecting the inherent machinery of human languages, rather than merely invented solutions to the issues in question.
The lato sensu notion of opacity can initially be characterised figuratively as the phenomenon of a sentential context not allowing the light of a semantic/logical principle to pass through, i.e., a certain context is opaque because a certain (mode of) inference is not visibly valid therein.
There have been some more specific and/or stronger hypotheses trying to define actual instantiations of this notion, and/or to predict when and explain why they occur. Indeed, as far as I know, there have been at least two basic approaches to opacity.
The first basic approach to opacity, herein called classic or traditional, and which will be questioned in Section 2, assumes the definition of opaque context given in (1) below, or variants thereof. Although (1) has never been accepted by important academic factions, such as the Russellian philosophers among others, it is still the most widespread conception in the literature:

(1) Opaque context (classic version)
A sentential context C containing an occurrence of a term t is opaque if the substitution of co-referential terms is an invalid mode of inference with respect to this occurrence. (See McKinsey 1998, Quine 1956)

The second basic view, which will be addressed in Section 4, revolves around the idea of non-symmetry of accessibility relations. This second view has more adherents among linguists.
The alternative view I shall sketch here attempts to determine how fundamental opacity is and to relate it to issues of consistency and non-paradoxical interpretation in the common usage of natural languages.


Posted by Tony Marmo at 14:04 BST
Updated: Wednesday, 21 July 2004 14:19 BST
Tuesday, 20 July 2004


I am very happy to announce that my dear friend and colleague Martin Honcoop's work, Dynamic Excursions on Weak Islands, has been re-edited and published via the Semantics Archive. Martin, a proud disciple of both Groenendijk and Szabolcsi, was a very competent linguist and a true expert in many things, and I had the privilege to meet him during his short lifetime and call him a friend. Together with Marcel den Dikken, Eddy Ruys and Rene Mulder, he made up an outstanding group of young formalists unmatched by their country-fellows (either of their age or younger). Martin was exceptionally patient when he had to explain what formal linguistics is about to fanatic and intolerant empiricists. After Mulder had moved to the publishing business and Marcel had gone to the States, it is no exaggeration to say that when Martin died, one quarter of the future of formal Linguistics in the Netherlands perished too. We all miss him a lot, and I congratulate the blessed soul who put his paper in the Semantics Archive.

Posted by Tony Marmo at 17:31 BST
Updated: Monday, 9 August 2004 08:08 BST
Monday, 19 July 2004

Boolean networks with variable number of inputs (K)

Metod Skarja, Barbara Remic, and Igor Jerman

We studied a random Boolean network model with a variable number of inputs K per element. An interesting feature of this model, compared to the well-known fixed-K networks, is its higher orderliness. It seems that the distribution of connectivity alone contributes to a certain amount of order. In the present research, we tried to disentangle some of the reasons for this unexpected order. We also studied the influence of different numbers of source elements (elements with no inputs) on the network's dynamics. An analysis carried out on the networks with an average value of K = 2 revealed a correlation between the number of source elements and the dynamic diversity of the network. As a diversity measure we used the number of attractors, their lengths and similarity. As a quantitative measure of the attractors' similarity, we developed two methods, one taking into account the size and the overlapping of the frozen areas, and the other in which active elements are also taken into account. As the number of source elements increases, the dynamic diversity of the networks does likewise: the number of attractors increases exponentially, while their similarity diminishes linearly. The length of attractors remains approximately the same, which indicates that the orderliness of the networks remains the same. We also determined the level of order that originates from the canalizing properties of Boolean functions and the propagation of this influence through the network. This source of order can account for only one-half of the frozen elements; the other half presumably freezes due to the complex dynamics of the network. Our work also demonstrates that different ways of assigning and redirecting connections between elements may influence the results significantly. Studying such systems can also help with modeling and understanding a complex organization and self-ordering in biological systems, especially the genetic ones.

Keywords: Boolean networks, biological systems, connectivity distribution, variable K, sources of order, canalization, frozen elements, no-input elements (source elements), attractor similarity, effective distribution, genetic networks, high orderliness.
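The model described in the abstract can be sketched roughly as follows; the network size, the distribution of K and the synchronous update scheme below are illustrative choices of mine, not the authors' actual parameters.

```python
import itertools
import random

# A toy random Boolean network with a variable number of inputs K per
# element, in the spirit of the model described above. All parameters
# here are invented for illustration.

def make_network(n, rng):
    """Each element gets k inputs (k varies) and a random Boolean table."""
    net = []
    for _ in range(n):
        k = rng.choice([1, 2, 3])                 # variable K per element
        inputs = tuple(rng.randrange(n) for _ in range(k))
        table = {bits: rng.randrange(2)
                 for bits in itertools.product((0, 1), repeat=k)}
        net.append((inputs, table))
    return net

def step(net, state):
    # synchronous update: every element reads its inputs simultaneously
    return tuple(table[tuple(state[i] for i in inputs)]
                 for inputs, table in net)

def attractor_length(net, state):
    """Iterate until a state repeats; the repeat closes the attractor cycle."""
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(net, state)
        t += 1
    return t - seen[state]

rng = random.Random(1)
net = make_network(8, rng)
start = tuple(rng.randrange(2) for _ in range(8))
print(attractor_length(net, start))  # cycle length of the attractor reached
```

The state space is finite, so every trajectory must eventually revisit a state; the measures in the paper (number of attractors, similarity of frozen areas) would be built on top of this basic attractor search.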


Posted by Tony Marmo at 17:57 BST
Updated: Monday, 9 August 2004 08:09 BST
Friday, 16 July 2004

We have often heard the same complaint when an influential linguist, such as Chomsky or Kayne, releases a new paper: "Oh no! He changed everything again!"

A lot of non-theoretic linguists, inquisitors sunk in the darkness of 19th-century empiricist dogmas, are very reactionary in this sense: they hate changes in the theoretic framework, they do not want any of them, and they deem it absurd to change things all the time. But if the concepts of a form of thought never change, then it is not real science.

Now, newspapers around the world give us a good example of what solid science really means:

Hawking finds hole in his theory

Source: Associated Press, The Globe and Mail

After almost 30 years of arguing that a black hole swallows up everything that falls into it, astrophysicist Stephen Hawking did a scientific back-flip Thursday.

The world-famous author of A Brief History of Time said he and other scientists had it wrong -- the galactic traps may in fact allow information to escape.

The findings, which Dr. Hawking is due to present at the 17th International Conference on General Relativity and Gravitation in Dublin on July 21, could help solve the "black hole information paradox," which is a crucial puzzle of modern physics.

Current theory holds that Hawking radiation contains no information about the matter inside a black hole and once the black hole has evaporated, all the information within it is lost.

However this conflicts with a central tenet of quantum physics, which says such information can never be completely wiped out.

Congratulations Professor Hawking! You are a truly wise man!

Read more:

Het Volk
Los Andes
The Australian
Corriere della Sera
La Crónica de Hoy
The Houston Chronicle
The Guardian
The Globe and Mail
The Independent
El Mundo
No Olhar
El Periodico de Catalunya
RP Online
The Telegraph
Ziua Magazin

Posted by Tony Marmo at 10:32 BST
Updated: Monday, 9 August 2004 08:10 BST

Roumyana Pancheva has a paper on the present perfect tense puzzle, a linguistic phenomenon:

Another Perfect Puzzle

The interaction of the perfect with temporal adverbials is the domain of the well-known present perfect puzzle (Klein 1992) - the fact that certain adverbials are prohibited with the present perfect in English (though not in some other languages) while acceptable with non-present perfects. As is generally agreed, the prohibition is against past specific adverbials (cf. Heny 1982, Klein 1992, Giorgi and Pianesi 1998, a.o.).

This paper adds yet another puzzle to the area of perfect-adverbial interactions. It establishes a new generalization regarding the modification of perfects by both past and non-past specific temporal adverbials. The puzzling facts are illustrated in (1).

(1) a. ?? We saw John last night. He had arrived yesterday...
b. We saw John this morning. He had arrived yesterday...
c. We saw John last night. He had arrived the same day...

Adverbials like yesterday are allowed in past perfects, and they may specify the time of the event, as in (1b). However, their presence is restricted, depending on what the reference time in the past perfect is. The reference time is the interval which tenses relate to the speech time, and which the event time is situated relative to. In the case of the past perfects in (1), the reference time is a past interval anaphoric to the reference time of the preceding past sentences: last night in (1a) vs. this morning in (1b). The choice of a reference time contained in the interval denoted by the adverbial modifying the perfect results in degraded acceptability, as in (1a). When the reference time in the past perfect is not contained in the denotation of the adverbial modifying the perfect, the result is an acceptable sentence, as in (1b).
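The containment generalization in the abstract can be sketched as a simple interval check; the timeline and the concrete interval values below are invented purely for illustration.

```python
# A rough sketch of Pancheva's generalization: modification by a specific
# adverbial is degraded when the past perfect's reference time is
# contained in the interval the adverbial denotes. Intervals are
# (start, end) pairs in hours on an arbitrary timeline; all concrete
# values here are invented for illustration.

def contained(inner, outer):
    return outer[0] <= inner[0] and inner[1] <= outer[1]

def acceptability(reference_time, adverbial):
    return "degraded" if contained(reference_time, adverbial) else "ok"

YESTERDAY    = (0, 24)   # interval denoted by "yesterday"
LAST_NIGHT   = (18, 24)  # reference time set by "last night" (inside yesterday)
THIS_MORNING = (30, 36)  # reference time set by "this morning" (the next day)

print(acceptability(LAST_NIGHT, YESTERDAY))    # (1a): degraded
print(acceptability(THIS_MORNING, YESTERDAY))  # (1b): ok
```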

Read it

Posted by Tony Marmo at 07:24 BST
Updated: Monday, 9 August 2004 08:11 BST
Thursday, 15 July 2004

Brian Leiter has a very important and interesting post on expertise and knowledge, which is both a defence of scientists against ignorance and a starting point for discussing what kind of attitude really is 'arrogance' and whether or not it is something natural to academic life. Although I do not agree with one line of his text, where he says that 'science is not a democracy', his post in its entirety seems correct and highly relevant to other issues of this blog:

Arrogance and Knowledge

by Brian Leiter, July the 13th, 2004

Andrea Lafferty, executive director of the Traditional Values Coalition, a conservative religious organization, delivers what could be the signature line for our backwards times in America:

There's an arrogance in the scientific community that they know better than the average American. [Source: the NYT]

In fact, of course, scientists do know quite a bit better than the average American about the matters for which their scientific expertise equips them. Those with knowledge, surprisingly, know more than those who are ignorant. Is that arrogance?

As Chris Mooney remarked, science is not a democracy [sic], and in a democratic culture, that inevitably becomes a cause of resentment, as Ms. Lafferty's comment attests. This resentment of competence was first made vivid to me when I appeared on CNN more than a year ago to discuss the textbook selection process in Texas. When I dismissed the argument that the textbook selection process should be democratic (which it isn't, though it pretends to be) on the grounds that competent educators should vet textbooks, not political and religious groups, the CNN host, Anderson Cooper, cut me rather short: that reply clearly made him uncomfortable, and he changed the topic to how the selection process wasn't really democratic anyway.

Resentment of competence was also a motif suggested by my exchange with Professor Eastman -- one of the ignorant law professors shilling for teaching creationist lies to schoolchildren -- who used that favorite rhetorical device of the anti-Darwin crowd by referring to its "tyrannical orthodoxy." Unfortunately, as I noted on that occasion, views that are correct ought to be orthodox, and they ought to exercise the tyranny appropriate to truth, namely, a tyranny over falsehood and dishonesty.

But when truth and knowledge clash with deep-seated prejudices--especially those reinforced from the pulpit and in the public culture--resentment towards the arrogance of those with knowledge and competence grows.

Unfortunately, I don't see much room for compromise in this domain. Knowledge and competence cannot become meek and abashed merely to avoid offending the vanity of the undereducated, the parochial, and the unworldly. The Enlightenment dream was to extend the blessings of reason and knowledge as widely as possible. In the United States, that Enlightenment project has been stymied: at the highest echelons of the culture, the material and institutional support for the pursuit of knowledge and competence is unparalleled, yet the fruits of these labors are often either regarded with suspicion and resentment in the public culture at large--or simply go unrecognized and unnoted altogether.

Could there be a greater failure of the Enlightenment project than that a huge majority of U.S. citizens actually believe there is an intellectual competition between Darwin's theory of evolution by natural selection and intelligent design creationism? Or that the President of the country publicly affirms their skepticism, without being held up for ridicule in the media and the public culture?

These are, for various reasons, scary times in [the United States of] America, but the increasingly brazen haughtiness of the purveyors of ignorance and lies--who cloak their backwardness in the judgmental rhetoric of "arrogance" and a none-too-subtle appeal to the "ordinary" person's sense of democratic equality--may be the most worrisome development of all. That the empire of ignorance spreads its domain portends calamities from which it could take centuries to heal.

Permanent link

Posted by Tony Marmo at 00:35 BST
Updated: Thursday, 15 July 2004 01:00 BST
Sunday, 11 July 2004

Purver and Ginzburg shed some light on the Semantics of Noun Phrases, from the perspective of the HPSG school, which I both respect and dissent from:

Clarifying Noun Phrase Semantics

Matthew Purver and Jonathan Ginzburg

Reprise questions are a common dialogue device allowing a conversational participant to request clarification of the meaning intended by a speaker when uttering a word or phrase. As such they can act as semantic probes, providing us with information about what meaning can be associated with word and phrase types and thus helping to sharpen the principle of compositionality. This paper discusses the evidence provided by reprise questions concerning the meaning of nouns, noun phrases and determiners. Our central claim is that reprise questions strongly suggest that quantified noun phrases denote (situation-dependent) individuals-or sets of individuals-rather than sets of sets, or properties of properties. We outline a resulting analysis within the HPSG framework, and discuss its extension to such phenomena as quantifier scope, anaphora and monotone decreasing quantifiers.

Download link

Posted by Tony Marmo at 07:19 BST
Updated: Monday, 9 August 2004 08:12 BST
Saturday, 10 July 2004


Yoad Winter on Choice Functions

Winter's page has many papers, and his concerns include computational linguistics. It is worth checking. One of his recent works, Choice Functions and the Semantics of Indefinites, is a sort of advanced introduction to the issue.

Methinks that choice functions can be used for almost anything in semantics. Hamblin approaches, according to what more experienced folks have told me, began with questions. Thenceforth, Kratzer and many others have applied them to the semantics of scope. But, for me, the obvious application of the Hamblin approach would firstly be binding/linking theory. It seems that there have already been some attempts to do so. (Anyone correct me if I'm wrong, please.)

To my dismay, however, people still insist on separating binding from control. Though I love syntax, I dislike a syntactic-configuration solution for binding and control. A choice-function solution is more agreeable to my intuitions.
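For readers new to the device discussed in Winter's paper, a choice function is simply a function that picks one member out of each non-empty set; here is a minimal sketch, with a domain and a selection strategy invented purely for illustration.

```python
# A minimal sketch of a choice function: a function f, defined on
# non-empty sets, that returns one member of each set. On a
# choice-function analysis, an indefinite like "a song" can denote
# f({x : song(x)}) for some contextually supplied f. The domain and
# the preference order below are invented for illustration.

def make_choice_function(preference):
    """Build a choice function that picks the most preferred member."""
    def f(s):
        assert s, "choice functions are defined only on non-empty sets"
        return min(s, key=preference.index)
    return f

songs = {"Garota de Ipanema", "Wave", "Aguas de Marco"}
f = make_choice_function(["Wave", "Garota de Ipanema", "Aguas de Marco"])

# "Tom Jobim composed a song" ~ composed(tom_jobim, f(songs))
print(f(songs))  # Wave -- the witness this particular f selects
```

Different contexts supply different functions f, which is what lets the same indefinite pick out different witnesses on different occasions.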


One friend from This is not the name of the blog has a crucial question:


by Chris Tillman

I'm probably overlooking something obvious, but I was wondering if someone could help me out with this.

Uses of 'without' sometimes help express conjunctions with a negated conjunct, as in 'Al is going to the store without Mary going'. This should be symbolized as A & ~M. Sometimes it is used to express a conditional, as in 'Without going to the store, John will have nothing to eat for dinner.' Here is the sentence that is troubling me:

(S) Bill drinks without Harry drinking.

Should (S) be read as a conjunction, a conditional or neither? And if neither, then what?
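One way to see why the question has bite is to tabulate the two candidate symbolizations; the propositional encoding below is only a rough setup of Tillman's puzzle, not an answer to it.

```python
# A small truth table contrasting two candidate symbolizations of
# (S) "Bill drinks without Harry drinking": the conjunctive reading
# B & ~H and a conditional reading ~H -> B. This encoding merely sets
# up the question; it does not resolve it.

from itertools import product

def conjunctive(b, h):
    return b and not h          # B & ~H

def conditional(b, h):
    return h or b               # ~H -> B, equivalent to H v B

for b, h in product((True, False), repeat=2):
    print(b, h, conjunctive(b, h), conditional(b, h))

# The readings diverge at B = True, H = True (conjunction false,
# conditional true) and at B = False, H = True, so the choice of
# symbolization genuinely matters.
```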

See it

Posted by Tony Marmo at 14:34 BST
Updated: Monday, 9 August 2004 08:13 BST
