LINGUISTIX&LOGIK, Tony Marmo's blog
Sunday, 8 August 2004

Topic: Cognition & Epistemology


I often like to play the following game when telling stories to people (either grown-ups or children):

First-- I pick the title of a known tale and start telling another story with the characters from a third source, e.g.:

Puss in Boots

Once upon a time there lived three young sisters: Snow White, Goldilocks and Red Riding Hood. Their father was a very good woodchopper who had married an evil woman. Their stepmother made their lives miserable and forced them to do all the chores, while she kept practising her witchcraft. One day she put a spell on the woodchopper and made him go to the market with his young daughters in order to sell them. 'We shall need the money to buy our victuals,' she said. Then, the mesmerised man went to the market with his daughters...

Second-- In the middle of the story, I ask the hearers some unexpected question, like:

Who will the three girls meet on the road before they can get to the market? The Wolf or the Charming Prince?

People often got confused with this kind of game and made all sorts of guesses. It took a long time before they realised that there could be no right answer according to common lore, because the story had been twisted from the beginning.

I have seen this kind of problem in scientific discussions too. People usually try to discuss the implications and the empirical testing methods employed to confirm or discard assumptions that were absurd from the start.

Indeed, people do not like to question premises, but are eager to have heated arguments on the consequences. Why?

Posted by Tony Marmo at 01:01 BST
Updated: Monday, 9 August 2004 07:41 BST
Kai von Fintel's reply to Weatherson's comments:

Your thoughts here are quite on target. One can take distributional/syntactic facts (NPI-licensing in conditional antecedents) as an argument for a semantic analysis (monotonic semantics for conditionals with additional epicycles). But one can also take semantic evidence (apparent entailment patterns) as an argument against a particular analysis of the distribution patterns (against the Fauconnier-Ladusaw theory of NPIs for example). So, there is a tension here between syntax and semantics, which is precisely why it is necessary to always do both of them: you can't be a semanticist without knowing a whole lot about syntax, and vice versa. On top of that, it is inevitable that one needs to take pragmatics into account. In the end, this kind of inquiry is part of a complex science and there are a lot of moving parts.

The particular fact of NPI-licensing in conditional antecedents has been a major focus of my own work on conditionals, see my two papers:

Counterfactuals in a Dynamic Context (2001) in Michael Kenstowicz (ed.) Ken Hale: A Life in Language, MIT Press. pp. 123-152.

NPI Licensing, Strawson Entailment, and Context Dependency (1999) Journal of Semantics, 16(2), pp. 97-148.

Source: von Fintel's blog

Posted by Tony Marmo at 00:01 BST
Updated: Monday, 9 August 2004 07:56 BST

On King's Syntactic Evidence for Semantic Theory

by Brian Weatherson
Source: Thoughts Arguments and Rants 2/14/2003

I finally got around to reading Jeff King's paper on syntactic evidence for semantic theories, and I was struck by one of the examples he uses. At first I thought what he said was obviously wrong, but on reflection I think it might not be so obvious. (Most of the paper seems right to me, at least on a first reading through, but I didn't have anything detailed to say about those parts. Well, except to note that debates in philosophy of language are getting pointed nowadays.)

Anyway, here was the point that I think I disagree with. Jeff wants to argue that syntactic data can sometimes be used to support semantic theories. One example of this (not the only one) involves negative polarity items (NPIs). Examples of NPIs are ever and any (the latter when it is meant as an existential quantifier). It seems these words can only appear inside the scope of a negation, or in a context that behaves in some ways as if it were inside the scope of a negation.

Simplifying the argument a little bit, Jeff seems to suggest that the following argument could be used to provide support for its conclusion.

(1) NPIs are licenced in the antecedents of conditionals

(2) NPIs are only licenced in downwards entailing contexts

(3) The antecedent of a conditional is a downwards entailing context

A `downwards entailing context' is (roughly) one where replacing a more general term with a more specific term produces a logically weaker sentence. So while (3a) does not entail (3b), showing that ordinary contexts are not downwards entailing, (3c) does entail (3d), showing that negated contexts are downwards entailing.

(3a) I will be given a birthday cake tomorrow.

(3b) I will be given a poisonous birthday cake tomorrow.

(3c) I will not be given a birthday cake tomorrow.

(3d) I will not be given a poisonous birthday cake tomorrow.

(I assume here that poisonous birthday cakes are still birthday cakes. I do hope that's true, or all my examples here will be no good.)
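Downward entailment can be illustrated with a toy extensional model (my own illustration, not from the original post; the proposition names and world numbers are invented), where a proposition is a set of possible worlds and entailment is the subset relation:

```python
# Toy model: propositions are sets of possible worlds; A entails B iff A is a subset of B.
worlds = set(range(8))
cake = {1, 2, 3}       # worlds where I am given a birthday cake tomorrow
poisonous_cake = {2}   # worlds with a poisonous birthday cake (a subset: such cakes are still cakes)

def entails(a, b):
    return a <= b

def negate(p):
    return worlds - p

# Ordinary (upward) context: the general claim does not entail the specific one,
# matching the failure of (3a) to entail (3b).
assert not entails(cake, poisonous_cake)

# Under negation the direction reverses: "no cake" entails "no poisonous cake",
# matching the entailment from (3c) to (3d). Negation is a downward entailing context.
assert entails(negate(cake), negate(poisonous_cake))
```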

(2) was first proposed (to the best of my knowledge) in William Ladusaw's dissertation, in I think 1979, and it has been revised a little since then, but many people, I think, hold that it is something like the right theory of when NPIs are licenced. But it does have one striking consequence: together with (1), it implies (3). To give you a sense of how surprising (3) is, note that it implies that (4) entails (5).

(4) If I am given a birthday cake tomorrow, I will be happy.

(5) If I am given a poisonous birthday cake tomorrow, I will be happy.

Now, many people think that (4) could be true while (5) is false. It is certainly the case that there are contexts in which one could say (4) and not say (5). Perhaps the best explanation for that is pragmatic. Those who think that indicative conditionals are either material or strict implications will hold that it is pragmatic. But perhaps it is semantic. Officially, I think it is semantic, though I think the other side of the case has merit.

Here's where I think I disagree with Jeff. Imagine I am undecided about whether (4) really does entail (5). I think that the argument (1), (2) therefore (3) has no force whatsoever towards pushing me to think that it does. Rather, I think that only evidence to do with conditionals can tell in favour of the entailment of (5) by (4), and if that evidence is not sufficient to support the entailment claim, all the worse for premise (2).

At least, that was what I thought at first. On second thoughts, I think maybe I was a little too dogmatic here. On further review, though, I think my dogmatism was in the right direction. To test this, try a little thought experiment.

Imagine you think that all the evidence, except the evidence about conditionals, supports (2), or some slightly tidied up version of it. (This is not too hard to imagine, I think; (2) does remarkably well at capturing most of the data.) And imagine that while there are pragmatic explanations of the apparent counter-examples to If Fa then p entails If Fa and Ga then p, you think those explanations are fairly weak. (Again, not too hard to imagine.) Does the inductive evidence in favour of (2), which we acknowledge is substantial, and the obvious truth of (1) give you reason to take those pragmatic explanations more seriously, and more generally more reason to believe that If Fa then p does entail If Fa and Ga then p? I still think no, but I can see why this might look like dogmatism to some.
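For the material-implication reading mentioned earlier, the disputed entailment is mechanically checkable: strengthening the antecedent is valid for the material conditional. A small sketch (my illustration, not part of the original post):

```python
from itertools import product

def material(antecedent, consequent):
    # Material conditional: false only when the antecedent is true and the consequent false.
    return (not antecedent) or consequent

# Check that every valuation making "if Fa then p" true also makes
# "if Fa and Ga then p" true.
strengthening_valid = all(
    material(fa and ga, p)
    for fa, ga, p in product([True, False], repeat=3)
    if material(fa, p)
)
assert strengthening_valid
```

On the material (or strict) reading the entailment from (4) to (5) therefore holds, which is why proponents of those analyses must explain the apparent counter-examples pragmatically.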

I sometimes play at being a semanticist, but at heart I'm always a philosopher. And one of the occupational hazards of being a philosopher is that one takes methodological questions much more seriously than perhaps one ought. So at some level I care more about the methodological question raised in the last paragraph than I care about the facts about conditionals and NPIs. At that level, I'm rather grateful to Jeff for raising this question, because it's one of the harder methodological questions I think I've seen for a while.

Posted by Tony Marmo at 00:01 BST
Updated: Monday, 9 August 2004 07:57 BST

Topic: Cognition & Epistemology


Knowledge and Stability

by Joe Shieber
June 08, 2004

Marc Moffett has been considering some interesting questions concerning knowledge and stable belief and justification at Close Range. In response to some probing questions, he submitted a follow-up post, including the following example:

The other day I was going out of town and was supposed to call some friends when I got into the airport. My wife wrote their number down and I glanced over it. As I was leaving, she reminded me to take the number. I said, 'I know it' and proceeded to recite it from memory. Knowing that the number was still fresh in my mind, her response was, 'Do you really know it?'

Marc suggests that the example shows that knowledge sometimes requires not simply reliably-produced true belief (let's grant that the short-term memorial faculty allowing Marc to rattle off the number correctly is reliable), but stable belief, or stably justified belief. Marc claims that we have an intuitive grasp of stability and instability to which he can appeal in making this suggestion. However, and without meaning to be difficult, I still don't know what stability is; nevertheless, let's leave this problem aside.

What I want to do here is suggest an alternate diagnosis for Marc's example.

Knowledge Discourses and Interaction Technology

by Carsten Sørensen & Masao Kakihara

Research within knowledge management tends to either overemphasize or underestimate the role of Information and Communication Technology (ICT). Furthermore, much of the ICT support debate has been shaped by the data-information-knowledge trichotomy and has been too focused on repository-based approaches.
We wish to engage in a principled debate concerning the character and role of knowledge technologies in contemporary organizational settings. The aim of this paper is to apply four perspectives on the management of knowledge to highlight four perspectives on technological options. Based on four knowledge discourses (four interrelated perspectives on the management of knowledge), the paper presents four perspectives on ICT support for the management of knowledge, each reviewing relevant literature and revealing a facet of how we can conceptualize the role of technology for knowledge management.
The four technology discourses focus on: the production and distribution of information; the interpretation and navigation of information; the codification and embedding of collaboration; and the establishment and maintenance of connections.

The Relationship Between Knowledge and Understanding

by Michelle Jenkins

I've been thinking a lot lately about the relationship between knowledge and understanding. Knowledge and understanding, I think, are quite different sorts of things. My general grasp of the nature of understanding is influenced largely by the Ancients. One understands something if she 1) is able to provide a comprehensive explanation, 2) has a systematic grasp of all of the information, and 3) can defend her explanation against any questions or criticisms.

First, in order to understand something I must be able to provide a comprehensive explanation of it. A physicist, for example, who understands the theory of relativity must be able to provide an explanation about why the theory of relativity is as it is, how it works, how it affects a variety of other physical laws and observations, and so forth.

Second, to understand something, one must be able to `see' the relationship between different bits of information in the whole of the field to which the bit of information belongs. You must have a systematic grasp of the information relating to the matter at hand such that you see that the information, and the relationships that the different bits of information have with one another, forms an almost organic whole. Thus, a car mechanic who understands why a part of the car is making the sound that it is has this understanding because he has a systematic grasp of the whole of the vehicle. He knows how the different parts relate to each other and how and in what ways certain conditions will affect both the different parts of the vehicle and the vehicle as a whole. This ties closely into the need for a comprehensive explanation. The physicist (or car mechanic) is able to provide a comprehensive explanation about the thing that she understands because she understands and `sees' the thing as a whole, as part of a complete system.

Finally, in order to understand something, one must be able to defend her claim against any criticisms that are leveled against it. This defense must itself be explanatory. One cannot defend her view by pointing to the words of another, but must defend it by demonstrating an ability to look at the issue in a variety of ways and as part of a systematic whole. She is not proving her certainty with regard to an issue, but is demonstrating her understanding of the issue. In defending her view successfully, she demonstrates a reliability and stability within her account.

Apparent in this account of understanding (I hope!) is that one must have a rather large web of information about the matter which one claims to understand. In order to develop and defend a suitably comprehensive explanation, one must be able to employ a huge number (and variety) of facts and bits of knowledge that relate to the thing she claims to understand. And, as the systematicity requirement shows, that web of information must be structured in a systematic manner.

Not Every Truth Can Be Known:
at least, not all at once

According to the knowability thesis, every truth is knowable. Fitch's paradox refutes the knowability thesis by showing that if we are not omniscient, then not only are some truths not known, but there are some truths that are not knowable. In this paper, I propose a weakening of the knowability thesis (which I call the "conjunctive knowability thesis") to the effect that for every truth p there is a collection of truths such that
(i) each of them is knowable and
(ii) their conjunction is equivalent to p.

I show that the conjunctive knowability thesis avoids triviality arguments against it, and that it fares very differently depending on one other issue connecting knowledge and possibility. If some things are knowable but false, then the conjunctive knowability thesis is trivially true. On the other hand, if knowability entails truth, the conjunctive knowability thesis is coherent, but only if the logic of possibility is quite weak.
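The pressure on the unrestricted thesis comes from Fitch's argument, which can be sketched as follows (a standard reconstruction, with $K$ for "it is known that" and $\Diamond$ for possibility; the knowability thesis supplies the first line):

```latex
\begin{aligned}
&\text{(KT)} && \forall p\,(p \rightarrow \Diamond K p)
  && \text{knowability thesis}\\
&\text{(1)}  && q \wedge \neg K q
  && \text{non-omniscience: some truth } q \text{ is unknown}\\
&\text{(2)}  && \Diamond K (q \wedge \neg K q)
  && \text{from (KT) and (1)}\\
&\text{(3)}  && K(q \wedge \neg K q) \rightarrow (K q \wedge K \neg K q)
  && K \text{ distributes over } \wedge\\
&\text{(4)}  && K \neg K q \rightarrow \neg K q
  && \text{factivity of } K\\
&\text{(5)}  && \neg K (q \wedge \neg K q)
  && \text{(3) and (4) would yield } K q \wedge \neg K q\\
&\text{(6)}  && \neg \Diamond K (q \wedge \neg K q)
  && \text{(5) is a theorem, so its negation is impossible}
\end{aligned}
```

(2) and (6) contradict each other, so the unrestricted thesis forces the conclusion that no truth of the form $q \wedge \neg K q$ exists, i.e. that every truth is already known.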

Some Thoughts About the Relationship Between Information and Understanding

Michael O. Luke
Paper to be presented at the American Society for Information Science Conference, San Diego, CA, May 20-22, 1996

That there is a relationship between information and understanding seems intuitively obvious. If we try to express this relationship mathematically, however, it soon becomes clear that the relationship is complex and mysterious. Knowing more about the connection, however, is important, not least because we need more understanding as our world becomes faster paced and increasingly complex. The influence of increasing the amount of information, increasing the effectiveness of information mining tools, and ways of organizing information to aid the cognitive process are briefly discussed.

Posted by Tony Marmo at 00:01 BST
Updated: Monday, 9 August 2004 08:20 BST
Monday, 2 August 2004


Constructions and Formal Semantics

by Marc Moffett
Source: Close Range June 27, 2004

I have been arguing, for instance in my dissertation, that the correctness of Construction Grammar is pretty much uncontroversial. The point, basically, is that no one has ever proposed a semantic theory for even a simple language that doesn't assume the existence of at least one linguistic construction, usually the subject-predicate construction. So in my view those guys over in Berkeley (and their followers) are on pretty solid ground. (The only way I can see to avoid this conclusion is to argue that predication, or function-application, isn't part of the semantics, but something extra.)

Unfortunately, in virtue of not taking explicit account of the role of constructions in their philosophical semantics, philosophers (and linguistic semanticists) have been led to, in my estimation, very implausible linguistic theses. My personal bugbear is the doctrine of logical forms, construed as a linguistic thesis. I want to be clear here that, although I am not convinced of the need for a level LF in syntax, that notion of logical form is far too weak to do the sort of work required by the sorts of robust semantic analyses posited these days. (Think, for instance, of the neo-Davidsonian analysis of eventive sentences!) In order to accommodate these robust semantic analyses, the underlying logical forms would have to be vastly more complex than can be reasonably motivated on purely syntactic grounds.

So why have so many philosophers been suckered into accepting them? I'm not sure, but I wonder if it doesn't arise in part from an implicit acceptance of the Fregean view of the language-proposition relation. According to Frege (or, at least, Dummett's Frege), our only cognitive access onto propositions is via the linguistic structure of the sentences that express them. If Frege's Thesis is correct, then the need for a robust semantics will require a correspondingly complex underlying linguistic structure.

[It is also worth considering, in this quasi-historical context, whether or not Russell's notion of contextual definition and the associated doctrine of "incomplete symbols" doesn't mark out an inchoate construction-based theory of language.]


Jason Stanley

Your question should be -- why have so many *linguists* been suckered into accepting logical form, with rich covert syntactic structures? Once the point is put in this more adequate manner, it becomes clear you're being more than a little dogmatic.
Those philosophers who do accept rich logical forms do so because, having taken syntax classes for many years, we've been introduced to the notion of a rich logical form with lots of covert structure (is Richard Larson in a philosophy department? Is Chomsky in a philosophy department? Pesetsky?). Robert May's book on logical form in the 1980s had a big impact on syntax and semantics, and many of us who started doing linguistics then were doing GB, and read that book. Minimalist syntax makes different assumptions than GB, and seeks to explain different evidence. But, if anything, it postulates much more covert structure.
In my experience, it's *philosophers* who are reluctant to buy linguistic arguments for covert structures.

Part of the problem has to do with what's meant by "purely syntactic grounds". If what you mean is, on the basis of judgements of grammaticality and ungrammaticality alone, then that is simply an oversimplistic conception of "purely syntactic grounds". For example, we distinguish bound vs. free readings of pronouns not on the grounds of grammaticality, but on the grounds that they give rise to different readings. We appeal to different potential attachment sites of modifiers as arguments for underlying constituent structures. And so on -- so your post assumes some conception of "purely syntactic grounds" that is overly philosophical in nature.

Posted by Tony Marmo at 17:15 BST
Updated: Monday, 9 August 2004 07:38 BST
Wednesday, 28 July 2004




Varieties of Meaning
The 2002 Jean Nicod Lectures
By Ruth Garrett Millikan

Many different things are said to have meaning: people mean to do various things; tools and other artifacts are meant for various things; people mean various things by using words and sentences; natural signs mean things; representations in people's minds also presumably mean things. In Varieties of Meaning, Ruth Garrett Millikan argues that these different kinds of meaning can be understood only in relation to each other.

What does meaning in the sense of purpose (when something is said to be meant for something) have to do with meaning in the sense of representing or signifying? Millikan argues that the explicit human purposes, explicit human intentions, are represented purposes. They do not merely represent purposes; they possess the purposes that they represent. She argues further that things that signify, intentional signs such as sentences, are distinguished from natural signs by having purpose essentially; therefore, unlike natural signs, intentional signs can misrepresent or be false.

Part I discusses "Purposes and Cross-Purposes" -- what purposes are, the purposes of people, of their behaviors, of their body parts, of their artifacts, and of the signs they use. Part II then describes a previously unrecognized kind of natural sign, "locally recurrent" natural signs, and several varieties of intentional signs, and discusses the ways in which representations themselves are represented. Part III offers a novel interpretation of the way language is understood and of the relation between semantics and pragmatics. Part IV discusses perception and thought, exploring stages in the development of inner representations, from the simplest organisms whose behavior is governed by perception-action cycles to the perceptions and intentional attitudes of humans.
Time, Tense, and Reference
Edited by Aleksandar Jokić and Quentin Smith

Among the many branches of philosophy, the philosophy of time and the philosophy of language are more intimately interconnected than most, yet their practitioners have long pursued independent paths. This book helps to bridge the gap between the two groups. As it makes clear, it is increasingly difficult to do philosophy of language without any metaphysical commitments as to the nature of time, and it is equally difficult to resolve the metaphysical question of whether time is tensed or tenseless independently of the philosophy of language. Indeed, one is tempted to see philosophy of language and metaphysics as a continuum with no sharp boundary.

The essays, which were written expressly for this book by leading philosophers of language and philosophers of time, discuss the philosophy of language and its implications for the philosophy of time and vice versa. The intention is not only to further dialogue between philosophers of language and of time but also to present new theories to advance the state of knowledge in the two fields. The essays are organized in two sections -- one on the philosophy of tensed language, the other on the metaphysics of time.




The Syntax of Time
Edited by Jacqueline Guéron and Jacqueline Lecarme

Any analysis of the syntax of time is based on a paradox: it must include a syntax-based theory of both tense construal and event construal. Yet while time is unidimensional, events have a complex spatiotemporal structure that reflects their human participants. How can an event be flattened to fit into the linear time axis?
Chomsky's The Minimalist Program, published in 1995, offers a way to address this problem. The studies collected in The Syntax of Time investigate whether problems concerning the construal of tense and aspect can be reduced to syntactic problems for which the basic mechanism and principles of generative grammar already provide solutions.

These studies, recent work by leading international scholars in the field, offer varied perspectives on the syntax of tense and the temporal construal of events: models of tense interpretation, construal of verbal forms, temporal aspect versus lexical aspect, the relation between the event and its argument structure, and the interaction of case with aktionsart or tense construal. Advances in the theory of temporal interpretation in the sentence are also applied to the temporal interpretation of nominals.


Posted by Tony Marmo at 13:01 BST
Updated: Monday, 9 August 2004 07:44 BST

Topic: Cognition & Epistemology

Contrastivism and Hawthorne's principle of practical reasoning

by Jon Kvanvig
Source: Certain Doubts 7/26/2004

Contrastivism holds that the truth makers for knowledge attributions always involve a contrast, and Hawthorne thinks that if you know something, you are entitled to use it in practical reasoning. So one way to test what is known is to see what kinds of practical reasoning we'll allow as acceptable.

Depending on what the contrast is, contrastive knowledge may be easy or hard to have. So, it is easier to know 'the train will be on time rather than a day late' than it is to know 'the train will be on time rather than 2 minutes late.' One way to put the difference is that one is presupposing more in knowing the first claim than one is in knowing the second.

Consider then a piece of practical reasoning using the following conditional: if you are pointing a gun at me, and if your gun is loaded and if you intend to shoot me, I should shoot you first. Suppose I know that you are pointing a gun at me rather than a twig, and that I know that your gun is loaded rather than having just been disassembled for cleaning, and suppose I know that you intend to shoot me rather than give me a million bucks. Should I shoot you? Maybe this is an anti-gun sentiment coming out, but I think it is far from obvious that I should.

Compare this case to another. In this case, I know that you are pointing a gun at me rather than any non-lethal item, and I know that your gun is loaded rather than merely having the appearance of being loaded from where I stand, and I know that you intend to shoot me rather than anyone else in the universe. Now I think I should shoot you first.

Why the difference? In contrastivist language, I'm presupposing too much in the first case, I think. The knowledge I have is easy knowledge because it presupposes so much. I'm presupposing that the thing in your hand is either a gun or a twig, that it's either loaded or disassembled, that you intend to shoot me or make me rich. In the context of these assumptions, it's too easy to come to the conclusion that I should shoot first and ask questions later. In the second case, however, my presuppositions are much broader, broad enough that my knowledge is no longer easy. And since my knowledge is not easy, I doubt I could be faulted on grounds of rationality for shooting first.

It appears, then, that contrastivists will have to deny Hawthorne's principle. Moreover, I don't see any obvious way to qualify the principle, for the following reason. If the action is relatively inconsequential, then easy knowledge may be enough to warrant performing the action. But if the action is immensely significant, as it is in the case of taking a life, then easy knowledge doesn't seem to be enough.

One way to think about such cases is that they may provide a reason for including pragmatic issues in one's account, either indirectly as contextualists typically do or directly as we find in the invariantist camp. Or maybe a reason for rejecting Hawthorne's principle?

Posted by Tony Marmo at 12:28 BST
Updated: Monday, 9 August 2004 07:45 BST


John Passmore

Source: Philosophy Program of the
Research School of Social Sciences
of the Australian National University

John Passmore was Reader in Philosophy at the Research School of Social Sciences, ANU, from 1955 through until 1957, and Professor of Philosophy from 1958 until his retirement in 1979. He was Head of the Philosophy Program from 1962 until 1976. Passmore's book A Hundred Years of Philosophy was recognised as a major feat of philosophical scholarship throughout the international philosophical community.

It was followed by influential books on a whole range of issues, including Man's Responsibility for Nature, one of the first books on the philosophical issues raised by the environmental movement. Passmore was one of the very first to give shape to what is now, under his influence, called 'applied philosophy.' His many books have been translated into a wide variety of languages. He remains a major figure in the history of ideas. In recognition of his service to education, Passmore was made a Companion in the General Division of the Order of Australia in 1992. The first volume of his autobiography, Memoirs of a Semi-detached Australian, was published by Melbourne University Press in 1997.

He was Emeritus Professor of Philosophy and Visiting Fellow in the History Program at RSSS, and died in Canberra on Sunday.


Posted by Tony Marmo at 05:48 BST
Updated: Monday, 9 August 2004 07:46 BST
Tuesday, 27 July 2004

Be warned, this could be the matrix

Source: The Sydney Morning Herald, July 22, 2004

The multiverse theory has spawned another - that our universe is a simulation, writes Paul Davies.

If you've ever thought life was actually a dream, take comfort. Some pretty distinguished scientists may agree with you. Philosophers have long questioned whether there is in fact a real world out there, or whether "reality" is just a figment of our imagination.

Then along came the quantum physicists, who unveiled an Alice-in-Wonderland realm of atomic uncertainty, where particles can be waves and solid objects dissolve away into ghostly patterns of quantum energy.

Now cosmologists have got in on the act, suggesting that what we perceive as the universe might in fact be nothing more than a gigantic simulation.

The story behind this bizarre suggestion began with a vexatious question: why is the universe so bio-friendly? Cosmologists have long been perplexed by the fact that the laws of nature seem to be cunningly concocted to enable life to emerge. Take the element carbon, the vital stuff that is the basis of all life. It wasn't made in the big bang that gave birth to the universe. Instead, carbon has been cooked in the innards of giant stars, which then exploded and spewed soot around the universe.

The process that generates carbon is a delicate nuclear reaction. It turns out that the whole chain of events is a damned close run thing, to paraphrase Lord Wellington. If the force that holds atomic nuclei together were just a tiny bit stronger or a tiny bit weaker, the reaction wouldn't work properly and life may never have happened.

The late British astronomer Fred Hoyle was so struck by the coincidence that the nuclear force possessed just the right strength to make beings like Fred Hoyle that he proclaimed the universe to be "a put-up job". Since this sounds a bit too much like divine providence, cosmologists have been scrambling to find a scientific answer to the conundrum of cosmic bio-friendliness.

The one they have come up with is multiple universes, or "the multiverse". This theory says that what we have been calling "the universe" is nothing of the sort. Rather, it is an infinitesimal fragment of a much grander and more elaborate system in which our cosmic region, vast though it is, represents but a single bubble of space amid a countless number of other bubbles, or pocket universes.

Things get interesting when the multiverse theory is combined with ideas from sub-atomic particle physics. Evidence is mounting that what physicists took to be God-given unshakeable laws may be more like local by-laws, valid in our particular cosmic patch, but different in other pocket universes. Travel a trillion light years beyond the Andromeda galaxy, and you might find yourself in a universe where gravity is a bit stronger or electrons a bit heavier.

The vast majority of these other universes will not have the necessary fine-tuned coincidences needed for life to emerge; they are sterile and so go unseen. Only in Goldilocks universes like ours where things have fallen out just right, purely by accident, will sentient beings arise to be amazed at how ingeniously bio-friendly their universe is.

It's a pretty neat idea, and very popular with scientists. But it carries a bizarre implication. Because the total number of pocket universes is unlimited, there are bound to be at least some that are not only inhabited, but populated by advanced civilisations - technological communities with enough computer power to create artificial consciousness. Indeed, some computer scientists think our technology may be on the verge of achieving thinking machines.

It is but a small step from creating artificial minds in a machine, to simulating entire virtual worlds for the simulated beings to inhabit. This scenario has become familiar since it was popularised in The Matrix movies.

Now some scientists are suggesting it should be taken seriously. "We may be a simulation ... creations of some supreme, or super-being," muses Britain's astronomer royal, Sir Martin Rees, a staunch advocate of the multiverse theory. He wonders whether the entire physical universe might be an exercise in virtual reality, so that "we're in the matrix rather than the physics itself".

Is there any justification for believing this wacky idea? You bet, says Nick Bostrom, a philosopher at Oxford University, who even has a website devoted to the topic. "Because their computers are so powerful, they could run a great many simulations," he writes in The Philosophical Quarterly.

So if there exist civilisations with cosmic simulating ability, then the fake universes they create would rapidly proliferate to outnumber the real ones. After all, virtual reality is a lot cheaper than the real thing. So by simple statistics, a random observer like you or me is most probably a simulated being in a fake world. And viewed from inside the matrix, we could never tell the difference.
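The "simple statistics" step can be made concrete with a toy calculation. This is only a sketch: the function name and all the numbers below are invented for illustration and are not drawn from Bostrom's paper.

```python
# Toy version of the statistical step of the simulation argument:
# if real civilisations run many simulations, most observers are simulated.

def fraction_simulated(real_civs: int, sims_per_civ: int,
                       observers_per_world: int = 1) -> float:
    """Fraction of all observers who inhabit simulated worlds, assuming
    each real civilisation runs `sims_per_civ` simulations and every
    world (real or fake) hosts the same number of observers."""
    real = real_civs * observers_per_world
    simulated = real_civs * sims_per_civ * observers_per_world
    return simulated / (real + simulated)

# With 1000 simulations per real civilisation, a randomly chosen
# observer is overwhelmingly likely to be simulated.
print(fraction_simulated(real_civs=10, sims_per_civ=1000))  # ~0.999
```

Even modest assumptions make the ratio lopsided, which is all the argument's statistical step requires.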

Or could we? John Barrow, a colleague of Martin Rees at Cambridge University, wonders whether the simulators would go to the trouble and expense of making the virtual reality foolproof. Perhaps if we look closely enough we might catch the scenery wobbling.

He even suggests that a glitch in our simulated cosmic history may have already been discovered, by John Webb at the University of NSW. Webb has analysed the light from distant quasars, and found that something funny happened about 6 billion years ago - a minute shift in the speed of light. Could this be the simulators taking their eye off the ball?

I have to confess to being partly responsible for this mischief. Last year I wrote an item for The New York Times, saying that once the multiverse genie was let out of the bottle, Matrix-like scenarios inexorably follow. My conclusion was that perhaps we should retain a healthy scepticism for the multiverse concept until this was sorted out. But far from being a dampener on the theory, it only served to boost enthusiasm for it.

Where will it all end? Badly, perhaps. Now the simulators know we are on to them, and the game is up, they may lose interest and decide to hit the delete button. For your own sake, don't believe a word that I have written.

Paul Davies is professor of natural philosophy at Macquarie University's Australian Centre for Astrobiology. His latest book is How to Build a Time Machine.

Posted by Tony Marmo at 17:25 BST
Updated: Monday, 9 August 2004 07:55 BST
Monday, 26 July 2004
I have a question for all native English speakers around the world. Given the pair of sentences below:

(1) The postman is evil.
(2) The postman is like evil.

What are the meaning differences you grasp when you read these two sentences?

I thank you for your replies.

Posted by Tony Marmo at 17:40 BST
Updated: Monday, 26 July 2004 17:46 BST

Predictive Prophecy and Counterfactuals

by Jeremy Pierce

Source: Orange Philosophy June 25, 2004

In line with the discussions of time and time travel at my blog, and to some degree here as well, our own Gnu has a related puzzle that uses a fun fantasy role-playing example to pose a philosophical question about conditional predictive prophecy (i.e. predicting what someone will do and then telling him that A will have already happened if he ends up doing P, but B will have already happened if he turns out to do Q). I think this case is interesting in terms of its view of time and of the relation of guaranteed prediction to time, but it also has some relevance to how to evaluate counterfactual statements. Read the case first at Gnu's blog, then read on here for my analysis.

The Liche Lord has predicted what Thurvan would do. That means he knew that Thurvan would go to all the rooms. Therefore, assuming he isn't lying, he hasn't placed the sword in the room he said it would be in if Thurvan had chosen not to go to the other rooms. Thurvan was correct to say that the sword is either there or not, but he was wrong to think that it was there independent of his decision. It was there because of what he would do. If Thurvan had chosen otherwise, and the Liche Lord had still set up the same deal, the sword would have been there. But unless he's lying, the sword can't be there as things stand. Given that the Liche Lord can take the shape of any object and enjoys taking people to be his undead slaves, you might expect that what the dwarf sees as the sword is probably the Liche Lord himself waiting to trap him.

Of course, the Liche Lord can see the future, so this is probably only the case if the Liche Lord has predicted that the dwarf will take the sword. He may well have predicted that the dwarf would reason through all this and leave without going for the sword, in which case he may have lied and put the sword there anyway. What's great about this is that the sword might really be there but only if he doesn't try to get it, and it's not there if he does. So he can't get it one way or the other. The only way to get the sword would have been to do what the Liche Lord knew he wouldn't do, and that would have been to avoid the other rooms.

In working through this, I had a hard time thinking about what the Liche Lord would have done if Thurvan had chosen otherwise, because it may well be that the Liche Lord would not have chosen to set this scenario up at all without the knowledge of Thurvan choosing the way he did. It's hard to think about counterfactual possibilities where the thing that would have been different depends on knowledge of the future in the counterfactual world.

On David Lewis' semantics for counterfactual statements, the status of 'if Thurvan had chosen to go straight to the sword room, the sword would have been there' is unclear to me. Lewis says to go to the nearest possible world where Thurvan goes straight to the sword room, meaning that you should find the world most like the actual world except for that detail, and then see what's true there. So if we change nothing in the world except that, and whatever changing it requires, what happens? I can think of three kinds of candidate for the closest world:

1. My first thought would be to say that if Thurvan had chosen differently, and if you kept as much intact as possible, then the Liche Lord would have predicted differently and as a result put the sword in the chamber to honor his deal. This world holds the Liche Lord's honesty and abilities constant and changes the state of the world for the entire time between the writing of the letter and the present so that the sword has been there all along.

2. Lewis prefers to find a world intrinsically as much like the actual world as possible. That would require keeping the tomb just as things are in the actual world. But then the Liche Lord would have to have told something false to Thurvan. Either he was lying (2a), or his predictive abilities failed in this one case (2b). I think Lewis has to favor 2b, because even 2a involves intrinsic changes to the Liche Lord's beliefs and intents, whereas 2b could be just a surprising failure of his abilities, something like the miracle worlds Lewis discusses in his paper on whether free will requires breaking the laws of nature.

3. Lewis wouldn't like this at all, because it requires even more of a change in the intrinsic state of the world so far than 1 does, but some might argue that if Thurvan had chosen to take the sword and not go to the other rooms, the Liche Lord would not have set up the case this way at all and wouldn't have offered a deal that would mean he'd end up losing. I'm bringing this up only to argue against it as a legitimate near possibility. Seeing this as a near possibility of what would happen, given Thurvan's choice to go only for the sword, assumes something false. It assumes the Liche Lord is predicting what Thurvan would do given that the Liche Lord sets things up a certain way. According to Gnu's setup, the Liche Lord predicts what Thurvan will do, period. He doesn't consider all the possibilities and make things go his way. His ability only tells him what will happen. So this option requires a difference both in the intrinsic state of the world and in the abilities of the Liche Lord. Option 1 involves a difference only in the state of the world (and not even as much of a difference), and option 2 involves a difference in the abilities or intent of the Liche Lord (and not as much of a difference -- either a one-time failure of the same ability rather than a completely different ability, or a different motivation rather than a whole change in the nature of his abilities).

So I think 1 and 2 are the real options for which world is closest to the actual one Gnu has constructed. This is a particularly vivid example of the divide between those who, with Lewis, judge nearness of worlds by intrinsic likeness, and what I think is the more commonsense view, which judges nearness by preserving the abilities of the Liche Lord that relate causally to the future in certain guaranteed ways. Lewis' view is required for those who reduce causality to relations between intrinsic properties of things across time, and my intuitions against his view on this case are therefore intuitions against his reduction of causality to such things. The causal relations between the Liche Lord and the future that he sees are an important part of the structure of the world, and a world seems to me to be much further from the actual one (of the case) if the Liche Lord has to have different abilities, or a failure of his abilities, in order to keep the world intrinsically as close as possible. Simply changing some more intrinsic facts seems to me to be less of a change.
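The Lewis-style evaluation discussed above can be sketched as a toy model: "if A had been, C would have been" is true when C holds at every A-world closest to actuality. Everything below is invented for illustration (the three-fact worlds, the crude fact-counting distance); it is not anyone's official semantics, but it does reproduce the tie between options 1 and 2b.

```python
# Toy Lewis semantics: a counterfactual A > C is true iff C holds at
# all antecedent-worlds that differ from actuality in the fewest facts.
from typing import Callable, Dict, List

World = Dict[str, bool]

def closest_worlds(actual: World, worlds: List[World],
                   antecedent: Callable[[World], bool]) -> List[World]:
    """Return the antecedent-worlds changing the fewest facts."""
    a_worlds = [w for w in worlds if antecedent(w)]
    def distance(w: World) -> int:
        return sum(w[k] != actual[k] for k in actual)
    d_min = min(distance(w) for w in a_worlds)
    return [w for w in a_worlds if distance(w) == d_min]

def counterfactual(actual: World, worlds: List[World],
                   antecedent: Callable[[World], bool],
                   consequent: Callable[[World], bool]) -> bool:
    """C must hold at every closest antecedent-world."""
    return all(consequent(w) for w in closest_worlds(actual, worlds, antecedent))

# The tomb case, crudely: the actual world plus the candidate closest
# worlds corresponding to options 1 and 2b in the text.
actual = {"goes_straight": False, "sword_there": False, "prediction_ok": True}
worlds = [
    {"goes_straight": True, "sword_there": True,  "prediction_ok": True},   # option 1
    {"goes_straight": True, "sword_there": False, "prediction_ok": False},  # option 2b
]
print(counterfactual(actual, worlds,
                     lambda w: w["goes_straight"],
                     lambda w: w["sword_there"]))  # False
```

Under a bare intrinsic difference count the two candidate worlds tie, so the counterfactual comes out false; weighting preservation of the Liche Lord's predictive ability more heavily would break the tie in favour of option 1 and make it true.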

Posted by Tony Marmo at 15:17 BST
Updated: Monday, 9 August 2004 07:59 BST
Sunday, 25 July 2004


Seminar on Plurality

By MarkSteen
Source: Orange Philosophy, July 24, 2004

Tom McKay approved of my idea of posting his announcement about his seminar on plural quantification, along with related topics (such as non-distributive predication). Tom has a new book on this subject which you can check out by clicking on the departmental webpage on the links list, then clicking on faculty, then McKay [sorry, for some reason my linking feature isn't working now].

I think some local-ish non-Syracusan (e.g., Cornell, Rochester) folk might be interested in attending. Here's the announcement [note that Tom will not have computer access until the end of the month and so you should wait a bit to email him or post questions here for him until August]:

Seminar, Fall 2004, on "Plurality"

There are lots of topics, and I want students' own interests to determine some of what we do.

My fundamental project (in a book I have just finished) has been to explore the issue of expanding first-order predicate logic to allow non-distributive predication. A predicate F is distributive iff whenever some things are F, each of them is F. Consider:

(1) They are students. They are young.
(2) They are classmates. They are surrounding the building.
The predications in (1) are distributive, but the predications in (2) are non-distributive. Non-distributive plural predication is irreducibly plural. In ordinary first-order logic, only distributive predicates are employed.
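The distributivity test can be stated computationally. In this sketch (the function name, individuals and extensions are all invented for illustration), a predicate's extension is a set of pluralities, and the predicate is distributive iff every individual drawn from an F-plurality forms an F-singleton on its own:

```python
# A plurality is a frozenset of individuals; a predicate's extension is
# the set of pluralities it holds of. F is distributive iff whenever
# some things are F, each of them is F.

def is_distributive(extension):
    """True iff every member of every plurality in the extension is
    itself (as a singleton plurality) in the extension."""
    return all(frozenset({x}) in extension
               for plurality in extension
               for x in plurality)

# "are students" holds of each individual and of the pair: distributive.
students = {frozenset({"al"}), frozenset({"bo"}), frozenset({"al", "bo"})}
# "are surrounding the building" holds only of the pair together:
# irreducibly plural, hence non-distributive.
surround = {frozenset({"al", "bo"})}

print(is_distributive(students))  # True
print(is_distributive(surround))  # False
```

The `surround` extension is exactly the case ordinary first-order logic cannot express with a one-place predicate of individuals, which is what motivates the irreducibly plural predication of the seminar.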

The incorporation of irreducibly plural predicates is related to a wide range of issues in metaphysics, philosophy of language, foundations of mathematics, logic, and natural language semantics. Some of the issues that we might consider:

What is the nature of plurality? How should we think of the relations among sets, mereological sums, pluralities and individuals? What (if anything) are these different ontological kinds, and how are they related? Can one thing be many? (Is one deck identical to the 52 cards? Or is this not an identity relation?)

Singular and plural predication; singular and plural quantification; singular and plural reference. How do those fit together? When we consider the full range of determiners in English and try to incorporate quantifiers to represent that, there are many interesting semantic issues to resolve.

How does the semantics of plurality relate to the semantics of mass terms?

In the foundations of mathematics, how far can plurals take us without set theory? What is the relationship of second-order logic to plurals and to the foundations of mathematics?

What is the nature of ontological commitment? What does semantics commit the semanticist to? What does it say speakers are committed to? (For example, if I say that the analysis of adverbs requires an event semantics, does that mean that an ordinary user of adverbs is committed to the existence of events? This kind of issue becomes interesting when we look at the semantics of plurals.)

Can we talk about everything without paradox? Are plurals a special resource to enable us to do so?

A large number of issues about the relationship of semantics and pragmatics come together when we consider definite descriptions. Usually discussions focus on singular definite descriptions, but we can see what difference (if any) it makes when we insist that the account be general enough for plural and mass definite descriptions. This then also relates to the consideration of pronominal cross-reference and demonstrative reference.

Some have argued that an event semantics is important for getting plurals right. It will be interesting to look at event semantics and how that relates to plurals.

I will meet with each enrolled student early on in the semester to identify some areas of interest and get started on developing the student's presentation and paper on a topic of the student's choice.

If people are interested in looking into this before the semester begins, my book is available on the department's website:
(Click on my name in the list of faculty.) Also, Oystein Linnebo has posted a draft of his forthcoming Stanford Encyclopedia article, and it is a good introduction: Scroll down to "Plural Quantification."

We will not presume any greater familiarity with logic than you would acquire by being alive and awake through most of PHI 651.

Please get in touch with me if you have questions.

Posted by Tony Marmo at 23:11 BST
Updated: Monday, 9 August 2004 08:02 BST

Knowing That and Knowing How

By David Bzdak
(Source Orange Philosophy 07/17/2004)

I've recently begun delving into the literature on the difference between knowing that and knowing how (re-delving, actually, but that's neither here nor there). I've been quite surprised to find that almost all of it jumps off from Ryle's discussion of the topic in The Concept of Mind, which was published in 1949.

I see hardly any mention of this topic other than in response to Ryle, and not much on the topic pre-Ryle. This strikes me as odd for such an important epistemological distinction (I realize that the distinction was recognized, pre-Ryle -- I'm wondering if it was philosophically analyzed). Am I missing a mountain of books/journals/articles out there (perhaps in the non-analytic tradition)? Or did Ryle really essentially begin this discussion?


I've wondered about that myself. I wish I could offer some help. The only thing that comes to mind is Cook Wilson, but I don't think he published much.

Posted by: chuck at July 17, 2004 09:36 AM

Here are some notes from a fellow in Edinburgh, with a nice little bibliography:

Also, I saw:
Snowdon, Paul. "Knowing How and Knowing That: A Distinction Reconsidered" Proceedings of the Aristotelian Society 104:1 Sep 2003.
Posted by: chuck at July 18, 2004 09:02 AM

Arguably one of the great discussions in ancient Chinese thought was precisely about knowing how and knowing that, with the Daoists perhaps holding that knowing that is useless without knowing how. If you want to read more, I would suggest Chad Hansen's insightful book, A Daoist Theory of Chinese Thought.

Cheers David
Posted by: David Hunter at July 20, 2004 07:55 PM

While it doesn't fit exactly into Ryle's distinction, one could also bring up Heidegger and his distinction between the present-at-hand and the ready-to-hand. The present-at-hand was more a matter of propositional knowledge, and he inverted the usual relationship, saying that the ready-to-hand, or utility, was more fundamental. One could, I suppose, move to the Ryle distinction with Heidegger in his middle phase arguing that knowledge-how is more fundamental than knowledge-that. I think one would have to be careful though.
Posted by: clarkgoble at July 20, 2004 08:36 PM

Gosh, that was truly terrible spelling. The Daoist story which is supposed to illustrate this distinction is from (I think) Zhuangzi, and is the story of Wheelwright Slab. Basically, a Duke is wandering around studying one of the books of the Sages when a lowly wheelwright laughs at him. Being a Duke, he says: tell me what's so funny, and you had better have a good reason for laughing, or off with your head. The wheelwright says: pardon me, Duke, but I don't understand why you are wasting your time with the leavings of the Sages. "What!" goes the Duke. The wheelwright says: well, I am growing old, and I am trying to teach my son how to bend wood to make wheels. I have taught him everything I know about what to do when making a wheel, and still he cannot do it. When I die, all I will leave behind is him, and as a wheelwright he will be no good. What I can't teach him is the skill I have. It is the same with reading the Sages: what made them Sages cannot be found in their leavings.

There is a fair bit throughout Zhuangzi (also known as Chuang-Tzu) to do with this distinction, usually making the point that skill is more important than knowledge.
Posted by: David Hunter at July 20, 2004 08:45 PM

Posted by Tony Marmo at 14:36 BST
Updated: Monday, 9 August 2004 08:04 BST

Knowing How v. Knowing That: Some Heterodox Idea-Sketches

by Uriah Kriegel

(Source: Desert Landscapes 7/19/2004)

Since Ryle, orthodoxy has had it that knowledge-how is categorically different from knowledge-that. The latter is a form of propositional representation of the way things are, whereas the former is just a capacity. The occurrence of the word "knowledge" in both expressions should not mislead us into thinking that they have something significant in common. There is no way to reduce Agent's knowledge *how* to ride a bike to some knowledge *that* certain things have to be done.

I have a somewhat different view. On my view, knowledge-how consists in *non-conceptual conative representations*. (...)

By "conative representations" I mean representations with a telic, or world-to-mind, direction of fit (wishing that p, wanting that p, hoping that p, intending that p, etc). These are to be distinguished from cognitive representations, which have a thetic, or mind-to-world, direction of fit (believing that p, expecting that p, suspecting that p, etc.).

The above parenthesized examples of conative representations all have propositional, and therefore (presumably) conceptual, content. But just as there are non-conceptual thetic representations, so we should expect there to be non-conceptual telic representations.


If my suggestion is on the right track, then although we cannot say that knowledge-how reduces to knowledge-that, since knowledge-that is propositional (because of the `that'-clause), we *can* say that it reduces to some sort of representational knowledge.


Hi Uriah,

In what sense are your non-conceptual conative representations actually representational? One specific worry: what makes it the case that every step in the causal chain leading from my desire to ride the bike to my actual riding of the bike doesn't count as a non-conceptual conative representation of the next step along the chain?


Comment by Brad Weslake -- 7/20/2004 @ 12:54 am

Ok, I'm only half getting this. Suppose I know how to pedal, and the way to pedal is to use M16 or M17. Intuitively, it's both possible and quite likely that I don't know that the way to pedal is to use M16 or M17 - I don't even have concepts for M16 or M17. But if I know how to pedal, I *must* have a concept for pedaling.

You say, knowledge-how consists in *non-conceptual conative representations*. But there must be at least a little more, right? Because I have to at least have the concept of the thing I know how to do. And it seems that it's optional for me to have the concepts of M16 and M17 - if I'm a physiologist with a detailed understanding of the mechanics of pedaling, I still (can) know how to pedal.

I'm not quite sure I'm interpreting you right, so I'm going to stop here. Does this sound correct?

Comment by Jonathan Ichikawa -- 7/20/2004 @ 6:41 am

Woo-hoo, my favorite kind of response: everybody's right (except me?).

Christian, I think your formulation of the conclusion in terms of ability rather than knowledge is better than my original formulation. Here's why I was thinking of knowledge nonetheless. The common account of knowledge - as a belief that is true, justified, and Gettier-proof - seems to be tailored to knowledge-that. But once we agree that there is something fundamental in common to knowledge-that and knowledge-how, then we need some wider understanding of knowledge simpliciter. My view of knowledge-how isolates only one fundamental commonality with knowledge-that, namely, that the respective states are representational. However, with this may come other commonalities. Representations have conditions of satisfaction (truth conditions in the case of knowledge-that and, on my view, fulfilment conditions in the case of knowledge-how), and they are answerable to certain standards of representation formation (compliance with which gives "justification," "warrant," or something in the vicinity). So maybe a case could be made for talk of knowledge rather than mere ability. But talk of ability is certainly more cautious.

Brad: I think you're also right that talk of conative states being representational is problematic. Conative states are certainly intentional; they are about something. But do they represent something? Only in a somewhat technical sense. In the regular sense of the word, as I hear it, to represent something is to represent it to be the case, or just to be. In that sense, conative intentional states are not normally representational, because they don't represent the way the world is, but rather the way one would want the world to be. In using the term "representation" I had in mind the more technical sense used in discussions of the representational theory of mind, etc. It is common in these discussions to take desires and other conative states to be representational, in the minimal sense that they have conditions of satisfaction - indeed, in the minimal sense that they are intentional or have aboutness.

Comment by Uriah Kriegel -- 7/20/2004 @ 9:21 am

Uriah Kriegel,

Even with representation limited to the standard philosophical usage, I am worried about how much representation there is in your idea of non-conceptual conative representations. If I intend to ride a bike but fall off, it is clear that my intention has failed to be satisfied; moreover, this normative component of intention seems integral to its being an intention in the first place. Similarly, if I see in my schooner (as we say here in Sydney) a particular hue of yellow beer, it seems I could come to believe that this seeming was mistaken, and that the beer is actually some other hue (even if I couldn't articulate the difference beyond saying "it seems different"). What is the analogue for your non-conceptual conative representations? My question about the causal chain was designed to get at this: some criterion is needed for sorting out causal links that count as representational from those that do not; it seems that this criterion needs to be normative; and it doesn't seem that the processes that underlie know-how are normative in the right way.


Comment by Brad Weslake -- 7/21/2004 @ 10:53 pm

Right, I forgot to address Brad's problem of fixing the content of those "non-conceptual conative representations." The problem described by Brad is parallel, however, to the *horizontal problem of disjunction* for naturalist theories of cognitive representations. There are two disjunction problems that arise for cognitive representations. The first is: what makes it the case that my cat-thought is a representation of a cat and not of a cat-or-small-dog-on-a-moonless-night? Call this the vertical problem of disjunction. The second is: what makes it the case that my cat-thought is a representation of a cat and not of a cat-or-big-bang? Call this the horizontal problem of disjunction. Similarly, we may ask what makes it the case that a desire to bike is a (conative) representation of biking and not of biking-or-getting-to-the-office. I don't have a good answer to this problem, or any answer really, but I'm comforted by the partners in innocence I have in naturalist theories of cognitive representations. In particular, Fodor, Dretske, and Gillet had some interesting things to say about this problem at the end of the 80s.

Comment by Uriah Kriegel -- 7/22/2004 @ 3:15 pm


Seems I am just more sceptical than you of causal theories of content in general (I don't think any of the replies to these sorts of problems ended up succeeding). To expose my own biases, I think the prospects of an explanation of know-that in terms of know-how (via functionalist or pragmatist approaches to content) are far better than those going in the other direction...

Comment by Brad Weslake -- 7/22/2004 @ 9:32 pm

Posted by Tony Marmo at 13:58 BST
Updated: Monday, 9 August 2004 08:05 BST
Wednesday, 21 July 2004

On Opacity

Continuing from...

B. Language and Interpretative Techniques

Here I assume that truth-conditions alone do not automatically trigger any process of verifying sentences or of reviewing beliefs or other propositional attitudes. Rather, it is the users of natural languages that play a proactive role in the comprehension and evaluation of sentences. The proactive role of speakers is evident from the fact that the normal usage of such languages does not support the famous deflationist claim that adding the predicate is true to a sentence φ adds no content to it.
The fact that a string like it is true that contributes to the meaning of a natural language sentence is made evident in cases where the same sentence is evaluated differently by different language users. In other words, a sentence may be deemed true by the person who says it and false by those who hear it, regardless of the conditions obtaining in a certain world or situation. This means that the claim that a sentence like (2a) means (2b) can only be made from the perspective of the person who utters it, and then only if he is sincere:
(2) a. Tom Jobim composed a new song.
b. It is true that Tom Jobim composed a new song.

It cannot be made from the perspective of the hearer, who may doubt (2a). And, if the utterer is a conscious liar, it cannot be made from his perspective either. Notice that this discrepancy of opinions may occur independently of whether Tom Jobim has or has not composed a new song in a certain world or situation. In part this observation shows the reactive role played by hearers when they accept or doubt a sentence. But the possibility that even the utterer of a sentence may deem it false evidences the proactive role he plays. And if one takes into account that interlocutors in a conversation also have their own intentions and communicate them, then their evaluation of any sentence may also reflect a proactive role, more than a reactive one.
Thus, a sentence φ is not inherently construed as true (Tφ), false (Fφ), undecidable (Uφ), possible (◊φ) or necessary (□φ). Those are judgements made by the language users when they handle sentences.
But the language users' proactive role is not limited to their capacity to ascribe truth-values to sentences. It is often the case that language users are able to construe gibberish utterances in a manner that makes sense, without incurring paradoxes or an explosive inconsistency à la Pseudo-Scotus (99). And they usually do so, unless they choose not to.
Let us illustrate this idea with examples such as the sentences in (3). While their equivalents in any artificial logical language are absurdities or paradoxes that require sophisticated philosophical engineering to solve, users of natural languages somehow manage to extract non-paradoxical, coherent meanings from them:
(3) a. This sentence means the converse of whatever it means.
b. There's no such thing as legacies. At least, there is a legacy, but I'll never see it.
c. The ambitious are more likely to succeed with success, which is the opposite of failure.

There is an evident self-referential paradox in (3a), a clear contradiction in (3b) and a redundant or circular thought in (3c). And they can be interpreted in this manner, if the language users who read them proactively choose to see the possible paradoxes, contradictions and redundancies. But, for reasons that I shall go into further on, that is not what they frequently do in the everyday common usage of a natural language. Accordingly, (3a) is usually interpreted either as referring to another sentence or as a potential metaphor for something, while (3b) may be taken as a review of statements and (3c) as an attempt to stress something. This evidences that hearers/readers are able to recover the intended messages behind clumsily constructed sentences. Methinks that, in the exercise of this capacity, language users proactively employ certain techniques. But these techniques are not just any techniques: they are not merely ad hoc inventions, nor are they haphazardly chosen.

C. Paraconsistency

Schöter (1994), among others, claims that the semantics of natural languages exhibits four basic properties that are already acknowledged and have been investigated in non-classical logic theories: paraconsistency (76); defeasibility, which contrasts with monotonicity as defined in (98); partiality, which contrasts with totality as defined in (102); and relevance (101). Though paraconsistent approaches in Logic have been developed since the seminal works of Jaśkowski (1948) and da Costa (1963, 1997), and though they have important consequences for Linguistics, similar approaches in natural language semantics are only beginning.
As Priest (2002) explains, most of Paraconsistent Logic consists of proposing and applying strategies for containing inconsistency (cf. 93). There are several possible techniques to avoid or contain explosion within a logic system or within a semantics for artificial or natural languages, such as propositional filtration, non-adjunction, a non-truth-functional approach to negation, de Morgan algebras, etc. But those are all techniques invented by logicians for artificial languages. Should we accept the idea that in construing sentences the users of natural languages also use techniques to control explosion, then such techniques must be available as inherent interpretative devices of the human linguistic systems. In other words, as logicians have their paraconsistent techniques for artificial languages, so the users of natural languages have techniques of their own, which are made possible by the fundamental properties of such languages.
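To make one such logician's technique concrete: Priest's three-valued Logic of Paradox (LP) is a standard way of containing explosion, allowing a sentence to be both true and false without everything following. The sketch below is a textbook illustration of that device only; it is not the contextualisation technique this text itself proposes, and the encoding (values 0, 0.5, 1, with designated values >= 0.5) is one common convention.

```python
# Priest's LP: 1 = true only, 0.5 = both true and false, 0 = false only.
# An argument is valid iff no valuation makes the premise designated
# (>= 0.5) while leaving the conclusion undesignated.
from itertools import product

def neg(a):
    return 1 - a

def conj(a, b):
    return min(a, b)

def entails(premise, conclusion, atoms=("p", "q")):
    """premise/conclusion map a valuation dict to a truth value."""
    for vals in product((0, 0.5, 1), repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        if premise(v) >= 0.5 and conclusion(v) < 0.5:
            return False  # counterexample valuation found
    return True

# Classically, p & ~p entails anything (explosion). In LP it does not:
# at p = 0.5, q = 0 the contradiction is designated but q is not.
contradiction = lambda v: conj(v["p"], neg(v["p"]))
print(entails(contradiction, lambda v: v["q"]))  # False
```

Restricting the valuations to the classical pair (0, 1) would make the same check return True for every conclusion, which is exactly the explosion such techniques are designed to block.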
Accordingly, the capacity of humans to ascribe values to sentences independently of the actual conditions obtaining in a certain world or situation, which underlies the phenomenon called opacity in human languages, has to do with at least one of the techniques humans naturally possess: the contextualisation of sentences. I shall explore and unfold this matter in what follows.


Posted by Tony Marmo at 14:16 BST
Updated: Wednesday, 21 July 2004 14:20 BST

Newer | Latest | Older