Category : Tutorials + Patches
Archive  : AIMSG.ZIP
Filename : AI3
Output of file : AI3 contained in archive : AIMSG.ZIP
Article 1347 (17 more) in comp.ai:
From: [email protected]
Subject: lexical items
Message-ID: <[email protected]>
Date: 5 Apr 89 20:40:20 GMT
Sender: [email protected]
Organization: UC, Santa Barbara. Physics Computer Services
Lines: 12
Frans van Otten, Algemene Hogeschool, Amsterdam, in a paragraph
dealing with the ongoing Searle topic, uses the following phrases:
sic! >> It passes the TT. This means: a human being can't tell the
sick!>> difference between a Chinaman [SIC!] and The Room. The behaviour
His use of English suggests to me that he knows it well enough to be
consciously or subconsciously aware that "Chinaman" is a pejorative,
a term invoking contempt. Racism has no place in scholarship, buster!
Holland is a small country which once had a big empire (Indonesia etc.)
where some of the worst colonial abuses were committed!
End of article 1347 (of 1364)--what next? [npq]
Article 1348 (16 more) in comp.ai:
From: [email protected] (Greg Lee)
Subject: Re: Where might CR understanding come from (if it exists)
Message-ID: <[email protected]>
Date: 6 Apr 89 02:31:57 GMT
References: <[email protected]>
Organization: University of Hawaii
Lines: 31
From article <[email protected]>, by [email protected] (Gilbert Cockton):
" In article <[email protected]> [email protected] (Greg Lee) writes:
" >From article <[email protected]>, by [email protected] (Gilbert Cockton):
" >" ... As 'mind' was not designed, and not by us more
" >" importantly, it is not fully understood for any of its activities
" >" ('brains' are of course, e.g. sleep regulation). Hence we cannot yet
" >" build an equivalent artefact until we understand it. ...
" >
" >It doesn't follow. Think of a diamond, for instance.
" >
" Category mistake.
Whose category mistake? Yours? Certainly not mine. If you had
argued:
As 'mind' was not designed, and not by us more
importantly, and as it is abstract and not 'assayable'
and not provably synthesizable, it is not fully
understood ... Hence we cannot build an equivalent
artifact ...
then I would not have made the particular objection that I made (though
I might have pointed out the circularity). But that's not what you
said.
" ...
" Minds are
" a) abstract
" b) not 'assayable' - what the word covers is vague.
" c) not provably sythesisable becuase of (b) no test
" for mindhood, and also no
" theory of how minds get made and function
End of article 1348 (of 1364)--what next? [npq]
Article 1349 (15 more) in comp.ai:
From: [email protected] (Greg Lee)
Subject: Re: Where might CR understanding come from (if it exists)
Message-ID: <[email protected]>
Date: 6 Apr 89 02:56:34 GMT
References: <[email protected]>
Organization: University of Hawaii
Lines: 25
From article <[email protected]>, by [email protected] (Frans van Otten):
" >no such thing in the world to be found in us or to be put into a machine.
"
" Without getting too philosophical, let me explain that this is partly
" true and partly false. Humans are conscious. This is true simply
" because we state it. But what do we mean by "conscious" ?
I think there is a crucial confusion in what you say here. The
fact that humans say they are conscious is a fact of human
behavior, worthy of study. That humans are conscious can
alternatively be taken as a theory some of us have about
human behavior, though perhaps not a very well defined one.
Taken as a fact, it's undeniable. Taken as a theory, if it
can be made specific enough to have empirical consequences,
it can be wrong. I think it is wrong. But more importantly,
I think we can't even make sense of these issues, if from the
true fact that humans say they are conscious, we draw the
conclusion that the (or some) corresponding theory must be
correct. That's not sensible. The fact is different from
the theory; the theory does not follow from the fact.
" ... The Chinese Room Argument is nonsense. ...
I'm with you on that point.
Greg, [email protected]
End of article 1349 (of 1364)--what next? [npq]
Article 1350 (14 more) in comp.ai:
From: [email protected] (Gilbert Cockton)
Subject: Understanding (what is this thing called) Mind
Message-ID: <[email protected]>
Date: 5 Apr 89 11:05:23 GMT
References:
Reply-To: [email protected] (Gilbert Cockton)
Organization: Comp Sci, Glasgow Univ, Scotland
Lines: 65
In article <[email protected]> [email protected] (Frans van Otten) writes:
>Jerry Jackson writes:
>Please see the difference between "many simple tasks, all the same" and
>"many different and difficult tasks". But yes: AI was invented (at least)
>20 years ago. The cheque clearing system you write about does understand
>how to process a cheque.
No, it doesn't: it relies on humans to get the cheques to the right
place, and to input cheques where the magnetic characters can't be read
(hint on how to slow down cheque clearing:-)). The decision on
'bouncing' a cheque, part of cheque-clearing, is rarely made by
machines.
Cheque clearing is a human-machine system with a clearly defined set of
subordinate tasks assigned to the machine. Humans hold the system
together, and therefore only they understand cheque clearing. The
automated tasks possess no concept of cheque clearing in all its
glory.
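A minimal Python sketch of this division of labour (the function names and
the failure path are illustrative assumptions, not a description of any
actual clearing system):

def clear_cheque(cheque, read_micr, key_in_by_hand, officer_decides):
    # The machine's subordinate task: read the magnetic (MICR) line.
    fields = read_micr(cheque)
    if fields is None:
        # Humans hold the system together: unreadable cheques are
        # keyed in by hand.
        fields = key_in_by_hand(cheque)
    # The 'bounce' decision, also part of clearing, is rarely
    # made by the machine.
    return officer_decides(fields)

# Illustrative stubs:
read_micr = lambda c: None                    # the magnetic read fails
key_in_by_hand = lambda c: {"amount": 50.0}
officer_decides = lambda f: "honour" if f["amount"] <= 100 else "bounce"
print(clear_cheque("smudged cheque", read_micr, key_in_by_hand,
                   officer_decides))          # -> honour

No step in this pipeline holds a concept of cheque clearing as a whole;
the coordination lives in the calling humans.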
>So when we understand how the human mind works, we can build a machine
>which has properties like "consciousness", "understanding" etc.
No, you assume these are properties of mind. Until you give me a
sensible account of the role of 'mind' in human agency, I cannot accept
or reject anything you state on the issue.
How about the act of making a cup of tea? Where does mind come in,
in the Chinese Tea Room?
As for its artefactness, it is questionable whether any artefact fully
copies any entity in the physical world. Indeed, it may be impossible
to fully synthesise anything, since there is no objective test for
knowing that a natural entity is fully understood. There are many
objective criteria for knowing that something is not fully understood,
as the natural entity and the simulating artefact perform differently.
Natural entities and simulating/surrogate artefacts can only be
equivalent in so far as they perform the same way under a finite and
enumerated set of conditions (tasks for mind machines). Under these
circumstances, 'complete understanding' is impossible.
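In programming terms this is exactly what a finite test suite establishes,
and no more; a hedged Python sketch with two stand-in 'entities':

def equivalent_under(natural, artefact, conditions):
    # Equivalence is claimable only over a finite, enumerated set of
    # conditions; passing says nothing about behaviour outside it.
    return all(natural(c) == artefact(c) for c in conditions)

natural = lambda x: x * x
artefact = lambda x: x * x if x < 10 else 0   # diverges off the test set
print(equivalent_under(natural, artefact, range(10)))   # True, yet partial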
The fact is, societies only seek knowledge on useful things. The
post-war institutionalisation of knowledge for its own sake is a minor
deviation which is on the way out. Total knowledge is uninteresting.
Sensible folk restrict themselves to useful knowledge with an obvious
application (I count filling out existing applied theories as useful,
so 'basic' research is not ruled out by this dogma).
The only things that matter are tasks, and even these are slippery, as
I/O equivalence cannot be tightly defined for most interesting tasks.
The Turing test is thus an uninteresting subjective game. People are
unlikely to agree that a system is 'intelligent'. It depends on who
you ask, and what they ask the system to do.
Given all these epistemological problems - and more (see all 17 volumes
of obscure Euro-drivel) - I stick to my argument that computer
simulation cannot advance our understanding of 'mind', rather it always
lags behind it (even pulling it back by showing gaps in current
understandings). The gap between understanding and computability is
even larger, due to the lack of sources used by strong AI research.
Current computer models come nowhere near our cultural understanding of
human agency, and given the preference for hacking over reading, the
gap is unlikely to be closed.
--
Gilbert Cockton, Department of Computing Science, The University, Glasgow
[email protected]
End of article 1350 (of 1364)--what next? [npq]
Article 1351 (13 more) in comp.ai:
From: [email protected]
Newsgroups: comp.edu,comp.ai
Subject: post-doctoral programs in CS/CIS/MIS: references wanted
Message-ID: <79935A5V@PSUVM>
Date: 6 Apr 89 12:35:09 GMT
Organization: Penn State University
Lines: 8
I am interested in receiving information about post-doctoral programs in
Computer Science, Computer Information Systems, Artificial Intelligence,
and Management Information Systems. Could anyone give me a lead? Thanks.
AL Valbuena
Penn State University
114 Woodsdale Court
State College, Pa. 16801
End of article 1351 (of 1364)--what next? [npq]
Article 1352 (12 more) in comp.ai:
From: [email protected] (Krishna)
Newsgroups: comp.ai,comp.ai.edu,comp.ai.neural-nets
Subject: AI, Expert Systems, Machine Learning IN Info. Retrieval
Keywords: AI, ES, M/c Learning, IR
Message-ID: <[email protected]>
Date: 6 Apr 89 18:54:07 GMT
Organization: U of Minnesota-Duluth, Information Services
Lines: 12
Hi netters,
I'm looking for publications and conference proceedings dealing with
the application of AI techniques to Information Retrieval.
I am also interested in knowing about US and Canadian universities that
do active research in the application of AI techniques to Information
Retrieval (expert systems, machine learning, NLP, etc.).
Please mail responses to [email protected]
or [email protected]
Thanks,
krishna
End of article 1352 (of 1364)--what next? [npq]
Article 1353 (11 more) in comp.ai:
From: [email protected] (Scott Woyak)
Subject: IJCAI-89 Workshop
Keywords: Object-Oriented Programming in AI
Message-ID: <[email protected]>
Date: 6 Apr 89 12:58:04 GMT
Organization: EDS Research and Development, Auburn Hills, MI 48057
Lines: 101
CALL FOR PARTICIPATION
IJCAI-89 Workshop on
Object-Oriented Programming in AI
Sponsored by AAAI
Tuesday, August 22, 1989
Detroit, Michigan, U.S.A.
Description
-----------
The use of object-oriented programming (OOP) has resulted in many practical,
implemented AI systems, both AI programming languages/environments and
domain-specific knowledge-based applications. This workshop will provide an
informal forum where researchers can exchange ideas, experiences, and issues
regarding the merits of OOP for various AI problems.
The goal of the workshop is for participants to categorize the problems
for which OOP is most appropriate and to identify how specific features
of OOP are beneficial.
Topics
------
Some of the areas of AI in which OOP is being used are:
o knowledge representation
o integrating multiple paradigms
o cooperating, intelligent agents
o model-based reasoning
o constraint propagation
o simulation
o natural language processing
o knowledge-based applications.
In addition to the discussion of the utility of OOP in these various areas,
the following topics are relevant:
o comparison of objects and frames (see the sketch after this list)
o use of objects to integrate rules, logic, and procedural knowledge
o OO approaches to knowledge base design
o comparison of OOP inheritance and AI inheritance
o objects and pattern matching
o object classes and AI classification
o OO protocols for tasks such as inference.
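On the comparison of objects and frames, the resemblance can be made
concrete; a short Python sketch (the bird example is invented):

# A frame: slots with default fillers, specialised per instance.
bird_frame = {"legs": 2, "flies": True}
tweety = dict(bird_frame, name="Tweety")            # inherits defaults
opus = dict(bird_frame, name="Opus", flies=False)   # overrides a slot

# The OOP rendering: defaults become class attributes,
# overridden slots become subclasses.
class Bird:
    legs = 2
    flies = True

class Penguin(Bird):
    flies = False

print(opus["flies"], Penguin().flies)   # False False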
Format
------
The workshop will take place on Tuesday, August 22, and will consist of 2-3
segments, each with a few short presentations followed by ample discussion.
Each segment will be moderated by a member of the Workshop Committee.
Submission Information
----------------------
Workshop invitations will be issued on the basis of short papers of 5 pages
or less. Send 4 copies of the paper to the contact below. Each short
paper will be reviewed by members of the Workshop Committee. Accepted papers
will emphasize the merits of OOP in an implemented AI system. In keeping
with the informal nature of the workshop, the total number of invitations
will be limited to 30-35 people.
Workshop Committee
------------------
Sherman Alpert, IBM T.J. Watson Research Center
Lloyd Arrowood, Oak Ridge National Laboratory
Howard Shrobe, Symbolics
Scott Woyak, EDS Research & Development
Important Dates
---------------
May 15, 1989 Short papers must be received
July 3, 1989 Notification of invitation or rejection
August 22, 1989 Workshop date
Contact
-------
Scott W. Woyak
EDS Research and Development
3551 Hamlin Rd, Fourth Floor
Auburn Hills, MI 48057 USA
Phone: (313) 370-1669
Net: [email protected]
-----------------------------------------------------------------------------
--
Scott W. Woyak - Electronic Data Systems
[email protected]
USENET: ... {rutgers!rel,uunet}!edsews!edsdrd!sww
End of article 1353 (of 1364)--what next? [npq]
Article 1354 (10 more) in comp.ai:
From: [email protected] (andrew)
Subject: Re: Understanding (what is this thing called) Mind
Summary: wood for trees
Message-ID: <[email protected]>
Date: 7 Apr 89 01:37:45 GMT
References:
Organization: National Semiconductor, Santa Clara
Lines: 12
In article <[email protected]>, [email protected] (Gilbert Cockton) writes:
> [..] I stick to my argument that computer simulation cannot advance our
> understanding of 'mind' [..]
Unless, of course, one viewed `mind' itself as a computer simulation.
I don't like the use of the word `computer' here, but you get my drift.
=====
Andrew Palfreyman USENET: ...{this biomass}!nsc!logic!andrew
National Semiconductor M/S D3969, 2900 Semiconductor Dr., PO Box 58090,
Santa Clara, CA 95052-8090 ; 408-721-4788 there's many a slip
'twixt cup and lip
End of article 1354 (of 1364)--what next? [npq]
Article 1355 (9 more) in comp.ai:
From: [email protected] (daniel mocsny)
Subject: Re: Thinking about the reduction of Entropy.
Summary: Emergent Properties.
Keywords: Heat Death and Light Life.
Message-ID: <[email protected]>
Date: 6 Apr 89 16:55:28 GMT
References: <550002@hpfelg.HP.COM> <[email protected]> <[email protected]>
Organization: Univ. of Cincinnati, College of Engg.
Lines: 20
In article <[email protected]>, [email protected] (Barry W. Kort) writes:
> I just don't understand why an intelligent species would consciously
> take self-destruction as a goal.
Self-destruction is a _conscious_ goal of only the deranged. For most
individuals, self-destruction is an accident of taking the route of
momentary pleasure (e.g., nicotine and alcohol abuse). For societies,
self-destruction is not a goal at all. Instead, it is an emergent
property resulting from a collection of entities each trying to
maximize its claim on available resources while minimizing its
personal effort. Designing complex nonlinear systems to have arbitrary
pre-specified emergent properties is quite beyond the state of current
engineering (see the papers of Stephen Wolfram).
> --Barry
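The dynamic can be illustrated in a few lines of Python; every number below
is an arbitrary assumption, chosen only so that collapse emerges from
individually 'rational' growth:

resource = 100.0
claims = [1.0] * 10                  # each entity's consumption rate
for year in range(50):
    for i in range(len(claims)):
        take = min(claims[i], resource)
        resource -= take
        claims[i] *= 1.05            # each maximises its own claim
    resource *= 1.02                 # regeneration lags the claims
    if resource <= 0:
        print("collapse in year", year)   # emergent, nobody's goal
        break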
Dan Mocsny Snail:
Internet: [email protected] Dept. of Chemical Engng. M.L. 171
513/751-6824 (home) University of Cincinnati
513/556-2007 (lab) Cincinnati, Ohio 45221-0171
End of article 1355 (of 1364)--what next? [npq]
Article 1356 (8 more) in comp.ai:
From: [email protected] (Chris Malcolm [email protected] 031 667 1011 x2550)
Subject: Re: Where might CR understanding come from (if it exists)
Message-ID: <[email protected]>
Date: 5 Apr 89 18:57:06 GMT
References: <[email protected]> <[email protected]> <[email protected]>
Reply-To: cam@edai (Chris Malcolm)
Organization: University of Edinburgh, Edinburgh
Lines: 88
In article <[email protected]> [email protected] (Gilbert Cockton) writes:
>In article <[email protected]> [email protected] (Greg Lee) writes:
>>From article <[email protected]>, by [email protected] (Gilbert Cockton):
>>" ... As 'mind' was not designed, and not by us more
>>" importantly, it is not fully understood for any of its activities
>>" ('brains' are of course, e.g. sleep regulation). Hence we cannot yet
>>" build an equivalent artefact until we understand it. ...
>>
>>It doesn't follow. Think of a diamond, for instance.
>>
>Category mistake.
>
>Diamonds are
> a) concrete
> b) 'assayable' - i.e. you can test chemically that X is indeed a
> diamond
> c) synthesisable by following well-understood chemical theories
>
>Minds are
> a) abstract
> b) not 'assayable' - what the word covers is vague.
> c) not provably synthesisable because of (b) no test for mindhood,
> and also no theory of how minds get made and function
There are many different kinds of understanding. People are
extremely good at fiddling about and getting things to work, using
the minimum understanding necessary for the job. Consequently
sailing ships achieved considerable sophistication before the theory
of aerodynamics was discovered; and steam engines were made to work
not only before the theory of heat engines and thermodynamics
existed, but in the face of some weird and quite wrong ideas about
the principles involved. And don't forget that evolution has
re-invented the optical eye a number of times, despite never having
been to school, let alone having a scientific understanding of
optics.
Richard Gregory in "Mind in Science" argues that not only do people
sometimes make working devices in advance of a proper theoretical
understanding of the principles involved, but that this is actually
the way science usually progresses: somebody makes something work,
and then speculates "how interesting - I wonder WHY it works?"
So I expect that AI will produce working examples of mental
behaviour BEFORE anyone understands how they work in the analytic
sense (as opposed to the follow-this-construction-recipe sense), and
that it will be examination and experimentation with these working
models which will then lead to a scientific understanding of mind.
As for "mind" not being assayable, it's a pity nobody has invented a
mind-meter, but we are all equipped with sufficient understanding to
be able to say "that looks pretty like a mind to me". Even if
closer examination or analysis proves such a judgement wrong,
subjecting these judgements to analysis such as the Chinese Room
argument, and testing them on the products of AI labs, is a good way
of refining them. Current ideas about what constitutes mental
behaviour are a good deal more sophisticated than those of several
decades ago, partly due to the experience of exercising our concepts
of mentality on such devices as the von Neumann computer. I don't
see any reason why AI, psychology, and philosophy, shouldn't
continue to muddle along in the same sort of way, gradually refining
our understanding of mind until the point where it becomes
scientific.
A (new) category mistake? I assert that I will have a scientific
understanding of mind when I can tell you exactly how to make one of
a given performance, and be proven right by the constructed device,
although such a device had never before been built. Unfortunately I
don't expect any of us to live that long, but that's just a
technical detail.
This idea that you have to understand something properly before
being able to make it is a delusion of armchair scientists who have
swallowed the rational reconstruction of science usually taught in
schools, and corresponds to the notion sometimes held by
schoolteachers of English that no author could possibly write a
novel or poem of worth without being formally educated in the 57
varieties of figures of speech. It also corresponds to the notion
that one can translate one language into another by purely syntactic
processing, a notion that AI disabused itself of some time ago after
contemplating its early experimental failure to do just that.
The human mind is fortunately far too subtle and robust to permit a
little thing like not understanding what it's doing to get in the
way of doing it. Otherwise we wouldn't even be able to think, let
alone create artificial intelligence.
--
Chris Malcolm [email protected] 031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK
End of article 1356 (of 1364)--what next? [npq]
Article 1357 (7 more) in comp.ai:
From: [email protected] (Chris Malcolm [email protected] 031 667 1011 x2550)
Subject: Is AI a proper science? The Cockton debate.
Message-ID: <[email protected]>
Date: 5 Apr 89 19:22:14 GMT
References:
Reply-To: cam@edai (Chris Malcolm)
Organization: University of Edinburgh, Edinburgh
Lines: 24
In article <[email protected]> [email protected]
(Gilbert Cockton) writes:
>I know of no incident in the history of science where continued
>romantic mucking about got anywhere. As A.N. Whitehead argued, all
>learning must begin with a stage of Romance, otherwise there will be
>no motivation, no drive to learn, no fascination. But it must be
>followed by a stage of analysis, a specialisation based on proper
Ok, accepting this for the sake of argument, what are the criteria
to be used for deciding when to leave the "age of Romance" and enter
the "age of analysis"? Gilbert is suggesting that (much of) AI is
romantic sci-fi when it should be analytic. My guess is that (much
of) AI hasn't yet sorted out its basic concepts, i.e., is in what
Kuhn would call the pre-paradigm stage, and analysis under such
circumstances would be foolishly premature, would it not?
We wouldn't want AI to fall into the horrible trap of physics envy
which so disfigured psychology, would we? - "Look Ma, numbers and
equations, I'm doing Real Science now!"
--
Chris Malcolm [email protected] 031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK
End of article 1357 (of 1364)--what next? [npq]
Article 1358 (6 more) in comp.ai:
From: [email protected] (William J. Rapaport)
Newsgroups: sunyab.general,sunyab.grads,sunyab.undergrads,wny.seminar,comp.ai,ont.events,sci.philosophy.tech
Subject: SUNY Buffalo Philosophy/CogSci Colloq--L. R. Baker
Message-ID: <[email protected]>
Date: 7 Apr 89 15:53:12 GMT
Sender: [email protected]
Reply-To: [email protected] (William J. Rapaport)
Distribution: na
Organization: SUNY/Buffalo Computer Science
Lines: 33
UNIVERSITY AT BUFFALO
STATE UNIVERSITY OF NEW YORK
DEPARTMENT OF PHILOSOPHY
and
GRADUATE RESEARCH INITIATIVE IN COGNITIVE AND LINGUISTIC SCIENCES
PRESENT
LYNNE RUDDER BAKER
Department of Philosophy
Middlebury College
HAS REPRESENTATION BEEN NATURALIZED?
Physicalism either denies or denigrates beliefs, by maintaining either
that there are no beliefs or that beliefs are identical with physical
states. Baker's book gives a close examination of each of these proposals
in turn, concluding that they come up short. One of the most subtle and
influential proponents of physicalism is Jerry Fodor. At the American
Philosophical Association meetings in December 1988, Baker read a critique
of Fodor's book _Psychosemantics_, with Fodor giving a reply. The
paper she will read here is a revision of her APA paper that takes
Fodor's reply into account.
Wednesday, April 19, 1989
3:00 P.M.
684 Baldy Hall, Amherst Campus
Contact Newton Garver, Dept. of Philosophy, 716-636-2444, or Bill Rapaport,
Dept. of Computer Science, 716-636-3193, for further information.
End of article 1358 (of 1364)--what next? [npq]
Article 1359 (5 more) in comp.ai:
From: [email protected]
Subject: Looking for L. Wesley
Message-ID:
Date: 6 Apr 89 14:04:30 GMT
Sender: [email protected]
Reply-To: [email protected] (Keiji Kanazawa)
Distribution: na
Organization: Brown University Department of Computer Science
Lines: 13
I've recently been referred to work, presumably in robotics,
by L. Wesley at U Mass. Does anybody have any references to
his or her work, or an email address for contacting this
person?
Please reply by mail, as I don't normally read this group.
Thanks,
Keiji Kanazawa
[email protected]
End of article 1359 (of 1364)--what next? [npq]
Article 1360 (4 more) in comp.ai:
From: [email protected] (Frans van Otten)
Subject: Re: lexical items
Message-ID: <[email protected]>
Date: 7 Apr 89 10:09:19 GMT
References: <[email protected]>
Reply-To: [email protected] (Frans van Otten)
Organization: AHA-TMF (Technical Institute), Amsterdam Netherlands
Lines: 29
In article <[email protected]> [email protected] writes:
>
> Frans van Otten, Algemene Hogeschool, Amsterdam, in a paragraph
> dealing with the ongoing Searle topic, uses the following phrases:
>
>sic! >> It passes the TT. This means: a human being can't tell the
>sick!>> difference between a Chinaman [SIC!] and The Room. The behaviour
>
> His use of English suggests to me that he knows it well enough to be
> consciously or subconsciously aware that "Chinaman" is a pejorative,
> a term invoking contempt. Racism has no place in scholarship, buster!
> Holland is a small country which once had a big empire (Indonesia etc.)
> where some of the worst colonial abuses were committed!
I hereby apologise for using the word "Chinaman". I did not know this word
has the connotation you describe. I am not interested in propagating racism;
I probably used it because I had read it in some other article in this
newsgroup. Apparently I should have written "Chinese person" or something
like that.
Also, I hope there will be no follow-ups to Silber's article or to this one;
this is comp.ai, not alt.flame. I would like to thank Silber for his
correction of my vocabulary; hopefully next time he will put it in a
friendlier way.
--
Frans van Otten
Algemene Hogeschool Amsterdam
Technische en Maritieme Faculteit
[email protected]
End of article 1360 (of 1364)--what next? [npq]
Article 1361 (3 more) in comp.ai:
From: [email protected] (Frans van Otten)
Subject: Re: Where might CR understanding come from (if it exists)
Message-ID: <[email protected]>
Date: 7 Apr 89 12:39:14 GMT
References: <[email protected]> <[email protected]>
Reply-To: [email protected] (Frans van Otten)
Organization: AHA-TMF (Technical Institute), Amsterdam Netherlands
Lines: 51
Greg Lee writes:
>Frans van Otten writes:
>
>>Humans are conscious. This is true simply because we state
>>it. But what do we mean by "conscious" ?
>
>The fact that humans say they are conscious [... can been taken
>as] a fact of human behavior [... or as] a theory. Taken as a
>fact, it's undeniable. Taken as a theory [...] it can be wrong.
>I think it is wrong. But more importantly, I think we can't even
>make sense of these issues, if from the true fact that humans say
>they are conscious, we draw the conclusion that the (or some)
>corresponding theory must be correct. That's not sensible. The
>fact is different from the theory; the theory does not follow from
>the fact.
I can't follow you. What do I mean when I say "I am hungry"?
1. I am in need of food.
2. I have a (subjective) feeling that I call "hungry". This
feeling has been caused by an empty stomach, or by something
else.
Taken as (1), it is deniable: I can have a hungry feeling without
actually needing food. Taken as (2), it is undeniable: I *am*
hungry. Maybe this hungry feeling is caused by something other
than an actual need for food, but I don't say anything about that.
Now when I say "I am conscious", I have a (subjective) feeling which
I call "conscious". With this statement, I don't say anything about
what might have caused this feeling. In my article, I stated:
>>Humans are conscious (1). This is true simply because we state
>>it (2). But what do we mean by "conscious" (3) ?
Maybe I should have written:
(1) Humans say they are conscious.
(2) So they have a subjective feeling which they call "conscious".
(3) What might this feeling mean, or what might have caused it?
Then I continued my article trying to answer that question.
So where do we misunderstand each other?
--
Frans van Otten
Algemene Hogeschool Amsterdam
Technische en Maritieme Faculteit
[email protected]
End of article 1361 (of 1364)--what next? [npq]
Article 1362 (2 more) in comp.ai:
From: [email protected] (Frans van Otten)
Subject: Simulation versus reality
Message-ID: <[email protected]>
Date: 7 Apr 89 15:03:13 GMT
Reply-To: [email protected] (Frans van Otten)
Organization: AHA-TMF (Technical Institute), Amsterdam, The Netherlands
Lines: 36
A lot of comp.ai writers seem to misunderstand the difference between
"reality" and "simulation". Actually, both words are pointers to some
actual process, which has no name, I'm afraid. Reality and simulation
are relative concepts.
When we simulate a flying plane, it is (within the simulator) really
flying (in so far as the simulator simulates flying). To us, it is a
simulation of a flying plane. That is, in our reality the plane does
not fly, but in the reality inside the simulator it does fly!
Let's assume we made a computer system which can calculate the physics
of a person. Let's assume we can communicate with this simulation. Now
to us he is a simulation. But if we asked him, he would say "Me? A
simulation? You must be out of your mind; I really do exist!" Again,
in our reality the person is a simulation, but in the reality within the
computer system the person is real.
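This relativity fits in a few lines of Python; the World class and its
methods are invented for illustration:

class World:
    def __init__(self, name, host=None):
        self.name = name
        self.host = host              # the world running this one, if any

    def simulate(self, name):
        return World(name, host=self)

    def is_real_to(self, observer):
        # "Real" is relative: a world counts as real to observers at
        # its own level of nesting, a simulation to those one level up.
        return observer is self or observer.host is self.host

ours = World("our reality")
sim = ours.simulate("simulated person")
print(sim.is_real_to(ours))   # False: to us, a simulation
print(sim.is_real_to(sim))    # True: "I really do exist!"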
Our reality is not a "universal" reality, either. The rules for our
existence are the chemical and physical rules, which are executed by
the molecules. The rules for the existence of molecules are executed
by their constituent atoms; the rules for the existence of atoms are
executed by their constituent subatomic particles; et cetera. If we
executed the rules for the molecules on a computer, we would not
notice any difference, except that nuclear fusion would not be possible.
The only problem is that a simulated reality is not physically in
contact with the simulating reality. So I could never feed myself with
a simulated meal. But that is also a very nice property of simulations:
however many crashes a simulated plane makes, no real person dies! And
nobody cares about all those killed simulated people, since we didn't
simulate their family and friends.
--
Frans van Otten
Algemene Hogeschool Amsterdam
Technische en Maritieme Faculteit
[email protected]
End of article 1362 (of 1364)--what next? [npq]
Article 1363 (1 more) in comp.ai:
From: [email protected] (Greg Lee)
Subject: Re: Where might CR understanding come from (if it exists)
Message-ID: <[email protected]>
Date: 7 Apr 89 19:20:47 GMT
References: <[email protected]>
Organization: University of Hawaii
Lines: 16
From article <[email protected]>, by [email protected] (Chris Malcolm [email protected] 031 667 1011 x2550):
" ...
" This idea that you have to understand something properly before
" being able to make it is a delusion of armchair scientists who have
" swallowed the rational reconstruction of science usually taught in
" schools, and corresponds ... It also corresponds to the notion
" that one can translate one language into another by purely syntactic
" processing, a notion that AI disabused itself of some time ago after
" contemplating its early experimental failure to do just that.
What you say seems to assume that the syntax of natural languages was or
is understood. That is not the case. It's very far from being the
case. The failure you mention, consequently, does not suggest that
translation cannot be achieved by syntactic processing.
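What those early systems actually did can be caricatured as word-for-word
substitution, sketched below in Python with an invented toy lexicon; this
is lexical lookup, not syntactic analysis in any serious sense:

lexicon = {"the": "le", "spirit": "esprit", "is": "est",
           "willing": "dispose"}

def substitute(sentence):
    # Word-for-word lookup: no parse, no agreement, no idiom.
    return " ".join(lexicon.get(word, word) for word in sentence.split())

print(substitute("the spirit is willing"))
# -> "le esprit est dispose": even the elision (l'esprit) is lost,
#    before syntax proper is ever reached.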
Greg, [email protected]
End of article 1363 (of 1364)--what next? [npq]
Article 1364 in comp.ai:
From: [email protected]
Subject: language in context
Message-ID: <[email protected]>
Date: 7 Apr 89 16:14:50 GMT
Sender: [email protected]
Organization: UC, Santa Barbara. Physics Computer Services
Lines: 16
Another contributor sent me a message basically calling to my attention
the fact that an accusation which I made re: the use of a particular
lexical item in a recent posting was likely to be unwarranted.
In that regard:
Re: your observation re: semantics, I agree that I may have
jumped to an unwarranted conclusion concerning certain lexical
items occurring in otherwise completely fluent second-language discourse.
I was overly nasty in pointing out the usage (for the tone, I apologize
as a matter of civility, IFF the usage cited was not intentionally as
I characterized it). As in the case you cite re: your experience among
the French, these things should be pointed out.
I suppose this all comes under the heading of a case in point for
LANGUAGE IN CONTEXT!!!!
End of article 1364 (of 1364)--what next? [npq]
End of newsgroup comp.ai.
******** 4 unread articles in comp.ai.neural-nets--read now? [ynq]