Category: Tutorials + Patches
Archive: AIMSG.ZIP
Filename: AI
Keywords: chinese room simulation
Message-ID: <[email protected]>
Date: 29 Mar 89 06:38:09 GMT
Organization: Boeing Computer Services-Commercial Airplane Support
Lines: 32
> From: [email protected] (Daniel B Hankins)
>
> There are two issues here: whether understanding is in fact a
> physical property like magnetism, and whether the anti-simulation argument
> (hereafter referred to as ASA) is valid.
[ ... ]
> I can summarize my reply to the ASA in one sentence: "A difference
> that makes no difference _is_ no difference."
>
> The ASA is characterized by sentences like the following: "A
> simulated magnet attracts no iron."
>
> This may be true, but it is irrelevant; I will show by means of a
> gedanken experiment that in certain circumstances (the _only_ ones relevant
> to the discussion at hand) a simulated magnet does indeed attract iron.
Such a deal I have for you!
Your dinner entree for tonight is a digital computer simulation of filet
mignon! It includes simulated baked potato, simulated tossed salad with
simulated vinegar, oil and Italian spices. Your steak simulation includes
five significant digits of heat, aroma and sizzle. And I suggest a superb
simulation of a vintage Port. This requires several minutes on a Cray X-MP,
and is really exquisite, including detailed molecular-level simulation of
over three hundred organic aromatic compounds!
Bon appetit!
Ray Allis [email protected] bcsaic!ray
Article 1292 (40 more) in comp.ai:
From: [email protected] (Frans van Otten)
Subject: Re: "Boss, look: da brain, da brain!"
Message-ID: <[email protected]>
Date: 31 Mar 89 11:48:31 GMT
References: <[email protected]>
Organization: AHA-TMF (Technical Institute), Amsterdam, Netherlands
Lines: 34
Michael Ellis writes:
>Wayne A. Throop writes:
>
>>...First is the trivial one, that the chemical reactions in the brain
>>are, at base, representable as discrete and symbolizable. That is,
>>there is a limit to the "analogness" of the brain's representation
>>of the world around it.
>>...It seems plausible (and even likely) that the "analogness" of signals
>>within the brain are not representations of analog quantities in the "real
>>world".
>
>Grasping for straws. Just who have you been reading? Douglas Hofstadter?
>
>The brain is clearly analog. What you *desperately* have to show us is that
>it is "at base, representable as discrete". You have only given us a wish
>list of blanket assertions.
Let me translate this to the digital computer world. The signals in a
computer are clearly analog too. So when you say "the brain is analog",
your reasoning forces the conclusion "the entire computer is analog".
Probably many analog signals in the brain are (directly or indirectly)
representations of analog quantities in the real world. But not necessarily
in the same way. The digital representation (within a computer) of an
original analog signal is also an analog value, but it can be represented
as discrete and symbolized.
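To make that last point concrete, here is a minimal sketch (Python; the
signal and the step size are invented for illustration) of mapping a
continuous value onto a discrete, symbolizable code:

    import math

    def quantize(value, step=0.25):
        """Map a continuous value onto the nearest discrete level."""
        return round(value / step)

    # An invented "analog" signal: a decaying sine wave sampled at 8 points.
    signal = [math.sin(t / 2.0) * math.exp(-t / 8.0) for t in range(8)]

    # Each sample is still carried by analog voltages inside the machine,
    # but once quantized it can be treated as a discrete symbol.
    symbols = [quantize(s) for s in signal]
    print(symbols)  # [0, 2, 3, 3, 2, 1, 0, -1]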
--
Frans van Otten
Algemene Hogeschool Amsterdam
Technische en Maritieme Faculteit
[email protected]
Article 1293 (39 more) in comp.ai:
From: [email protected] (Ray Allis)
Subject: Re: "Boss, look: da brain, da Brain!"
Keywords: symbols representations neural
Message-ID: <[email protected]>
Date: 28 Mar 89 06:53:09 GMT
Organization: Boeing Computer Services-Commercial Airplane Support
Lines: 65
> From: [email protected] (Wayne A. Throop)
>
> > [email protected] (Stevan Harnad)
> > If you examine the brain with a view to slicing off its "transducers"
> > and "effectors," you come up against a problem, because even if you
> > yank off the sensory surfaces, what is actually left over is repeated
> > analog transforms of the sensory surfaces as you go deeper and deeper
> > into the brain.
>
> An interesting assertion. It seems incorrect on two counts. First is
> the trivial one, that the chemical reactions in the brain are, at
> base, representable as discrete and symbolizable.
Perhaps, but can they be *replaced* by symbols? I think not.
> That is, there is a
> limit to the "analogness" of the brain's representation of the world
> around it.
>
> Second, no case has been made for how much of the "analogness" of the
> signal that makes its way to the brain is significant.
I assert that the "analogness" is absolutely critical. My case is based on
the fundamental difference between _representations_ and _symbols_: the
voltages, frequencies, chemical concentrations and so on are
_representations_ of "external reality" rather than symbols. Symbols appear
at a much "higher" level of cognition, where _representations_ can be
associated with each other.
> There is some
> evidence that the "analogness" is, in fact, filtered out quite
> quickly, and what is left are symbolic representations of
> relationships among various input stimuli.
What evidence? Is there neurological evidence? The neurological research
I've seen does seem to point to a series of analog transforms. Do you have
some references, please?
> In fact, it would be very,
> VERY surprising if the analogness mattered, because the analogness
> that exists in human neural systems is not accurate.
What value "accuracy"? True, analog computers fell out of favor because they
didn't perform numerical computation to the same "accuracy" as digital
computers. But we sometimes tend to gloss over the truth that any
computation is only as good as its input measurements, assumptions and
premises. A digital computer is the archetypical physical symbol system; it
manipulates symbols according to specified relationships among them, with
absolute disregard for whatever they symbolize. In contrast, your nervous
system's state at, say, the visual cortex, *represents* the effect your
environment is having on your sensory equipment, with nary a symbol to be
found.
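As a toy sketch of that point (Python; the tokens and rules are invented),
here is a physical symbol system in miniature: it derives new tokens from
old ones by purely formal rules, with no access to whatever the tokens may
symbolize:

    # Rules relate tokens to tokens; nothing below depends on what
    # "rain", "wet", etc. actually mean in the world.
    RULES = {
        ("rain", "outside"): "wet",
        ("wet", "ground"): "slippery",
    }

    def rewrite(facts):
        """Apply the rules until no new token can be derived."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for (a, b), c in RULES.items():
                if a in derived and b in derived and c not in derived:
                    derived.add(c)
                    changed = True
        return derived

    print(sorted(rewrite({"rain", "outside", "ground"})))
    # ['ground', 'outside', 'rain', 'slippery', 'wet']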
> It seems
> plausible (and even likely) that the "analogness" of signals within
> the brain are not representations of analog quantities in the "real
> world".
>
I'm sorry, but I don't understand what you mean by this.
> Wayne Throop
Ray Allis [email protected] bcsaic!ray
Article 1294 (38 more) in comp.ai:
From: [email protected] (Murthy Gandikota)
Subject: Who has more knowledge: Feynman or Scarecrow?
Message-ID: <[email protected]>
Date: 31 Mar 89 18:30:30 GMT
Reply-To: Murthy Gandikota
Organization: Ohio State University
Lines: 33
This is the question we (another AI buff and I) asked ourselves. Our
arguments were as follows:
They are more or less contemporaries (in the evolutionary
sense), so their brains have the same storage capacity (empty volume, if
you wish). Scarecrow, being a descendant of Indian culture, may know a
great deal about nature and the supernatural. His knowledge about birds,
animals, plants, sun, moon, stars, etc. may be equivalent to that of a
few encyclopedias. Feynman, being an excellent physicist, should
possess at least 100 years of physics knowledge, and more from his own
cultural background. However, he could've traded some of his cultural
knowledge for physics (thus keeping the number of concepts in
his brain constant). Assuming the hardwired concepts in their
brains don't differ by very much, we kinda agreed that Scarecrow and
Feynman should have about the same number of concepts. An interesting
outcome of our conversation is this: "Brains have to forget things for
their own good." (Sounds like a good excuse for my optimization course
:-)) Any comments are welcome.
[For the unknowing, Scarecrow is a native Indian of North America who
shot into public fame last summer during the drought because of his
ability to cause rain by dancing and prayer. Feynman is a Physicist
from Cal Tech, and also a Nobel Laureate]
--murthy
--
"What can the fiery sun do to a passing rain cloud, except to decorate
it with a silver lining?"
Surface mail: 65 E.18th Ave # A, Columbus, OH-43201; Tel: (614)297-7951
Article 1295 (37 more) in comp.ai:
From: [email protected]
Subject: Re: the surrealism of dreams
Message-ID: <[email protected]>
Date: 31 Mar 89 18:41:15 GMT
Sender: [email protected]
Distribution: usa
Organization: University of Chicago Graduate School of Business
Lines: 87
>
>And now I'd like to add in a third viewpoint. It is known among sleep
>researchers that the seemingly nonsensical quality of dreams arises because the
>medulla is sending out random signals during this phase of sleep, which the
>neocortex tries its damned best to weave into a logically consistent framework.
This points up a major flaw that we see again and again in modern biology,
to wit: there is no way one can prove that something occurs 'at random'.
In particular, there is no way in this case to tell whether 'the medulla
is sending out random signals' or whether it is actually sending out signals
in some very definite, well determined fashion which we are unable to fathom.
Western biology (as opposed to Russian, which I'm told takes a somewhat
different view) seems somewhat preoccupied with this notion of randomness:
Most biologists I have met seem to feel that the theory of evolution is
premised upon random recombination and random mutation. This dogmatic attachment
to an unprovable hypothesis is even more interesting in light of the fact
that the assumption of randomness adds very little to our understanding of
the phenomena in question.
To add my two cents to this discussion of dreaming, I seem to recall an
old "Scientific American" article on the effects of LSD. Among other things,
it claimed that LSD affected the synapses of some 50,000 "controller
neurons" (if anyone knows the proper name, please post) causing them to
relax in the same fashion that they do naturally when people sleep. Taking
this description of what happens naturally as a given, doesn't it
seem possible that during sleep, the brain relaxes so that it can reorganize
itself, make new sets of associations and possibly reform decision structures?
From personal experience, I know that it is often helpful to get a good night's
sleep if I have encountered a particularly difficult mathematical or
programming problem. Sometimes it almost seems that I wake up with the correct
solution. A plausible model for this is the theory that my mind initially
proceeds down some search path/decision tree on which a solution is not to
be found. Unfortunately, we discover the non-existence of a solution too far
down this tree to be able to easily back up to the decision node at which
we took the wrong branch. We then search around a sub-tree which does not
contain the requisite particulars for our solution. In sleep, the brain
relaxes and we effectively move up the search tree, so that when we reexamine
the problem in the morning the incorrect assumption from the previous night
becomes obvious, or even perhaps it is somehow identified during the dreaming
process. (I realize that this hierarchical decision tree is a bad model for
the real life workings of brain but it helps to describe the process in a
relatively easy to understand, linear fashion. I think that a model based upon
competing processes and associative lookup would be more appropriate, but more
--MORE--(58%)
difficult to describe and understand, without adding much to the discussion.)
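For concreteness, a toy sketch (Python; the tree and its labels are
invented) of this model: a depth-first search committed to the wrong
top-level branch finds nothing, while the same search rerun without the
commitment (the morning after) succeeds:

    # Toy model of the search-tree account: each node is (label, children),
    # and the solution sits in a subtree we initially fail to enter.
    tree = ("root", [
        ("plausible-but-wrong", [("dead-end-1", []), ("dead-end-2", [])]),
        ("unpromising-looking", [("solution", [])]),
    ])

    def dfs(node, goal, committed=None):
        """Depth-first search; if committed, only that top branch is tried."""
        label, children = node
        if label == goal:
            return [label]
        for child in children:
            if committed and label == "root" and child[0] != committed:
                continue  # stuck in the wrong subtree, unable to back up
            path = dfs(child, goal, committed)
            if path:
                return [label] + path
        return None

    print(dfs(tree, "solution", committed="plausible-but-wrong"))  # None
    print(dfs(tree, "solution"))  # ['root', 'unpromising-looking', 'solution']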
To take this one step further, I have often performed the following experiment
(albeit non-scientifically because, unfortunately, the means escape me at this
time): Find someone who's been a National Football League fan, and ask him
(randomly, of course) one of the following two questions:
1) Who was that really tall, 6'10" wide receiver, who used to play for the
Eagles and set the record for consecutive games with at least one reception?
2) Who was that really tall, 6'10" wide receiver, who used to play for the
Eagles and set the record for consecutive games with at least one reception?
It's not Randall Cunningham...
(The answer to both of these is Harold Carmichael)
My experience shows that people find 1 much easier to answer than 2, although
2 actually contains all the information in 1, plus one additional fact, and
I suggest that this is because the addition of the name "Randall Cunningham"
actually serves to block the search for "Harold Carmichael". Randall
Cunningham is also a football player; he also plays for the Eagles, and
the structure of the two names is similar. To test this, we could also
ask the following third question:
3) Who was that really tall, 6'10" wide receiver, who used to play for the
Eagles and set the record for consecutive games with at least one reception?
It's not Skip Aaron...
where Skip Aaron is no one in particular (apologies if you actually exist),
and the name doesn't seem to have much resemblance to Harold Carmichael.
My expectation is that 1 will be slightly easier to answer than 3, and
both will be easier than 2. This is because 2 pushes us down the search
tree to a particular node that is close to the solution, without providing
us the information to tell us how we accidentally got to this node instead
of the correct one. In spite of the fact that we're 'close', we're unable
to back up to an appropriate point to discover where we went wrong. Only
when the brain relaxes sufficiently to do this (e.g. we 'forget' the question)
are we able to find the correct answer. 3 is slightly more difficult than
1 because it also puts the search process at a particular solution node,
but because of its dissimilarity to the correct solution, we are easily
able to recover from it.
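A toy sketch (Python; the feature sets and the priming bonus are invented)
of the blocking effect: retrieval by feature overlap normally finds the
right name, but a primed near-miss can capture the search:

    # Retrieval by feature overlap; a mentioned name gets a priming bonus
    # large enough to capture the search, modeling the blocking effect.
    players = {
        "Harold Carmichael": {"tall", "receiver", "eagles", "record"},
        "Randall Cunningham": {"eagles", "quarterback"},
    }

    def recall(cue, primed=None):
        def activation(name):
            bonus = 4 if name == primed else 0
            return len(players[name] & cue) + bonus
        return max(players, key=activation)

    cue = {"tall", "receiver", "eagles", "record"}
    print(recall(cue))                               # Harold Carmichael
    print(recall(cue, primed="Randall Cunningham"))  # Randall Cunningham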
R.Kohout
Article 1296 (36 more) in comp.ai:
From: [email protected]
Subject: what is a suitable program
Message-ID: <[email protected]>
Date: 31 Mar 89 19:42:51 GMT
References:
Sender: [email protected]
Lines: 81
> [email protected] (Gilbert Cockton)
>> [email protected] (Wayne A. Throop)
>> I simply do not think that the
>> human brain has any mysterious "causal powers" that a computer
>> executing a suitable program does not.
> OK then, let's hear what a "suitable" program would be. I contend that
> AI research doesn't have a grasp of what "suitable" means at all.
The nickel tour of what I think a suitable program would be is one
that can reach decisions to take actions which advance goals, where
these decisions are reached based on internal representations of
objects and processes occurring in the "real world".
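A minimal sketch of that nickel tour (Python; the world, goal, and actions
are invented stand-ins): a loop that reaches decisions from an internal
representation in order to advance a goal:

    # Minimal goal-directed agent: decisions are reached from an internal
    # representation of the world, in service of a goal.
    world = {"position": 0}             # stand-in for the "real world"
    model = {"position": 0, "goal": 3}  # the agent's internal representation

    def sense():
        model["position"] = world["position"]  # refresh the representation

    def decide():
        # Choose an action that advances the goal, consulting the model only.
        return 1 if model["position"] < model["goal"] else 0

    def act(action):
        world["position"] += action

    while True:
        sense()
        step = decide()
        if step == 0:
            break
        act(step)

    print(world)  # {'position': 3}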
Of course, any reasonable non-sloganish explication of "suitable"
would be at least several thousand words long. Also of course, this
truncated treatment doesn't address the notion of "self-aware", or
whose goals, or the complexity or accuracy of the decision making, or
other things important to judging degree of understanding or
intelligence.
Further, it is quite true that my notion of what is "suitable" may be
vague. This is primarily because everybody's notion of what humans do
when they "understand" something is vague. But even so, that is no
reason to jeer at my skepticism about the assertion that no program
can possibly be suitable (that is, "do the essentials of what humans
do to understand things").
> For one, human minds are not artefacts, whereas computer programs
> always will be.
If by "artefact" it is meant that computer programs consist mostly of
components designed by some entity to meet some specific goal which
the program is supposed to advance in some way, then I don't see that
this statement is likely to be true in any meaningful way. One of the
major fields of current AI is the investigation of self-learning
systems, or systems which "organize themselves" in response to fairly
complex interaction with the real world.
And even granting that, the fact that one object is not an artifact
while another is, is no sign that the two objects cannot share some
feature or quality. Certainly, I see no evidence that "understanding"
or "intelligence" is not a feature that can be shared among artifacts
and non-artifacts.
> This alone will ALWAYS result in performance
> differences. Given a well-understood task, computer programs will
> out-perform humans.
So what? Even humans have performance differences from one to
another. Someone with a savant talent (or even just someone with a
highly trained technique) can easily outperform other humans by orders
of magnitude on "well understood" problems (problems which are
completely solved with a known algorithm). But to suppose that this
performance difference is crucial is just plain silly. Especially
since in the strict Turing situation it is easy to fake slower speeds,
or simply run the program with delays.
> Given a poorly understood task, they will look
> almost as silly as the author of the abortive program.
I think another meaning for "well (or poorly) understood task" is
being slipped in here. In particular, tasks for which no algorithm is
known can be solved for practical cases "well enough" by many, many
techniques. So, I think maybe "poorly specified" is meant here.
If so, I note that humans themselves look pretty silly when trying to
solve poorly understood (specified) problems.
> The issue as ever is what we do and do not understand about the human
> mind, the epistemological constraints on this knowledge, and the
> ability of AI research as it is practised to add anything at all to
> this knowledge.
Exactly so. And I contend that we do NOT understand enough about the
human mind to rule out "suitable" programs.
--
If someone tells me I'm not really conscious, I don't marvel about
how clever he is to have figured that out... I say he's crazy.
--- Searle (paraphrased) from an episode of PBS's "The Mind"
--
Wayne Throop
Article 1297 (35 more) in comp.ai:
From: [email protected] (Gordon E. Banks)
Subject: Re: ReProgrammed Cockton
Message-ID: <[email protected]>
Date: 31 Mar 89 19:54:31 GMT
References: <[email protected]> <20900003@bradley> <[email protected]> <[email protected]>
Reply-To: [email protected] (Gordon E. Banks)
Organization: Decision Systems Lab., Univ. of Pittsburgh, PA.
Lines: 36
In article <[email protected]> [email protected] (Gilbert Cockton) writes:
>
>On the contrary, sound arguments against the validity of astrology can
>be constructed on epistemological grounds alone. The same is true of
>AI. The question is, can computers be programmed to be valid models of
>some woolly construct called 'Mind' (what the hell is mind?)? This
>question can be addressed competently without reference to any program
>constructed within the last 30 years of AI.
Actually, I agree with you here. More basic than the question that any
particular program addresses is the question of whether the mind is
grounded in a purely physical entity (the brain) or not. You indicate
not. If you are wrong, then the only argument is whether the AI
researchers are on the right track in trying to replicate/simulate
brain function or not. If you are right, then the question is what
is the mind, such that an artificial brain cannot contain or acquire it.
I find your answer "socialization" inadequate, because if I could make
an android that appeared to be a human child and grew and had an adequately
functioning brain, why couldn't it be socialized as well as a real child
(a la the replicant in "Blade Runner" who didn't even know she wasn't
human)?
>
>It is not for humanity to learn of AI, but for AI to learn of
>humanity. This unfortunately includes all that turgid Euro-rubbish
>which Maddox tries to lampoon. After all, it's as much his
>intellectual heritage as mine.
Well, Gilbert, I'm glad you finally acknowledged our common
cultural roots. After all, our cultures were divided only in the
last 400 years, whereas those of Britain and the rest of Europe
were sundered from 1 to 3 millennia ago. Perhaps with the "United
States of Europe" concept, this will change over the next few
decades, but it hasn't happened yet. In fact, one could argue
that the cultures of non-Anglic European countries (especially the young
people) have picked up far more transculturation from America than
Britain.
Article 1298 (34 more) in comp.ai:
From: [email protected] (Michael Ellis)
Subject: 1st person experience
Message-ID: <[email protected]>
Date: 30 Mar 89 05:34:04 GMT
References: <[email protected]> <28000[email protected]> <[email protected]> <[email protected]>
Reply-To: [email protected] (Michael Ellis)
Organization: SRI, Menlo Park, CA.
Lines: 81
> Wayne A. Throop >> Gilbert Cockton
>The gain in doubting people that think Eliza understands
>is that we don't cheapen what we mean by "understanding". The gain in
>doubting people that think the CR shows that the room/rules have no
>understanding even in principle is that we don't arbitrarily anthropomorphize
>what we will accept as an "understanding entity".
One goal is indeed to make artifacts which can perform tasks that
currently "require understanding". The success of this kind of
research program is immune from CR considerations, just exactly
as the success of chess computers is judged by the game they play
and not whether they really think like us.
But there is another question at hand, one which you refuse to even
acknowledge, Wayne: Just what is the *human* thought process?
What is that strange stuff (some of us call "understanding") that
reveals itself to us in such an incorrigibly first-person fashion?
In spite of the stunning success in computer chess technology,
it has taught us practically nothing about how human chess players
do it, and I fear that symbol crunching gadgets from the folks
who make "Machines Who Think" will have a little else to add,
no matter how operationally they might resemble you.
IMHO, phenomenological introspectivity is something that absolutely
must be explicitly designed into the artifact. It's got to have qualia.
Feelings, pain, consciousness. These are the *explicanda*, they are what
we (or at least some of us) hope to have accounted for.
No phenomenology, no mind, no understanding. Just another toy doll that
technicians from blighted backgrounds anthropomorphically project
mind onto.
>> If your AI systems "work", all well and good. But don't demand that
>> people call black white in the process. If AI folk spent less time
>> trying to redefine everyday language, people might trust them more.
>This situation doesn't arise in the CR. In fact, the CR's premise is
>that "people's intuition" from outside the room leads them to think
>the room understands, and "people's intuition" once they've seen inside
>the room leads them to think otherwise.
Searle's premises here are that:
An entity is conscious if and only if it is like something
to be that entity.
If you *are* the entity in question, your consciousness is the only
one that can possibly be present: Same stuff, same consciousness.
There is only one way to be the same thing. (This is my inference
and not something I recall Searle saying, so I could be wrong).
The systems response is equivalent to asserting:
There is more than one way to be the entity in question.
The same stuff "been in different ways" (first, qua intentional
system, second, qua symbol cruncher) can give rise to distinctly
different consciousnesses.
There are different ways to be the same thing.
Maybe the systems response is true. It's a wild leap of faith, as I've
remarked before, which is not justified by any argument from its
advocates.
>So, we aren't asking to call
>black white. We are asking whether black should be defined functionally
>(in terms of the light it reflects) or structurally (in terms of which
>pigments it is constructed of).
There's a third possibility that you have either forgotten
or overlooked, Wayne:
Black is a qualitative experience that is revealed to
us via direct 1st person experience.
Just how do you deal with qualia functionally or structurally, Wayne?
Am I correct in inferring that you think we must, for ideological
reasons, ban qualia from the study of the mind?
-michael
Article 1299 (33 more) in comp.ai:
From: [email protected] (Stephen Smoliar)
Subject: Re: Where might CR understanding come from (if it exists)
Summary: evolution
Message-ID: <[email protected]>
Date: 31 Mar 89 14:57:07 GMT
References:
Sender: [email protected]
Reply-To: [email protected] (Stephen Smoliar)
Organization: USC-Information Sciences Institute
Lines: 36
In article <[email protected]> [email protected] (Frans van Otten) writes:
>
>My personal opinion on this is as follows. In the evolutionary process,
>with "survival of the fittest", you have to behave in such a way that you
>will survive long enough to raise a new generation. As the level of
>complexity of the organism increases, it will have to do more "information
>processing": to find food, to protect against enemies, etc. My point:
>intelligence etc. developed out of a need to determine how to behave in
>order to survive. So the behaviourist approach is justified: "when the
>system seems to act intelligently, it *is* intelligent".
>
>Then we invented the computer. We start wondering: can we make this
>machine intelligent ? Before we can write a program for this, we must
>understand the algorithm humans use. This proves to be very difficult.
>Research is hindered by people claiming that understanding requires very
>mysterious causal powers which computers, due to their design, can never
>have. Gilbert Cockton even claims that because human minds are not
>artifacts, while computer systems always will be, there will always be
>performance differences. Apart from the fact that this statement is
>nonsense, it is not of any importance to AI-research.
I find myself basically sympathetic to this approach. However, because
recently our Public Television Network has begun a series of programs about
current problems in American education, I have been toying with a darker side
of this evolutionary model. Let us accept Frans' premise that intelligent
behavior emerges because it is necessary for survival (i.e., if you lack
physical virtues like strength or speed, you need brains). Then the computer
comes along, sort of like the cherry on top of this monstrous technological
sundae. At each step in the history of technology, machines have made
intelligent behavior less and less necessary for survival. Is there a
danger that, as machines increase their potential for "intelligent behavior,"
they will "meet" the corresponding human potential, which is in decline?
Hopefully, this will not be the case. Hopefully, we, as humans, will have to
become MORE intelligent in order to interact with the very intelligent machines
we build. I just wonder whether or not the technological entrepreneurs who
wish to fashion the world in their image will see it that way.
Article 1300 (32 more) in comp.ai:
From: [email protected] (the hairy guy)
Subject: Re: Where might CR understanding come from (if it exists)
Message-ID: <[email protected]>
Date: 1 Apr 89 02:23:40 GMT
References:
Reply-To: [email protected] (the hairy guy)
Organization: The Institute for Advanced Partying
Lines: 93
In article <[email protected]> [email protected] (Frans van Otten) writes:
>This discussion has degraded into a fight between two groups with different
>viewpoints:
>
> 1. Humans have some mysterious powers that are responsible for their
> having a mind. Animals might also have these powers, maybe even
> martians. This property might be inherent to the building material;
> carbon-hydrogen has it, Si doesn't.
>
> 2. Understanding etc. are properties which arise from a certain way to
> process information. The information theory is what matters, not
> the way it is implemented. If we humans can do it using our hardware
> (neurons etc), then computers are able to do this using theirs.
>
I think there is at least one alternative here. In the characterization
of (2), I think there is a certain ambiguity in the use of the term
"information." The stains on my coffee cup carrie the information that
it contained coffee yesterday, but is this information _for_the_coffee_
_cup?_ Certainly not. My mind carries the informatyion that I drank
coffee yesterday, and this _is_ information _for_ me. So there is a
fundamental difference between two things that we would call information.
Now we can ask the question: what is the minimum amount of information
that an "information processing" sequence must contain in order to
be an instance of mentality? Now one thing that seems reasonable is that
the sequence must be able to carry the information that it is _about_
something (other than itself). For instance, if I am thinking about
alligators, I must know that I am thinking about alligators, or else
I wouldn't be thinking about alligators (this is not a big revelation
to most people). Now I might be mistaken, like I might be thinking
about an alligator but be attaching the name "crocodile" to it, or I
might think that alligators are small furry things that rub up against
you, while we can imagine a possible world where alligators are long
green things that would just as soon chow on you as look at you. Also,
I might not know very much about alligators. For instance, suppose I
am a little kid and my mother says to me: "Go tell your father that the
alligators are coming up from the swamps and we should leave some milk
and cookies for them." Now all I would know about alligators is that
they are something my folks are talking about. But in all these cases,
I would submit that if I think about alligators, I know that my thoughts
are directed towards alligators, it's just that the other mental images
of alligators I could appeal to are either incorrect or very sketchy.
Now, if we accept this as a pretty necessary feature of mentality, we
can ask, in the abstract, whether syntactic digital computation is a
sufficiently rich process to carry the information that it is _about_
something. If we find reason to believe that it is not, then we would
also have reason to believe (1) human mentality is not syntactic digital
computation, and (2) syntactic digital computation cannot give rise to
a system of information processing as rich as human mentality, no matter
what medium it is implemented in.
I see the Chinese Room argument as an argument that syntactic digital
computation is in fact _not_ sufficiently rich to meet the standard for
mentality I outlined above. I personally find it convincing, and would
be willing to discuss either this interpretation of the CR or the standards
above, but I believe I have shown that there is at least one reasonable
alternative to the two positions Mr van Otten describes above.
>order to survive. So the behaviourist approach is justified: "when the
>system seems to act intelligently, it *is* intelligent".
And all this time I thought there was something called "consciousness."
Imagine! But seriously, isn't the question of consciousness and
intensionality the question that makes philosophy of mind interesting in
the first place? And isn't your behavioristic brushing of them aside
tantamount to denying them as important aspects of mentality? And if you
do choose to deny this, don't you come up with the problem that we
_are_ conscious and intensional, and that that's why we're doing all
this in the first place?
Neal.
530 N 24th Ave E "I cannot disclaim, for my opinions
Duluth, MN 55812 *are* those of the Institute for
[email protected] Advanced Partying"
"Silence in El Salvador also. Just how old are "the bad old
days"? Down goes Vice President Quayle on February 3 to
urge good conduct on the Salvadorian Army. On the eve of
Quayle's speech, says the human rights office of the
Catholic Archdiocese, five uniformed troops broke into the
homes of university students Mario Flores and Jose Gerardo
Gomez and took them off. They are found the next day, just
about the time Quayle and the U.S. press are rubbing noses
at the embassy, dead in a ditch, both shot at close range.
Gomez's fingernails have signs of "foreign objects" being
driven under them. Flores's facial bones and long vertebrae
are fractured, legs lacerated, penis and scrotum bruised "as
a result of severe pressure or compression." No
investigation follows."
--Alexander Cockburn
_The_Nation_, 3 April, 1989
Article 1301 (31 more) in comp.ai:
From: [email protected] (g.l.sicherman)
Subject: Re: humans and understanding
Message-ID: <[email protected]>
Date: 31 Mar 89 20:08:36 GMT
References:
Organization: AT&T Bell Laboratories, West Long Branch, NJ
Lines: 51
> Gilbert Cockton writes:
> >... For one, human minds are not artefacts, whereas computer programs
> >always will be. This alone will ALWAYS result in performance
> >differences. ...
[email protected] (Frans van Otten) writes:
> This discussion has degraded into a fight between two groups with different
> viewpoints:
>
> 1. Humans have some mysterious powers that are responsible for their
> having a mind. Animals might also have these powers, maybe even
> martians. This property might be inherent to the building material;
> carbon-hydrogen has it, Si doesn't.
>
> 2. Understanding etc. are properties which arise from a certain way to
> process information. The information theory is what matters, not
> the way it is implemented. If we humans can do it using our hardware
> (neurons etc), then computers are able to do this using theirs.
If this is accurate, it explains why I have trouble choosing sides in
this debate! I believe that understanding always links one kind of
experience with another--that it is by nature metaphoric. The most
obvious example is words and reality. Computers have only one reality--
they do not *need* to understand. To bring up a theological example, God
understands nothing because She knows everything.
By the same token, Gilbert's statement is not convincing the determinists
in this group because it is weaker than it needs to be. We
can apply the idea of measurable "performance" *only* to tools. It has
no meaning when applied to people, except insofar as we regard people as
tools.
Popular culture does indeed influence us to regard people as tools. For
example, the typical magazine article about how our schoolchildren are
falling behind in studies implies that they must be enabled to catch
up--for *our* benefit. The public is an illusion created by print media.
Media like Netnews undermine that illusion.
-:-
A disciple of another sect once came to DRESCHER as he was
eating his morning meal. "I would like to give you this
personality test," said the outsider, "because I want you
to be happy."
DRESCHER took the paper that was offered to him and put it
into the toaster, saying, "I wish the toaster to be happy too."
--A. I. Koans
--
Col. G. L. Sicherman
[email protected]
Article 1302 (30 more) in comp.ai:
From: [email protected] (Chris Malcolm [email protected] 031 667 1011 x2550)
Subject: Understanding involves Learning?
Message-ID: <[email protected]>
Date: 30 Mar 89 18:29:47 GMT
Reply-To: [email protected] (Chris Malcolm)
Organization: Dept. of AI, Univ. of Edinburgh, UK
Lines: 31
There has been a lot of discussion lately about the Chinese Room, and
whether purely syntactic processes could understand, or even appear to
understand. In the course of this Stevan Harnad has argued that the kind
of linguistic competence needed to pass the Turing Test couldn't be
possessed by anything short of a creature potentially capable of passing
the Total Turing Test, i.e., a creature "living" in the real world, with
sensors, effectors, and no doubt, a personal history. If I have grasped
his argument properly, this is because convincing linguistic competence
will require the kind of complex internal mechanisms inevitably involved
in handling rich sensors in a capable way; the mechanisms involved in
"symbol grounding" as it is often called, although it is the whole
syntactic mechanics which needs grounding, not just the symbols.
In other words (and not as simply as these few words suggest),
convincing linguistic competence requires semantics as well as syntax.
My question is this. Does convincing linguistic competence involve
learning? For it seems to me that one of the things that happens in
human conversations is that, in many trivial little ways, in hints,
metaphors, negotiations, etc., both parties are offering one another
opportunities to learn, even trying to teach. Sooner or later a
conversational robot which couldn't learn new ideas would be suspected
of being a metal-head.
To approach it from another direction: does understanding involve
learning?
--
Chris Malcolm [email protected] 031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK
Article 1303 (29 more) in comp.ai:
From: [email protected] (Gilbert Cockton)
Subject: Re: Where might CR understanding come from (if it exists)
Message-ID: <[email protected]>
Date: 31 Mar 89 10:38:17 GMT
References:
Reply-To: [email protected] (Gilbert Cockton)
Organization: Comp Sci, Glasgow Univ, Scotland
Lines: 123
>This discussion has degraded into a fight between two groups with different
>viewpoints:
There is considerable diversity, as well as incompatibility, in the
arguments both for and against the possibility of strong AI.
You are particularly poor in your grasp of all the anti-AI arguments.
Some are based on the impossibility of simulating the brain's hardware
on a digital computer (indeed, the impossibility of accurately and
faithfully simulating ANY part of the natural world on a computer).
Others rely on epistemic arguments. Others rely on theories of
ideology which deny any possible objective status to such value laden
concepts as 'intelligence', which are symptomatic of a system of
social stratification peculiar to modern Europe (and taken on in an
even cruder form in the New World). The word doesn't have a usable
meaning in any scientific context.
The sensible approach is to identify tasks for automation, describe
them accurately and acceptably, and then proceed. Designing systems to
possess ill-defined and hardly understood properties is an act of
intellectual dishonesty at worst, and an act of intellectual mediocrity
at best. Robotics has the advantage of dealing with fairly well-defined
and understood tasks. I'd be surprised if anyone in robotics really
cares if their robots are 'intelligent'. Successful performance at a
task is what matters. This is NOT THE SAME as intelligent behaviour,
as we can have clear conditions for success for a task, but not for
intelligent behaviour. Without verification or falsification
criteria, the activity is just a load of mucking about - a headless
chicken paradigm of enlightenment.
>with "survival of the fittest", you have to behave in such a way that you
>will survive long enough to raise a new generation. As the level of
>complexity of the organism increases, it will have to do more "information
>processing": to find food, to protect against enemies, etc. My point:
>intelligence etc. developed out of a need to determine how to behave in
>order to survive. So the behaviourist approach is justified: "when the
>system seems to act intelligently, it *is* intelligent".
You equate intelligence with a high degree of information processing
(by co-location of sentences, there is no explicit or clear argument in
this paragraph). A cheque clearing system does a high degree of
information processing. It must be intelligent then - and AI was
achieved 20 years ago?
You are making a historical point. Please make it like a competent historian.
Otherwise leave evolutionary arguments alone, as you are just making things up.
>Before we can write a program for this, we must
>understand the algorithm humans use. This proves to be very difficult.
>Research is hindered by people claiming that understanding requires very
>mysterious causal powers which computers, due to their design, can never have.
'Mysterious' is true only in the sense that we do not yet understand them.
'Eternally mysterious' would not be true. What is true is that
causation in human/animal behaviour, and causation in physics, are very
different types of cause (explanatory dualism). This does not hold up
research at all, it just directs research into different directions.
Logical necessity is a further type of pseudo-causation. Its relation
to human agency is highly tenuous, and it is wrong to bet too much on
it in any research into psychology.
Computers cannot uncover mysteries. Automation research may do, in
that the task or problem must be properly studied, and it is this
study which advances knowledge, rather than the introverted computer
simulation. Attempts at computer simulation do, however, expose gaps in
knowledge, but this does not make the mystery go away - it only deepens
it. The problem is that, if studies are driven by the imperative to
automate, this will force the research into an epistemic and
methodological straitjacket. This is a narrow approach to the study
of human behaviour, and is bound to produce nonsense unless it balances
itself with other work. Hence AI texts are far less 'liberal' than
psychology ones - the latter consider opposing theories and paradigms.
>Gilbert Cockton even claims that because human minds are not
>artifacts, while computer systems always will be, there will always be
>performance differences. Apart from the fact that this statement is
>nonsense, it is not of any importance to AI-research.
It is highly relevant. I take it that you think it is nonsense because
I offer no support (reasonable) and you don't want to believe it (typical).
An artefact is designed for a given purpose. As far as the purpose is
concerned, it must be fully understood. The human 'mind' (whatever
that is - brain? consciousness? culture? civilisation? knowledge?) was
not 'designed' for a given purpose as far as I can see (i.e. I am not a
convinced creationist, though neither have I enough evidence to doubt
some form of creation). As 'mind' was not designed, and not by us more
importantly, it is not fully understood for any of its activities
('brains' are of course, e.g. sleep regulation). Hence we cannot yet
build an equivalent artefact until we understand it. Building in
itself does not produce understanding. It can expose ignorance, but
this is not cured by further building, but by further study. Strong AI
does not do this study.
My argument is initially from scepticism. I extend the argument to all
forms of (pseudo-)intellectual activity which cannot improve our
understanding. Strong AI, as modelling without study, i.e. without
directed attempts to fill gaps in knowledge by proper, liberal, study,
is one such dead-end. Computer modelling based on proper liberal
study is more profitable, but only as a generator of new hypotheses.
It does not establish the truth of anything. Finally, establishing the
truth of anything concerning human agency is far, far harder than
establishing the truth about the physical world, and this is hard
enough and getting harder since Quantum interpretations.
We have institutionalised research. There are areas to be studied, and
a permanent role in our societies for people who are drawn to advancing
knowledge. Unfortunately, too many (of the weaker?) researchers today
see any argument on methodological grounds as an attack on research, an
attack on their freedom, a threat to the advance of scientific
knowledge, a threat to their next funding.
The purpose of research is to advance knowledge. Advancing knowledge
requires an understanding of what can, and cannot, count as knowledge.
In our bloated academia, respect for such standards is diminishing.
Research is not hindered by ideas, but by people acting on them. If
strong AI cannot win the arguments in research politics, then tough,
well - ironic really, for without research politics, it would not have
grown as it did in the first place. Those that live by the flam, die
by the flam.
--
Gilbert Cockton, Department of Computing Science, The University, Glasgow
[email protected]
Article 1304 (28 more) in comp.ai:
From: [email protected] (g.l.sicherman)
Subject: Re: computers and users
Message-ID: <[email protected]>
Date: 1 Apr 89 00:14:16 GMT
References: <[email protected]>
Organization: AT&T Bell Laboratories, West Long Branch, NJ
Lines: 34
In a footnote to my remark about the computer-user relationship, Herman
Rubin (l.cc.purdue.EDU!cik) has written to me:
> > It's important to remember that the user is part of the loop. If a
> > computer has no users, is it a computer?
>
> I disagree. A computer is a device which "mechanically" performs certain
> types of symbol manipulation. Possibly "mechanically" should be completely
> deleted. If an electronic glitch caused a computer to perform certain
> operations, and then erase the results of those actions, it would still
> be computing, and would still be a computer.
This will remind many of the scholastics' paradox about the tree that
falls with nobody to hear. Five hundred years ago, print media pushed us toward
an objective model of reality; now electronic media are pushing us
toward an interactive model.
The implications for A.I. are obvious if you use an interactive model.
What does a man do when there's nobody around to talk to him? What
does a computer program do when there's nobody around to talk to it?
To *use* an artificial intelligence as an interlocutor requires you
to sustain the same kind of illusion as when you "listen" to Michael
Jackson on the stereo.
-:-
One day SIMON was walking to the conference hall when he
met WEIZENBAUM, who said: "I have a problem for you to solve."
SIMON replied, "Tell me more about your problem," and walked
on.
--A. I. Koans
--
Col. G. L. Sicherman
[email protected]
Article 1305 (27 more) in comp.ai:
From: [email protected] (Thomas G Edwards)
Subject: Re: the surrealism of dreams
Summary: PDP Dreams
Keywords: Boltzmann Machines Dreams PDP
Message-ID: <[email protected]>
Date: 31 Mar 89 22:27:51 GMT
References: <[email protected]> <[email protected]>
Reply-To: [email protected] (Thomas G Edwards)
Organization: The Johns Hopkins University - HCF
Lines: 36
In article <[email protected]> [email protected] (R.) writes:
>In article <[email protected]> Andy Ylikoski writes:
>>I would like to try to contribute to the discussion involving dreams
>>"not following the usual laws of nature".
A while back I mentioned that there are some analogies between dreaming
and Boltzmann Machine learning in PDP. I am sure that the authors
do not mean to claim that dreaming is Boltzmann Machine learning
unclamped, but just that there are interesting similarities.
Hinton and Sejnowski point out that Crick and Mitchison (1983)
"have suggested that a form of reverse learning might occur
during REM sleep in mammals. Their proposal was based on the
assumption that parasitic modes develop in large networks that
hinder the distributed storage and retrieval of information." They
quote Crick and Mitchison: "More or less random stimulation of the
forebrain by the brain stem will tend to stimulate the
inappropriate modes of brain activity ... and especially those which
are too prone to be set off by random noise rather than by specific
signals."
Boltzmann Machine learning uses a similar concept. Learning is
split into two phases: a phase+, where positive Hebbian learning
occurs with input and output units clamped to their proper values,
and a phase-, where negative Hebbian learning occurs on the
unclamped network. This free-running phase randomly stimulates the
parasitic modes described above, and the associated connections get
de-strengthened by the negative Hebbian learning (the reverse of the
usual rule, which strengthens units that are simultaneously active),
reducing the combined importance of those "erroneous" modes.
Crick, F. and Mitchison, G. (1983). The function of dream sleep.
Nature, 304, 111-114.
Rumelhart, McClelland eds (1987). Parallel Distributed Processing.
MIT Press, 282-317.
-Thomas Edwards
Article 1306 (26 more) in comp.ai:
From: [email protected] (Greg Lee)
Subject: Re: Understanding involves Learning?
Message-ID: <[email protected]>
Date: 1 Apr 89 17:57:30 GMT
References: <[email protected]>
Organization: University of Hawaii
Lines: 23
From article <[email protected]>, by [email protected] (Chris Malcolm [email protected] 031 667 1011 x2550):
" There has been a lot of discussion lately about the Chinese Room, and
" whether purely syntactic processes could understand, or even appear to
" understand.
You've made too much sense of the discussion. It's conceded for the
sake of the CR argument that the processes do appear to understand.
The CR does pass the Turing Test and so does possess convincing
linguistic competence.
" My question is this. Does convincing linguistic competence involve
" learning? ...
Sure. You make a very good point here.
" Sooner or later a
" conversational robot which couldn't learn new ideas would be suspected
" of being a metal-head.
Yes. This suspicion has been raised about certain participants
in this discussion.
Greg, [email protected]
Article 1307 (25 more) in comp.ai:
From: [email protected] (Greg Lee)
Subject: Re: Where might CR understanding come from (if it exists)
Message-ID: <[email protected]>
Date: 1 Apr 89 18:08:47 GMT
References: <[email protected]>
Organization: University of Hawaii
Lines: 9
From article <[email protected]>, by [email protected] (Gilbert Cockton):
" ... As 'mind' was not designed, and not by us more
" importantly, it is not fully understood for any of its activities
" ('brains' are of course, e.g. sleep regulation). Hence we cannot yet
" build an equivalent artefact until we understand it. ...
It doesn't follow. Think of a diamond, for instance.
Greg, [email protected]
Article 1308 (24 more) in comp.ai:
From: [email protected] (Brian Gilstrap [5-3929])
Newsgroups: comp.ai,comp.lang.misc
Subject: Re: NEXPERT experiences ?
Message-ID: <[email protected]>
Date: 31 Mar 89 13:20:00 GMT
References: <[email protected]>
Reply-To: [email protected] (Brian Gilstrap [5-3929])
Organization: Southwestern Bell Telephone Co
Lines: 24
In article <[email protected]> [email protected] (Raymond Fink) writes:
=I'd like to hear from people who are using NEXPERT Object from
=Neuron Data for industrial-strength applications. We've looked at the
=pictures and seen the demos, but would like to hear from some users.
=
=Is it meeting your expectations?
=Is performance adequate on workstation-class machines (suns, uvaxen, etc)?
=How about developing customized end-user interfaces?
=Do you miss having Lisp (or some other language) underneath?
=Things you like or dislike, especially in comparison to other high-end
=E.S. tools?
=
=Replies via email, please.
I'd like to be CC-ed on these replies (or perhaps they could be posted to
the net if there are others out there who are in the same position?)...
Thanks,
Brian R. Gilstrap Southwestern Bell Telephone
One Bell Center Rm 17-G-4 ...!ames!killer!texbell!sw1e!uucibg
St. Louis, MO 63101 ...!bellcore!texbell!sw1e!uucibg
(314) 235-3929
#include
Article 1309 (23 more) in comp.ai:
From: trevor@mit-amt (Trevor Darrell)
Subject: Re: Where might CR understanding come from (if it exists)
Message-ID: <3684@mit-amt>
Date: 2 Apr 89 03:29:26 GMT
References:
Reply-To: [email protected] (Trevor Darrell)
Organization: MIT Media Lab, Cambridge MA
Lines: 25
In article <2705@crete> [email protected] (Gilbert Cockton) writes:
>...
>My argument is initially from scepticism. I extend the argument to all
>forms of (pseudo-)intellectual activity which cannot improve our
>understanding.
>...
>The purpose of research is to advance knowledge. Advancing knowledge
>requires an understanding of what can, and cannot, count as knowledge.
>In our bloated academia, respect for such standards is diminishing.
Excuse me, but exactly how does one determine when an activity can or
cannot improve understanding? And have you published your test
of what can, and cannot, count as knowledge? ``References?''
Would you have had all intellectual explorations throughout the
ages constrained by these tests? All the artistic explorations? Would you
prescribe them as an absolute guide to your child's education?
Are you perhaps a bit lacking in the rigor of your debate? (Maybe diatribe
is a better term?...)
Trevor Darrell
MIT Media Lab, Vision Science.
[email protected]
Article 1310 (22 more) in comp.ai:
From: [email protected] (andrew)
Subject: CR and consciousness
Keywords: there's a gilbert up my nose
Message-ID: <[email protected]>
Date: 2 Apr 89 01:55:47 GMT
Organization: National Semiconductor, Santa Clara
Lines: 37
The distinction made between synthetic consciousness (or lack of) and organic
consciousness seems to be far greater than that between different organisms,
if I sense the recent CR-related postings correctly. It's a fact that people
in general are more liberal with their attribution of consciousness as they
take an increasingly holistic or religious or numinous or drug-induced or
allegorical view of their world. For many people, this may extend right down
to organisms as evolutionarily primitive as trees and plants. I don't think
it's helpful to dismiss this "superstition" as irrelevant, since it does
permeate a significant proportion of the world's current population. It's
useful to note because it reflects our ancient ingrained cultural biases,
and shows that a hard fight is necessary to get an "artificial consciousness"
concept accepted. In some old way, "consciousness" and "nature/natural" are
intertwined concepts for most human beings.
I disagree with Sicherman's "computers have only one reality". Today's maybe,
but a perfect soldier could be similarly described.
Perhaps the way to go in creating "true AI" is to grow it.
I suggest the path to take is to create an ensemble of highly adaptive,
minimally-hardwired automatons with a high degree of interactivity in
a continuously challenging (but not too novel) environment. Only with
the emergence of machine culture will the sought-after properties
emerge in an unsupervised and spontaneous way. Multicellular Automata.
Then you can figure out what "consciousness" is by taking apart the watch!
Problem solved.
Since it's April 1st, I can't resist a tentative definition of human
consciousness:
That instinctive feeling of immediate revulsion toward, and desire to
distance one's views from, Gilbert Cockton, having just been informed It is
an automaton.
=====
Andrew Palfreyman USENET: ...{this biomass}!nsc!logic!andrew
National Semiconductor M/S D3969, 2900 Semiconductor Dr., PO Box 58090,
Santa Clara, CA 95052-8090 ; 408-721-4788 there's many a slip
'twixt cup and lip
Article 1311 (21 more) in comp.ai:
From: [email protected] (Stevan Harnad)
Subject: Re: Understanding involves Learning?
Summary: Learning Capacity is Necessary, Actual Prior Learning Is Not
Message-ID:
Date: 2 Apr 89 04:40:07 GMT
References: <[email protected]>
Organization: Rutgers Univ., New Brunswick, N.J.
Lines: 30
[email protected] (Chris Malcolm) of Dept. of AI, Univ. of Edinburgh, UK
asks:
" Does convincing linguistic competence involve learning?... does
" understanding involve learning?
There is good reason to believe that a candidate that will be able to
pass the Linguistic Turing Test (LTT) will have to have and draw
indirectly upon the robotic capacities that would be needed in
order to pass the Total Turing Test (TTT), and that these will include
the ability to learn. Suitably defined, "learning" is even involved in
the normal course of coherent discourse, since information is
exchanged, and the change must be reflected in the ensuing discourse.
It is another question, however, whether the candidate would
necessarily have had to arrive at its LTT-passing ability through the
exercise of its learning capacity, in real time. In principle, the
learning capacity, like the robotic capacity, could be latent --
present as a functional capability, but not yet used directly. In other
words, there's no reason why a device with the functional wherewithal
to pass the LTT couldn't have sprung, like Athena, fully developed from
the head of Zeus (or some other artificer). In that sense there's
nothing magic about learning or development (or even about real -- as
opposed to apparent -- experiential history).
--
Stevan Harnad INTERNET: [email protected] [email protected]
[email protected] [email protected] [email protected]
BITNET: harnad@pucc.bitnet CSNET: harnad%[email protected]
(609)-921-7771
End of article 1311 (of 1332)--what next? [npq]
Article 1312 (20 more) in comp.ai:
From: ssi[email protected] ( SINGH S - INDEPENDENT STUDIES )
Subject: Re: Thinking about the reduction of
Message-ID: <[email protected]>
Date: 2 Apr 89 05:29:53 GMT
References: <[email protected]> <[email protected]>
Reply-To: ssi[email protected] ( SINGH S - INDEPENDENT STUDIES )
Organization: U. of Waterloo, Ontario
Lines: 11
Agreed, if anything, the existence of higher intelligences creates
entropy faster than anything else.
This may sound misplaced, but can anyone CONCISELY and PRECISELY
explain the strong and weak anthropic principles? What are the implications,
if any, for intelligent machines? Is this more related to the
idea of life, or intelligence? I.e., does the anthropic principle say
something exclusively about life, or exclusively about intelligence, or
both?
Sanjay Singh
End of article 1312 (of 1332)--what next? [npq]
Article 1313 (19 more) in comp.ai:
From: ssi[email protected] ( SINGH S - INDEPENDENT STUDIES )
Subject: Re: Where might CR understanding come from (if it exists)
Message-ID: <[email protected]>
Date: 2 Apr 89 05:43:50 GMT
References:
Reply-To: ssi[email protected] ( SINGH S - INDEPENDENT STUDIES )
Organization: U. of Waterloo, Ontario
Lines: 19
Regarding the division of thought: I think BOTH are right in a way, but
both also have their flaws.
Carbon and Hydrogen do have something going for them that Silicon does not:
that property of being INCREDIBLY plastic. If you could freeze someone at
time t and map out the neural networks, then freeze them again at time
t+5, you would find changes in the nets. Current computer technology does
not allow circuitry to change itself. Unless there is a major revolution
in the design of hardware, I honestly think the first truly intelligent
computer will be made with organic materials. Who knows? It may even be
grown using recombinant DNA or something like that. There is no way we can
match the plasticity of the brain with current technology.
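The freeze-and-compare experiment is easy to state in code; here is a
minimal sketch (Python, with an arbitrary made-up rewiring rule standing
in for real plasticity):

import random

random.seed(1)
# the "neural net" mapped out at time t: a set of connections
net_t = {(i, j) for i in range(8) for j in range(8)
         if i < j and random.random() < 0.3}

def rewire(net, steps=5):
    # Arbitrary plasticity rule: each step drops one connection at
    # random and grows another.
    net = set(net)
    for _ in range(steps):
        net.discard(random.choice(sorted(net)))
        i, j = sorted(random.sample(range(8), 2))
        net.add((i, j))
    return net

net_t5 = rewire(net_t)   # the net "frozen" again at time t+5
print("grown: ", sorted(net_t5 - net_t))
print("pruned:", sorted(net_t - net_t5))

On a computer the "rewiring" is only a change in stored data, which is
exactly the point above: the hardware itself stays fixed.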
The second idea is very pure, and very open-ended. While I admire this
purity and free-form style, I think it has become undisciplined. Such
all-encompassing models should be able to express every macroscopic
observation in terms of the theoretical models. No one seems to be able
to agree on even the most basic of definitions. Plenty of vigour, but
no rigour.
End of article 1313 (of 1332)--what next? [npq]
Article 1314 (18 more) in comp.ai:
From: [email protected] (Greg Lee)
Subject: Re: Where might CR understanding come from (if it exists)
Message-ID: <[email protected]>
Date: 2 Apr 89 14:58:14 GMT
Organization: University of Hawaii
Lines: 22
It's hard to see where CR understanding might come from (if it exists).
It's hard to see where understanding might come from (if it exists).
It's hard to see where consciousness might come from (if it exists).
It's hard to see where meaning might come from (if it exists).
I know why it's hard. These things don't exist. There are theories to
the contrary that are embedded in the way we ordinarily talk about
people and their behavior. Meaning is the most obvious case. When two
sentences are paraphrases, we say they 'mean the same thing'. Then
there must be a thing that they both mean, right? Thinking of that as a
theory, it might be right, or it might be wrong. We ought to devise
alternative theories and look for evidence. We have. I think it's
wrong, myself, but opinions differ. If it does turn out to be wrong,
then the effort to program meaning into a machine can never be
successful -- not because meaning is essentially human, or essentially
organic, or essentially analogue, or essentially denotational, or any of
the other straws that have been grasped at in this discussion. But
because there's simply no such thing in the world to be found in us or
to be put into a machine.
Greg, [email protected]
End of article 1314 (of 1332)--what next? [npq]
Article 1315 (17 more) in comp.ai:
From: [email protected] (Cliff Joslyn)
Newsgroups: comp.ai.neural-nets,comp.ai,comp.theory,talk.philosophy.misc,sci.philosophy.tech,sci.bio.technology
Subject: CFP: American Society for Cybernetics 1989
Message-ID: <[email protected]>
Date: 2 Apr 89 20:39:55 GMT
Organization: SUNY Binghamton, NY
Lines: 110
CALL FOR PAPERS
the 1989 Meeting of the American Society for Cybernetics,
in Virginia Beach, Virginia on 9-12 November.
Pre-Conference Tutorial: 8 November.
Extensively, cybernetics can be defined by the connections it evokes.
Modern cybernetics was born forty years ago in a series of intense,
interdisciplinary conferences on "circular causal and feedback
mechanisms" which drew on anthropology, electrical engineering,
psychology, biology, and philosophy, among many other fields. From the
conversations and controversies that ensued arose the ideas of
organizational closure, self-reference, attractors, and other recognitions
of essential circularities in complex systems. Their influence has been
felt in areas as diverse as immunology and political science, family
therapy and information systems, education and ethics.
Intensively, cybernetics could be defined as the search for "those notions
which pervade all purposive behavior and all understanding of our world",
as Warren McCulloch wrote of those early discussions, and the concern
with the tenability and consequences of our conceptions of knowing,
causality, and the laws of nature.
The challenge and excitement of cybernetics lies in the difference
between these two definitions, and the bond. It is to go beyond
philosophizing and tool-building alike, to embrace distinction, not be
engulfed by it, and to let creativity and rigor inform, not exclude, one
another.
These are the concerns of the conference:
1. What questions does a cybernetician ask, and how
are these understood by workers in other fields?
2. What are the lessons of more recent connections for
understanding understanding?
3. What social and scientific processes underlie
change (or progress?) in cybernetics as a field?
They will be articulated in a series of plenary sessions on:
Self-organization, computer technology, & management,
The phenomena of language in the machine, animal, & organization,
Modeling as definition, reflection, & intervention,
The social construction of knowledge, and
Learning & helping.
PROCESS. To explore connecting in conversation, the conference will
include special issue seminars that will consider a particular topic in
greater depth and will include a packet of readings to be mailed to
participants before the conference; an ongoing participatory laboratory,
stocked with mechanical and electronic tools for modeling,
experimentation, and expression; "Questions of Cybernetics", a special full
day pre-conference tutorial, linked from the conference to sites around the
country by interactive television; and a cybernetics fair and other
unscheduled time in which to pursue the conversations and respond to the
concerns that arise during the conference.
PROGRAM. To encourage and facilitate preparation on the part of
presenters and other participants, we will publish a Conference Program,
including abstracts for each presentation and workshop, and theme
statements for each plenary session. The Program will be mailed to
conference registrants in early fall.
STUDENTS AND NEW PARTICIPANTS: To broaden participation, we plan
to provide a limited number of travel scholarships and awards. Please
contact the organizers at the address below for more information.
DEADLINE. We invite your participation. Proposals must be received by
May 1, 1989. They should include:
1. a title and abstract (150-300 words);
2. for seminar proposals only, a short reading list (30-50
pages of reading);
3. format (e.g. paper presentation, seminar, performance,
workshop, exhibit, or demonstration) and corresponding
technical and audio-visual requirements.
Since items 1 and 2 will be published in the Conference Program,
they must be submitted in one of the following formats:
camera ready copy OR
5 1/4" or 3 1/2 " MS-DOS 3.3 compatible floppy disk:
ASCII, Microsoft Word(, Wordperfect(, or Wordstar( OR
3 1/2" Macintosh( compatible floppy disk: Text, Microsoft
Word(, or MacWrite( .
Please mail proposals to:
Christoph Berendes
Center for Cybernetic Studies
in Complex Systems
Old Dominion University
Norfolk, VA 23529-0248
(804) 683-4558
Internet: [email protected]
Usenet: {hplabs,sun}!well!chrisber
PLEASE POST
--
O---------------------------------------------------------------------->
| Cliff Joslyn, Cybernetician at Large
| Systems Science, SUNY Binghamton, [email protected]
V All the world is biscuit shaped. . .
End of article 1315 (of 1332)--what next? [npq]
Article 1316 (16 more) in comp.ai:
From: [email protected] (Cliff Joslyn)
Newsgroups: news.groups,comp.ai.neural-nets,comp.ai,comp.theory,talk.philosophy.misc,sci.philosophy.tech,sci.bio.technology,comp.theory.self-org-sys,comp.theory.dynamic-sys
Subject: Announcing the Systems and Cybernetics mailist list
Message-ID: <[email protected]>
Date: 2 Apr 89 21:21:09 GMT
Organization: SUNY Binghamton, NY
Lines: 50
End of article 1316 (of 1332)--what next? [npq]
Article 1317 (15 more) in comp.ai:
From: [email protected] (Donald E Walker)
Subject: ACL Annual Meeting, 26-29 June, Vancouver; program & registration info
Message-ID: <[email protected]>
Date: 2 Apr 89 22:59:33 GMT
Sender: [email protected]
Lines: 574
please use as much information as seems appropriate for your bulletin board,
digest, or publication
The printed version of the following program and registration information will
be mailed to ACL members by the end of the week. Others are encouraged to use
the attached form or write for a program flyer to the following address:
Dr. D.E. Walker (ACL)
Bellcore - MRE 2A379
445 South Street - Box 1910
Morristown, NJ 07960-1910, USA
or send net mail to [email protected] or uunet.uu.net!bellcore!walker,
specifying "ACL Annual Meeting Information" on the subject line.
ASSOCIATION FOR COMPUTATIONAL LINGUISTICS
27th Annual Meeting
26-29 June 1989
Instructional Resources Centre (IRC)
University of British Columbia
Vancouver, British Columbia, Canada
SUNDAY EVENING, 25 JUNE
7:00-9:00 Tutorial Registration and Reception
Fort Camp Lounge, Walter Gage Residence Halls Complex
MONDAY MORNING, 26 JUNE
9:00-12:30 TUTORIAL SESSIONS
Theatre 4 Constrained Grammatical Formalisms
Aravind Joshi, K. Vijay-Shanker, & David Weir
Theatre 5 Psycholinguistic Approaches to Language Comprehension
Michael Tanenhaus
MONDAY AFTERNOON, 26 JUNE
2:00-5:30 TUTORIAL SESSIONS
Theatre 4 Morphology and Computational Morphology
Richard Sproat
Theatre 5 Speech Technology
Jared Bernstein & Patti Price
MONDAY EVENING, 26 JUNE
7:00-9:00 Conference Registration and Reception
Lobby
8:00-9:30 PANEL: Computational Linguistics & Research in the Humanities
Don Walker (Chair), Patrick Hanks, Nancy Ide,
Mark Liberman, Martha Palmer, Antonio Zampolli
REGISTRATION: TUESDAY-THURSDAY
8:00-5:00 Lobby; until noon Thursday
EXHIBITS: TUESDAY-THURSDAY
9:00-6:00 Various rooms on lobby floor; until 1:30pm Thursday
***** ALL TECHNICAL SESSIONS IN THEATRE 2 *****
TUESDAY MORNING, 27 JUNE
9:00-9:15 Opening remarks and announcements
9:15-9:40 A Transfer Model Using a Typed Feature Structure Rewriting
System with Inheritance
Remi Zajac
9:40-10:05 A Semantic-Head-Driven Generation Algorithm for
Unification-Based Formalisms
Stuart M. Shieber, Gertjan van Noord, Robert Moore,
& Fernando C. N. Pereira
10:05-10:35 Break
10:35-11:00 A Three-Valued Interpretation of Negation in Feature Structure
Descriptions
Anuj Dawar & K. Vijay-Shanker
11:00-12:00 INVITED TALK: Natural Language and Knowledge Representation:
So Close Together Yet So Far Apart
James Allen
TUESDAY AFTERNOON, 27 JUNE
1:30-1:55 Logical Forms in the Core Language Engine
Hiyan Alshawi & Jan van Eijck
1:55-2:20 Unification-Based Semantic Interpretation
Robert C. Moore
2:20-2:45 Reference to Locations
Lewis G. Creary, J. Mark Gawron, & John Nerbonne
2:45-3:05 Break
3:05-3:30 Getting at Discourse Referents
Rebecca J. Passonneau
3:30-3:55 Conversationally Relevant Descriptions
Amichai Kronfeld
3:55-4:20 Cooking Up Referring Expressions
Robert Dale
4:20-4:40 Break
4:40-5:05 Word Association Norms, Mutual Information and Lexicography
Kenneth Church & Patrick Hanks
5:05-5:30 Lexical Access in Connected Speech Recognition
Ted Briscoe
5:30-5:55 Dictionaries, Dictionary Grammars and Dictionary Entry Parsing
Mary S. Neff & Branimir K. Boguraev
WEDNESDAY MORNING, 28 JUNE
9:00-9:25 Some Chart-Based Techniques for Parsing Ill-Formed Input
Chris Mellish
9:25-9:50 On Representing Governed Prepositions and Handling `Incorrect'
and Novel Prepositions
Hatte Blejer & Sharon Flank
9:50-10:15 Acquiring Disambiguation Rules from Text
Donald Hindle, AT&T Bell Laboratories
10:15-10:45 Break
10:45-11:10 The Effects of Interaction on Spoken Discourse
Sharon L. Oviatt & Philip R. Cohen
11:10-12:10 INVITED TALK: Repair and the Organization of Natural Language
Emmanuel Schegloff
WEDNESDAY AFTERNOON, 28 JUNE
1:30-1:55 How to Cover a Grammar
Rene Leermakers
1:55-2:20 The Structure of Shared Forests in Ambiguous Parsing
Sylvie Billot & Bernard Lang
2:20-2:50 Break
2:50-3:15 A Calculus for Semantic Composition and Scoping
Fernando Pereira
3:15-3:40 A General Computational Treatment of the Comparative
Carol Friedman
3:40-4:05 The Lexical Semantics of Comparative Expressions
Duane E. Olawsky
4:05-4:25 Break
4:25-4:50 Automatic Acquisition of the Lexical Semantics of Verbs from
Sentence Frames
Mort Webster & Mitch Marcus
4:50-5:15 Computer Aided Interpretation of Lexical Cooccurrences
Paola Velardi, Maria Teresa Pazienza, & Stefano Magrini
5:15-5:40 A Hybrid Approach to Representation in the Janus Natural
Language Processor
Ralph M. Weischedel
6:30-7:30 RECEPTION
Graduate Center
7:30-10:00 BANQUET
Museum of Anthropology
Presidential Address: Candy Sidner
THURSDAY MORNING, 29 JUNE
9:00-9:25 Planning Text for Advisory Dialogues
Johanna D. Moore & Cecile L. Paris
9:25-9:50 Two Constraints on Speech Act Ambiguity
Elizabeth A. Hinkelman & James F. Allen
9:50-10:10 Break
10:10-11:10 INVITED TALK: How Many Words Do People Know?
Mark Liberman
11:10-12:00 BUSINESS MEETING & ELECTIONS
Nominations for ACL Offices for 1990
President: Jerry Hobbs, SRI International
Vice President: Ralph Grishman, NYU
Secretary-Treasurer: Don Walker, Bellcore
Executive Committee (1990-1992): Kathleen McKeown, Columbia
Executive Committee (1990-1991): Wolfgang Wahlster
Universitaet des Saarlandes
Nominating Committee (1990-1992): Candy Sidner, BBN
THURSDAY AFTERNOON, 29 JUNE
1:30-1:55 Treatment of Long Distance Dependencies in LFG and TAG:
Functional Uncertainty in LFG is a Corollary in TAG
Aravind K. Joshi & K. Vijay-Shanker
1:55-2:20 Tree Unification Grammar
Fred Popowich
2:20-2:45 A Generalization of the Offline Parsable Grammars
Andrew Haas
2:45-3:15 Break
3:15-3:40 Discourse Entities in Janus
Damaris M. Ayuso
3:40-4:05 Evaluating Discourse Processing Algorithms
Marilyn A. Walker
4:05-4:30 A Computational Mechanism for Pronominal Reference
Robert J.P. Ingria & David Stallard
4:30-4:50 Break
4:50-5:15 Parsing as Natural Deduction
Esther Koenig
5:15-5:40 Efficient Parsing for French
Claire Gardent, Gabriel G. Bes, Pierre-Francois Jurie,
& Karine Baschung
PROGRAM COMMITTEE
Joyce Friedman, Boston University
Barbara Grosz, Harvard University
Julia Hirschberg, AT&T Bell Laboratories (Chair)
Robert Kasper, USC Information Sciences Institute
Richard Kittredge, Universite de Montreal
and Odyssey Research Associates
Beth Levin, Northwestern University
Steve Lytinen, University of Michigan
Martha Palmer, Unisys
Fernando Pereira, SRI International
Carl Pollard, Carnegie-Mellon University
Len Schubert, University of Rochester
Mark Steedman, University of Pennsylvania
TUTORIALS
26 June 1989
CONSTRAINED GRAMMATICAL FORMALISMS
Aravind Joshi, University of Pennsylvania
K. Vijay-Shanker, University of Delaware
David Weir, Northwestern University
Our goal is to review a range of constrained grammatical formalisms
by considering the following aspects: key features of language
structure the formalisms try to capture, linguistic adequacy,
mathematical and computational properties, parsing strategies,
kinds of structural descriptions supported, strategies for embedding
them in the unification framework, etc. We will focus on those
formalisms characterized as mildly context-sensitive. The presentation
will be based on examples rather than on formal proofs. Therefore,
it will be appropriate for a wide range of computational linguists,
even those whose investments in the construction of a lexicon and
a grammar do not allow them the luxury of playing with alternative
formalisms now.
PSYCHOLINGUISTIC APPROACHES TO LANGUAGE COMPREHENSION
Michael Tanenhaus, University of Rochester
I will present a selective review of recent psycholinguistic work
in three areas: (1) word recognition and lexical access; (2) parsing,
with a focus on attachment ambiguity and gap-filling; and (3)
anaphora resolution. In each of these areas, I will summarize some
of the influential ideas and the empirical results that have emerged
during the last few years. Basic information will be provided
about some of the methodological advances that are enabling
psycholinguists to provide detailed information about immediate or
``on-line'' comprehension processes. I will also identify some of
the controversial issues that I expect will be the focus of
psycholinguistic research for the next few years, and I will outline
some areas where more interaction between computational linguistics
and experimental psycholinguists would be especially fruitful.
MORPHOLOGY AND COMPUTATIONAL MORPHOLOGY
Richard Sproat, AT&T Bell Laboratories
Why study the structure of words computationally? Why not just look
up words in a dictionary without considering their internal structure?
Knowledge of morphology is useful in applications as diverse as
speech synthesis, parsing, machine translation, spelling correction,
and Japanese text-editing. The tutorial will outline some major
results in theoretical morphology which affect computational issues,
including recent linguistic work on the phonological, syntactic
and semantic properties of words. Particular pieces of work in
computational morphology will be discussed, all of which deal with
theoretically interesting issues to a greater or lesser extent,
and many of which were done with a particular application in mind.
Among the systems discussed will be the Decomp module of the MITalk
text-to-speech system, and the KIMMO Two-Level morphological analysis
system. There will also be some discussion of computational work
in areas closely related to morphology, including the interpretation
of compound nouns in English, and the recognition of word boundaries
in inputs where such boundaries are not marked, such as speech or
Chinese text. Some of the recent debate on the computational
complexity of morphological analysis will be addressed.
SPEECH TECHNOLOGY
Jared Bernstein and Patti Price, SRI International
This tutorial will review the basics of speech production and
perception, followed by an overview of the major speech processing
applications including coding-decoding for transmission, speaker
recognition, speech recognition, speech synthesis, and related
medical and educational applications. The core of the tutorial is
an in-depth review of speech synthesis and recognition, along with
a discussion of metrics for their evaluation and current directions
of research. The presentation on text-to-speech synthesis will
cover current practice and research issues in letter-to-sound
conversion, prosodic construction, and spectral composition. The
presentation of recognition will emphasize methods for acoustic
feature extraction, lexical modeling, and word matching. The
integration of syntactic and semantic knowledge in recognition and
synthesis will also be covered.
PANEL
26 June 1989
COMPUTATIONAL LINGUISTICS AND RESEARCH IN THE HUMANITIES
Don Walker, Bellcore (Chair); Patrick Hanks, Collins Publishers;
Nancy Ide, Vassar; Mark Liberman, AT&T Bell Laboratories;
Martha Palmer, Unisys; Antonio Zampolli, University of Pisa
Humanists have carried out careful analyses of selected bodies of
literary texts, although usually not with sophisticated linguistic
tools. Computational linguists have developed new techniques for
examining linguistic structure, but only recently have begun to
study naturally occurring texts and to explore the characteristics
of particular collections. A Text Encoding Initiative has just
been established to formulate and disseminate international guidelines
for the encoding and interchange of machine-readable texts intended
for literary, linguistic, historical, or other textual research.
A Data Collection Initiative has also been started to collect,
annotate, and tag a large body of English texts. Other initiatives
in the United States, Europe, and Japan are pursuing similar
directions. The session will consider these developments and
explore the mutual relevance of corpus-based language analysis and
language-based corpus analysis in this larger context.
Organized with the cooperation of the Association for Computers
and the Humanities and the Association for Literary and Linguistic
Computing.
REGISTRATION INFORMATION AND DIRECTIONS
PREREGISTRATION MUST BE RECEIVED BY 12 JUNE; after that date, please
wait to register at the Conference itself. Complete the attached
``Application for Preregistration'' and send it with a check payable
to Association for Computational Linguistics or ACL to Donald
E. Walker (ACL); Bellcore, MRE 2A379; 445 South Street, Box 1910;
Morristown, NJ 07960-1910, USA; (201) 829-4312; [email protected].
If a registration is cancelled before 12 June, the registration
fee, less $25US for administrative costs, will be returned.
Registration includes one copy of the Proceedings, available at
the Conference. Additional copies of the Proceedings at $25US for
members ($50US for nonmembers) may be ordered on the registration
form or by mail prepaid from Walker. For people who are unable to
attend the conference but want the proceedings, there is a special
entry line at the bottom of the registration form.
TUTORIALS: Attendance is limited. Preregistration is encouraged
to insure a place and guarantee that syllabus materials will be
available.
BANQUET: The conference banquet will be held on 28 June 1989 at
the Museum of Anthropology on campus. The Museum is an architectural
masterpiece featuring a remarkable collection of Northwest Coast
Indian art. In addition, all of its research materials from around
the world are accessible in visible storage areas. Members will
be able to browse through the Museum before the banquet, after
eating, and again after Candy Sidner presents her presidential
address.
LOCAL ARRANGEMENTS: Richard S. Rosenberg, Department of Computer
Science, University of British Columbia, Vancouver, B.C., CANADA
V6T 1W5; (604) 228-3061; [email protected] or rosen%[email protected].
EXHIBITS AND DEMONSTRATIONS: People interested in exhibiting or
in demonstrating programs at the conference should contact Richard
Rosenberg (address above) AS SOON AS POSSIBLE.
RESIDENCE HALL ACCOMMODATIONS: A large number of rooms are available in
the Walter Gage Residence Halls at the University of British Columbia. Send
in your ``Application for Residence Halls'' as soon as possible,
BY 26 MAY 1989, to guarantee a place.
HOTEL ACCOMMODATIONS: A variety of hotel and motel accommodations
from simple to luxurious are available in downtown Vancouver, about
five miles from the UBC campus. Blocks of rooms at reduced rates
have been set aside for ACL members at three hotels, as indicated
on the attached list. You should make reservations directly with
those hotels as soon as possible, stating that you will be attending
the ACL conference at the University of British Columbia. The
rates quoted for them are subject to change after May 26.
PARKING: There is virtually unrestricted parking on the campus
during the summer.
DIRECTIONS: Car rental services are available at the Vancouver
International Airport. Take Grant McConachie Way over the Arthur
Laing Bridge to the Granville Street exit; continue north on
Granville to West 70th Street. If you are driving to Vancouver
from the U.S., take Route 99 (which becomes Oak Street in Vancouver),
to West 70th Street. In either case, turn left onto West 70th,
which becomes Southwest Marine Drive and continue for about 4 miles;
turn right onto 16th Avenue; turn left at Gate 10 of the UBC Campus
and continue on Wesbrook Mall to Gate 2; left at Student Union
Boulevard; the Walter Gage Residence Halls are immediately to your
right. Driving from downtown Vancouver, take the Burrard Bridge,
bearing right along the shore onto Point Grey Road; turn left at
Alma, then right onto 4th Avenue; continue on 4th Ave., which
becomes Chancellor Boulevard, to Wesbrook Mall; turn left at Wesbrook
Mall; turn right at Gate 2 onto Student Union Boulevard; the Walter
Gage Residence Halls are immediately to your right. If you are
coming directly to the IRC, continue on Wesbrook Mall to University
Blvd.; turn right, then left onto East Mall, for several hundred
feet. A visitor parking structure is on the left. The IRC is
behind and to the left in the Biomedical complex.
TAXI: from the airport to the UBC campus about $22CDN; from downtown
about $16.50CDN; between the airport and downtown about $18CDN.
SHUTTLE BUS: between airport and downtown $6.50CDN.
BUS: from downtown Vancouver take the No. 4 or No. 10 to the campus
(about 30 minutes); get off at the Bus Loop at University Boulevard
and East Mall, next to the Student Union Building; look at the map
there or follow the signs to the IRC. Fare is $1.25CDN; exact
change is required.
BANKING AND FOREIGN EXCHANGE: available in the Student Union Building.
HOTEL INFORMATION
Make reservations as soon as possible. The first three hotels
below are providing special university rates; indicate that you
are attending the ACL Meeting at UBC. Prices are in Canadian
dollars and do not include 10% hotel sales tax. Rates at the first
three hotels are the same for single or double occupancy and are
valid through the date specified. The other three hotels show a
range of prices from single to higher priced doubles and may be
reserved at any time.
NAME ADDRESS PHONE PRICE DATE
Blue Horizon 1225 Robson 800:663-1333 $75CDN* 12 May
Coast Georgian 773 Beatty 800:663-1144 $109CDN 26 May
Pacific Palisades 1277 Robson 800:663-1815 $115CDN** 26 May
Sylvia 1154 Gilford 604:681-9321 $50-70CDN
Centennial 898 W. Broadway 604:872-8661 $65-95CDN
Barclay 1348 Robson 604:688-8850 $59-99CDN
* 3rd person in room is $15.00 additional
**$125 single or double occupancy for an Executive One Bedroom Suite
APPLICATION FOR PREREGISTRATION (BY 12 JUNE)
27th Annual Meeting of the Association for Computational Linguistics
26-29 June 1989, University of British Columbia
NAME ___________________________________________________________________________
Last First Middle
ADDRESS ________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
AFFILIATION (short form for badge ID) __________________________________________
TELEPHONE ______________________________________________________________________
COMPUTER NET&ADDRESS ___________________________________________________________
REGISTRATION INFORMATION (circle fee)
NOTE: Only those whose dues are paid for 1989 can register as members;
if you have not paid dues for 1989, register at the `non-member' rate.
ACL NON- FULL-TIME
MEMBER MEMBER* STUDENT
by 12 June $95US $135US $60US
at the Conference $135US $175US $80US
*Non-member registration fee includes ACL membership for 1989;
do not pay non-member fee for BOTH registration and tutorials.
BANQUET TICKETS: $30US each; amount enclosed $_________
EXTRA PROCEEDINGS FOR REGISTRANTS: $25US each; amount enclosed $__________
TUTORIAL INFORMATION (circle fee for each tutorial, and check tutorials desired)
ACL NON- FULL-TIME
Each tutorial MEMBER MEMBER* STUDENT
by 12 June $75US $115US $50US
at the Conference $100US $140US $60US
*Non-member tutorial fee includes ACL membership for 1989;
do not pay non-member fee for BOTH registration and tutorials.
Morning Tutorials:
select ONE: Constrained Grammatical Formalisms Psycholinguistic Approaches
Afternoon Tutorials:
select ONE: Morphology & Computational Morphology Speech Technology
TOTAL PAYMENT MUST BE INCLUDED: $_______________
(Registration, Banquet, Extra Proceedings, Tutorials)
PROCEEDINGS ONLY: $25US members; $50US others; amount enclosed $__________
Make checks payable to ASSOCIATION FOR COMPUTATIONAL LINGUISTICS or
ACL. If payments are made in Canadian dollars, calculate charges
according to current exchange rate. Credit cards cannot be honored.
Send Application for Registration WITH PAYMENT before 12 JUNE to:
Donald E. Walker (ACL)
Bellcore, MRE 2A379
445 South Street, Box 1910
Morristown, NJ 07960-1910, USA
Phone: (201)829-4312
Internet: [email protected]
Usenet: uunet.uu.net!bellcore!walker
APPLICATION FOR RESIDENCE HALLS
27th Annual Meeting of the Association for Computational Linguistics
26-29 June 1989, Walter Gage Residence, University of British Columbia
The Walter Gage Residence is a modern three-tower, 17-story complex
consisting of single bedrooms, grouped in sixes with a shared
bathroom and living area. Many rooms have a panoramic view of the
mountains, Burrard Inlet, and Howe Sound. A limited number of
self-contained suites are also available on a first-come, first-served
basis. The studio-single occupancy includes a private bathroom
and bedsitting area; the one-bedroom suite includes a private
bathroom and living room. All units have refrigerators.
In the event of unanticipated demand, rooms will be assigned in
the order that reservations are received. Please send in your
application for residence halls as early as possible.
Room payments are due by 26 May to guarantee a place, although it
may be possible to make reservations after that date. Fees may be
paid with personal checks, traveler's checks, money orders, Visa,
or MasterCard. A $10CDN non-refundable deposit is required; the
balance must be paid at check-in time. If payments by check are
made in US dollars, the difference will be credited against your
balance.
NAME ___________________________________________________________________________
Last First Middle
ADDRESS ________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
________________________________________________________________________________
AFFILIATION (short form for badge ID) __________________________________________
TELEPHONE ______________________________________________________________________
COMPUTER NET&ADDRESS ___________________________________________________________--MORE--(95%)
RESIDENCE HALL REQUIREMENTS
SINGLE ROOM in 6 Bedroom Cluster at $28CDN per night
Female Male Nonsmoking Smoking
Preferred companions in 6 Bedroom Cluster ______________________________________
________________________________________________________________________________
SINGLE STUDIO SUITE at $40CDN per night
DOUBLE ONE BEDROOM SUITE at $60CDN per night
Date and time of arrival _______________________________________________________
Date and time of departure _____________________________________________________
$10CDN DEPOSIT MUST BE INCLUDED; pay by personal checks, traveler's checks,
money orders, Visa, or MasterCard.
VISA MasterCard
Credit Card Number_________________________ Expiration Date_____________________
Cardholder's Name_____________________ Cardholder's Signature___________________
Send Application for Residence Halls WITH DEPOSIT ONLY before 26 MAY to:
UBC CONFERENCE CENTRE
Reservations Office
5961 Student Union Boulevard
Vancouver, B.C., CANADA V6T 2C9
Phone: (604) 228-2963
Fax: (604) 228-5297
End of article 1317 (of 1332)--what next? [npq]
Article 1318 (14 more) in comp.ai:
From: [email protected] (andrew)
Subject: Re: Thinking about the reduction of
Summary: anthropic principles - references
Message-ID: <[email protected]>
Date: 3 Apr 89 00:09:50 GMT
References: <[email protected]> <[email protected]> <[email protected]>
Organization: National Semiconductor, Santa Clara
Lines: 21
In article <[email protected]>, [email protected] ( SINGH S - INDEPENDENT STUDIES ) writes:
> ...can anyone CONCISELY and PRECISELY
> explain the strong and weak anthropic principles? What are the implications,
> if any, for intelligent machines? Is this more related to the
> idea of life, or intelligence? I.e., does the anthropic principle say
> something exclusively about life, or exclusively about intelligence, or
> both?
There is an excellently-worded article in "The Economist", March 11 1989, p90
devoted to the anthropic principle. As you no doubt know, Stephen Hawking
previously addressed the topic in "A Brief History of Time".
The Economist article discusses a recently-discovered flaw identified by
Dr. Ian Hacking of Toronto U, called "the inverse gambler's fallacy".
The implication is that any anthropic principle reduces to an empty
tautology. The article is too long to post.
Perhaps this belongs in sci.philosophy... but hope this helps.
=====
Andrew Palfreyman USENET: ...{this biomass}!nsc!logic!andrew
National Semiconductor M/S D3969, 2900 Semiconductor Dr., PO Box 58090,
Santa Clara, CA 95052-8090 ; 408-721-4788 there's many a slip
'twixt cup and lip
End of article 1318 (of 1332)--what next? [npq]
Article 1319 (13 more) in comp.ai:
From: [email protected] (David Chalmers)
Subject: The universe and stuff.
Message-ID: <[email protected]>
Date: 3 Apr 89 04:55:30 GMT
References: <[email protected]> <[email protected]> <[email protected]>
Sender: [email protected]
Reply-To: [email protected] (David Chalmers)
Organization: Concepts and Cognition, Indiana University
Lines: 104
In article <[email protected]> [email protected] (Sanjay Singh) writes:
>Agreed, if anything, the existence of higher intelligences creates
>entropy faster than anything else.
Well, in a sense higher intelligence (and high-level structures in general)
tends to work against the inexorable flow towards entropy, by magnifying
low-level randomness into high-level patterns. Both of these carry information,
but at the high level it is much more robust than at the low level. High-level
structures allow information to be conserved and transmitted - witness human
brains, books, computers - thus providing relief against the Second Law, at
least in the short term.
Of course, you can't get something for nothing. The Second Law is a corollary
of the fact that it is impossible for new information to be created in a
deterministic universe. So where does high-level information, such as that
found in people and books come from? The answer is: from low-level randomness.
Such randomness carries information, but only in a very tentative form. But
occasionally this randomness gets magnified into information at the top level
(for instance, by genetic mutation or by creative acts in the mind).
At the top level, natural selection can apply (whether to organisms or to
ideas), ensuring that the information which is left around is not a random
mess but is in fact selected for quality.
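That magnification step is just the familiar mutate-then-select loop; a
toy run (the target string and the rates are arbitrary assumptions, and
the "quality" criterion is wired in by hand):

import random

random.seed(0)
TARGET = "ORDER FROM NOISE"   # stands in for top-level "quality"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(s):
    # low-level randomness: each character may flip to a random one
    return "".join(random.choice(ALPHABET) if random.random() < 0.05 else c
                   for c in s)

def fitness(s):
    return sum(a == b for a, b in zip(s, TARGET))

pop = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(200)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    pop = [mutate(random.choice(pop[:20])) for _ in range(200)]
print(max(pop, key=fitness))

The randomness in mutate() carries no quality of its own; selection is
what turns it into stable information at the top level.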
But, unfortunately, the Second Law tells us that this can't go on forever. In
the end the supply of low-level information will be eaten up and turned into a
homogeneous mass, and the top level will have no information to feed upon. At
this stage the top level will turn into an almost empty, deterministic system.
(Shades of Symbolic AI!) Of course, steps will be taken to preserve as much
information as possible at the top level. In fact, this is already happening.
The printing press and more recently the computer have seen to that.
>This may sound misplaced, but can anyone CONCISELY and PRECISELY
>explain the strong and weak anthropic principles? What are the implications,
>if any, for intelligent machines? Is this more related to the
>idea of life, or intelligence? I.e., does the anthropic principle say
>something exclusively about life, or exclusively about intelligence, or
>both?
The Weak Anthropic Principle:
Human life exists. From this fact we can draw conclusions about the way the
universe is, and about the laws of physics, in an a priori fashion without
the need for direct observation.
For instance: from the fact that we are here today we can conclude that the
laws of physics must be such as to allow the evolution of complex systems. We
can also conclude, for instance, that the laws of gravity are such that a planet
will not always plunge directly into the sun. There are less trivial examples,
but I can't think of any offhand.
The Strong Anthropic Principle:
The laws of physics *must* be such as to allow the development of intelligent
life.
In a sense, this is obvious. Along the lines of the Weak Anthropic principle:
we are here, so the universe must be structured so that we could get here. But
the SAP claims more. It says, almost, that a universe without intelligent life
is absurd. If there existed such a universe, there would be no-one to know
about it, and so in what sense could it exist?
There has also been proposed:
The Final Anthropic Principle:
The universe must be such as to allow the existence of an *infinite* amount
of intelligent life.
Don't ask me about this one. I could never understand the justification for it.
The most common application of Anthropic Principles is to argue against
religious arguments, and similar lines of thought. "Wow, isn't it *amazing*
that the universe allows the existence of intelligent life. For instance, if
certain physical constants had been just 2% different, then the proton would be
unstable and things could never get off the ground. There must be something
out there just making these things go right."
To which the answer is: "No. If these things had been different, we wouldn't
be here to talk about them. As it happens, we are here to talk about them.
Therefore we shouldn't be surprised that the universe is this way."
[By analogy: Somebody might say "Isn't it amazing that Earth has just the
right temperature and atmosphere to support human life?" Of course this is not
amazing. If Earth had been different we would not be here to talk about it.
But presumably life could still have evolved on other planets around the
universe, maybe only a few, but all of them going "Wow, isn't it amazing that
*this* planet..."]
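The selection effect in that analogy is easy to see in a toy ensemble of
"universes" (the one-constant model and the 2% window are invented for
illustration):

import random

random.seed(0)
N = 100000
constants = [random.random() for _ in range(N)]   # one constant per universe
inhabited = [c for c in constants if abs(c - 0.5) < 0.01]   # the 2% window

print("a priori chance of a habitable universe:",
      len(inhabited) / N)
print("chance as measured by observers:",
      sum(abs(c - 0.5) < 0.01 for c in inhabited) / len(inhabited))

The first number is about 0.02; the second is exactly 1.0. Conditioned on
there being anyone around to ask, "isn't it amazing?" always gets the same
unamazing answer.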
The Anthropic Principle may be tautological, but it is still interesting.
Various people have tried to argue against it, but in its Weak form it is
impregnable. The Strong form is interesting but open to question.
The AP can be viewed as a part answer to that age-old question, and still the
second (or third) most burning philosophical question that exists: Why, of all
the possible universes, are we in this one? Don't the laws of physics seem a
little arbitrary? The AP says: well, we couldn't have just any old set of
laws of physics. The laws have to be such that intelligent life can exist
(otherwise we wouldn't be here to talk about it). For all we know, that rules
out most possible sets of laws. Some even go so far as to claim that ours is the
*only* possible consistent universe in which intelligent life can develop, but
I think that this is implausible.
Dave Chalmers
Center for Research on Concepts and Cognition
Indiana University
End of article 1319 (of 1332)--what next? [npq]
Article 1320 (12 more) in comp.ai:
From: [email protected] (Michael Prietula)
Subject: HICSS Call...
Message-ID:
Date: 2 Apr 89 15:12:00 GMT
Organization: Graduate School of Industrial Administration, Carnegie Mellon, Pittsburgh, PA
Lines: 94
-------------------------------------------
Call For Papers and Referees
ARTIFICIAL INTELLIGENCE AND ORGANIZATION THEORY
23rd Hawaii International Conference on System Sciences, HICSS-23
Kailua-Kona, Hawaii
January 2 -- 5, 1990
-------------------------------------------
The Emerging Technologies and Applications Track of HICSS-23 will contain a
special set of papers addressing topics in ARTIFICIAL INTELLIGENCE AND
ORGANIZATION THEORY. For an organization to function, countless decisions
must be made at all levels of the firm. Over time, organizations adapt to
the internal and external environmental demands and constraints in a manner
which yields structures that reduce the complexity of such decision making
tasks. These structures are composed of both formal and informal
components which are sometimes quite difficult to articulate; therefore,
modifications or ignorance of such structures can lead to unanticipated,
often undesirable results.
As our capability and effort turn toward assisting decision makers with
information technology, it is essential that we understand and appreciate
the interaction between the systems we build and the organizational
structures in which we embed them. Relevant, interesting, and innovative
results are emerging from artificial intelligence (AI) and cognitive
science research. AI systems have capabilities fundamentally different from
more traditional support systems. The notion of configuring an intelligent
agent which can assume more of the decision-making responsibility has
important ramifications when considering how the organizational structure
may be affected.
Collections of such agents working either independently or with humans
complicate the issues involved. Whereas earlier researchers have proposed
a link between organizational structures and information systems, it has
been further proposed that, because AI systems embed problem solving
components, the design of these problem solving components affects, and is
affected by, the technology and the organizational structure.
The goal of this session is to bring together papers which begin to
address the link between AI research, organizational theory, cognitive
science, and the automated support of complex decision making in
organizations. Topics relevant to this session would include:
--> How can intelligent agents function in an organization?
--> What is the nature of the interaction between intelligent agents,
human agents, and organizational structures?
--> How can multiple intelligent agents cooperate and coordinate in
the support of complex decision making in an organizational setting?
--> What are the issues involved in implementing single or multiple
agent systems?
--> How can AI be used to model organizational structures or theories?
--> What are the major design issues to consider when operating an AI
system within an organization?
--> How can AI systems help realize truly adaptive organizational structures?
--> What can organization theory tell us about configuring distributed
AI systems?
--> And what can distributed AI tell us about organization theories?
Papers selected for presentation will appear in the conference proceedings,
which are published by the Computer Society of the IEEE, and, possibly,
later also in a special issue of a professional society periodical.
HICSS-23 is sponsored by the University of Hawaii in cooperation with the
ACM, the IEEE Computer Society, and the Pacific Research Institute for
Information Systems and Management (PRIISM).
INSTRUCTIONS FOR AUTHORS
Manuscripts should be 22--26 typewritten, double-spaced pages in length
(including figures and references). Do not send submissions that are
significantly shorter or longer than this. Papers must not have been
previously presented or published, nor currently submitted for journal
publication.
Each manuscript will be subjected to a rigorous refereeing process.
Manuscripts should have a title page that includes the title of the paper,
full name of its author(s), affiliation(s), complete physical mail and
electronic address(es), telephone number(s), and a 300-word abstract.
DEADLINES:
1. Six hardcopies of the manuscript are due postmarked by June 5, 1989.
2. Notification of acceptance by September 1, 1989.
3. Camera-ready accepted manuscripts due by October 1, 1989.
SEND SUBMISSIONS AND QUESTIONS TO EITHER OF THE CO-CHAIRS:
Dr. Michael J. Prietula
Graduate School of Industrial Administration
Carnegie-Mellon University
Pittsburgh, PA 15213
(412) 268-8833
BITNET: [email protected]
-- OR --
Dr. Renee A. Beauclair
School of Business
University of Louisville
Louisville, KY 40292
(502) 588-7830
BITNET: RABEAU01@ULKYVM
End of article 1320 (of 1332)--what next? [npq]
Article 1321 (11 more) in comp.ai:
From: [email protected] (Donald E Walker)
Subject: COLING-90 Call for Papers
Message-ID: <[email protected]>
Date: 3 Apr 89 15:58:38 GMT
Sender: [email protected]
Lines: 72
--MORE--(13%)
The Thirteenth International Conference on Computational Linguistics
COLING 90
COLING 90 will be held on August 20-25, 1990, at the University
of Helsinki. Pre-COLING tutorials take place on August 16-18, 1990.
YOU ARE INVITED TO SUBMIT
- a topical paper on some critical issue in computational linguistics,
- a project note with software demonstration.
The written part of your presentation should not exceed 6 pages in
A4 format or 12,000 characters for a topical paper, and half that
length for a project note. The final version of the paper should
follow the COLING 88 style sheet.
Send your text NOT LATER THAN DECEMBER 1, 1989, as electronic mail
or as five paper copies to the Coling 90 Program Committee.
The Program Committee will respond by February 1, 1990.
All prospective participants are kindly requested to indicate their
interest to the Conference Bureau by January 15, 1990. Detailed
information (on e.g. accommodation) will be sent to all participants
by February 1, 1990.
Deadline for preregistration will be May 1, 1990. The registration
fee will be 750 FIM (certified students 400 FIM). The late
registration fee is 1100 FIM.
Inquiries concerning papers should be directed to the Program
Committee and concerning accommodation to the Conference Bureau.
Other inquiries are handled by the local organizers.
COLING 90 PROGRAM COMMITTEE
Hans Karlgren
KVAL
Skeppsbron 26
S-111 30 STOCKHOLM
Sweden
Phone: +46 8 7896683
Fax: +46 8 7969639
Telex: 15440 kval s
E-mail: [email protected]
or: [email protected]
COLING 90 CONFERENCE BUREAU
Riitta Ojanen
Kaleva Travel Agency Ltd
Congress Service
Box 312
SF-00121 HELSINKI
Finland
Phone: +358 0 602711
Fax: +358 0 629019
Telex: 122475 kleva sf
LOCAL ARRANGEMENTS
Fred Karlsson
Dept of General Linguistics
University of Helsinki
Hallituskatu 11
SF-00100 HELSINKI
Finland
Phone: +358 0 1911
Fax: +358 0 656591
Telex: 124690 unih sf
E-mail: COLING@FINUH (in BITNET)
End of article 1321 (of 1332)--what next? [npq]
Article 1322 (10 more) in comp.ai:
From: [email protected] (Murthy Gandikota)
Subject: Re: Where might CR understanding come from (if it exists)
Message-ID: <[email protected]>
Date: 3 Apr 89 17:52:29 GMT
References:
Reply-To: Murthy Gandikota
Organization: Ohio State University
Lines: 32
In article <[email protected]> [email protected] ( SINGH S - INDEPENDENT STUDIES ) writes:
>+5, you would find changes in the nets. Current computer technology does
>not allow circuitry to change itself. Unless there is a major revolution
>in the design of hardware, I honestly think the first truly intelligent computer
>will be made with organic materials. Who knows? It may even be grown using
>recombinant DNA or something like that. There is no way we can match the
>plasticity of the brain with current technology.
This provokes me to post a thought experiment I've made on
self-organizing neural nets. The point is, for a neural net to be as
efficient a storage/processing device as the brain, it should be able to
change its connections towards some optimality. Suppose there are two
independent concepts A and B represented as two neurons/nodes. As long
as no relationship has been discovered between them, there is no connection
between them. Say that after some time a relationship is found between A
and B; then a connection can be created between them. However, this
won't be optimal if A and B have a degree/extent relationship, in
which case A and B have to be merged into some C, with the
degrees/extents captured in (the hidden rules of) C. A ready and
simple example: A = bright red, B = dull red, C = shades of red.
Has anyone thought of this before?
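A toy rendering of the idea (Python; the node names come from the example
above, and the "degree" numbers are made up):

class Net:
    # Toy self-organizing net: concepts are nodes; a degree/extent
    # relationship merges two nodes into one parameterized node.
    def __init__(self):
        self.nodes = {"bright red": {}, "dull red": {}}
        self.links = set()

    def relate(self, a, b):
        # a plain relationship: just a connection between two nodes
        self.links.add(frozenset((a, b)))

    def merge(self, a, b, name, param):
        # a degree relationship: A and B collapse into C, the degrees
        # captured as hidden parameters of C rather than as a link
        self.nodes[name] = {param: {a: 1.0, b: 0.2}}
        for n in (a, b):
            del self.nodes[n]
        self.links = {l for l in self.links if a not in l and b not in l}

net = Net()
net.merge("bright red", "dull red", "shades of red", "brightness")
print(net.nodes)

Whether such merges can be discovered by the net itself, rather than
installed by hand as here, is of course the hard part of the question.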
--murthy
--
"What can the fiery sun do to a passing rain cloud, except to decorate
it with a silver lining?"
Surface mail: 65 E.18th Ave # A, Columbus, OH-43201; Tel: (614)297-7951
End of article 1322 (of 1332)--what next? [npq]
Article 1323 (9 more) in comp.ai:
From: [email protected] (William J. Rapaport)
Newsgroups: sunyab.general,sunyab.grads,wny.seminar,comp.ai,ont.events,sci.philosophy.tech,sci.math,sci.logic
Subject: SUNY Buffalo Logic Colloquium
Message-ID: <[email protected]>
Date: 3 Apr 89 18:16:20 GMT
Sender: [email protected]
Reply-To: [email protected] (William J. Rapaport)
Distribution: na
Organization: SUNY/Buffalo Computer Science
Lines: 45
UNIVERSITY AT BUFFALO
STATE UNIVERSITY OF NEW YORK
BUFFALO LOGIC COLLOQUIUM
GRADUATE GROUP IN COGNITIVE SCIENCE
and
GRADUATE RESEARCH INITIATIVE IN COGNITIVE AND LINGUISTIC SCIENCES
PRESENT
JACEK PASNICZEK
Institute of Philosophy and Sociology
Department of Logic
Marie Curie-Sklodowska University
Lublin, Poland
FIRST- AND HIGHER-ORDER MEINONGIAN LOGIC
Meinongian logic is a logic based on Alexius Meinong's ontological
views. Meinong was an Austrian philosopher who lived and worked around
the turn of the century. He is known as a creator of a very rich objec-
--MORE--(52%)
tual ontology including non-existent objects, and even incomplete and
impossible ones, e.g., "the round square". Such objects are formally
treated by Meinongian logic. The Meinongian logic presented here
(M-logic) is not the only Meinongian one: there are some other theories
that are formalizations of Meinong's ontology and that may be considered
as Meinongian logics (e.g., Parsons's, Zalta's, Rapaport's, and
Jacquette's theories). But the distinctive feature of M-logic is that
it is a very natural and straightforward extension of classical
first-order logic--the only primitive symbols of the language of M-logic
are those occurring in the first-order classical language. Individual
constants and quantifiers are treated as expressions of the same category.
This makes the syntax of M-logic close to natural-language syntax.
M-logic is presented as an axiomatic system and as a semantical theory.
Not only is first-order logic developed, but the higher-order M-logic as
well.
Wednesday, April 26, 1989
4:00 P.M.
684 Baldy Hall, Amherst Campus
For further information, contact John Corcoran, Dept. of Philosophy,
716-636-2444, or Bill Rapaport, Dept. of Computer Science, 716-636-3193.
End of article 1323 (of 1332)--what next? [npq]
Article 1324 (8 more) in comp.ai:
From: [email protected]
Subject: stochastic anthropoid principle
Message-ID: <[email protected]>
Date: 3 Apr 89 16:44:38 GMT
Sender: [email protected]
Organization: UC, Santa Barbara. Physics Computer Services
Lines: 10
A recent sequence of submissions has discussed the "anthropic"
principle. This 'anthropic principle' has been said to have two
forms: 'weak' (we are here, so the universe supports the evolution
of intelligent strife), 'strong' (the laws of the universe MUST
support the emergence of intelligent strife).
Why not add a third ( [& ...]), the stochastic anthropoid principle,
according to which in a statistical ensemble of universes, some
universes are so constructed as to admit, grudgingly, the emergence of
intelligent life.
End of article 1324 (of 1332)--what next? [npq]
Article 1325 (7 more) in comp.ai:
From: [email protected] (James Salsman)
Subject: The Chinese Labyrinth
Message-ID: <[email protected]>
Date: 3 Apr 89 19:44:02 GMT
Reply-To: [email protected] (James Price Salsman)
Organization: Carnegie Mellon
Lines: 8
Keywords:
If you must continue to discuss Searle's Chinese Room, please
do not change your subject lines, so that I will not have to keep
adding lines to my kill file.
--
:James P. Salsman ([email protected])
--
End of article 1325 (of 1332)--what next? [npq]
Article 1326 (6 more) in comp.ai:
From: [email protected] (Jeffrey A. Sullivan)
Subject: A.I. R & D in L.A. -- where and who?
Keywords: AI south california
Message-ID: <[email protected]>
Date: 4 Apr 89 00:13:17 GMT
Organization: Decision Systems Lab., Univ. of Pittsburgh, PA.
Lines: 16
I am looking for any companies doing AI R & D in the Los Angeles metro
area. It seems that most of the west-coast AI work is being done in
the San Francisco area, but I know that there must be some decent AI
work in LA.
Please mail any replies, even if you post also, as mail is quicker and
more reliable.
Thanks,
--
...............................................................................
Jeffrey Sullivan DELPHI: JSULLIVAN | University of Pittsburgh
[email protected]., [email protected]. {pittsburgh.edu} | Intelligent Systems Studies
[email protected], {jasst3 | jasper}@cisunx.UUCP
End of article 1326 (of 1332)--what next? [npq]
Article 1327 (5 more) in comp.ai:
From: [email protected] (stevan r harnad)
Subject: Features, Symbols, Categories
Message-ID: <[email protected]>
Date: 4 Apr 89 04:59:25 GMT
Sender: [email protected]
Reply-To: [email protected] (Stevan Harnad)
Organization: Bellcore, Morristown, NJ
Lines: 69
Andrew Palfreyman
has asked me to reply to his recent posting, which elicited no
responses. He wrote:
" [about] symbols and their attributes... symbols, attributes, features
" and the central role of the recognition of isomorphism... [How do we]
" describe "the attribute of a thing"?... The existence of attributes
" seems only possible when a feature extraction process is performed, by
" which attributes are *created* as a direct result of the interaction of
" the perceiver with the environment... [There seem to be two kinds of]
" predicates... (1) a simple, "non-relational" predicate, like
" "whiteness" or "how many" (2) a set membership predicate, like "is a
" member of" or "has .. members". [but] (1) appears to be subsumable
" under (2) in a recursive fashion (i.e. "is a member of the set of white
" things")... Is (2) an inclusive definition?... from the above mentioned
" reductionist perspective, symbols evaporate!... The feature set is all
" that is, all the way from just inside the "transducer surface" to just
" inside the "effector surface". Analytic deduction of "symbols" from
" patterns of activation [is] just one more level of
There are points here with which one can agree, but the reason I
didn't reply to this when it was originally posted was that it was
embedded in a much larger message consisting of entirely unnecessary
Zen quips and pseudophilosophy. The suggestion seems to be that:
(a) Feature extraction is important. (Yes.)
(b) "Attributes" are "created." (No, feature-detection may involve some
internal construction, approximation and even error, but features are
still features: this is not ontology we're discussing, just cognitive
modeling).
(c) Feature recognition and predication may be related through set
inclusion. (Yes, in a book on categorization I've tried to show how
set inclusion may be the operation underlying both categorization
["That is an X"] and description ["An X is a Y"]; see the sketch
following point (d) below.)
(d) If feature detection (and categorization) is central, then
"symbols vanish." (No, symbol tokens, according to my view, are the
names of categories that we can recognize, identify and act upon
because we have learned to detect their features. These symbol tokens
then enter into combinations in the form of symbol strings that
--MORE--(68%)
describe ever more abstract objects and states of affairs in the form
of set-inclusion (categorization) statements. Symbol tokens are
objects too, so why should they "vanish"? You probably mean that
symbol MEANINGS vanish, but that's wrong too. They're still there.
That's what the Chinese Room debate was about. My position was that
subjective meaning rides epiphenomenally on the "right stuff," and the
right stuff is NOT just internal symbol manipulation, as Searle's
opponents keep haplessly trying to argue, but hybrid nonsymbolic/symbolic
processes, including analog representations and feature-detectors,
with the symbolic representations grounded bottom-up in the nonsymbolic
representations. One candidate grounding proposal of this kind is
described in my book.)
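A minimal sketch of the set-inclusion reading of points (c) and (d).
The feature names, the detector, and the category definitions are all
invented for illustration; this is not the proposal from the book,
only a toy in which categorization is a membership test and
description is a subset relation:

    # Toy model: a category is the set of features its detector requires.
    # Categorization ("That is an X") tests a perceived thing's features;
    # description ("An X is a Y") compares two categories' feature sets.

    def detect_features(thing):
        """Stand-in for nonsymbolic feature detectors on sensory input;
        here a 'thing' arrives as a ready-made feature bundle."""
        return frozenset(thing)

    # Category names (symbol tokens) grounded in required-feature sets.
    CATEGORIES = {
        "horse":   {"four-legged", "maned", "hoofed"},
        "striped": {"striped"},
        "zebra":   {"four-legged", "maned", "hoofed", "striped"},
    }

    def is_a(thing, name):
        """Categorization: 'That is an X' <=> the detected features
        include everything X requires."""
        return CATEGORIES[name] <= detect_features(thing)

    def an_x_is_a_y(x, y):
        """Description: 'An X is a Y' <=> Y's required features are a
        subset of X's, so whatever passes X's test passes Y's."""
        return CATEGORIES[y] <= CATEGORIES[x]

    critter = {"four-legged", "maned", "hoofed", "striped", "skittish"}
    print(is_a(critter, "zebra"))         # True: membership via features
    print(an_x_is_a_y("zebra", "horse"))  # True, in this toy sense
    print(an_x_is_a_y("horse", "zebra"))  # False

In this toy, the token "zebra" gets its content from feature
detectors plus set-inclusion composition over already-grounded
tokens, not from symbol-symbol relations alone, which is the sense in
which the symbolic is grounded bottom-up in the nonsymbolic.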
Refs:
Harnad S. (1987) (Ed.) Categorical Perception: The Groundwork of Cognition
(NY: Cambridge University Press)
Harnad S. (1989) Minds, Machines and Searle. Journal of Experimental
and Theoretical Artificial Intelligence 1: 5-25
Stevan Harnad INTERNET: [email protected] [email protected]
[email protected] [email protected] [email protected]
--MORE--(96%)
CSNET: harnad%[email protected]
BITNET: harnad[email protected] [email protected] (609)-921-7771
End of article 1327 (of 1332)--what next? [npq]
Article 1328 (4 more) in comp.ai:
From: [email protected] (Bonnie Bennett)
Subject: Re: 1990 Connectionist Summer School announcement
Message-ID: <[email protected]>
Date: 4 Apr 89 14:29:36 GMT
References: <[email protected]>
Sender: [email protected]
Lines: 4
In-reply-to: [email protected]'s message of 29 Mar 89 05:32:48 GMT
--MORE--(75%)
I am a graduate student at the University of Minnesota. I fit your
criteria, and am very interested in your program.
Bonnie Bennett
End of article 1328 (of 1332)--what next? [npq]
Article 1329 (3 more) in comp.ai:
From: [email protected] (Bonnie Bennett)
Subject: Re: legal expert system?
Message-ID: <[email protected]>
Date: 4 Apr 89 14:31:24 GMT
References: <[email protected]>
Sender: [email protected]
Distribution: usa
Lines: 2
In-reply-to: [email protected]'s message of 29 Mar 89 19:43:34 GMT
--MORE--(75%)
I think Paul Johnson at the University of Minnesota (MIS program in the
BUSINESS School) had at least one student working in this area.
End of article 1329 (of 1332)--what next? [npq]
Article 1330 (2 more) in comp.ai:
From: [email protected] (Bonnie Bennett)
Newsgroups: comp.ai,comp.lang.misc
Subject: Re: NEXPERT experiences ?
Message-ID: <[email protected]>
Date: 4 Apr 89 14:33:10 GMT
References: <[email protected]>
Sender: [email protected]
Lines: 1
In-reply-to: [email protected]'s message of 29 Mar 89 21:46:35 GMT
--MORE--(89%)
I'll be interested to see the responses you get.
End of article 1330 (of 1332)--what next? [npq]
Article 1331 (1 more) in comp.ai:
From: [email protected] (Armagan Ozdinc)
Subject: Re: NEXPERT experiences ?
Summary: let's post it to comp.ai.shells
Message-ID: <[email protected]>
Date: 4 Apr 89 18:16:21 GMT
References: <[email protected]> <[email protected]>
Sender: [email protected]
Lines: 15
--MORE--(36%)
In article <[email protected]>, [email protected] (3929) writes:
> In article <[email protected]> [email protected] (Raymond Fink) writes:
> =I'd like to hear from people who are using NEXPERT Object from
> =Neuron Data for industrial-strength applications.
>
> I'd like to be CC-ed on these replies (or perhaps they could be posted to
> the net if there are others out there who are in the same position?)...
>
There is a new group called comp.ai.shells. Discussions about expert
system shells such as NEXPERT can be posted to this group. It is
a moderated group. If you would like to get more information about the
group, please subscribe to it and read the first article.
Armagan Ozdinc
End of article 1331 (of 1332)--what next? [npq]
End of newsgroup comp.ai.
******** 1 unread article in comp.ai.digest--read now? [ynq]
Skipping unavailable article
End of newsgroup comp.ai.digest.
What next? [qnp]