











ESIE


The Expert System Inference Engine


History

















Lightwave Consultants                              August 1985
P.O. Box 290539
Tampa, FL 33617







Copyright 1985, All Rights Reserved.

The ESIE distribution diskette, of which this history is one
file, may be freely copied and distributed. Printed copies
of this history, or this history without the rest of the
files on the distribution diskette, may not be copied or
reproduced in any form.


Page 2


Table of Contents


Introduction . . . . . . . . . . . . . . . . . . . . . . 3

Before the 20th Century . . . . . . . . . . . . . . . . . 4

1900 to 1940 . . . . . . . . . . . . . . . . . . . . . . 6

The 40s . . . . . . . . . . . . . . . . . . . . . . . . . 7

The 50s . . . . . . . . . . . . . . . . . . . . . . . . . 8

The 60s . . . . . . . . . . . . . . . . . . . . . . . . . 9

The 70s . . . . . . . . . . . . . . . . . . . . . . . . . 10

The 80s . . . . . . . . . . . . . . . . . . . . . . . . . 12

Bibliography . . . . . . . . . . . . . . . . . . . . . . 15
































Page 3


Introduction


ESIE (pronounced "easy") is the acronym for Expert System
Inference Engine. ESIE is, according to many people working
in Artificial Intelligence (AI), an "expert system shell."

ESIE is a fast, powerful, inexpensive tool for work in
Artificial Intelligence. If you would like to know more
about ESIE, please print and read the file MANUAL.

This history is designed to give you a brief rundown on the
past of Artificial Intelligence. While the history of AI is
not quite as exciting as Napoleon at Waterloo, I hope you
will find it interesting.

Those of you who are interested in the socioeconomic impacts
of AI may well be excited, perhaps worried, about the
direction and potential of AI. For example, a few science
fiction authors have claimed that man's purpose on earth is
to BUILD a better race that eventually will become the
dominant one.

I take a different outlook. Man was meant to be and do great
things, and we need to build great tools to help us do them.
After all, if we had never invented the spear we would still
be wearing animal pelts. I'm sure that when the spear was
first invented, other members of the tribe had serious
misgivings about it and warned the young ones to do things
the old way.

I certainly am not claiming that the road to the successful
use of Artificial Intelligence will be an easy one (there
must have been more than one caveman who stabbed himself in
the foot with his new spear), but it can be one that provides
numerous benefits. The coming of any new technology has
always brought on some problems; the successful ones cure far
more than they hurt.

Hopefully, you will get more and more interested in AI, and
meet as many knowledge engineers (KEs) as you can. One of
the nicest and most consistent things about KEs is our
nearly universal desire to talk and think about the future.
A conversation with a KE at a KE social hour can be
invigorating.

It is my belief that Artificial Intelligence has real promise
to be an important tool in the ascent of man.





Page 4


Before the 20th Century


The astute reader may well be wondering what this chapter is
doing here. Logically, didn't AI start with the advent of
the computer? Well, the answer is: sort of.

The IDEA of objects having human qualities has probably been
around as long as man has. When Mr. Ug first missed his
prey with his newfound weapon, say the bow and arrow, he
might have thought, "it would be nice if the arrow could find
its own way to the food." There is evidence that certain
groups of ancient man thought their weapons had souls and
should be appeased before the hunt. While not quite fitting
a modern definition of AI, these were definite feelings
toward Artificial Intelligence.

Real work towards defining the mathematics and symbolics
behind AI can be thought of as beginning with Charles Babbage
in the 19th century. Babbage was fascinated with the idea of
building machines to do human tasks, and the mathematics that
would be required to do such tasks. Babbage was, of course,
a mathematician.

Theory that was developed during this period is still used
and debated today. The Tower of Hanoi is standard fare in
beginning computer science courses. In the Tower of Hanoi
you have three stakes in the ground, and around one stake you
have donut-shaped pieces. The pieces get consecutively
larger in size:


        |                |                |
       x|x               |                |
      yy|yy              |                |
     zzz|zzz             |                |
    aaaa|aaaa            |                |
   bbbbb|bbbbb           |                |
  cccccc|cccccc          |                |
 ddddddd|ddddddd         |                |
eeeeeeee|eeeeeeee        |                |

The object of the puzzle is to move all of the pieces from
the starting stake to any other stake under two rules: you
may move only one piece at a time, and no piece may ever
have a larger piece on top of it. A child's toybox often
holds the "stake" with the pieces around it, and a couple of
wine bottles will suffice for the empty stakes, if you want
to try to solve the puzzle. It's not as easy as it sounds.
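
For the curious reader with a computer handy, the puzzle has
an elegant recursive solution. The following is a minimal
sketch in Python (a language chosen here purely for
illustration; the function and stake names are my own):

    # Move n pieces from the start stake to the goal stake,
    # one at a time, never putting a larger piece on a smaller.
    def hanoi(n, start, goal, spare):
        if n == 0:
            return
        hanoi(n - 1, start, spare, goal)   # clear the smaller pieces
        print(f"move piece {n}: {start} -> {goal}")
        hanoi(n - 1, spare, goal, start)   # restack them on top

    hanoi(3, "A", "C", "B")   # prints the 7 moves for 3 pieces

In general n pieces require 2^n - 1 moves; the nine pieces
in the diagram above would take 511.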

Puzzles such as these, and the human intelligence applied to
solve them, were fascinating to Babbage.

Page 5

He theorized that you could apply the rigors of mathematics
to the mental process, so that one common language could be
used to transfer that process from man to machine.



















































Page 6


1900 to 1940


This period saw the development of a formalism for computers
and Artificial Intelligence. In the early history of
computers the two were almost always talked about together -
they were inseparable. The goal was to create machines that
acted like humans or performed human functions so that humans
would no longer have to perform them.

The early pioneers in the U.S. were George Stibitz, Howard
Aiken, J. Presper Eckert, John Mauchly, John von Neumann,
Herman Goldstine, and Julian Bigelow.

In Britain, Alan Turing contributed substantially to AI and
computer science. Nearly every computer in existence today
is based on the Turing model.

If you've had some coursework in computers, one or more of
the above names should sound familiar. They are the fathers
of computers, and in a way, the fathers of Artificial
Intelligence. For them, the two were one and the same.






























Page 7


The 40s


Computers during the Forties left a lot to be desired. They
were used to do real work for the first time, during World
War II, to help artillery batteries better aim their
projectiles. After the war, the concentration changed: since
computers could handle numbers well, shouldn't they handle
symbols well? During the Forties, much effort was expended
to get the computer to work with symbols the same way it
worked with numbers.

Many attempts were failures, but some successes fueled the
drive toward building machines that could work with symbols
and therefore be one more step closer to thinking.

For an interesting read, you might pick up "Cybernetics:
Control and Communication in the Animal and the Machine" by
Norbert Wiener. The book was published in 1948.
































Page 8


The 50s


The Fifties saw work begin in earnest on the thinking machine
- a computer that would reason as a human reasons. Four of
the major institutions involved during the 50s were Stanford,
RAND, Carnegie-Mellon, and MIT.

In 1956 John McCarthy held a conference on Artificial
Intelligence at Dartmouth. At this conference were, among
others, Herbert Simon, Marvin Minsky, Allen Newell, Claude
Shannon, and Arthur Samuel. All of these people are
considered among the fathers of AI.

DARPA was also very interested in human reasoning in the
Fifties. It was often claimed that building expert systems
would show the true value of computers to man. An expert KB
feasibility study was conducted by DARPA in the late Fifties.
The study was labeled MODAPS. MODAPS was eventually built
into a usable system for the U.S. Army called A-VIS, whose
main goal was the maintenance of hardware and software on
the computers of the day. Much funding for AI work came out
of DARPA.

Three universities in the U.S. took on the leading roles in
AI research: Carnegie-Mellon, MIT, and Stanford. Four
universities took on the leading roles in Britain: Edinburgh,
Sussex, Essex, and Imperial College. Donald Michie, H. C.
Longuet-Higgins, R. A. Brooker, and R. Kowalski are all
important people in British AI.

Stimulated by the impressive gains these people made towards
intelligent machines, the press and the public overreacted.
Science fiction stories exploded onto the scene about the
power, and danger, of intelligent machines. Everyone, it
seemed, was concerned about the impact of thinking machines
on their lives. Surprisingly, the overwhelming attitude was
positive - people were pro-machine. This was due to the
promise of less labor and more free time, along with greater
prosperity.












Page 9


The 60s


The Sixties can be characterized, in Artificial Intelligence,
by the lack of it. During the Fifties, and somewhat during
the postwar period, fantastic and glamorous claims were made
for thinking computers. Computers, it was said, would soon
solve all our problems by thinking and reasoning and
performing like humans. We would use them to find all the
tough answers and build machines that would do all our dirty
work for us.

The letdown from these claims produced the dismal lack of AI
in the Sixties. Research was left to a few universities:
MIT, Carnegie-Mellon, and Stanford.

The work at MIT centered on building machines to play the
perfect game of chess. Researchers reasoned that if they
could build a machine that played perfect chess, then they
could use the same techniques to build a machine to mimic any
human behavior. Toward the end of the Sixties they realized
that building a computer to play perfect chess gave you a
computer that played perfect chess, and that's all.

They had trouble using the same chess-playing techniques in
other fields, although concepts were gained that have been
applied successfully in many AI applications. Also, no
chess-playing computer has ever been capable of consistently
beating the masters of the game.
























Page 10


The 70s


In the Seventies came the push to "try something practical"
in Artificial Intelligence. The goal then became to define
very limited domains that AI could be applied to TODAY to
solve real world problems. This decision changed the course
of AI and is the reason you hear so much about AI today. The
two types of AI focused on were expert systems and natural
language processors.

In 1970 there were only 65,000 computers in the United
States (in 1984 there were over 5 million), and the rapid
"computerization" of America has helped AI.

In 1972, SHRDLU made headlines (at least among AI
researchers) by using semantic networks for natural language
processing. SHRDLU was roughly divisible into three parts:
a text analyzer that got at the intent of the user's input,
a semantic processor that got at the meaning of words, and a
logic segment that carried out the user's requests.

SHRDLU functioned within a fairly limited domain: the blocks
world. In SHRDLU's world there were only blocks, and the
only thing SHRDLU could do was move these blocks around on a
screen. The method behind this movement was what made it
unique: the first part of SHRDLU, now called an augmented
transition network (ATN), tried to solve for the intent of
the user's request.

In 1975 MARGIE was created by Roger Schank, with the model
of conceptual dependency (CD) in mind. In CD, the researcher
is intent on using work done by linguists and psychologists
to build human language understanding into machines. In
MARGIE an input would be analyzed into its most minimal
components, which could then be operated on. MARGIE had two
main operating modes. In one it would paraphrase your input;
for example:

Bob asked Mary out.

might become:

Bob requested that Mary go on a date with him.

In the other mode, MARGIE would make inferences concerning
the input. Inferencing became one of MARGIE's most important
aspects, even though the system was intended for natural
language processing.

In 1977 another breakthrough occurred in natural language
processing: GUS. As natural language processors became
larger and took on additional capabilities, the size of the
semantic network - the network that models human language -
became extraordinarily large. To handle such large amounts
of data, a system would have to break the information up
into digestible chunks. GUS demonstrated that you could
break this data up and still be effective.

Page 11

GUS used a coding scheme called frames. Frames group nodes
of the semantic network into clusters of similar
information. GUS was also one of the first natural language
systems to work against a data base; GUS was used as an
advisor to passengers flying in California. The data base
was a part of the Official Airline Guide, and GUS answered
questions against it. Most natural language processors sold
commercially today are designed specifically to answer
questions from an existing data base.
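
To give a feel for the idea, here is a minimal sketch (in
Python, purely illustrative; the slot names are invented,
and GUS's actual frames were far more elaborate) of a frame
for a flight consultation:

    # A frame bundles related nodes of the semantic network
    # into one named chunk with labeled slots.
    flight_frame = {
        "origin":      "San Jose",
        "destination": "Los Angeles",
        "departure":   None,   # unfilled slot
        "airline":     None,   # unfilled slot
    }

    # A consultation proceeds by filling the empty slots.
    for slot, value in flight_frame.items():
        if value is None:
            flight_frame[slot] = input(f"What {slot} would you like? ")

Because everything about one flight lives in one small
structure, the system need only load and reason over the
frames relevant to the current question, rather than the
entire semantic network.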







































Page 12


The 80s


The Eighties have brought an explosion into the computer
field and a corresponding explosion in Artificial
Intelligence. This has occurred for three reasons: 1) there
finally is enough computer power, and advanced software, for
AI to be useful in real time, 2) there are plenty of
computers and computer professionals to spend time and money
accomplishing more than the simple computer tasks, and 3)
industry has taken notice of AI and moved it from the
laboratory into the field, along with additional financial
arrangements.

In 1983 another step up in natural language processing
occurred with IPP. With IPP the frames used in earlier
natural language processors became dynamic: frames could be
moved, deleted, changed, updated, and added with relative
ease. This made creation and maintenance of the semantic
network easier and quicker. IPP could build new structures
if it encountered information that was new to it, and these
new structures were fully compatible with, and enhanced, the
old structures.

Many other countries besides the United States are involved
in AI. Much press has been devoted to the Japanese 5th
Generation Computer, which is AI of a high order. However,
many other countries, most of them Western, are also
involved in AI.

Canada, for example, has a very impressive AI program,
although most of its work is done in university
laboratories. I expect that Canadian work will soon leave
the lab, advance into the marketplace, and attract
significant financing with it.

In 1983 several interesting developments came out of Canadian
laboratories concerning machine vision. Mackworth and Havens
are working on several schemas for scene interpretation
(putting into words what the camera sees), map understanding,
and remote sensing.

Other major work in Canada involves natural language
processing, knowledge representation, and expert systems.

The United Kingdom has been active in AI almost certainly
from its inception.

One fact that may be surprising is that Japan is the largest
user of industrial robots in the world. Not merely robots
per person or per corporation: Japan has more robots in
employment than anywhere else in the world, including the
United States.

Page 13

Japan uses well over 60,000 industrial robots, and some
estimates place the country as having as many robots in use
as North America and Western Europe combined.

In Japan the concentration, as far as robotics is concerned,
is on sensing and control: they have successfully made a
robot that can shake your hand firmly but gently.

Britain, France, Germany, Italy, The Netherlands, Belgium,
Sweden, and Spain all have active AI laboratories.

The Artificial Intelligence Laboratory at Linkoping
University, Sweden, is concentrating on knowledge
representation, problem solving, and natural language
communication.

Kaiserslautern University, Germany, is working on the
theory behind expert systems and how to build them.

Prolog, which the Japanese have taken as their language of
choice for the fifth generation computer, was originally
built in France. Prolog is a logic programming language, and
was built by A. Colmerauer. Later, Prolog was enhanced by R.
Kowalski of Britain.

At Louvain-la-Neuve, Belgium, techniques for knowledge
base pruning have been developed. Since knowledge bases can
become very large as information is added to them, several
algorithms have been designed over the years to eliminate
large sections of the knowledge base as the consultation
proceeds. The problem with pruning is that you might miss
some knowledge you need. In Belgium, they are working
towards the best of both worlds.
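
As a rough illustration of the idea (my own sketch, not the
Belgian algorithm; the rule format here is invented), a
pruner might discard every rule whose premises contradict
facts already established in the consultation:

    # Discard rules that can no longer fire, given what the
    # consultation has already established.
    facts = {"animal": "bird"}

    rules = [
        {"if": {"animal": "dog"},  "then": {"sound": "bark"}},
        {"if": {"animal": "bird"}, "then": {"can_fly": "yes"}},
    ]

    def prune(rules, facts):
        # Keep a rule only if none of its premises contradict
        # an established fact; premises about still-unknown
        # attributes are kept.
        return [r for r in rules
                if all(facts.get(k, v) == v
                       for k, v in r["if"].items())]

    rules = prune(rules, facts)   # the "dog" rule is eliminated

The danger, as noted above, is discarding a rule whose
knowledge turns out to be needed after all; the Belgian work
aims at pruning aggressively without that loss.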

At the Research Institute of Applied Computer Science,
Budapest, a computer language called Lobo has been developed.
This language offers many of the advantages of Prolog, while
keeping the advantages of a standard computer language, such
as speed.

At the Telecommunication Laboratory and Study Center, Turin,
Prolog programs have been written to analyze the concurrent
communications that occur in telephone operations.

If you look at every AI system in existence today you might
well exclaim that the humanoid robot of science fiction and
'Hal' of 2001 are just around the corner.

There are systems that can perform very delicate sensor-motor
tasks, such as assembling complex automobile structures (as
long as the parts are all laid out in their correct
positions); hold very impressive conversations; win nearly
every time at certain games; advise doctors better than the
doctors themselves; and identify and select objects from a
bin by "looking" at them in three dimensions - among a host
of other successful applications.

Page 14

However, we are a long way from building 'Hal' or a humanoid
robot. In not one single area of AI have we even come close
to approximating human behavior or capabilities. In some
heavily restricted domains with well-defined parameters, the
AI system CAN occasionally surpass the human in accomplishing
the same task. This is primarily for two reasons: 1) humans
get bored - we become lax in the attention needed to
accomplish a task, while AI systems never get bored; and 2)
AI systems never forget - once they are taught a task, and
as long as the environment in which that task is performed
does not change, they will continue to perform it correctly
forever, whereas a human might, over a period of time,
forget how to do some part of it.

Another reason we are years from building "science fiction
systems" is the problem of integration. We can build a
system to "see" parts on a conveyor belt, and a system to
build automobile assemblies, but building a system that can
"see" to select parts and then build an automobile assembly
from them is quite another thing.

What is needed is an influx of AI researchers and experts,
willing to spend the time needed to tackle complex problems,
and the creation of tools that are inexpensive yet fast and
powerful. It is my hope that ESIE will make the road a
little easier.
























Page 15


Bibliography


Dr. Herbert Simon, "AI - The Reality and the Promise"; in
his lecture at "Artificial Intelligence - Opportunities and
Limitations in the 80's"; Miami, Florida; November 7, 1984.

AI Intelligence Report; Sendero; Phoenix, Arizona; April,
1985.

AI Magazine; American Association for Artificial
Intelligence; Menlo Park, CA; Fall 1985.

AI Magazine; American Association for Artificial
Intelligence; Menlo Park, CA; Winter 1985.

Applied Artificial Intelligence Reporter; University of
Miami; Miami, Florida; October 1984.

Artificial Intelligence in Canada: A Review; by Gordon
McCalla and Nick Cercone; AI Magazine; American Association
for Artificial Intelligence; Menlo Park, CA; Winter 1985.

Executive Briefing Artificial Intelligence; Longman Crown;
Reston, Virginia; 1984.

The First Conference on Artificial Intelligence Applications;
Sponsored by the IEEE Computer Society; Denver; December
5-7, 1984.

Intelligence, Artificial and Otherwise; by William M. Chance;
Campus Report; Stanford University; Stanford, CA; April 27,
1984.

Physical Object Representation and Generalization: A Survey
of Programs for Semantic-Based Natural Language Processing;
by Kenneth Wasserman; AI Magazine; American Association for
Artificial Intelligence; Menlo Park, CA; Winter 1985.

Proceedings: AI - Opportunities and Limitations in the 80's;
ICS Research Institute; University of Miami; Miami, Florida;
November 7, 1984.

Proceedings: IEEE Workshop on Principles of Knowledge-based
Systems; Sponsored by the IEEE Computer Society; Denver;
December 3-4, 1984.

Worldwide Artificial Intelligence and Computer Science; by
Dr. Jacob F. Blackburn; Proceedings: AI - Opportunities and
Limitations in the 80's; ICS Research Institute; University
of Miami; Miami, Florida; November 7, 1984.


