Neural network simulator. Supports two different learning models -- Back-prop and Kohonen.
File NEUROSIM.ZIP from The Programmer's Corner, in category Miscellaneous Language Source Code.
File Name       File Size   Zip Size   Zip Type
B_NET               11994       1161   deflated
B_PARA.DBF            613        208   deflated
K_NET               11674      10397   deflated
K_PARA.DBF            613        202   deflated
MAKEFUNC.ZIP        40211      34629   deflated
NEUROSIM.DOC        30809       9647   deflated
NEUROSIM.EXE       111142      46962   deflated
NS.BAT                 10         10   stored
NS_BEP.BAT             23         23   stored
NS_KOH.BAT             23         23   stored


Contents of the NEUROSIM.DOC file


1 Welcome to NeuroSim! Version 1.1 February 21, 1989

NeuroSim v1.1 implements both the Back Error Propagation (BEP) and Kohonen
learning paradigms in three simulation modes within Neural Network processing
element (PE) text editors. Research for BEP and Kohonen is ongoing and
promising. NeuroSim was designed so that I could learn about BEP and Kohonen
learning while providing a simple and intuitive guide to avenues in Neural
Network simulation. It may be useful for university and college students
learning Neural Net models. NeuroSim is simply a Neural Network learning
simulation and this version is not intended as a tutorial. Wasserman's "Neural
Computing", listed in the reference section, is an excellent tutorial on the
BEP and Kohonen learning paradigms.

Hardware requirements: NeuroSim works with Mono/CGA/EGA/VGA monitors and
                       does not require an '87 math chip, although one
                       is highly recommended.
Memory requirement   : 512K.
Software requirement : None.

Note: Before reading the operations listing, you may consider executing
NeuroSim to get a 'feel' for the interface; the explanations below should
then be less necessary. Type "NS_BEP" or "NS_KOH" at the DOS prompt (without
the double quotes) to enter two simple demonstrations.

2 Operation of NeuroSim

2.1 Terms used in this document

* NS is short for NeuroSim.

* BEP refers to "Back Error Propagation".

* PE refers to "Processing Element", a computer simulation of a real neuron.

* IN_VECTOR refers to the simultaneous set of floating point values present
in the left-most column of the NS Editor.

* OUT_VECTOR refers to the simultaneous set of floating point values present
in the right-most column of the NS Editor.

* IO refers to an input or output node.

* VALUE refers to a floating point value for an IO node.

* WEIGHT refers to a floating point value for a PE weight. Note that a
single PE on the screen, when not highlighted, has an unseen array of WEIGHTs.

* ID refers to "identification letter or number" of a BEP PE or IO node.

* FIELD refers to a VALUE, WEIGHT, or ID of a BEP PE or IO node.


2.2 Global NeuroSim Keys

2.2.1 Help

Press F1 to show a simple introduction. Help will be context sensitive in
the next version.

2.2.2 Exit NeuroSim

Press ALT_X to exit NeuroSim from any point. The Neural Network is not
saved in this version.

2.3 Main Menu

2.3.1 File Menu

2.3.1.1 Load Net

Filenames for load and save may have any or no extension. The Neural
Net file must be a valid BEP or Kohonen file. The learning model type
is automatically identified and updated.

2.3.1.2 Save Net

Filenames for load and save may have any or no extension. The Neural
Net file saved may be from an inactive model. The learning model type
is automatically updated.

2.3.1.3 Load Dbf

A dBase III+ equivalent format file may be used. Only the *.dbf
file is read. All other related files (e.g. *.dbt, *.idx, ..) are
ignored. The numerical fields are properly interpreted in this
version. Logical fields are accepted but untested in this version.
All other field types are ignored.

2.3.1.4 Operating System

Allows full exit into the DOS environment while leaving NeuroSim
resident in memory. Type "exit" without quotes to return to NeuroSim.

2.3.1.5 Quit

Exit NeuroSim. No automatic Neural Net save is performed in this
version. Pressing ALT_X is equivalent to Quit.

2.3.2 BEP Menu

2.3.2.1 Parameters

2.3.2.1.1 Sigmoid-Eta-Batch

2.3.2.1.1.1 Sigmoid

Enter the ESIGMOID value to be used in determining the
steepness (the 'squeeze') of the Sigmoid curve. A value of 1.0 is
standard. The function is:

    Sigmoid(x) = 1.0/(1.0 + exp(-ESIGMOID*x))
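
As a concrete reading of the formula, here is a minimal C sketch; the
function name and the use of double are my assumptions, not NeuroSim's
internal code:

    #include <math.h>

    double ESIGMOID = 1.0;     /* 1.0 is the standard 'squeeze' value */

    /* Squash any x into the range (0,1). */
    double sigmoid(double x)
    {
        return 1.0 / (1.0 + exp(-ESIGMOID * x));
    }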

2.3.2.1.1.2 Eta

Eta is a weight adjustment value used to determine the rate of
BEP convergence. The domain of Eta is:
0.1 < Eta << 1.0
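
For orientation, the textbook BEP weight update that Eta scales (see
Wasserman) is sketched below; NeuroSim's internal form is not documented
here, so the names 'delta' and 'input' are assumptions:

    /* delta is the back-propagated error for the PE and input[w] is
       the value feeding weight w (a sketch, not NeuroSim's code). */
    weight[w] = weight[w] + Eta * delta * input[w];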

2.3.2.1.1.3 Cycle Batch

The Cycle Batch determines the number of complete BEP learning
cycles to perform with any given IO vectors.

2.3.2.1.1.4 dBase Batch

The dBase Batch is the total number of times a database is to be
scanned from the beginning record to the last. In simpler
terms it can be construed as the number of "database file runs".

2.3.2.1.1.5 Random Ratio

The range is [0,1]. I felt a degree of control over the applied
randomness to PEs was essential in preserving most of a learned
BEP Neural Network while allowing a 'slight perturbation' (e.g.
Rnd Ration = 0.05) to all PE weight vectors. Set Rnd Ratio to
0.99998 for a complete randomization of all PEs. The internal
expression used is:


PE[row][col].weight[w]=(1.0-Rnd_Ratio)*Pe[row][col].weight[w]
+Rnd_Ratio*frandom();

BEP is especially prone 'get them valley blues', where the
WEIGHTS become locked in a local energy minima.
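
Applied over the whole network, the blend amounts to the loop sketched
below; ROWS, COLS, N_WEIGHTS, and frandom() returning a uniform random
value are assumed names, not NeuroSim's verified internals:

    /* Rnd_Ratio = 0.0 leaves the net untouched; 0.99998 replaces
       it with essentially pure noise (a sketch). */
    for (row = 0; row < ROWS; row++)
        for (col = 0; col < COLS; col++)
            for (w = 0; w < N_WEIGHTS; w++)
                PE[row][col].weight[w] =
                      (1.0 - Rnd_Ratio) * PE[row][col].weight[w]
                    + Rnd_Ratio * frandom();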

2.3.2.1.2 Sigmoid Range

The Sigmoid function range may be set to any of the three ranges
below:

    [0,1], [-1,1], [-0.5,0.5]

Statistically (Wasserman), a negative-to-positive range provides a
30 to 50 percent faster BEP convergence rate than the positive-only
range, [0,1].
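
One plausible way to realize the three ranges is to shift and scale the
basic [0,1] sigmoid, as sketched below; this is an assumption about the
mechanism, not NeuroSim's verified code:

    /* Map the basic sigmoid() sketched earlier onto [lo,hi],
       e.g. lo = -1.0, hi = 1.0 or lo = -0.5, hi = 0.5. */
    double ranged_sigmoid(double x, double lo, double hi)
    {
        return lo + (hi - lo) * sigmoid(x);
    }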

2.3.2.2 Automatic Layer Normalization

This function allows each layer to be automatically normalized during
simulation. It is an experimental function.

2.3.2.3 Randomize All

Randomize all non-masked VALUES and WEIGHTS.

2.3.2.4 Normalize All

Normalize all non-masked VALUES and WEIGHTS with their respective
columns.

2.3.2.5 Alphabet All

Set all non-masked IDs to letters.

2.3.2.6 Number All

Set all non-masked IDs to numbers from 0 to 22.

2.3.2.7 Do BEP

Enter the BEP environment.

2.3.2.8 Standard SNhbE

Setup a standard Neural Network format.

2.3.3 Kohonen Menu

2.3.3.1 Parameters

2.3.3.1.1 Max Rows and Cols

The minimum accepted value is 1. The maximum values are 20 and 50
respectively. Unless your PC has a math-coprocessor, expanding to
the 1000 (20*50) PE matrix can be prohibitively slow.

2.3.3.1.2 Cycle and Dbase Batch

The minimum accepted value is 1. The maximum values are 999 and
9999 respectively. Cycle is the number of times Neighborhood
weight adjustments will be made for a winning PE. Dbase Batch is
the number of times a database will be applied to a Kohonen Neural
Network.

2.3.3.1.3 Random Ratio

The range is [0,1]. I felt a degree of control over the applied
randomness to PEs was essential in preserving most of a learned
Kohonen Neural Network while allowing a 'slight perturbation' (e.g.
Rnd Ration = 0.05) to all PE weight vectors. Set Rnd Ratio to
0.99998 for a complete randomization of all PEs. The internal
expression used is:

PE[row][col].weight[w]=(1.0-Rnd_Ratio)*Pe[row][col].weight[w]
+Rnd_Ratio*frandom();

2.3.3.1.4 Alpha

The range is [0,1]. Alpha is an implemented feature which may be
useful in 'smoothing' the Kohonen Neural Network response from
database record to record. It essentially provides exponential
averaging by allowing PE dot products to partially preserve their
previous states. It is still experimental and your feedback is
appreciated. The internal expression used is:

    New Dot_Product = Computed Dot_Product + Alpha*(Prev Dot_Product)
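
In code, the smoothing amounts to carrying each PE's previous dot product
forward into the next record, roughly as below; the variable names are
assumptions:

    /* Alpha = 0.0 disables smoothing entirely (a sketch). */
    new_dot  = computed_dot + Alpha * prev_dot;
    prev_dot = new_dot;      /* preserved for the next record */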


2.3.3.2 Randomize and Normalize All

All PEs are randomized according to the Rnd Ratio value set.

2.3.3.3 Auto PE Norm

Normalize each PE for every change in its weights.

2.3.3.4 Do Kohonen

Enter the Kohonen environment.

2.3.3.5 Neighborhood

The Neighborhood is comprised of concentric squares. Values entered
dictate the degree of conformity of vectors for each concentric square
about a winning PE. Generally the values are positive, decreasing
outward to a stable value close to or at zero after dipping slightly
into a negative region. Beware of negative values shifting
surrounding vectors away from the winner: they may significantly
increase in magnitude, hence the Auto PE Normalization menu option.

2.4 BEP Operations

2.4.1 BEP Editor

2.4.1.1 Global Editor Key(s)

2.4.1.1.1 Return to Main Menu

Press ESC to return to main menu from any point in the Editor.

2.4.1.2 Screen Editing

2.4.1.2.1 Masking and UnMasking PEs (F3 and F4)

Press F3 to mask a FIELD. Logically adjacent FIELDS are also
masked. Group masking removes any ambiguity as to what values are
being multiplied and allows a WYSIWYG screen interface. Masked
FIELDS are not altered. Press F4 to unmask a FIELD. Group
unmasking also occurs.

2.4.1.2.2 Down Masking PEs (F5)

Press F5 to mask all FIELDS at and below the current FIELD
position. F5 is equivalent to many F3s applied downward.

2.4.1.2.3 PE Value and Weight Settings

Simply enter floating point values into the VALUE and WEIGHT
FIELDs. Non-numeric entries will be interpreted as 0.0.

2.4.1.2.4 PE ID Settings

Letters and/or numbers may be chosen for the two character
identification of a PE. The IDs may be automatically set to alpha
or numeric characters via the local editor menus. If a dBase file
is selected beforehand, the IDs serve to identify particular
fields in that database, beginning with the first field, 1. IDs need
not be unique.

2.4.1.2.5 Scanning various dBase records

Press Control-Left or Control-Right arrows for viewing the dBase
records. Non-numeric and non-logical fields are automatically
group masked.

2.4.1.2.6 Randomizing and Normalizing a column with Keystroke

Press ALT_R or ALT_N to automatically randomize or normalize a
column without entering the BEP Editor Menu. The BEP Random Ratio
value is used to compute the degree of randomness applied to PEs.

2.4.1.3 BEP Editor Menu

2.4.1.3.1 Randomizing IOs and PEs

All values or weights for any given column may be randomized with
values ranging from -1 to +1. Only the non-masked values are
randomized.

2.4.1.3.2 Normalizing IOs and PEs

All values or weights for any given column may be normalized with
values ranging from -1 to +1. Only the non-masked values are
normalized.

2.4.1.3.3 Alphabetic and Numeric IDs

All IDs for any given column may be assigned alphabetic or numeric
identifications.

2.4.1.3.4 Training IN_VECTOR

A dBase III+ format file is requested and accepted only if it is a
valid dBase file. Only floating point and logical fields are
interpreted internally; however, logical-field BEP simulations are not
fully supported in this version. Be sure to set the IO IDs for
dBase field numbers beginning with field 0.

2.4.1.3.5 Disabling and Enabling Layers

The four PE layers may be disabled only from right to left and
enabled from left to right. The ability to disable a layer is
quite necessary for experimenting with various Neural Net
configurations. Note that group masking is automatic.

2.4.2 BEP Simulation

2.4.2.1 Connection of PEs

Full connections are made between output vectors and input weights,
with the exception of the last active layer to the target vector
OUT_VECTOR.

2.4.2.2 Trace Learning (F7)

Press F7 to step through the layers, to and fro. You can see the
values produced by each layer in the left to right direction. Upon
return the new weight values are displayed. Note the decrease in
weight changes from right to left.

2.4.2.3 Cycle Learning (F8)

Press F8 to cycle through complete weight changes while observing the
convergence of the last active layer to the OUT_VECTOR. The number of
cycles is entered in the Main Menu.

2.4.2.4 Batch Learning (F9)

Press F9 to batch through the entire dBase file. The number of batch
passes is entered in the Main Menu. Note: for every presentation of
a record value, BEP is processed for the entered number of cycles.
This is a three-loop process:
Batch Number( Record Number( Cycles ) ).
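
The nesting can be pictured as the three loops sketched below; the
counter and function names are assumptions, not NeuroSim's internals:

    for (b = 0; b < dbase_batch; b++)          /* dBase Batch        */
        for (r = 0; r < record_count; r++) {   /* each dBase record  */
            load_record(r);                    /* set the IO vectors */
            for (c = 0; c < cycle_batch; c++)  /* Cycle Batch        */
                bep_learn_once();              /* one BEP cycle      */
        }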

2.4.2.5 Running the DEMO

To enter NeuroSim into a BEP demonstration, execute the NS_BEP.BAT
file at the DOS prompt. A pre-learned file B_NET is loaded along with
a vector-filtered file B_PARA.DBF, described in the dBase interface
section.

2.4.2.6 Lessons to be learned

BEP learning may be fraught with peril if not approached properly.
Please keep in mind that NeuroSim v1.1 is designed to help you learn
about popular Neural Network learning only. From here on your
imagination is your most valuable tool. There are a number of
variables to consider when setting up a BEP Neural Net. A few may be:

* What am I trying to make BEP learn?
* What shall form the input and supervising output vectors?
* Can the IO vectors be expressed in floating point notation?
* Will normalization of the IO vectors destroy the information
content?
* If any IO vectors are less than 0.1 or greater than 0.9, BEP may
spin a few machine cycles attempting to perfect its convergence to
non-attainable values, while side-tracking properly converged
mappings. A visible penalty for improperly presented OUT_VECTORs is
large WEIGHT values.
* What is the nature of the IO vectors (e.g.: smooth, erratic,
etc..)?
* How many PEs should each layer be attributed?
* How many PE layers should be active?
* What values of ESIGMOID are best?
* What values of Eta are best?
* What Cycle Batch count is sufficient?
* What dBase Batch count is sufficient?
* Lastly, install a math-coprocessor chip into your PC if possible.

2.5 Kohonen Operations

2.5.1 Kohonen Editor

2.5.1.1 Global Editor Keys

2.5.1.1.1 Do Response Vector (F2)

Press F2 to see the new winning vector for any adjustments made in
the FIELDs.

2.5.1.1.2 See Field Names (F6)

Press F6 to see the active database field names.

2.5.1.1.3 Trace Learn (F7)

Press F7 to apply a neighborhood weight adjustment to the winning
vector.

2.5.1.1.4 Cycle Learn (F8)

Press F8 to apply a cyclic neighborhood weight adjustment to the
winning vector. The Cycle number is adjusted in the main menu
under Kohonen.

2.5.1.1.5 dBase Batch Learn (F9)

Press F9 to apply a cyclic neighborhood weight adjustment to the
winning vector for every record within the defined database, and to
repeat this the Batch number of times. This is a three-loop process:
Batch Number( Record Number( Cycles ) ). The dBase Batch number is
adjusted in the main menu under Kohonen.

2.5.1.1.6 Scanning various dBase records (CTRL-LEFT & RIGHT ARROW)

Press Control-Left or Control-Right arrows for viewing the dBase
records. Non-numeric and non-logical fields are automatically
group masked.

2.5.1.1.7 Randomizing and Normalizing with one keystroke

Press ALT_R or ALT_N to automatically randomize or normalize the
IN VECTOR and WEIGHT columns without entering the Main Menu. The
degree of randomization may be set with the Rand Ratio parameter
under Kohonen in the Main Menu.

2.5.1.2 Kohonen Vector Box Keys

2.5.1.2.1 Return to Main Menu (ESC)

Press ESC to return to main menu from any point in the Editor.

2.5.1.2.2 Edit VALUES and WEIGHTS (F10)

Press F10 to enter the VALUES and WEIGHTS editor.

2.5.1.2.3 Imprint IN VECTOR (SPACEBAR)

Press SPACEBAR to assign the IN VECTOR to the weights of the current
PE (indicated by the star position '*'). Although this feature may be
construed as "cheating", I found it very useful in noting the
behavior of passing Learning Generations by setting a likely path
topology. A Learning Generation is my idea of one complete dBase
learning pass.

2.5.1.3 VALUES and WEIGHTS Editing

2.5.1.3.1 Masking and UnMasking VALUES and WEIGHTS (F3 and F4)

Press F3 to mask a FIELD. Logically adjacent FIELDS are also
masked. Group masking removes any ambiguity as to what values are
being multiplied and allows a WYSIWYG screen interface. Masked
FIELDS are not altered. Press F4 to unmask a FIELD. Group
unmasking also occurs.

2.5.1.3.2 Down Masking PEs (F5)

Press F5 to mask all FIELDS at and below the current FIELD
position. F5 is equivalent to many F3s applied downward.

2.5.1.3.3 Value and Weight Settings

Simply enter floating point values into the VALUE and WEIGHT
FIELDs. Non-numeric entries will be interpreted as 0.0.

2.5.1.3.4 IN VECTOR ID Settings

Positive non-zero numbers may be chosen for the two character
identification. If a dBase file is active, the IDs serve to
identify particular fields in that database beginning with the
first field 1. IDs need not be unique. Non-numeric and
non-logical fields are automatically group masked.

2.5.2 Kohonen Simulation

2.5.2.1 Kohonen Vector Box Topology

The box or matrix presented has a toroid topology. The leftmost and
rightmost columns are adjacent for simulation purposes, as are the
top and bottom rows. This may initially appear to be an unusual quirk
in NeuroSim. The toroid connectivity was adopted simply for the sake
of neighborhood continuity, greater PE density, and hence faster
vector convergence for any given box size. It is also mathematically
sound to impart every PE within the Kohonen Vector Box with the same
neighborhood space.
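
Toroidal adjacency can be implemented with modular index arithmetic, as
in the sketch below; the names are assumptions, not NeuroSim's internals:

    /* Wrap a possibly negative row or column index onto the torus;
       n is ROWS or COLS. */
    int wrap(int i, int n)
    {
        return ((i % n) + n) % n;
    }
    /* The neighbor of PE (r,c) at offset (dr,dc) is then:
       PE[wrap(r + dr, ROWS)][wrap(c + dc, COLS)]             */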

2.5.2.2 Neighborhoods

Neighborhood zero is simply the PE itself. Neighborhood one is
comprised of the eight PEs immediately adjacent to the PE in reference.
Neighborhoods two on up form concentric squares centered upon the PE
in reference. A 'Neighborhood' refers to the complete set of
concentric squares, including the referenced PE, and is made visible
upon pressing F7 for trace.
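
A PE's concentric square number is its Chebyshev distance from the
winner, measured the short way around the torus. A sketch, with assumed
names:

    /* Shortest distance between indices a and b on a ring of size n. */
    int torus_dist(int a, int b, int n)
    {
        int d = (a > b ? a - b : b - a) % n;
        return d < n - d ? d : n - d;
    }

    /* Ring number of PE (r,c) about winner (wr,wc): ring 0 is the
       winner itself, ring 1 its eight adjacent PEs, and so on. */
    int ring(int r, int c, int wr, int wc)
    {
        int dr = torus_dist(r, wr, ROWS);
        int dc = torus_dist(c, wc, COLS);
        return dr > dc ? dr : dc;    /* Chebyshev distance */
    }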

2.5.2.3 Running the DEMO

To enter NeuroSim into a Kohonen demonstration, execute the NS_KOH.BAT
file at the DOS prompt. A 'fixed' file K_NET is loaded along with a
vector-filtered file K_PARA.DBF, described in the dBase interface
section. K_NET was simply randomized, normalized, then imprinted in a
square pattern with all eleven non-filtered vectors of K_PARA.DBF.
Press F9 to see the vector conglomerations.

2.5.2.4 Lessons to be learned

Kohonen learning is elegantly simple in concept and implementation.
Once you have experimented with Kohonen learning, though, you may ask
yourself "so what do I do with it?". The most feasible approach is to
use the database generator to construct normalized databases for your
own applications. Please keep in mind that NeuroSim v1.1 is designed
to help you learn about popular Neural Network learning only. From
here on your imagination is your most valuable tool. There are a
number of variables to consider when setting up a Kohonen Neural Net.
A few may be:

* What am I trying to make the Kohonen model learn?
* What shall form the input vector?
* Can the input vector be expressed in floating point notation?
* Will normalization of the input vector destroy the information
content?
* What is the nature of the input vector (e.g.: smooth, erratic,
etc..)?
* How many PEs should the Kohonen Vector Box be attributed?
* What size for maximum rows and columns is proper?
* Reducing the Neighborhood size may improve simulation speed.
* Beware of negative Neighborhood values shifting surrounding vectors
away from the winner: They may significantly increase in magnitude,
hence the Auto PE Normalization menu option.
* What Cycle Batch count is sufficient?
* What dBase Batch count is sufficient?
* Lastly, install a math-coprocessor chip into your PC if possible.

3 Database dBase III+ file format generation

Both dBase III+ and FoxBase structure formats will work properly. Simply
create the desired structure and save it empty. The files described below
serve as a guide to IO vector generation. The file MAKEPARA.C is public
domain.

3.1 Files to compile

Files MAKEPARA.C, DBC.LIB, and DBF.H are included as an aid in generating
database files B_PARA.DBF and K_PARA.DBF. MAKEPARA.C is a Turbo C file and
DBC.LIB is a large model library. Be sure to set your Turbo C Compile
option within the integrated environment for Large Model. It may have to be
tweaked for other compilers.

3.2 Functions to Test

The function created for K_PARA.DBF is a simple parabola, y = x*x.

The function created for B_PARA.DBF is a parabola with the range mapped from
[0,1] to [0.3,0.7], accounting for the value restrictions in BEP simulations
(<0.1 and >0.9). The general conversion expression is:

    y1 = Delta + (1 - 2*Delta)*y0, where Delta = 0.3 for this example.

For example,

    y=0.0 is mapped to 0.3 + 0.4*0.0 = 0.30
    y=0.3 is mapped to 0.3 + 0.4*0.3 = 0.42
    y=0.5 is mapped to 0.3 + 0.4*0.5 = 0.50
    y=0.7 is mapped to 0.3 + 0.4*0.7 = 0.58
    y=1.0 is mapped to 0.3 + 0.4*1.0 = 0.70
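
The conversion is a one-line affine map, sketched here with Delta as a
parameter (the function name is mine, not MAKEPARA.C's):

    /* Map y0 in [0,1] onto [Delta, 1-Delta]; with Delta = 0.3 this
       reproduces the table above, e.g. map_range(0.3, 0.3) == 0.42. */
    double map_range(double y0, double Delta)
    {
        return Delta + (1.0 - 2.0 * Delta) * y0;
    }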

3.3 IO vectors to create

Two additional values were necessary to create non-scalar IO vectors. To
retain vector unity, the values chosen were given by SLACK = sqrt(1 - z*z).
To see this, load either demo, view the input and output vectors with the
CONTROL-LEFT and CONTROL-RIGHT arrows, and note how both IO vectors have
unit length. Press F6 to 'See' the database field names.
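
The slack value simply pads a scalar z out to a unit-length two-component
vector, since z*z + SLACK*SLACK = 1. A sketch (requires math.h):

    /* Second component that makes (z, slack) a unit vector,
       for |z| <= 1. */
    double slack(double z)
    {
        return sqrt(1.0 - z * z);
    }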

4 NeuroSim Shortcomings

NeuroSim forms a basis upon which an intrigued scientist (i.e. anyone who is
curious) can observe the effects of popular Neural Network constructs. The
Processing Element models simulated here by no means encapsulate the full
behavior of a real neuron. For example, the action potential generated by the
axon hillock has a wave-form signature lasting 0.2 to 0.5 msec, and during a
portion of this state the neuron is temporarily prevented from firing again.
Also, temporary firing on a synapse has a cumulative, exponentially decaying
effect. These behaviors are not in NeuroSim v1.1 and are applicable only to
real-time or continuous simulation. Back Error Propagation and Kohonen
modeling are mathematical contrivances. Research is still under way to
simulate the behavior of real neurons while learning properly.

As you may have already gleaned, I am a student of my own creation, NeuroSim.
If you would like some interaction feel free to call or write to me concerning
suggestions or questions.

5 Several Applications

* approving loan applications
* student admission into universities
* trouble-shooting industrial circuits
* robot learning
* dynamic software interaction
* pattern recognition (sound, vision)
* computer music composition

6 Other Software Bytes Products and future versions

6.1 NeuroSim

NeuroSim v1.1 is a stepping stone into two AI Neural Network avenues. In
future versions I hope to provide better text color in the windows and
menus. My sole EGA monitor fried a resistor (really!) and is now
resting in peace, hence the mono-like colors. NeuroSim will also be
combined with the ET package released in mid-87, and automatic BEP IO
vector filtering will be added. Several convergence accelerating
techniques will be employed, such as statistical preparation of weights
and automatic parameter (ESIGMOID and Eta) adjustment. A WEIGHTs printing
option will also be available.

Neural Nets are a technology in search of applications. NeuroSim will later
be packaged with several applications. One life-long goal I have is to
create a computer music composition program with neural nets and a MIDI
interface - you may see one yet!

6.2 ET

ET is an artificial intelligence neural network (NN) simulation program.
The program name, ET, is my naming convention for an artificial neuron and
derives from the Greek symbol sigma for the summation of input weights (the
keyboard letter 'E' suffices for sigma) and capital 'T' for threshold
activation. ET has an intuitive graphics and mouse interface. Neuron
threshold and weight inputs are manually set in this simulation to maintain
simplicity and encourage a fundamental understanding of neuron activation.
Both Perceptron and Sigmoid summing simulation models are provided. The
simulations are intended to be theoretical. ET has received a favorable
response from developers (mostly programmers) in Neural Networks.

Hardware and Software requirements for ET are: EGA and a Microsoft or
Logitech mouse. Note: the mouse driver must be loaded before executing ET!

7 Credits and Reference

A hearty thanks to Mark Thomas Clifton and Hara Ra who early on provided an
impetus to the creation of NeuroSim.

The dBase III+ to Turbo C converting files are the result of excellent public
domain work done by Mark Sadler. I found his work on a BBS and unfortunately,
he listed no forwarding address or phone number. I would normally suggest you
send him a check if you were interested in his conversion utilities.

Here is a list of excellent reference books ascending from easy to hard. All
may be found in university and technical bookstores.

Philip D. Wasserman, "Neural Computing: Theory and Practice", Van Nostrand
Reinhold, 1989.

Igor Aleksander, "Neural Computing Architectures: The Design of Brain-Like
Machines", The MIT Press, 1989.

James A. Anderson and Edward Rosenfeld, "Neurocomputing: Foundations of
Research", The MIT Press, 1989.

8 Restrictions

NeuroSim.Exe with Save enabled may not be sold for distribution or placed on
any BBS without written permission from its author. The demo version of
NeuroSim has Save disabled and may be found on BBSs or in public domain
houses. The file MAKEPARA.C is public domain and may be released on a BBS.

After having attended the January '90 IJCNN AI Neural Network Conference in
Washington DC with the support of Steve Ward (president of Ward Systems) and
Tom Schwartz (president of The Schwartz Associates), I noted a gap in
university funding for Neural Network research. Software Bytes intends to
close this gap. The enabled version of NeuroSim may be distributed freely
only within universities and colleges. I encourage you to register to
receive the latest version of NeuroSim. Registered users may call for
support. NeuroSim as it stands took me two months to complete; by the time
you read this it may have been greatly enhanced.

9 Distribution and Order

NeuroSim costs $20 and was designed for your personal learning enjoyment.
NeuroSim development was a personal adventure primarily targeted for folks who
really want to learn about neural networks but can't afford hundreds for a
corporate product (e.g. students). An order form and registration file is
provided separately in this NeuroSim package. Updates are frequent and it may
be in your best interest to order and remain on the NeuroSim mailing list.
If you have questions and/or need software support, contact:

Software Bytes
P.O. Box 9283
El Paso, TX 79983

(915) 779-2352

10 Closing

It is a noteworthy event when in earthkind, not simply mankind, we have
learned about learning.



Enjoy.

- Raul Aguilar



