Dec 12 2017
Communications tutorial.
File COMMTUTR.ZIP from The Programmer’s Corner in
Category Tutorials + Patches
File Name File Size Zip Size Zip Type
COMMTUTR.TXT 62941 22560 deflated


Contents of the COMMTUTR.TXT file


The following text file was captured by me as a result of my call
to Jim Davis' Retreat (713 497-2306) in Houston, Texas. I went
to his board to download GTCTL and GTLOG - two utilities used
with GT PowerComm. Jim came on the line to assist as I
experienced transmission problems. I took the opportunity to ask
questions about GT PowerComm and PC communications. Jim's
response is being presented here as an aid to other 'neophytes'
to PC communications.

<< Raymond Wood >>

... In the vernacular of the communications industry, there are a few
concepts that need to be understood before understanding 'HOW' it is
accomplished. For example, the word BAUD. This essentially
means 'bits per second'. In fact, it means something a little
different than that, but for openers, let's say that's what it
means.

Now, whenever two machines are going to try to communicate with
each other a couple of things have to be done by both. They must
both be set to send and receive at the same speed, for
example. The most often used speed, today, is 1200 baud.
That means 1200 bits per second, as I said before. Well, most
users have no idea what bits are involved in a file transfer or a
message transfer. Let's look at another standard word: BYTE.
There are 8 bits of information contained in a byte. That is, a
byte is merely a set of 8 bits. Within a set of 8 bits there are
256 permutations available. From all zeroes to all ones. Each
letter in the alphabet and each digit and each other special
character is represented by a predetermined set pattern of
those 8 bits. A capital 'J' has a different pattern than a lower
case 'j', for example. Given that that is true, it is easy to see
that no more than 128 of the total possible patterns would be
necessary to represent any text. Thus, we have another 128 that
may be used for 'special purposes'. What, for example? I'll get
to that.

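To make those patterns concrete, here is a small Python sketch (my
illustration, not part of the original transcript) that prints the 8-bit
pattern behind a few characters; note that 'J' and 'j' differ in exactly
one bit:

```python
# Print the 8-bit pattern behind a few ASCII characters.
for ch in "Jj5":
    print(ch, format(ord(ch), "08b"))
# J 01001010
# j 01101010
# 5 00110101
```
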
The sending of bits (on or off, high or low, in other words
binary information) is, by definition, a binary process. That is,
the computers need only recognize one of two states. The
telephone, on the other hand, carries information that is other
than binary. It can faithfully represent different tones, pitch,
and volume. This is called analog rather than binary. The
almost sole purpose of a modem is to translate binary signals
into analog and vice versa. When you are going to send a set of
bits across a telephone you will have to convert those binary
'states' into some form of sound (which is, after all, what the
telephone is designed to best carry). Modulating a signal from
binary to analog is the 'Mo' in Modem.
Demodulating an analog signal back into binary is the reverse and
is the 'Dem' in Modem.

If we want the transmission to be highly reliable then we must do
more than simply send the binary information (modulated). We have
all heard 'noise' on a telephone line and without doing more than
demodulating into bits, the receiver will no doubt have a
virtually impossible time of being able to tell what sounds are
bits or just plain noise. In some applications, we don't really
care all that much. Examples include the transmission of plain
text files. Recall that all that was necessary to send any
letter, many special symbols and any digit was a capability that
required no more than 128 different combinations of bits. 7 bits
are sufficient to represent 128 permutations. That is, if a byte
were only 7 bits long then it could contain as many as 128
different sets of bits being on or off. However, a byte
is 8 bits long by definition. So, in what is called ASCII
(American Standard Code for Information Interchange)
transmissions we can use the first 7 of those bits to represent
data and the 8th bit to represent some form of insurance or
integrity check that the first 7 were received as they were sent.

This is called using 'PARITY'. You can establish a convention
between the sender and the receiver that every byte WILL have an
even number of bits (or odd) and use the 8th bit to do so at the
sending end. If the receiving end ever gets a byte that has odd
parity then it knows that it received the byte in error (some bit
or bits were either added or lost). That is all there is to
parity checking in an ASCII transmission. Not very good at all,
but sufficient for most text.

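A minimal sketch of that convention, in Python (my illustration,
assuming even parity carried in the 8th bit):

```python
def add_even_parity(value7):
    """Set bit 7 so the whole byte has an even number of 1-bits."""
    ones = bin(value7 & 0x7F).count("1")
    return ((ones & 1) << 7) | (value7 & 0x7F)

def parity_ok(byte):
    """Receiver's check: a clean byte has an even count of 1-bits."""
    return bin(byte & 0xFF).count("1") % 2 == 0

b = add_even_parity(ord("A"))
assert parity_ok(b)             # arrived as it was sent
assert not parity_ok(b ^ 0x10)  # one bit flipped in transit: caught
```

Notice that two flipped bits would slip through unnoticed, which is why
parity is sufficient for text but not much more.
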
Program files or data files or even text files that have been
compressed (ARChived) in some way use all 8 bits in every byte to
represent information. So, we have lost the ability to use
parity as an integrity check vehicle. Instead, in every protocol
other than ASCII we add either one or two full bytes to the end
of a 'block' of bytes. The block is a fixed length (usually 128
bytes). The purpose of those one or two bytes is to contain what
is called a Cyclic Redundancy Check (CRC) character or word.
Like parity, the CRC is constructed at the sending end to create
a pattern of bits that demonstrates that the preceding entire
block of bytes has been received with integrity. The Receiving
end dynamically creates its own CRC from the information received
and compares it to the byte or bytes received at the end of a
block. If it doesn't match then the block must be rebroadcast
(requested by sending the sender a signal that says "Negative
Acknowledge" - NAK). If it was OK then it sends an ACK - meaning
"Acknowledge" - and the next block is sent.

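That exchange can be sketched like this (my illustration in Python; the
single check byte here is a simple sum-of-bytes rather than a true CRC,
but it is used in exactly the same way):

```python
ACK, NAK = b"\x06", b"\x15"    # the usual ASCII control codes

def make_block(data):
    # Sender: append a one-byte check computed from the data.
    return data + bytes([sum(data) & 0xFF])

def receive_block(block):
    # Receiver: recompute the check and compare with the byte received.
    data, check = block[:-1], block[-1]
    return ACK if sum(data) & 0xFF == check else NAK

blk = make_block(b"128 bytes of payload would go here")
assert receive_block(blk) == ACK
corrupted = bytes([blk[0] ^ 0x01]) + blk[1:]
assert receive_block(corrupted) == NAK   # receiver asks for a resend
```
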
Now, let's go back to the idea of baud. At 1200 baud, the modems
are able to send and receive 1,200 bits per second. How many bits
per byte? Yes, 8, but not on a telephone line if you are using
modems! Instead, we bracket a byte by sending what is called a
start bit before the 8 bits of data and ending with what we call
a stop bit (sometimes 2 - at 300 baud). So, every byte requires
10 bits, not 8. Thus, at 1200 baud your maximum possible data
transfer rate is 120 characters (bytes) per second!

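The arithmetic, spelled out (assuming the 10-bits-per-byte framing just
described):

```python
baud = 1200                    # bits per second on the line
bits_per_byte = 1 + 8 + 1      # start bit + 8 data bits + stop bit
print(baud // bits_per_byte)   # 120 characters per second, at best
```
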
OK. Now we know what we have to send and how many bits are
required and that there is something called a response from the
receiver called either an ACK or NAK. So why don't we get 120
bytes per second transfers using 1200 baud modems? Well, we
already saw that for every 128 bytes of data, in most protocols,
we send an additional one or two bytes of CRC. We DO NOT count
the CRC byte(s) as data! Yet it takes time to transmit. Also,
it takes time for most protocols to turnaround and react to the
ACK or NAK. For example, assuming all is well, the sender has a
few hundred blocks to upload to the receiver. After the first
block is sent he, by convention, must wait for the receiver to
analyse the CRC and decide if it is going to respond with the ACK
or a NAK. Then it takes a moment to send that to the sender who,
in turn, has to receive it, verify that it got here properly (was
not just noise) and decide whether to send the next block or to
resend the last one that was improperly received by the receiver.
That takes time. All time used as described above is called
'overhead'. Overhead does not include the transmission of DATA,
only control bits and time. Thus, it is impossible to get to an
effective DATA transmission rate of even 118 characters per
second let alone 120 (CRC, etc). But, we know that the telephone
is capable of carrying sound in both directions simultaneously.
So, why should the sender have to wait for the receiver's ACK or
NAK? This mode of operation is often called 1/2 duplex, by the
way.

The answer, of course, is that it does so only by convention.
Newer protocols do not wait. They assume that a transmission
will be successful and will result in getting an ACK. So they go
immediately to the task of sending the next block. Always
listening, of course, for that ACK or NAK. When it is received
as an ACK all is well and we have gained performance. If not,
the software has to decide which block or blocks have to be
rebroadcast. In order to do that it should be obvious that the
ACK or NAK is not simply a single byte. Rather, it includes a
byte that is called the packet number (0 to 255), and possibly
more information. If an ACK is received the recipient knows
which of a series of blocks(packets) it is referring to.
Similarly it would know with a NAK. Yep, more bits and more
overhead.

Well, then let's see if I can get to a few more contemporary
terms and information more practical to know at this time.

For example, almost nobody uses ASCII transfers any more. Why
should they when they are so poorly controlled and when you
realize that ONLY un-compressed raw text can be sent with it?
Still, a great many first time communications users try to do so.

And, while the transmissions will appear to work, the resulting
files will be garbage, of course. Only 7 of the 8 bits are being
transmitted in each byte! Many comm programs will allow you to
use ASCII even when they should know that the result will be
unsatisfactory. For example, if a filename ends with COM or EXE
then, again by convention, that file is an executable program.
ALL such programs use 8 bits in every byte and could not,
therefore, be transmitted via ASCII. Some comm programs will not
let you try to do something that stupid (only, of course, to a
knowledgeable user).

What are the protocols that currently exist in wide spread usage
across the country? The most frequently seen is called XMODEM.
This protocol is quite reliable (about 96%) and uses blocks of
128 bytes plus one CRC character at the end of every block. It
is because it uses only one CRC character that the reliability is
only 96%.

Another is called XMODEM/CRC. This is exactly the same as XMODEM
but it uses two CRC characters. The result is that the effective
performance is reduced insignificantly (1/130th), but the
reliability is increased to about 99.6%. In any case where you
have a choice between the two you would, of course, opt for
XMODEM/CRC.

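Where that "1/130th" comes from (my arithmetic, assuming 128-byte blocks
carrying 1 versus 2 check bytes on the wire):

```python
xmodem = 128 / 129        # fraction of the wire carrying data (1 check byte)
xmodem_crc = 128 / 130    # the same with 2 check bytes
slowdown = 1 - xmodem_crc / xmodem
print(round(slowdown, 4)) # 0.0077, i.e. about 1/130
```
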
Then, and this is particularly true in environments where one of
the computers involved is either a mini or a mainframe,
there is a protocol which is called Kermit. I believe it uses
128 byte blocks and other overhead such as a 'header block -
block zero' that provides control information. It is also very
reliable (99.6% I believe) but it is SLOW!!! It is used only if
that is the only protocol available.

Then there is what is called YMODEM. This protocol differs from
the earlier ones in that it sends eight 128-byte blocks together as
a 'super block' before it sends the two byte CRC word. As a
result it is the fastest protocol that I have ever seen for micro
computers that use 'dumb' modems (i.e., non-self-correcting ones).
There are two times when one should not use this protocol if
there is a choice: 1) when the line noise is great on the
telephone (for a retransmission of a 'block' that failed involves
1024+2 bytes even if only one bit was gained or lost). That is a
lot of overhead! And 2), in an environment like PC-PURSUIT that
involves long duration hand shaking turnaround delays.

Another protocol is called Telink. Telink uses 128 byte blocks
but has an advantage over the other ones. It results in a file
that is exactly the same size and has the same date and time
stamp on it as the one being sent. Ymodem, for example, adds to
(pads) a block until it is exactly 1024 bytes (the last record)
even if that record only contains a few bytes of data.

GT PowerComm has a unique protocol called 1kTelink. It is the
same as Telink except it uses 1024 byte blocks and is therefore
more efficient. Like YMODEM, 1kTelink should only be used on
clean phone lines for performance, but unlike YMODEM it can be
used on even a short file with efficiency.

In the case of GT, and then only if communicating GT to GT, if
either YMODEM or 1kTelink experiences a set of 6 errors during the
transmission of a single file then it will automatically fall back
to 128 byte blocks to greatly increase the odds that the
transmission can be completed and to greatly increase the
efficiency on what is presumed to be a noisy line!!! Neat!!!

The BEST protocol at this time for use in a PC-PURSUIT environment
is called Wxmodem which stands for 'Windowing Xmodem'. This uses 128
byte blocks but it does not wait between blocks for a response. It is
always listening for those ACKs and NAKs, of course. Extremely high
performance is the result, relative to Xmodem or the other 1/2 duplex
protocols. Wxmodem tries to stay 4 blocks ahead of the receiver at all
times while the receiver tries to get 'caught up'. The difference
between the block being sent and the most recently received ACK or NAK
is called the window (a number between 1 and 4).

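The window bookkeeping can be sketched as follows (my illustration;
block numbering simplified to plain integers):

```python
WINDOW = 4   # WXMODEM tries to stay at most 4 blocks ahead

def may_send(next_block, last_acked):
    """True while the sender is still within the 4-block window."""
    return next_block - last_acked <= WINDOW

assert may_send(5, 1)        # 4 blocks ahead: keep sending
assert not may_send(6, 1)    # 5 ahead: must wait for an ACK first
```
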
Then there are two more odd protocols that have become relatively
visible of late. These are called ZMODEM and Batch-YAM. ZMODEM
was designed for use in a PC-PURSUIT like environment. Like
WXMODEM, the best protocol for use in that environment, ZMODEM
does not wait for ACKs and NAKs. Unlike WXMODEM, ZMODEM is
relatively slow. For one reason, it uses no buffering. Thus,
after every 512 bytes of data it must make another disk access.
Batch-YAM is much like YMODEM except that it allows you to specify a
set of file names (a 'batch' of them). It is slower than YMODEM
except, possibly on PC-PURSUIT.

What must a user know to do a file transfer? What protocol is
available on BOTH ends of the transmission, the file name of the
file on his end and the file name on the other end. That is, if
the receiving end of a transmission already has a file with the
name of the file you want to send to it, naturally you will call
the new file something else. Thus, every comm program allows the
specification of the file name on your end and then the name on
the other end. (It is not just an irritant that you 'already'
typed that in, it is necessary). Having said that I must make an
exception - Telink and 1kTelink. These protocols allow batch
names, like Batch-YAM, but the receiving end and transmitting end
file names are the same.

That's it for now.

Wood: I have a few questions. ok?

Davis: Sure.

Wood: Four to be exact.

1- You mention date/time stamp on one of your protocol
descriptions but did not define its use prior to that. What is
this and what is it used for?

PC-DOS or MS-DOS marks every file with the date and time that
file was created or last modified. So, let's say I want to send
you a copy of my transmission log that was dated 12/31/86 (by
DOS). If I use any protocol other than Telink the resulting file
on your end will be dated with the date and time it was created
(ON YOUR SYSTEM!) Today, now. Telink creates that file and
leaves it on your system with my date and time stamp still
intact.

Wood: When I receive an ARCed file this time/date stamp is in the EXE
module somewhere?

Davis: It is several places in that example. In the directory record on
your disk is the formal residence of the stamp. So, in the case
of an ARC file, it has a date and time stamp. Additionally,
within the ARC file each record, which is merely another way of
saying 'each file within the ARC file', has the date and time
that THAT file had in its directory record BEFORE it had been
ARCed into the ARC file. When you unARC, the resulting file will
not have today's date and time as a stamp but the one recorded
within the ARC file for it.

Wood: Good, I understand perfectly. I can relate it to what we
sometimes do on the mainframe.

2-You mentioned padding with YMODEM. What is this? Does the
receiving end recognize the padding and discard it automatically?

Davis: Let's say the file you want to send is exactly 1025 bytes long.
Each block transmitted by YMODEM contains 1024 bytes of data plus
2 bytes of CRC. It will, therefore, take two blocks to send that
file. The second block will contain only 1 byte of data plus
1023 padded "blanks" - actually End Of File marks. YMODEM sends
1024 bytes every time! The receiver does not automatically
strip those padded bytes. Indeed, it passes them to the
resulting file so that it will always be an even multiple of
1024. Thus, you sent a 1025 byte file and it becomes a 2048 byte
file.

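A sketch of that padding (my illustration; the End Of File mark is
assumed to be the DOS Ctrl-Z character, 0x1A):

```python
import math

BLOCK = 1024
EOF_MARK = 0x1A   # Ctrl-Z, assumed as the End Of File pad byte

def pad_for_ymodem(data):
    """Pad a file out to a whole number of 1024-byte blocks."""
    blocks = max(1, math.ceil(len(data) / BLOCK))
    return data + bytes([EOF_MARK]) * (blocks * BLOCK - len(data))

assert len(pad_for_ymodem(b"x" * 1025)) == 2048   # 1025 bytes -> 2048
```
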
Wood: Ok--3...You came to a conclusion without what I thought was the
necessary support when you said "...thus 512 bytes result in a
disk access with ZMODEM..." I did not follow the conclusion.

Davis: Sure. As we discussed before the tutorial when we talked about
buffers, a buffer is a fixed length (amount) of memory,
sufficient to contain some number of blocks of data. In the case
of ZMODEM, a block is 256 bytes, by the way. If the protocol
used buffers there could be some large multiple of 'blocks' in
memory awaiting transmission. Instead, ZMODEM does not use a
buffer. Thus, it must have in memory only one sector of data at
a time. In the PC world, a sector is 512 bytes, or two blocks of
data as far as ZMODEM is concerned. Again, since that is the
case, after two blocks (512 bytes), ZMODEM must go back to the
disk to get more data to transmit.

Wood: One of the first things we learned in programming school 20+
years ago was that you could do things a lot faster with more
than one buffer. We typically (or the system) use at least two.

Why would ZMODEM not use any? Is there a memory problem?

Davis: I can't speak for the authors of ZMODEM but I will say that it is
typically not a protocol that is written into a program like GT
PowerComm (As is Xmodem or Wxmodem, etc.). Instead, it comes
rather conveniently in the form of an EXE program that can be run
independently of the comm package or by a simple shell out of the
comm package to it. In the latter case, there is no way to know
how much memory might be available in the majority of systems.
The program itself, could, of course, simply find out. But you
will recall that BOTH ends of a transmission are highly dependent
upon compatible software. It might be that the author of ZMODEM
simply took the easy way out. I don't know.

Wood: This leads nicely into my final question which deals with today's
comm packages. When I first bought my PC I did the necessary
research by reading reviews and magazines like Software Digest.
I rejected XTALK and settled on HYPERACCESS. After I started
using it I discovered Shareware. I have come to the conclusion
that there are two classes of products in the Micro world today.
Commercially developed and other. My company uses XTALK. In the
corporate environment you order a comm package and you get what
the corporate gurus decide is best for you.
I like ProComm. I do not like to feel that I was ripped off by
buying HyperAccess. I just feel that I was uninformed at the
time. In this area ProComm seems to reign as King with the
majority of PC users.

4- What are the advantages of GT over ProComm?

Davis: Excellent question. Let me try to deal with it professionally
instead of from the bias I would naturally have for GT PowerComm.

(When I wrote the documentation for GT I twice called it ProComm
- how embarrassing it would have been if I had released it
without an edit).

Let's go back a little in time. Before the era of the PC
virtually all micro computers were 8 bit in design rather than
16. At that time the undisputed King in the area of comm
packages was Crosstalk. It enjoyed an excellent reputation and
was well supported. Further, it was not terribly expensive and
it was one of the only comm packages that supported what was to
become a whole set of protocol transfer methods (it was an XMODEM
protocol). Well, in those days if your comm package didn't work
reliably and you were not sure if it was a hardware problem or a
software problem you simply put up Crosstalk. If it worked the
conclusion was that the problem was software. It was THAT
reliable.

Along came the PC's. Crosstalk was ported to the 16 bit world,
but in a way that made very little progress in terms of adapting
to the capabilities of the PC's. To this very day, I believe it
is impossible to change directories in Crosstalk, though I could
be wrong. In essence, Crosstalk continues to be available and
though it runs reliably in a 16 bit environment it runs like it
was in a CP/M environment, not a DOS one.

Then there was a leading contender from the shareware world
called QMODEM. It enjoyed an excellent following and was
remarkably efficient by comparison to Crosstalk - MUCH faster, in
fact. And, it had a couple of contemporary protocols not
available in Crosstalk. It took off and has been a very
successful product ever since. In my opinion it would still be a
champion product save only for a few 'author' problems.
It is a great program, nonetheless.

About the same time the Hayes Modem manufacturers
introduced SmartComm II as a commercial product and it was being
shipped with many of their modems. By brand identification it
was accepted. This, despite the fact that it is the clumsiest of all the comm
packages I have ever seen. It was, furthermore, not very
efficient by comparison to QMODEM. It has essentially been
unchanged since its introduction. (Sounds like Crosstalk all over
again?)

A new comm package hit the scene called ProComm. In this program
the author paid a great deal of attention to 'image'. He used
imaginative ideas like a whistle that announced opening and
closing of windows, the windows themselves were innovative, etc.
It was nowhere near as efficient as QMODEM, but it captured the
imagination of the users. And, like QMODEM, the price was right
- $0 to try it out, and then if you decided to, you sent them a
small check - but that's shareware.

Procomm has advanced far faster than QMODEM in terms of
incorporating different protocols and the incorporation of what
is called a Host mode, or unattended mode of operation
(autoanswer of modem, etc.) It became King as you call it by
being both innovative and current - but not by being efficient,
though it is quite respectable.

GT PowerComm was only formally announced to the shareware world
on the 21st of last month!!! (2/21/87). It includes 8 protocols, not
including the also present ASCII, of course. At 2400 baud, I
routinely establish DATA transfer rates of 235.5 characters per
second with it, while the best I ever got with Qmodem was about
220 and with Procomm about 218. Actually, I did get a 225 once
with Qmodem, but only once.

So, in terms of performance, nothing has come close to being as
fast as GT PowerComm. But that, as we saw with Procomm, is not
all that the user is looking for. We have incorporated an
extremely rich function called the LOG. Into that log is
recorded all connects, disconnects, messages to the host,
passwords used to gain access, bad passwords tried, and even more
interesting, the names and time to transmit every file that goes
from the GT to or from another computer, and along with that is
the total bytes involved and the name of the protocol used in the
transmission and, finally, manually created notes and messages.
So what, you might ask. I would answer that if you were the Sysop
of a board, or of a Corporate system, you MUST be able to
determine who sent you a file or a message and when. (Yes, date
and time stamps are included in all entries in the log). For
example, what would be your reaction if you found that a program
on your disk was a trojan horse if you could not determine where
it came from? Or, say you created a proforma for your department
and it has been downloaded by 18 different executives before you
discover a major error in it. Wouldn't you want to be able to
determine who has received that file? All those kinds of
questions are automatically answered via GT's log and GTLOG. The
main reason for feeling that there is a substantial difference
between GT and Procomm for the user is in the area of SUPPORT. I
take it that it has occurred to you that I have been talking to
you for more than three hours already? And I don't even know if
you are a registered user of GT. Well, I am only one of two of
us that do exactly the same thing. The author of GT PowerComm, Paul
Meiners, provides 24 hour a day access to his system as I do (as the
author of the companion software). We have provided many new
versions of GT PowerComm over the past year and are about to
provide release 12.10 only two weeks after announcing 12.00 on
the 21st! Why? Because we are constantly enhancing the products
and our users want us to do so. We have several major clients
already including one of the major Oil companies, one of the
major airlines and one of the country's largest school
districts!!! Finally, nobody has a better Host mode than GT
PowerComm!!! I run a BBS using nothing else. That is power and
function! Try it, you'll love it!!

Wood: I can't wait to put the system together! Rest assured that I
will register the program. As an ex-programmer I know what is
involved. I wish the product much luck. Did you say 3 hours?

Davis: I believe so. I don't remember, but I reset the 1 hour time
limit I gave you twice now, possibly three times. By the way, as
a favor to me in exchange for the time, would you mind terribly
ARCing your capture file and sending me a copy. I can make it
available as a tutorial to others. And if you will make it
available to others as well, it is possible that they will come
to know GT PowerComm as well.

Wood: No problem. I will not be able to do this for a couple of days
however. My modem is on the blink and I am waiting for a
replacement. I will upload GT and the Log and CTL files to all
of the bulletin boards that I normally deal with. I have already
uploaded it to the corporate BBS. I do expect to get some
healthy ribbing from the ProComm lovers which is why I asked the
question that I did. For now though I would like to get the Log
and CTL files.

Davis: Thanks for the opportunity to be of help. I too must get to
work. So, I'll take you out of chat mode. Don't forget to
'close' your capture file.

You have 48 minutes left.

Jim Davis' Retreat Voice 713 558-5015
Data 713 497-2306

Following is a second conversational 'chat' between James Davis
and Raymond Wood designed as a follow-up of the first one. It
takes on the form of a tutorial again due to the high number of
requests for same following the first one we released.

D: Shall we start this off with a kind of outline as to where I
think we will go with it? We discussed many fundamentals
involved with communications in the first tutorial and ended up
discussing several of the more popular file transfer protocols.
This session will go farther into the area of file transfer
protocols, technology such as the 9600 bit per second modems and
error correcting modems with MNP or ARQ, and how one goes about
intelligently selecting a protocol given a basic understanding of
their environment. For example, while Ymodem was described as
the 'King of the hill' when it comes to performance, that is not
true if you are using one of the packet switching networks. It
is also not true at 9600 bits per second.

W: You mentioned 9600 and MNP. I thought that there was no
industry standard for 9600 and that it is only practical if the
other end is talking the same language with the same hardware?
Also that MNP was implemented in the hardware of the
modem...where am I wrong ?

D: You're not wrong. GT PowerComm (12.20) now supports 9600
baud. I believe the newest version of Qmodem (3.0) does as well.

Paul Meiners, the author of GT PowerComm, has a USRobotics
HST9600 baud modem and he is using it every day. I, too, have a
USR HST9600 as well as a Microcom MNP modem that I am testing.
There are two quite different error correction methods in use at
this time. MNP (Microcom Networking Protocol) which was
developed by Microcom and ARQ (a general term used by USR to mean
Automatic Retry Request protocols - theirs being specifically
called USR-HST [High Speed Technology]) and these two methods are
totally incompatible. Even the methods used to modulate 9600
baud signals appear to be incompatible. However, we have
successfully connected these two different brands of modems in
'reliability' mode. The USR has the ability to 'fallback' to MNP
at 1200 or 2400 baud where MNP has established a standard. (Of
course, that makes sense for our PCP users).

We have also connected with other USR HST9600 modems and seen
that we have outstanding performance at 9600 baud. (We have
cruised along at about 945 cps during transfers of more than 3
million bytes so far). Further, GT is such an efficient comm
program that we are able to drive these modems at 19,200 bits per
second from the systems while the modem is communicating at 9600
to another modem - for additional performance. It is for this
very reason that we had to implement flow control - so the
transmitting modem does not overrun. I will discuss this in more
detail a little later in this tutorial.

So, while you are correct that there is no standard at 9600 baud,
that does not mean that 9600 baud modems are necessarily
impractical. We are determining to what extent it is a problem.
What concerns me the most is the different modulation methods.
Nevertheless, it will not stop our support of 9600 baud.

Finally, you are right again, MNP (ARQ) is a hardware function -
but it can and should be a transparent one. I note, for example,
that since I began testing these modems I have connected with
several (many) others and, as a result, totally eliminated the
line noise that was present prior to the MNP connection - i.e.,
there appears to be more to MNP than just error free file
transfers. Thus, we must look at it. And, in doing so, we will
test the various non-error checking protocols that are used in
such environments (Ymodem-G, for example). It is as much a
learning curve for us as for the users - we just MUST do it
behind the scenes for credibility's sake.

W: I understand the necessity to stay up with technological
advances affecting your product. What I am not too clear on
is exactly what MNP and ARQ are and why they have come about. Can
you shed some light on this?

D: Since 2400 baud modems are NOT really 2400 'baud' - they are
2400 bits per second, 1200 baud modems - it has been clear that
the limit of reliable communications in terms of speed using the
bandwidth of the existing telephone circuitry has not been
reached. However, it is also clear that as we more densely pack
information within that bandwidth the incidence of errors
increases. The manufacturers investigated, starting with
Microcom, various error detection and recovery methods that were
hardware assisted. That was the birth of MNP (Microcom
Networking Protocol). There has been an evolution in that
technology which results in several 'levels' of MNP available
today. The higher the level, the more function is included. At
any level, MNP merely ensures that the data received by the modem
is what was sent by the sending modem. That is INSUFFICIENT, in
my opinion. The only valid scenario is one in which the
receiving COMPUTER is ensured that it received accurately what
the sending COMPUTER sent. There are cables, ports, circuits,
timings, etc. that MNP DOES NOT CHECK. Thus, it seems that a
combination of software and hardware error detection and
correction methods is necessary.

Almost all file transfer protocols check what I believe is
necessary - computer to computer accuracy. What, then, is the
advantage of MNP? Well, to begin with, it SHOULD be more
efficient. If the software need only be concerned with data
bytes and not CRC and other control bytes, then it should be
faster. Further, the newer levels of MNP are more efficient than
you might have guessed. They strip off the start bit and the
stop bits from each byte, for example, and that increases
transfer performance by 20% (8 bits per byte rather than 10).
Further, they send 'compressed' data via internal algorithms
which increases performance even more. On the other side of the
ledger, MNP and ARQ technology has some built in disadvantages
from a performance point of view, they are, after all, no longer
just high speed pipes but are now full computers (usually Z80's)
and are prone to modest slowdowns at the higher speeds.
Nevertheless, at 9600 'baud' it is possible to obtain about 1100
cps rather than 960 and at 2400 'baud' it is possible to obtain
upwards of 290 cps rather than 240.
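The arithmetic behind those figures can be sketched in a few lines of
Python. This is just the framing math, not a model of any particular
modem; real throughput also depends on protocol overhead and compression:

```python
# Character throughput of a serial link: baud rate divided by the
# number of bits actually sent per character.
def cps(baud, bits_per_char):
    return baud / bits_per_char

# A plain async modem frames every byte as 1 start + 8 data + 1 stop = 10 bits.
print(cps(2400, 10))   # 240.0 cps
print(cps(9600, 10))   # 960.0 cps

# An MNP/ARQ modem strips the start and stop bits on the line (synchronous
# framing), leaving only the 8 data bits per byte. Protocol overhead eats
# part of this gain, which is how figures like ~290 and ~1100 cps arise.
print(cps(2400, 8))    # 300.0 cps ceiling before overhead
print(cps(9600, 8))    # 1200.0 cps ceiling before overhead
```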

Not to forget, as I mentioned earlier, MNP is active at all times
while protocol transfers are active only during a transfer -
thus, line noise is effectively filtered out even while we are
chatting. There are several possible advantages, and a few
disadvantages - not the least of which is the lack of standards.

W: Jim, I understand what you just said and from that it would
seem that MNP is needed at both ends to do the job. Is that
correct? Also is MNP proprietary for just Microcom modems?

D: It is obviously true that MNP (or ARQ) must exist on both
ends to be functional. When my Microcom modem connects with a
non-MNP modem it recognizes that fact and reverts to being a
standard Hayes compatible modem. Further, when the USR HST
connects with a Microcom that has MNP, there is a fallback in
baud rates to 2400 baud in both modems so that they can
communicate using MNP. That is likely to be overridden by the
users, however, via disabling MNP or ARQ in such situations. (My
opinion only). However, it is reasonably certain that 9600 baud
connections cannot be established without error correction being
functional. Further, while Microcom MNP is more widely used than ARQ
(USR's method), the USR method of supporting both (at different
baud rates) is more flexible and argues for USR. It may be that
we obtained the wrong 9600 baud modems at this time. It is part
of the testing and learning process.

As to the proprietary nature of MNP, according to USRobotics,
Microcom has placed at least the first three levels of MNP into
the public domain. It is certain that they have been generous in
licensing out at least the lower 'levels' to other manufacturers.
What alternative do they have? Unless a standard evolves, these
are contests that damage the future, not advance it.

W: It seems obvious that standards in this area are to the
advantage of all concerned. Is there a standards organization
looking into this? I would like to have 9600 baud capability and
error free transmission. However, I would also like to
communicate with whomever without having to worry about what's at
the other end. Do you see what I am concerned about?

D: Of course. It is a paraphrase of my earlier discussion. I
think the only 'standards organization' that is effective is
called the marketplace. The huge power of the Hayes
organization, because of its modem standard, is likely to be the
telling blow to other manufacturers - when they finally put there
own 9600 baud technology - may well become the new standard.
Because of this I believe it is premature to buy 'long' in such
security issues as USRobotics and Microcom.

W: Whenever I talk to the Hayes people at a convention or trade
show, they know or say nothing about 9600 development. I do not
know if this is just policy or not. I think that when they do
introduce 9600 that it would not necessarily mean that whatever
they do will be the standard. I may be naive, but I would like
to believe that will be the case. I say this only because others
are active in meeting a need and they are not, or appear not to be.

D: No argument there. My point remains valid only if Hayes does
something in the near term. Intel saw what happened when they got
overconfident and let competition pass them by after they first
put the 8080 micro-computer chip into the marketplace. They had
it made, save only that the Z80 took it ALL away from them. It
was an awfully long time before they were able to come back
and Motorola nearly did it to them again. So, while Hayes has by
far the largest visible shelf space in the industry at the
moment, USR (my guess) or Microcom could steal it away from lack
of responsive attention on their part.

W: It would seem that you need compatible hardware above 2400
baud and compatible software as well for truly effective and
increased performance. Does Paul Meiners' Megalink protocol tie
into this somehow?

D: Megalink is an extremely efficient protocol particularly
designed for the network environments like PCP and the higher
baud rates. It is 'network friendly', which means that it
recognizes and honors flow control imposed by the network. For
efficiency it uses 512 byte packets (4 blocks), it is a full
streaming protocol, which means it does not ever stop sending
unless it receives a NAK saying a packet was received in error,
and it is batch oriented. It uses block 0 header information, as
do all the 'link' protocols so that the resulting file is the
same size and properly time and date stamped, and it uses 32 bit
CRC rather than 16.
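The packet check can be sketched as follows. The layout is illustrative
only (a real MegaLink packet also carries a number and control bytes),
and Python's zlib.crc32 merely stands in for whichever 32-bit polynomial
the protocol actually uses:

```python
import zlib

# Sender: append a 32-bit CRC of the 512 data bytes to the packet.
def make_packet(data: bytes) -> bytes:
    assert len(data) == 512
    return data + zlib.crc32(data).to_bytes(4, 'big')

# Receiver: recompute the CRC over the data and compare with the trailer.
def check_packet(packet: bytes) -> bool:
    data, crc = packet[:-4], int.from_bytes(packet[-4:], 'big')
    return zlib.crc32(data) == crc

pkt = make_packet(bytes(range(256)) * 2)
print(check_packet(pkt))                    # True
# A single bit flipped by line noise is caught:
bad = bytes([pkt[0] ^ 0x01]) + pkt[1:]
print(check_packet(bad))                    # False
```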

I think it is time to go back to the earlier tutorial and add
some more concepts at this time.

Since our last discussion there has been increased popularity in
two relatively new file protocols. The first of these is called
SEAlink and the second is Zmodem. You will recall in the earlier
discussion that 'windowing' techniques are beginning to become
available in the file transfer protocols. There is now a
Windowing Kermit, for example, as well as WXmodem. These
programs attempt to obtain better performance by avoiding the
start-stop approach used by earlier protocols where after sending
a packet of data the transmitter would stop and wait for an
Acknowledgment that the packet had been properly received before
sending the next one. Windowing protocols assume that the
packets are being received without error and do not wait between
packets. The receiving systems DO send ACK signals, it's just
that the transmitter is not waiting for them. Assuming all is
well, time has been saved as a result. When an error does occur,
a NAK is returned to the transmitter and associated with that
signal is the packet number that was in error. Assuming the
transmitter still has that packet at its disposal it merely
retransmits it and proceeds.

That is the limit, of course. In order to be able to retransmit
a packet it must still be in the transmit buffer and the buffer
has a finite length. All windowing protocols set a maximum
'window size'. This means that there can be no more than 'x'
packets sent without a reply before the transmitter is forced to
wait for that reply else error recovery would not work. This is
no big deal at 1200 baud, but at 2400 and above it is really
quite limiting.
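A toy windowed sender makes the mechanics concrete. The window size, the
queue, and the receiver callback here are illustrative, not any real
protocol's implementation:

```python
from collections import deque

WINDOW = 6   # SEAlink-style window; WXmodem uses 4

def send_windowed(packets, receive_ok):
    """Toy windowed sender. receive_ok(n) stands in for the receiver's
    reply: True for an ACK of packet n, False for a NAK."""
    in_flight = deque()     # packet numbers sent but not yet acknowledged
    transcript = []         # every transmission, retransmissions included
    next_pkt = 0
    while next_pkt < len(packets) or in_flight:
        # Keep transmitting as long as the window has room.
        if next_pkt < len(packets) and len(in_flight) < WINDOW:
            in_flight.append(next_pkt)
            transcript.append(next_pkt)
            next_pkt += 1
            continue
        # Window full (or nothing left to send): now wait for a reply.
        n = in_flight.popleft()
        if not receive_ok(n):
            in_flight.append(n)       # NAKed: still buffered, so resend it
            transcript.append(n)
    return transcript

# Error-free line: each packet goes out exactly once, never stalling.
print(send_windowed(range(10), lambda n: True))   # [0, 1, ..., 9]

# One NAK on packet 3: only that packet is retransmitted.
failed_once = {3}
def noisy(n):
    if n in failed_once:
        failed_once.discard(n)
        return False
    return True
print(send_windowed(range(10), noisy))   # packet 3 appears twice
```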

SEAlink is a windowing protocol. It has as an added advantage
over WXmodem, for example, two very important features: it uses
32 bit CRC for reliability, and it is 'network friendly'. The 32
bit CRC (4 byte CRC per packet) makes undetected errors virtually
impossible. The benefit gained in reliability is at the expense
of having twice as much CRC overhead, however. Thus, all else
being equal, it would be a little slower than WXmodem. All else
is not equal. Performance of SEAlink is not noticeably degraded
because of 32-bit CRC though it is substantially affected by
being Network-friendly. Further, SEAlink uses a window size of 6
rather than the 4 used by WXmodem.

What is 'network-friendly'? It is a design that recognizes and
honors XON/XOFF signals that are placed on a packet switching
network when that network (like PC Pursuit) becomes so busy that
it is nearly choking on data. When the network places an XOFF on
the line, a network-friendly protocol recognizes it for what it is rather
than a coincidental configuration of bits in a byte of data and
stops sending data! It stops until it receives an XON from the
network. Why is that important? Well, it is my experience that
a huge number of subscribers now exists for PCP. Forcing a
network to exceed its ability to handle data could only crash the
network. PCP would not allow that. They have intelligent node
controllers that selectively will abort a 'hog' link that does
not honor its earlier 'request' to wait a little (via XOFF).
Thus, using a protocol that is not network-friendly is like
saying: "I don't care if I am a hog. And, if you don't like it,
then abort me." As usage continues to increase, the network will
oblige that attitude.

The result of being network-friendly is two fold in terms of
'hits' against performance: 1) while you are waiting for the
network to send you an XON you are not sending data and 2) there
are MANY extra bytes of control information that definitionally
must be sent along with your data.

Let me explain that last point as it is not obvious, I know.
XOFF and XON are simply bytes, just like the letter 'A' or the
digit '4'. If no data file contained those bytes then it would
be easy to implement a network-friendly protocol. Recall,
however, that it is almost always true that data is sent in some
form of archive or compressed format. The resulting bytes can
have ANY configuration despite what the un-archived or un-
compressed file looks like. In other words, the odds are
essentially 100% that the data files that you send consist of
probably many bytes that look like XOFF or XON. That cannot be
allowed to happen. The protocol finds all such bytes and
encapsulates them in what is called an escape sequence that
consists of a special byte (usually the DLE character) followed
by a 'folded' duplicate of the byte that needed to be camouflaged
(the XON or XOFF). Folding merely means that the byte is
transmogrified in some way (usually via being sent as a
complement - XORed with all 1's). Further, the DLE character
itself must also be escape sequenced for this method to work. It
is a random process that results in indeterminate performance for
any particular file. That is, if a file had none of these three
special byte combinations in it, then the time to transmit it
would be minimal where a file that happened to have many of them
will have that many more bytes to send in order to escape
sequence it. In such a case the file would take longer to
transmit than the first. Same protocol, different performance.
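As a sketch, here is what that escape sequencing looks like in code. The
XOR-with-all-ones 'folding' follows the description above; an actual
protocol may fold differently, so treat the details as an assumption:

```python
XON, XOFF, DLE = 0x11, 0x13, 0x10   # the three bytes that must never
                                    # appear raw on the wire

def escape(data: bytes) -> bytes:
    """Replace each special byte with DLE plus the byte 'folded'
    (complemented, i.e. XORed with all 1's)."""
    out = bytearray()
    for b in data:
        if b in (XON, XOFF, DLE):
            out += bytes([DLE, b ^ 0xFF])
        else:
            out.append(b)
    return bytes(out)

def unescape(data: bytes) -> bytes:
    """Undo the escaping on the receiving side."""
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] == DLE:
            out.append(data[i + 1] ^ 0xFF)   # unfold the camouflaged byte
            i += 2
        else:
            out.append(data[i])
            i += 1
    return bytes(out)

raw = bytes([0x41, XOFF, 0x42, DLE, XON])
wire = escape(raw)
print(len(raw), len(wire))       # 5 8 - each escaped byte costs one extra
print(unescape(wire) == raw)     # True
```

This is also why performance varies file by file: every special byte in
the compressed data costs one extra byte on the wire.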

On balance, the designers of SEAlink did an excellent job. The
performance of SEAlink is essentially as good as WXmodem yet it
is more reliable and it is network-friendly. Incidentally, they
also escape sequenced a fourth byte - the SYN. It is for rather
obscure reasons and I believe a mistake. Why is SEAlink becoming
so popular? Because it is a protocol supported under a BBS
system called OPUS which is quickly replacing most of the old
FIDO systems all over the country. It is a good protocol.

The next one of interest is called Zmodem. This is almost always
found as an external protocol. That means it is included in a
file (DSZ.EXE) that is shelled to by the host or terminal
communications program when it is needed. As such, it requires a
lot of memory compared to the internal protocols. But because of
that, it is easy to install as a protocol offering of many BBS
systems. There is another and more significant difference
between Zmodem and the other protocols we have discussed so
far. Instead of being start-stop in nature, and instead of
being windowing, it is a streaming protocol. A streaming
protocol does not expect to get ANY ACK signals back from the
receiver until the transfer is complete and successful. If an
error occurs it will receive a NAK and it is up to the
transmitter to insure that it can recover from any NAK received.
Thus, because it is not a windowed protocol it never stops
transmitting unless there is an error. That means it should be
faster than even the windowing protocols.

Unfortunately, while Zmodem uses 32-bit CRC for reliability, it
is NOT network-friendly. In some ways it is not even user-
friendly. For example, in every other protocol there is a way to
terminate the transfer should you wish to do so while it is in
progress. The usual manner is to press Ctrl-X one or two times
and wait till the other end recognizes the abort request and
finally stops. In the case of Zmodem you must press it 10 (!)
times in a row to stop it. I suggest that not 1 user in a thousand knows
that. It is a popular protocol as a result of its performance on
the packet switching networks. Because it is not network-
friendly it does not bother with (it doesn't have to) escape
coding anything. That is probably a fatal mistake to its future
particularly as the networks get crowded.

Included in GT PowerComm 12.20 is the newest file transfer
protocol. It is called MegaLink. It uses 32-bit CRC, it is
network-friendly, is faster than Sealink, and like all the 'link'
named protocols it uses a header record that results in exact
size and proper time and date stamping of the resulting file when
received. Most interesting about MegaLink is how well it
performs at the very highest baud rates. Running comparative
tests of four different protocols, all sending the same 880K file
to the same machine and at 9600 baud, I obtained the following

WXmodem     60.4 % efficiency    580 cps
SEAlink     75.6 %               725 cps
Ymodem      77.6 %               744 cps
Zmodem      unsuccessful (see below)
MegaLink    98.5 %               945 cps

In order, WXmodem did so poorly for two reasons: at 9600 baud its
window limit of 4 is the same as not having a windowing technique
at all. Second, there are ACK signals coming back for each
packet sent. In the 9600 baud arena, the transmission is only
9600 baud in one direction and only 300 baud in the other! It is
transparent, more or less, to the users as the modems
automatically change which direction is at 9600 baud based on the
volume of data that needs to be sent in each direction at any one
time. Further, while one character (the ACK itself) at 300 baud
is not significant, the ACK/NAK response is actually either two
or three bytes rather than one as you might expect. The
additional byte(s) is for the packet number (and its complement).
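The reply format being described can be sketched like this. It follows
the Xmodem-family convention of sending the block number together with
its one's complement so a garbled number is detectable; the exact
framing here is illustrative:

```python
ACK, NAK = 0x06, 0x15   # the usual ASCII control codes

# Build a 3-byte reply: status, block number, and the number's complement.
def make_reply(ok: bool, blk: int) -> bytes:
    blk &= 0xFF
    return bytes([ACK if ok else NAK, blk, blk ^ 0xFF])

# Parse it, rejecting a reply whose number and complement disagree.
def parse_reply(reply: bytes):
    status, blk, comp = reply
    if blk ^ comp != 0xFF:
        raise ValueError("garbled block number in reply")
    return status == ACK, blk

print(make_reply(True, 7).hex())          # 0607f8
print(parse_reply(make_reply(False, 7)))  # (False, 7)
```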

SEAlink is being driven about as fast as it can go. It is not as
fast as Ymodem because of the small window it uses (like WXmodem)
and because it has so many more characters to transmit because it
is network-friendly (escape sequences).

Ymodem is going as fast as it can. It is affected primarily
because of the start-stop nature of its function and the fact
that the ACK/NAKs are coming back at 300 baud. Here we see
clearly an indication that the days of the start-stop protocols
are numbered.

As an aside, Ymodem-G would have performed MUCH better because it
has no error control whatever, thus it has fewer bytes to
transmit and no turnaround delays. Remember, however, that error
correcting modems are only capable of insuring that the data sent
from one modem is received reliably by the other. As will be
seen in the discussion later about Zmodem's total failure,
Ymodem-G would not have reliably worked in this test.

It is interesting that Zmodem failed altogether at 9600 baud.
The reason is a little subtle and it leads to the next thing I
wanted to discuss anyway.

I earlier mentioned that the MNP and ARQ modems are able to strip
the start and stop bits from bytes (they must, thus, be in
synchronous mode rather than asynchronous), and that they also
may use a form of compression beyond that for performance
reasons. I further stated that at 9600 baud the modem I was
using was able to perform at 1100 cps rather than 960. This may
have caused you to ponder: if the modem is connected to the
computer at 9600 baud that means the computer can only send 960
characters per second to the modem for subsequent transmission.
So how can the modem send it any faster than it receives it?

The answer is that it cannot do so. The method to use to obtain
these extraordinary performances is to connect your computer to
the modem at 19,200 baud and utilize a buffer in the modem to
match up the input with the output. Naturally, as the data is
arriving at the modem much faster than it is leaving, there must
be a way to stop the input. Well, you guessed it, we use flow
control just like the networks when they are getting choked. In
particular, we sense that the modem's Clear To Send signal is on
or off. When off, we stop sending data to it and when on, we
instantly start cramming data at it at 19,200 baud. In this way,
the modem is able to send data at 1100 cps. Naturally, the modem
must be able to control its CTS signal for this to work.
USRobotics HST is capable of doing so.
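A tick-by-tick simulation shows how CTS flow control matches a 19,200
baud feed to an ~1,100 cps line. The buffer size and the high/low
thresholds are assumptions chosen only for the illustration:

```python
BUF_SIZE = 2048          # modem buffer capacity (an assumed figure)
HIGH, LOW = 1800, 1024   # drop CTS near full, raise it once well drained
FEED, DRAIN = 192, 110   # chars per 100 ms tick: 1,920 cps in, 1,100 cps out

buffer, cts, sent, peak = 0, True, 0, 0
for tick in range(50):                    # five simulated seconds
    if cts:
        buffer += FEED                    # computer crams data while CTS is up
    peak = max(peak, buffer)
    drained = min(buffer, DRAIN)          # modem empties onto the phone line
    buffer -= drained
    sent += drained
    cts = buffer < (HIGH if cts else LOW) # hysteresis on the CTS signal
print(sent)                 # 5500 chars in 5 seconds = 1100 cps on the line
print(peak <= BUF_SIZE)     # True - the buffer never overflowed
```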

I showed you what happened to Zmodem when we tried to transfer to
it at speeds in excess of 9600 baud - it failed. That is not entirely
the fault of Zmodem, however. Unless the receiving system is of
the AT class of computers you will probably find that regardless
of what kind of software you are using with it, the modem is
faster than the computer's ability to feed it or eat from it!!
Now that is amazing, isn't it? We now have modems that are paced
by the computer they are attached to instead of the other way
around!

Incidentally, unless the receiving computer is connected to the
receiving modem at 19,200 instead of 9600 baud, and has
implemented some form of flow control to signal the sending modem
that its buffer is full, 1100 cps transmissions to it will
naturally fail when the buffer is overflowed.

This is the third in a series of tutorials that I hope will be
found to be useful to both new and experienced users of
communications facilities.

Q: Why is it that I experience so much more line noise than the
people I call? It seems that I see noise on my screen with some
frequency, but if I ask the party that I have called if he sees
it too, I'm usually told his screen is clean. Is there something
wrong with my system?

A: The odds are twice as great that you will have line noise if
you place a call to a computer than if a computer were to call
you. It is normal and easily explainable.

While it is true that the odds are twice as great that you will
experience or know about noise in the case where you have
initiated the call, the incidence of noise is the same regardless
of who places that call (assuming the same lines and circuits are
being used in both cases). The reason for this is that when you
are in Terminal mode (placing the call), your system is set to
full-duplex operation and when it is in Host mode (auto answer),
it is in half duplex.

Full duplex means that whatever you type on your keyboard does
not get sent to your screen. It is sent, instead, to the
communications port and from there it travels through your modem,
along the telephone lines to an answering modem, and then to a
host system. The host system then sends it back to you. In half
duplex, on the other hand, whatever you type is sent to both your
communications port and to your screen. From this it is obvious
that every character seen on your screen when you have placed a
call has gone through the telephone system while only half of
what is seen on the host system's screen has been on the
telephone circuit before it got there.
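A quick simulation illustrates why the caller's screen shows roughly
twice the noise. The 5% per-hop corruption rate is an arbitrary figure
for the illustration:

```python
import random
random.seed(1)   # fixed seed so the run is repeatable

def noisy_hop(ch, p=0.05):
    """One trip across the phone line; with probability p it arrives garbled."""
    return ch if random.random() > p else '#'

trials = 10_000
# Full duplex caller: every typed character crosses the line twice
# (out to the host, then echoed back) before reaching the screen.
caller_bad = sum(noisy_hop(noisy_hop('A')) != 'A' for _ in range(trials))
# Half duplex host: its own typing goes straight to its screen, so only
# the incoming text has crossed the line, and only once.
host_bad = sum(noisy_hop('A') != 'A' for _ in range(trials))
print(caller_bad / trials, host_bad / trials)   # caller's rate is roughly double
```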

Further, line noise can be unidirectional. That is, it may
appear as data travels in only one direction or the other.
Regardless of that fact, it will be seen by the terminal mode
user (data must go both ways before it reaches the screen) and if
it appears only on the link from the host to the terminal user it
will never be seen by the host.

Q: The last tutorial you wrote told us about MNP and ARQ modems
being able to eliminate most line noise. How do they do that?

A: Part of that answer is still a mystery to me, but I know how
it does it in theory at least. I will tell you why part of the
answer remains a mystery in a moment. First, recall the
discussion we had about file transfer protocols. All of them
utilize some form of CRC mechanism to insure that the receiving
system had received all of the contents of a packet of
information without having dropped any bits or picked up any
extra bits. The CRC is a byte or a word of data that is the
result of an algorithm that 'folds' every byte in the data packet
onto itself in such a way as to result in a pattern of bits that
can be calculated by the receiving system as each byte of data is
received and then compared with the CRC that is subsequently
received. If there is a mismatch then the data (or CRC byte) did
not get received correctly. The MNP and ARQ modems implement
this strategy within themselves. All data that is transmitted
from one of these modems is re-packaged into what the modem
manufacturers call 'frames' (packets) before being transmitted.
Each frame is followed by a CRC byte or word that is stripped off
by the receiving modem and used to determine if the frame was
received correctly. Line noise simply makes that CRC check fail
and the result is an automatic retransmission of the frame.

As you can see from the above, the modem is now acting just like
your computer does during file transmissions using a protocol
transfer method. This is not done for 'free'. The overhead of
doing so results in less than rated speeds in every case. That
is, the theoretical maximum data rate of a 1200 baud modem is 120
characters per second, but MNP and ARQ modems are sending more
characters between themselves than the sending system itself. If
there are errors and, thus, an automatic retransmission of a
frame, the sending modem is very likely to have to ask the
sending computer to wait for it. It is estimated that this
overhead (even without errors) results in a degradation of about
12% in terms of the maximum possible performance of the modem
yielding about 106 characters per second possible throughput. To
counter that built in degradation, the modems strip the start and
stop bits from each byte and send only 8 bits rather than the 10
(or eleven) that are sent by non-error-correcting modems. This
increases the efficiency by about 20%. The net effect, assuming
no errors, is the possibility of about 108% of rated
performance. (It is possible to get about 130 characters per
second rather than 120 if there are no errors - this also fails
to account for additional 'compression' methods built into some
of these modems).
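That chain of adjustments is easy to reproduce as arithmetic. The 12%
overhead figure is the estimate quoted above, and the small difference
from the ~130 cps figure comes down to rounding:

```python
rated = 120.0                               # cps ceiling at 1200 baud, 10-bit framing
after_overhead = rated * (1 - 0.12)         # ~12% MNP framing/CRC overhead
after_stripping = after_overhead * 10 / 8   # send 8 bits per byte instead of 10
print(round(after_overhead))    # 106 cps, matching the figure above
print(round(after_stripping))   # 132 cps - in the neighborhood of the ~130 cps
                                # (about 108-110% of rated) described above
```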

So, where is the confusion? Well, the above assumes there is a
stream of data being sent that can be 'framed'. How the modems
function when a user is merely typing one or two characters or
words at a time before the other side responds is a mystery.
Indeed, as each character is typed it is sent down-line.
Presumably there is a timeout of some kind in the modem that says
that if another character is not entered within x milliseconds it
is presumed that the frame is complete and it is sent along with
its CRC. However it does it in practice, it does seem to be
effective at eliminating line noise.

Q: So MNP and ARQ modems are faster and eliminate line noise.
Sounds like the way to go. Are there any negatives to their use?

A: Interesting question. Assuming that you use protocol
transfer methods in addition to the error detection and
correction logic of the modems themselves, I can only think of a
couple of negatives at the moment. The first, of course, is the
lack of standards, particularly at the higher baud rates. Second
is the fact that every time you use one to call a system that
does not use MNP or ARQ (the vast majority of them do not) then
you automatically lose part of their opening screens.

Let me explain that. When an MNP or ARQ modem first connects
with another modem the calling modem issues a sequence of bytes
that is asking the answering modem if it is also MNP or ARQ.
These bytes include an id and an indication of the level of MNP,
for example, that the caller is using. The first set of
characters that come back from the called modem are then consumed
by the modem rather than passed through to the user's screen.
Thus, they are lost to your system. Very often it is necessary
for the calling system user to press his Enter key in order to
cause subsequent characters to be passed through the modem
(telling it in effect, to turn off MNP or ARQ). This is an
annoyance to the terminal mode user but it can be worse for the
host system.

With the introduction of release 12.20 of GT PowerComm there has
been some controversy as to the existence of the opening prompt
that it issues in which it asks if the caller wants to use ANSI
graphics or not. Many users seem mildly annoyed that their
selection is not recorded somewhere so they don't have to answer
that prompt more than once. What they fail to understand is that
the prompt is there for several reasons. MNP is a good example
of what I mean as is the possibility of noise on the line.

When an MNP call comes in, those initial characters I just
mentioned 'hit' the prompt and result in reissuance of it. We do
not permit a default to that prompt so that we do not go past it
with noise or MNP. By the time a Y or N is entered, the MNP
sequence of handshake signalling is done. If we did not have
that initial prompt then the first question the user would be
asked would be his first name. Ask any Sysop how many garbage
names he has in his user base. If there are any then I can
reasonably assure you that his system does not have a leading
prompt such as ours to protect him from noisy incoming calls (or MNP calls).

Q: Is 9600 baud the theoretical limit to technology in terms of speed?

A: Hardly. It appears that 9600 'baud' stretches the
reliability limits of today's unconditioned telephone system, but
modems exist that are much, much faster than that already.
19,200 bits per second modems are functional on conditioned lines
even now. As to limits, well, did you know that satellite
communications capabilities exist that already permit the
transfer of over a million bits per second?

Over the past 20 years there has been a rather constant rate of
improvement in all aspects of data processing technology. As a
rule of thumb that is pretty close consider this: Every four
years there has been a three fold improvement in
performance/capacity for only a two fold increase in price.
Sometimes we forget how long this trend has been in effect, but
an IBM advertisement a few years back made it pretty clear. At
that time the ad suggested that if the automobile industry had
enjoyed the same rate of improvements over the past 20 years that
the data processing industry has enjoyed, then every adult in
this country could afford to own a Rolls Royce, as it would cost
only about $20 and, incidentally, it would get about 2,000,000
miles to the gallon of gasoline. For a more contemporary
example, we need only look back at the original IBM PC. That
machine had 320K disk drives and a clock speed of 4.77 megahertz.
Today you can buy a Compaq 386 that is 17 times faster
than the original PC (throughput) and you can get it off the
shelf with 130 megabyte hard disk. The price of this newer
machine is less than three times the original PC, closer to twice
the price. No, we are not at the limit of technology, not by a
long shot.
