THE NEW GENERATION OF DISK THROUGHPUT BENCHMARKS
(c) COPYRIGHT 1988-1991 BY
COLUMBIA DATA PRODUCTS, INC.
P.O. BOX 142584
1070B RAINER DRIVE
ALTAMONTE SPRINGS, FL 32714-2584
(407) 869-6700=VOICE (407) 862-4725=FAX (407) 862-4724=BBS
NOTE: TESTDISK IS A READ ONLY TEST AND WILL NOT DESTROY ANY DATA
ON YOUR HARD DRIVE!
Welcome to the world of hard disk performance measurement! Until
recently, measuring performance of disk subsystems for the IBM
PC/XT/AT/PS2 and compatibles has been somewhat oversimplified by
the industry. Now, with the introduction of SST software
equipped controllers, contemporary benchmarks are no longer
adequate to predict disk subsystem performance in actual everyday
use. Benchmarks for the new generation of mass storage systems
for the IBM PC/XT/AT/PS2 and compatibles now require that more
variables be taken into account to accurately measure the
throughput a system can achieve.
Until recently the only major variable that generally was
considered was the Average Disk Head Seek Time, expressed in
milliseconds (commonly referred to as Average Access Time). From
the standpoint of a disk drive manufacturer, this is the major
item which is used to market their products and they are
consistently touting faster and faster access times. But the
REALLY important aspects of overall disk system performance are
just not shown.
Let's take a disk that has an access time of 20 milliseconds, for
example. Does that mean that just because it has a 20
millisecond access time that it is fast? Well, yes and no. It
means that it can move the disk head from one spot on the disk to
another spot on the disk in an overall average of 20
milliseconds. As far as access times go, a 20 millisecond
average access time disk drive is relatively fast. But it tells
you nothing of how fast it will deliver data to your computer.
So you ask, "Well, then, how can I tell when a disk drive is
fast?" Read on!!
Disk performance really starts at your hard disk controller--not
at your disk. Disk performance is a function of how rapidly data
can be transferred from the hard disk into your computer. If you
have a disk controller which can move data at 29,000 characters
per second (which is not uncommon), then it stands to reason that
it will take you six seconds to load your 174,000 character
spreadsheet. So, whether you have a 20 millisecond hard disk or a
65 millisecond hard disk, it still takes six seconds to recover
that spreadsheet.
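The load-time arithmetic above can be sketched in a few lines of Python (the numbers are the ones from the text; TESTDISK itself is a DOS program, so this is only an illustration of the reasoning):

```python
# Figures from the text: a controller that moves 29,000 characters
# (bytes) per second, loading a 174,000-character spreadsheet.
transfer_rate = 29_000   # bytes per second
file_size = 174_000      # bytes

load_time = file_size / transfer_rate
print(f"{load_time:.0f} seconds")  # 6 seconds, regardless of seek time
```

Notice that the drive's average access time never enters this calculation; at these request sizes the controller's transfer rate dominates.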
As you can probably tell, disk performance measurement, which
tells you how long you are going to wait for something to load
into your computer from your hard disk, can best be expressed in
characters per second. Since one character is equal to one byte,
and most disk systems move data in thousands of bytes (or
characters) per second, the standard term for data rate is
kilobytes (1024 bytes) per second (KBS).
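Converting a raw byte rate into the KBS figure used throughout this document is a one-line division (the 665,600 bytes/second example below is an invented figure chosen to divide evenly):

```python
def to_kbs(bytes_per_second):
    """Convert a raw byte rate to kilobytes (1024 bytes) per second."""
    return bytes_per_second / 1024

print(to_kbs(665_600))  # 650.0 KBS
```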
Some benchmark tests actually DO report the data transfer rate
(in KBS) of the controller. They measure the movement of data
from the disk to memory. They only report, however, the highest
data rate your disk drive can achieve on your computer. Now, if
you read closely, you will notice that I said the HIGHEST data
rate your disk drive can achieve on your computer. Does this
mean that there is more than one data rate expressed in KBS which
can be achieved by your disk system? The answer is YES!
WHICH DATA RATES?
Whenever the disk system is asked to move data from the hard
drive into memory or vice versa, it takes time for the disk
system to figure out what you are asking for, where it is on the
disk, and how to move the data. This time lag is called command
processing latency. It is this initial time lag before data is
actually moved to or from the disk that is the evil of disk
performance.  Since this time lag is pretty much a constant, it
is wise to get as much data off of the disk in one request as
possible rather than to repeatedly ask for data in small blocks.
For example, if
you wanted to move 327,680 bytes off the disk and the initial
command processing latency was 70 milliseconds (70/1000 of a
second) and the 327,680 bytes was moved in 512 byte blocks, you
would be asking the disk to move data 640 times and your command
processing latency alone would be 44.8 seconds (640 X .070
seconds).
If, however, you moved the 327,680 bytes off the disk 65,536
bytes (64 KB) at a time, you would be asking the disk to move
data only 5 times and your command processing latency would be
35/100's of a second (5 X .070 seconds).  That is 128 times faster!!!
44.8 seconds compared to a fraction of a second and that's not
even counting the time to move the data!! This is extremely eye
opening isn't it?
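The command processing latency arithmetic from the example above can be checked directly (70 milliseconds per request is the figure assumed in the text, not a measured value):

```python
LATENCY = 0.070     # seconds of command processing latency per request
TOTAL = 327_680     # bytes to move off the disk

for block in (512, 65_536):
    requests = TOTAL // block
    overhead = requests * LATENCY
    print(f"{block:6d}-byte blocks: {requests:4d} requests, "
          f"{overhead:5.2f} seconds of latency alone")
```

Running this reproduces the two cases in the text: 640 requests costing 44.8 seconds of latency versus 5 requests costing 0.35 seconds, before a single byte of data has moved.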
So, you see, it doesn't appear to be very wise to ask the disk
for data in small blocks, now does it? But this is precisely the
way most application programs (such as Lotus, Dbase, etc) DO move
their data--512 bytes at a time. Some other disk performance
testers test the disk by moving data 65,536 bytes at a time. So,
you have absolutely no idea how fast Lotus, Dbase, or any of the
other application programs can retrieve data. All you know, is
that IF YOUR PROGRAM MOVED DATA 64K AT A TIME OFF THE DISK, THIS
IS HOW FAST IT WOULD BE!! In essence, then, to measure disk
performance in 64K blocks is utterly useless to you if you really
want to know how fast your application program can load data.
Your Columbia Data Products disk performance tester, TESTDISK,
doesn't just move data off the disk in 64K blocks. It moves the
data in 8 different size blocks: 1/2K, 1K, 2K, 4K, 8K, 16K, 32K
and 64K. This will give you a clear idea of how fast your disk
system really is. When you run this on a disk system which is
650 KBS at 64K blocks, you may be surprised to find that it is
only 120 KBS (or less) at the 1/2K block rate. You may even find
that you paid a premium for a "fast" disk system and you never
will see the "speed" because the programs you run cannot utilize
it.
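Why the small-block rate collapses can be modeled with the latency idea from the previous section. This is a simplified sketch, not TESTDISK's method; the 650 KBS peak rate comes from the text, while the 3 milliseconds of per-request latency is an assumed figure chosen for illustration:

```python
# Model: effective rate = data moved / (transfer time + per-request latency)
PEAK_KBS = 650.0    # peak rate from the text
LATENCY = 0.003     # assumed seconds of latency per request

for kb in (0.5, 1, 2, 4, 8, 16, 32, 64):
    transfer = kb / PEAK_KBS            # seconds to move one block at peak rate
    effective = kb / (transfer + LATENCY)
    print(f"{kb:5.1f}K blocks -> {effective:6.1f} KBS effective")
```

The model shows the pattern the text describes: throughput climbs toward the 650 KBS peak as blocks grow, but at 1/2K blocks the per-request overhead swamps the transfer time.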
One last word about average access times. They are important.
We at Columbia Data Products do not dispute that. They account
for a sizeable portion of your command processing latency. You
should not, however, pay a premium for a "fast" access time if
you can't take advantage of it. This test will help you
determine that. The last example I wish to give will compare two
hard drives. The first hard drive has a 1 millisecond average
access time and the other has a 100 millisecond average access
time. The 1 millisecond access time hard drive can move data off
itself and into the computer at 100 thousand bytes per second
(100 KBS). The 100 millisecond access time hard drive can move
data off itself and into the computer at 900 thousand bytes per
second (900 KBS).
To move 900 thousand bytes, the 1 millisecond access time hard
drive takes 9 seconds while the 100 millisecond access time hard
drive takes only 1 second. Therefore, the 100 millisecond access
time hard drive is 9 times faster moving data even though it is
100 times slower moving its head! It probably costs a lot less,
too. I think you are beginning to see the picture.
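The two-drive comparison reduces to dividing the amount of data by each drive's transfer rate (all figures are the ones given in the text):

```python
DATA = 900_000  # bytes to move

drives = {
    "1 ms seek,   100 KBS drive": 100_000,   # bytes per second
    "100 ms seek, 900 KBS drive": 900_000,
}

for name, rate in drives.items():
    print(f"{name}: {DATA / rate:.0f} seconds")
```

The slower-seeking drive finishes in 1 second against 9, which is the text's point: transfer rate, not seek time, decides how long you wait.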
To start TESTDISK, simply type TESTDISK and press enter.  First,
you must choose whether or not you want to run TESTDISK in color.
Next, you will be asked whether or not your printer can print the
two characters which are displayed on the screen. A printer
which can print the IBM character set will have no trouble
printing the displayed characters. The printer will print out a
graph of the test results, if you desire, at the completion of
the test.
You will then be asked for a DOS drive letter (C:, D: etc) to
test. Next you will be asked how much data in Kilobytes you
would like to read. We recommend a minimum of at least 1000K for
an accurate test. The test then proceeds to perform a
verification pass to ensure that the area being tested is
error-free. If it is not, an error will occur and the test will
stop. If it is error-free, the test will continue.
The data is then read by two methods--SEQUENTIAL and REPETITIVE.
The SEQUENTIAL test will read data from the drive the same way
that your application program will. The REPETITIVE test, which
reads data the way most "disk test" programs do, will not reflect
real world performance. We have included the REPETITIVE test for
comparative purposes only.
TESTDISK creates a graph of 16 individual tests, 8 SEQUENTIAL,
and 8 REPETITIVE. Each individual test reads data in only one
block size. The tests are run in pairs (one SEQUENTIAL and one
REPETITIVE) for block sizes of 1/2K, 1K, 2K, 4K, 8K, 16K, 32K and
64K. In other words, if you are reading 1000K in 1/2K blocks,
you will access the disk drive 2000 times for each individual
test. For the 1K test, you will access the disk 1000 times, then
500 times for the 2K test, 250 times for the 4K test, and so on.
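The access count for each individual test follows directly from the amount of data and the block size (1000K is the recommended minimum from the text; when the sizes do not divide evenly, the final access would be a partial block):

```python
TOTAL_KB = 1000  # recommended minimum amount of data to read

for block_kb in (0.5, 1, 2, 4, 8, 16, 32, 64):
    accesses = int(TOTAL_KB / block_kb)
    print(f"{block_kb:5.1f}K blocks -> {accesses} disk accesses")
```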
The SEQUENTIAL test reads the disk one sector after another,
beginning with data in sector 1 on the hard drive. To read data
sequentially, the hard drive head reads all the sectors on the
first track, then is moved from that track to the next track, and
so on, until the amount of data requested is read. The first
sequential test shows the rate for reading the requested amount
of data in 1/2K blocks, starting in sector 1. The second test
also begins in sector 1 and reads in 1K blocks, the third begins
once again in sector 1 and reads in 2K blocks, and so on.
The REPETITIVE read test is included to show how most other disk
test programs read their data--repetitively, 64K at a time.
Rather than reading the data sequentially, as required by most
application programs, data from the same spot on the disk is
repeatedly read over and over in 64K blocks. Reading the same
data repeatedly and in such large blocks does not accurately test
disk performance for an application program. Our first
repetitive test represents the requested amount of data being
read from one spot on the disk in 1/2K blocks, the second test
shows the rate for reading the same data from the same spot in 1K
blocks, and so on, up to 64K.
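The difference between the two read patterns can be sketched as two small loops. This is a rough Python illustration of the idea, not TESTDISK's actual DOS-level implementation (which reads raw sectors, not files), and on a modern system the operating system's cache would mask the difference:

```python
def sequential_read(path, block_size, total_bytes):
    """Read total_bytes from the start of the file, one block after another,
    the way an application program loads its data."""
    bytes_read = 0
    with open(path, "rb") as f:
        while bytes_read < total_bytes:
            chunk = f.read(block_size)
            if not chunk:
                break
            bytes_read += len(chunk)
    return bytes_read

def repetitive_read(path, block_size, count):
    """Re-read the same spot over and over, the way most other disk
    test programs measure their 'throughput'."""
    with open(path, "rb") as f:
        for _ in range(count):
            f.seek(0)            # back to the same spot on the disk
            f.read(block_size)
```

Timing the first loop approximates what an application will actually experience; timing the second mostly measures how fast the same data can be fetched again.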
TESTDISK has a couple of options that can be passed in on the
command line. The /I option will give you descriptive text on
exactly what TESTDISK is doing at every step. It will also
provide you insight on what type of programs will perform I/O in
a given manner. The /C option will run TESTDISK continuously for
the purpose of stress testing your disk system. The test will
run indefinitely until you press the ESC key to stop the test.
The /C option will override the /I option.
The sequential test results will indicate your disk performance
on user programs which call for data from the disk. Use of the
repetitive test will show only what other test programs show.
Columbia Data Products, Inc. is a leading supplier of SCSI device
driver and application software. If you are interested in
obtaining information on our products, or would be interested in
getting on our mailing list for our quarterly Columbia Chronicles
newsletter, please let us know and we will be happy to send it to
you. Columbia's flagship product, SST software, provides a
comprehensive device driver/application solution for disk, tape,
optical, printer, scanner, changer and CD-ROM devices for a wide
variety of today's popular SCSI host adapters. We are dedicated
to providing you with the highest quality software to meet all
your SCSI subsystem needs. We would really like to hear from you
and we hope that Testdisk has provided you with quality
information about your disk system!