Contents of the HARDDISK.TXT file
HOW TO FINE TUNE YOUR HARD DISK
Four easy, inexpensive ways to make your hard disk bigger and faster
By Mark Minasi
Need a faster disk? You could spend a pile of money on a new drive,
but there may be a better way. Your PC's disk subsystem is characterized by
bottlenecks, redundancy, and other inefficiencies. To enable you to fix
those problems, we'll discuss four techniques for tuning your disk: using
disk caches and track buffers, directory-structure caching with FASTOPEN,
rearranging the order of your directories, and unfragmenting your disks.
Amazingly, a good part of the PC world still doesn't use a cache
(pronounced "cash") program, though caches and track buffers have been around
the computer world nearly since its inception. The reason may be that many
people don't know what caches are. They're included in a class of programs
that perform speed matching; that is, they try to imbue the relatively slow
disk drive with the relatively high speed of the computer's RAM.
     You probably know that disks are slower than memory, but do you know
how slow? When your PC requests data from the hard disk, the disk must
deliver the data in 512-byte chunks called sectors. The disk typically
locates a particular sector in 10-100 milliseconds (ms); this number is
the average seek time of the disk.
Ancient XT 10MB hard disks seek in around 100 ms. Newer drives
typically seek in 10-20 ms. So if we say, for example, that a disk can
transfer 512 bytes of data into the computer's RAM in 20 ms, how much time
is required to transfer 512 bytes of data from RAM to RAM? In other words,
what's the corresponding seek time for a block of data in RAM? A best case
scenario would be about 0.05 ms on a 20-MHz 386 computer, or 400 times
faster. So every disk access seems painfully slow--geological, in fact--to
the processor. This is where track buffers and caches come in.
Sectors are grouped together into a structure called a track. The
disk head floats over the track as the track spins beneath it at 3600 rpm.
The disk spins whether or not the head is reading the disk's data.
The notion of a track buffer grows out of the idea that since sectors
fly by the disk head while the head is waiting for the right one, we might as
well read them. It's generally true that when DOS needs sector x on a
particular track, the next sector it will need will be sector x+1 on that
same track. So track buffer programs like Microsoft's SMARTDrive 3.0 and
earlier (SMARTDrive 4.0, shipped with Windows v3.1, is a real cache)
intercept the DOS request for a single sector and reformulate it into a
request for all of the sectors on that track. When the disk hardware returns
with all of the sectors on the track, the track buffer puts a copy of the
disk data into an area of memory and passes to DOS the sector originally
requested. Soon thereafter, DOS will probably want the next sector on that
track. The track buffer, monitoring all disk activity, sees this request and
shields the disk hardware from it. Then it grabs the sector that already has
been read into its buffer area and passes the data to DOS. DOS has no idea
that this has happened, only that the disk drive is suddenly fast.
Obviously, track buffers work best when data is accessed in a nice, orderly,
sequential fashion.
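     The track-buffer idea can be sketched in a few lines of modern Python
(purely illustrative--SMARTDrive itself is nothing like this, and the
17-sectors-per-track figure is just a typical MFM-era value):

```python
# Sketch of track-buffer read-ahead: one sector is requested, the
# whole track is read, and later requests for that track cost nothing.
SECTORS_PER_TRACK = 17

class Disk:
    def __init__(self, data):
        self.data = data          # sector number -> 512-byte payload
        self.physical_reads = 0   # how many times the hardware was hit

    def read_track(self, track):
        self.physical_reads += 1
        start = track * SECTORS_PER_TRACK
        return {s: self.data[s]
                for s in range(start, start + SECTORS_PER_TRACK)
                if s in self.data}

class TrackBuffer:
    def __init__(self, disk):
        self.disk = disk
        self.buffer = {}          # sectors held from the last track read

    def read_sector(self, sector):
        if sector not in self.buffer:
            # Reformulate the one-sector request into a whole-track read.
            self.buffer = self.disk.read_track(sector // SECTORS_PER_TRACK)
        return self.buffer[sector]

disk = Disk({s: b"x" * 512 for s in range(34)})
tb = TrackBuffer(disk)
for s in range(17):               # sequential reads across one track
    tb.read_sector(s)
print(disk.physical_reads)        # 1 -- one physical read served 17 requests
```

Seventeen sequential sector requests cost a single trip to the hardware,
which is exactly why sequential access is the track buffer's best case.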
There's a class of programs that are more generic in the way that
they use extra RAM to increase apparent disk speed: disk cache software.
Disk caches don't worry about sector and track read-aheads, although they may
implement a bit of read-ahead for best performance. Instead, they focus on
what, exactly, you use your disk for. If you're like most people, you use
the same areas on your disk over and over. Say you get in and out of
WordPerfect several times a day; that implies that your disk must reread the
WP.EXE program and its attendant files every time you start WordPerfect.
A disk cache improves on things in a manner similar to that of a
track buffer by sitting quietly in memory and monitoring disk activity. As a
file--say WP.EXE--is read, the disk cache makes a copy of the data that's
been read from the disk and puts it in the cache's memory area. Then, the
next time that DOS needs WP.EXE, the disk cache program steps in, removing
the need for the hardware to reread the WP.EXE file.
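     In sketch form, a read cache looks like this (again illustrative
Python, not real driver code; the sector counts are made up):

```python
# Sketch of a read-through disk cache: the first pass hits the
# hardware, every later pass is served from the cache's RAM.
class CachedDisk:
    def __init__(self, data):
        self.data = data          # sector number -> payload
        self.cache = {}
        self.physical_reads = 0

    def read_sector(self, sector):
        if sector not in self.cache:
            self.physical_reads += 1          # hardware actually hit
            self.cache[sector] = self.data[sector]
        return self.cache[sector]             # served from RAM

disk = CachedDisk({s: b"x" * 512 for s in range(100)})
for _ in range(3):                # "start WordPerfect" three times
    for s in range(100):          # reread the same 100 sectors each time
        disk.read_sector(s)
print(disk.physical_reads)        # 100 -- only the first pass hit the disk
```

Three hundred sector requests, one hundred physical reads: the second and
third program starts come almost entirely out of RAM.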
It should be obvious that cache programs need a fair amount of memory
in order to do any real good. Most cache programs use either expanded or
extended memory. If you have RAM to burn, I'd recommend giving the cache as
much memory as you can spare.
If you're running Windows, don't put your cache in expanded memory;
put it in extended memory. Memory-manager conflicts with Windows can cause
any program that uses expanded memory to lose its data in expanded memory.
If the program is a disk cache, parts of the disk may be affected, corrupting
the disk's data. I found this out the hard way. Running an early memory
manager with Windows caused the system to overwrite the first sector of my D
drive's file allocation table. In common parlance, that means DOS no longer
knew how to find the first 250 or so files on my D drive. Fortunately, I'd
just finished writing a book on bringing dead hard drives back to life (The
Hard Disk Survival Guide, published by Sybex), or those files would've been
gone forever. So stick your cache in extended memory if you use Windows.
WHAT TO CACHE
This leads to the next question about caches. If I allocate 1024K
(1MB) of RAM to a cache (that's tiny when compared to the capacity of my hard
disk), how does the cache program know what to put in the cache? Simple: It
just keeps copying everything that you read into the cache until it runs out
of cache space. Then it's got to make some decisions.
In order to accommodate new stuff, a cache throws out old stuff
according to either an LRU or an LFU algorithm. With LRU (Least Recently
Used), the cache throws out whatever has gone untouched the longest. With
LFU (Least Frequently Used), it throws out whatever is used least often.
Which is better? Truthfully, that's like asking how many angels can dance
on the head of a pin. Experts can
argue the merits of one method over another, but for normal PC usage, there's
no difference. I just mention LRU and LFU because you'll see references to
them in the cache documentation or in marketing literature.
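     Here's what LRU eviction amounts to, sketched in Python (illustrative
only--no vendor's cache works like this, and the three-sector capacity is
arbitrarily small so the eviction is visible):

```python
# Sketch of LRU eviction: when the cache is full, the entry that
# has gone unused the longest gets thrown out to make room.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # least recently used entry first

    def access(self, sector, data):
        if sector in self.entries:
            self.entries.move_to_end(sector)      # now most recently used
        else:
            if len(self.entries) >= self.capacity:
                self.entries.popitem(last=False)  # evict the LRU entry
            self.entries[sector] = data
        return self.entries[sector]

cache = LRUCache(3)
for s in (1, 2, 3, 1, 4):       # by the time 4 arrives, 2 is the LRU entry
    cache.access(s, b"x")
print(sorted(cache.entries))    # [1, 3, 4] -- sector 2 was evicted
```

An LFU version would keep a use count per sector and evict the lowest
count instead; for everyday PC work the two behave about the same.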
THREE TO CONSIDER
OK, that's the techie stuff--how about some solid recommendations?
First, there's SMARTDrive 4.0, the newest version of the cache that ships
with Windows 3.1. In this incarnation, SMARTDrive is an EXE file that you
load in your AUTOEXEC.BAT file (previous versions of SMARTDrive are SYS
files loaded in CONFIG.SYS), and it's a real cache. Not only does it allow
you to change the size of the cache block, but it also caches writes (which
gives it a big performance boost over previous versions), offers a raft of
new configuration features, and comes free with Windows. If you opt to use
the new SMARTDrive and cache writes, be sure to flush your cache before
turning your machine off. To run SMARTDrive without caching writes, simply
follow the SMARTDRV.EXE command with your drive letters.
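     An AUTOEXEC.BAT line along these lines does the trick (the path and
cache size are examples only--adjust them to your own installation):

```
REM Load SMARTDrive 4.0 with a 1024K cache. A bare drive letter
REM caches reads only; C+ would cache writes as well.
C:\WINDOWS\SMARTDRV.EXE C 1024

REM If you do cache writes, flush the cache before powering off:
SMARTDRV /C
```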
Super PC-Kwik Disk Accelerator from Multisoft is my overall favorite,
and it's under $100. Unlike many other caches, it has been specifically
designed to work with Windows and even includes a small Windows program that
monitors what percentage of disk accesses have been satisfied from the cache.
You'll typically find that 80-85 percent of your disk accesses are
intercepted and handled by the cache. You can contact Multisoft at 15100
Southwest Koll Parkway, Suite L, Beaverton, Oregon 97006; (800)274-5945.
The other cache to consider is HyperDisk, from HyperWare. When I
last checked, it was a shareware product available on CompuServe, GEnie, and
other online services.
Suppose you don't want to spend any money. (Yes, you're supposed to
register--read pay--for shareware such as HyperDisk.) Assuming you've got
DOS 5.0, there are three commands that will help. First is good old BUFFERS,
a very simple system that, well, buffers sectors. Once upon a time, we all
tried to keep our BUFFERS values to a minimum because each one took a
little over 500 bytes of our precious conventional memory. But with
DOS 5.0 and a 286 or higher, you just load the HIMEM.SYS device driver and
specify DOS=HIGH in your CONFIG.SYS file, and all the buffers go live far
away from your 640K conventional memory. Crank up your BUFFERS number as
large as you like. It won't do much, but it may help some applications. On
older, slower computers, this advice doesn't apply, as too many BUFFERS
will slow things down.
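     In CONFIG.SYS, the combination looks like this (the HIMEM.SYS path
varies with your DOS installation, and the BUFFERS count is just an
example):

```
REM Load the extended-memory manager, move DOS and its
REM buffers high, and then be generous with BUFFERS.
DEVICE=C:\DOS\HIMEM.SYS
DOS=HIGH
BUFFERS=40
```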
Since version 3.3, DOS also has had a very small cache program that
caches just one thing: the directory structure. FASTOPEN's only job is to
prestore the information that DOS needs to traverse the subdirectory
structure. You see, subdirectory information--what files are in a
subdirectory, how big the files are, when they were created--is all kept in a
special kind of file. Accessing a data file in a subdirectory, then,
requires reading a bunch of files to understand the directory structure
before we even get close to reading the data file. By prereading the
directory structure into RAM, FASTOPEN speeds up the file-access process
noticeably. A word to the wise, however: Be careful about using FASTOPEN
in conjunction with disk caches, file unfragmenters, or any other disk
utility. Check the disk utility's documentation before you use it with
FASTOPEN.
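     A typical AUTOEXEC.BAT invocation (the entry count of 64 is an
example; DOS accepts a wide range of values):

```
REM Track up to 64 recently opened files and
REM directories on drive C.
FASTOPEN C:=64
```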
     In addition to BUFFERS and FASTOPEN, DOS also has, as already
mentioned, the track buffer program SMARTDRV.SYS. If you're running
SMARTDrive with Windows, be aware that the Windows and DOS installation
programs are fairly dumb about the amount of memory they grant to SMARTDrive.
On a 4MB system, the Windows 3.0 installation program gives 2MB to
SMARTDrive--way too much, particularly since Windows desperately needs that
memory for itself.
SORT YOUR DIRECTORIES
     Reading files in subdirectories involves reading the files that make up
the subdirectory structure, and that brings up another problem. DOS doesn't
keep files, including subdirectories, in any particular order; it just puts
them wherever seems good at the time the files are created. Then, when DOS
needs a file or needs to find a subdirectory, it starts at the top of the
directory and sequentially works down until it finds the file.
Note that word sequentially: It points out a weakness in the DOS disk
structure. Say you've got 500 items in your root directory--495 files of
various kinds and five subdirectories. The result is that every time you need
a file that's in one of those subdirectories, DOS must first find the
subdirectory itself. To do that, it has to look through the 495 files. All
of that searching takes time, and that's one reason why Microsoft wrote
FASTOPEN and included it with DOS. But there's another way.
The Norton Utilities includes a program called DIRSORT, which is
intended to sort your directories. There's really no point in sorting
directories--who needs alphabetized subdirectory names? DIRSORT's value is
that it allows you to throw out the alphabetizing nonsense and rearrange your
directories by hand. When rearranging your directories, use two simple
rules: Put the subdirectories above the files, and place the most-used
subdirectories at the top.
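     A quick Python sketch shows why position matters (the 495-file root
directory is the hypothetical from above, not anything DOS-specific):

```python
# DOS scans a directory top to bottom, so an entry's position
# determines how many entries must be examined to find it.
def lookups_needed(directory, name):
    for i, entry in enumerate(directory, start=1):
        if entry == name:
            return i          # entries examined before the match
    raise FileNotFoundError(name)

files = [f"FILE{n:03}.DAT" for n in range(495)]
unsorted_dir = files + ["WP"]   # subdirectory buried at the bottom
sorted_dir = ["WP"] + files     # subdirectory moved to the top

print(lookups_needed(unsorted_dir, "WP"))   # 496 entries examined
print(lookups_needed(sorted_dir, "WP"))     # 1 entry examined
```

Moving one busy subdirectory to the top of the directory turns a
496-entry scan into a one-entry scan--on every single access.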
UNFRAGMENT YOUR FILES
Running out of disk space? Hey, who isn't? Most of us have hard
disks that are packed to the gills. It's a pain to constantly have to remove
one thing in order to put another on disk. Worse yet, there's a nasty side
effect: Your files get fragmented.
You see, when you ask DOS to put a new file on a mostly full disk,
DOS would like to put the file all in one place, but it probably can't.
Because the free space largely consists of empty spaces left behind by
deleted files, it's not all one nice pool of unused space; rather, it's
scattered all over the disk. So DOS has no choice but to scatter your file;
such a file is said to be fragmented. This isn't an error, as DOS can
retrieve fragmented files when needed. But it's undesirable because reading
fragmented files requires that the disk head move to and fro, requiring more
time than would be necessary otherwise.
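     The cost is easy to picture if you think of a file as a chain of
cluster numbers (an illustrative Python sketch; the cluster numbers are
invented):

```python
# Every jump to a non-adjacent cluster costs an extra head seek;
# a contiguous file needs none beyond the initial positioning.
def seeks_to_read(cluster_chain):
    seeks = 0
    for prev, cur in zip(cluster_chain, cluster_chain[1:]):
        if cur != prev + 1:       # not the next cluster on the disk
            seeks += 1
    return seeks

contiguous = [10, 11, 12, 13, 14, 15]
fragmented = [10, 11, 95, 96, 40, 41]   # same file, scattered by DOS

print(seeks_to_read(contiguous))   # 0 extra seeks
print(seeks_to_read(fragmented))   # 2 extra seeks
```

Same data, same number of sectors read--but the fragmented copy forces the
head to jump around the platter twice more than the contiguous one.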
To alleviate this problem, software manufacturers have produced a
slew of programs that will unfragment the data on your disk. The first was a
program called Disk Optimizer, from SoftLogic Solutions. Although it's still
available, the Big Three disk utility packages (Norton, Mace, and PC Tools)
all now incorporate unfragmenter programs. Norton's is called Speed Disk, PC
Tools has Compress, and Mace has Unfragment.
     The best unfragmenter of all is no longer available, as far as I
know. Called FastTrax, this program first examines the dates on your files.
Then, reasoning that the older files are the ones that won't be changed, it
puts those files near the "bottom" of the disk space, leaving at the top a
single pool of free space. As the newer files--that is, the ones most likely
to change--all reside near the top, they aren't fragmented as much or as
quickly when they grow. It's too bad that there doesn't seem to be a way to
get in touch with the program's makers; FastTrax is a nice utility, and
there's nothing on the market that works quite like it.
In any case, be sure to unfragment your disk now and then. But don't
do it to improve your disk's speed--you won't see that great an increase.
You'll see the difference if you ever need to do some kind of data recovery
on your disk. Think about it: If you had to use Norton or a similar program
to piece a file back together, would you rather do so with the fragmented
file picture or with the unfragmented one? The unfragmented file would be
much easier to reassemble.
There you have it--four ways to speed up your disk and save space.
So get started: Unfragment your disk, rearrange your directories, and spend
some cash on more memory so you can spend some extra memory on some cache.