Category : Science and Education
Archive   : HDFNEWS.ZIP
Filename : HDFNEWS2.TXT

 

NCSA HDF Newsletter #2
August 20, 1990

CONTENTS
HDF Version 3.1
VMS
DFSDgetslice and DFSDputslice documentation errors
Other releases
The HDF V-set
PolyView 1.0 (2-D and 3-D polygon viewer)
NCSA Image 3.0 (Macintosh): late September
Import2HDF (Mac): beta release
Reformat, a Unix-based file converter: beta release
Enhanced fptohdf coming soon
HDF 4.0 in the works
HDF and netCDF


Mike Folk ([email protected])
--------------------------- ** ---------------------------

HDF Version 3.1

During the first half of July, we released Version 3.1 of HDF.

The differences between HDF 3.0 and HDF 3.1 are not major. (See the list
below.) If you have been using HDF 3.0, you should not need to change your
code or the way you compile and link programs that invoke HDF. If you have a
program that runs correctly with HDF 3.0 but does not run correctly under HDF
3.1, let us know immediately. ([email protected] (217-244-0647) or
[email protected] (217-244-0014))

--- Changes in 3.10 ---

These are changes made in Release 3.10:

* Fixed a bug in checking the status of opening a file with
unbuffered I/O

* Added function DF24readref and DFGRreadref for random access of 24-bit
rasters

* Added function DF24restart

* Added function DF24setil

* Sped up the DFSDgetdata, DFSDputdata, DFSDadddata, DFSDgetslice and
DFSDputslice functions, especially for UNICOS machines. (I/O
performance on the Crays should be at least 10 times faster, and in some
cases much more than that.)

* Repaired the DFSDgetslice and DFSDputslice routines, which did not work
correctly. Also, the treatment of these routines in the HDF 3.0 manual
is incorrect. The correct versions are described briefly later in this
newsletter.

* Added several functions to the annotations interface. Most of the new
functions are to allow you to annotate HDF files. The already-existing
annotation functions are for annotating HDF objects within HDF files.
The new functions include: DFANaddfid, DFANaddfds, DFANgetfidlen,
DFANgetfid, DFANgetdslen, DFANgetfds, DFANaddfann, DFANgetfannlen,
DFANgetfann and DFANlastref.

* Revised DFANlablist so that it returns all reference numbers for a given
tag

* Fixed bug with DFSDgetdata where it did not move to the next SDG

* Added some macros to make passing character arrays from fortran to C
easier

* Other minor modifications.

--- Updated manual soon ---

An updated version of the HDF manual should be available soon. We will send
out another announcement when it is.

--- Supported platforms ---

With this release, we now support the HDF library on the following platforms:

Cray 2 and X-MP (UNICOS)
Suns (Unix)
SGI (Unix)
Alliant (Concentrix)
Macintosh (Mac OS)
IBM PC (MS-DOS)
Vax (VMS)

The Macintosh, PC, and VMS versions are somewhat special, unfortunately:

* On the Mac, you need MPW C and Language Systems Fortran (we're trying to
make it work with Absoft Fortran as well). You need MPW C even if you only
want to write Fortran calls, because HDF makes calls to MPW C routines that
are proprietary and cannot be included with the library. We are also playing
with A/UX 2.0 on the Mac, and may support an A/UX-based HDF soon.

* The PC version requires Lattice C, but we are currently switching
over to Microsoft C, which seems to be more popular.

* See the writeup on VMS HDF below.


--- Other platforms ---

There are several other machines that HDF is running on successfully, but that
we can't officially support because we don't have them here. We have started
a HINTS file (on anonymous ftp in the HDF/src directory) where we put hints on
how you might go about porting HDF to your particular platform. When people
send us information about new machines that they have ported HDF to, we put
that information in this file. We can provide information (or put you in
touch with someone who can) on how you can probably make HDF work on the
following machines:

Cray (CTSS)
Apollo
MIPS machines (e.g. DEC 3100 (Ultrix))
Convex
Stellar

In general, if you have a Unix platform, there is a good chance that it is
similar enough to one of the machines that we do support that you can get HDF
working on it.

Also, a group at UCLA has been pouring a lot of effort into porting HDF to
their IBM 3090 MVS and VM systems. They have had a lot of success, and we
expect them to be able to share their results with us soon. Stay tuned.


--------------------------- ** ---------------------------

VMS

The VMS port finally seems to be working pretty well, at least if the lack of
bug reports on it is any indication. We owe a real debt of thanks to several
people for substantial amounts of help getting VMS to work, including Joe St.
Sauver at Oregon, Ron Schmucker at Lawrence Livermore National Laboratory,
Brian Wallace at Oak Ridge National Laboratory, and Bob Mark of the U.S.
Geological Survey. Future suggestions for ways to improve our VMS support are
extremely welcome.

I am pleased to report that we managed to scrounge a VAXstation II a couple of
months ago, and just recently have gotten it to be functional with VMS. Now,
with luck we can keep a really stable HDF working.

--- fixatr ---

One frequent question we are asked is how to use the fixatr routine. fixatr
converts between VMS's Stream-LF format, which VMS C reads and writes, and
fixed-512 format, which ftp and other transfer programs work best with. The
README.VMS file explains how to use fixatr.

One problem that some people have encountered that has not been properly
covered in the README.VMS file is the need to identify correctly the file
recformat.exe that is invoked on the second line of fixatr.cld. After
executing makefix.com (enter: @makefix) you need to find out the full path
name to recformat.exe. This path name has to be substituted in the proper
place on line 2 of the file fixatr.cld, which originally reads:

image disk$system:[fixatr]recformat

For instance, if the directory that contains recformat.exe is sys$login:[hdf],
you change the line to read

image sys$login:[hdf]recformat.exe

Also, before executing fixatr, you need to execute the command:

set command fixatr

--- Utilities on VMS ---

Another problem several people have encountered has been in using the
utilities, such as hdfls and fptohdf. If a program name (e.g. hdfls) is to be
used as a command, VMS requires that you assign (using :==) the name to the
full path of the corresponding executable image. For example,
if the full path name for the executable hdfls is sys$login:[hdf]hdfls.exe,
you would enter

hdfls :== sys$login:[hdf]hdfls.exe

We have included with the new release on anonymous ftp a file called
setuputils.com that does this for you for all of the hdf utilities. (You have
to change a path name within setuputils.com to correspond to your system
before you execute it.)

--- hdfrseq ---

The utility hdfrseq requires special treatment when used from a VMS machine.
The problem is as follows: hdfrseq sends a stream of bytes via telnet to your
terminal. If this stream of bytes doesn't have a line-feed every 512 bytes or
less, VMS in its infinite wisdom adds one for you. This of course corrupts
the image, and you get weird streaks in your output.

Brian Wallace has given us a simple solution to this problem: You can
eliminate the extra line-feed by entering the following line before you
execute hdfrseq:

set terminal/nowrap

(After executing hdfrseq, you may want to set the terminal back to its
original modes with something like "set terminal/wrap.")

In release 3.1 this is done for you, as hdfrseq is included in a ".com" file
that sets "nowrap" before executing hdfrseq.

--------------------------- ** ---------------------------

DFSDgetslice and DFSDputslice
documentation errors fixed


The documentation for DFSDputslice and DFSDgetslice is wrong in the most
recently published version of the HDF manual. Most important is that the
parameter lists shown in the documentation are incorrect. Here are some
excerpts from the new documentation that explain how the routines should be
called.


--- Writing Parts of a Scientific Dataset ---

To store an array in slices, make calls to DFSDstartslice, DFSDputslice, and
DFSDendslice in the following order:

DFSDstartslice(filename)
DFSDputslice(windims, data, dims)
DFSDputslice(windims, data, dims)
...
DFSDputslice(windims, data, dims)

DFSDendslice()


--- DFSDstartslice ---

FORTRAN:
INTEGER FUNCTION dfsdstartslice(filename)
CHARACTER*64 filename

C:
int DFSDstartslice(filename)
char *filename; /* name of HDF file */

Purpose: To prepare the system to write a slice to a file.
Returns: 0 on success; -1 on failure.

Before DFSDstartslice is called, DFSDsetdims must be called to specify the
dimensions of the dataset to be written to the file. DFSDstartslice always
appends a new dataset to an existing file.


--- DFSDputslice ---

FORTRAN:
INTEGER FUNCTION DFSDputslice(windims, source, dims)
INTEGER windims(*)
REAL source(*)
INTEGER dims(*)

C:
int DFSDputslice(windims, source, dims)
int32 windims[]; /* dimensions of slice*/
float32 *source; /* array for storing slice*/
int32 dims[]; /* dimensions of array source*/

Purpose: To write a slice to an SDS
Returns: 0 on success; -1 on failure.

DFSDputslice stores part of an array to the dataset last declared by
DFSDsetdims. Slices must be stored contiguously.

Array windims ("window dimensions") specifies the size of the slice to be
written. windims has as many elements as there are dimensions in the entire
SDS array. source is an array containing the slice.


--- DFSDendslice ---

FORTRAN:
INTEGER FUNCTION DFSDendslice()

C:
int DFSDendslice()
Purpose: To specify that the entire dataset has been written.
Returns: 0 on success; -1 on failure.

DFSDendslice must be called after all the slices are written. It checks to
ensure that the entire dataset has been written, and if it has not, returns an
error code.


--- Example: Writing slices to a 10x12 SDS. ---
/****************************************************
*
* Example C code: Write out slices of different sizes
* from a 10 x 12 array.
*
****************************************************/

...

int rank;
int dimsizes[2], windims[2];
float data[10][12];

/* code that builds the array goes here */
...

dimsizes[0]=10;
dimsizes[1]=12;

DFSDsetdims(2,dimsizes);

/* write out scientific data set in slices */
DFSDstartslice(filename);

windims[0]=2; windims[1]=12; /* {(1,1) to (2,12)} */
DFSDputslice(windims, &data[0][0], dimsizes);

windims[0]=4; windims[1]=12; /* {(3,1) to (6,12)} */
DFSDputslice(windims, &data[2][0], dimsizes);

windims[0]=1; windims[1]=4; /* {(7,1) to (7,4)} */
DFSDputslice(windims, &data[6][0], dimsizes);

windims[0]=1; windims[1]=8; /* {(7,5) to (7,12)} */
DFSDputslice(windims, &data[6][4], dimsizes);

windims[0]=3; windims[1]=12; /* {(8,1) to (10,12)} */
DFSDputslice(windims, &data[7][0], dimsizes);

DFSDendslice();

...


--- Reading Part of a Scientific Dataset ---

The routine DFSDgetslice lets you read in a slice from an SDS. A slice is an
array of elements that is a subarray, or "hypercube", of the SDS from which
it is read. (Note that, for the purposes of reading slices, the definition of
a slice is more general than it is for writing slices.)

A slice can be described with two one-dimensional arrays, one containing the
coordinates of the corner that is nearest to the origin and the other
containing the sizes of the slice's dimensions.



--- DFSDgetslice ---

FORTRAN:
INTEGER FUNCTION DFSDgetslice(filename, winst, windims, dest, dims)
CHARACTER*(*) filename
INTEGER winst(*)
INTEGER windims(*)
REAL dest(*)
INTEGER dims(*)

C:
int DFSDgetslice(filename, winst, windims, dest, dims)
char *filename; /* name of HDF file */
int32 winst[]; /* coordinates of start of slice */
int32 windims[]; /* dimensions of slice */
float32 *dest; /* array for storing slice */
int32 dims[]; /* dimensions of array dest */

Purpose: To read part of an SDS from a file.
Returns: 0 on success; -1 on failure.

DFSDgetslice accesses the dataset last accessed by DFSDgetdims. If DFSDgetdims
has not been called for the named file, DFSDgetslice gets a slice from the
next dataset in the file.

Array winst specifies the coordinates of the start of the slice. Array
windims gives the size of the slice. The number of elements in winst and
windims must be equal to the rank of the dataset. For example, if the file
contains a three-dimensional dataset, winst may contain the values {2, 4, 3},
while windims contains the values {3,1,4}. This will extract a 3 x 4,
two-dimensional slice, containing the elements between (2,4,3) and (4,4,6)
from the original dataset.

dest is the array into which the slice is read. It must be at least as big as
the desired slice.

dims is an array containing the actual dimensions of the array dest. The user
assigns values to dims before calling DFSDgetslice.

NOTE: All the parameters on the call assume FORTRAN-style 1-based arrays.


--- Example ---

/****************************************************
*
* Example C code: Read in slices from a 10 x 12 array.
*
****************************************************/
#include "df.h"
...

int i, rank;
int32 dimsizes[2];

DFSDgetdims("myfile", &rank, dimsizes, 2);

/* starting at (3,4) read 4 x 6 window */
getit("myfile", 3,4,4,6);

/* starting at (1,10) read 10 x 2 window */
getit("myfile", 1,10,10,2);


printf("\n");

}

getit(filename, st0, st1, rows, cols)
int st0, st1, rows, cols;
char *filename;
{
int i, j;
int32 winst[2], windims[2], dims[2];
float32 data[500];

winst[0]=st0; winst[1]=st1;
dims[0] = windims[0] = rows;
dims[1] = windims[1] = cols;
DFSDgetslice(filename, winst, windims, data, dims);

for (i=0; i<rows; i++) {
    for (j=0; j<cols; j++)
        printf("%5.0f%c", data[i*cols+j], ' ');
    printf("\n");
}
}

--------------------------- ** ---------------------------

Other Releases

Here are some short takes on other items of interest. All of these items can
be gotten from our anonymous ftp server ("ftp.ncsa.uiuc.edu" (128.174.20.50)).
For those that are officially released, you can also order them on tape (and
sometimes on disk) through our technical resource catalog. To obtain a
catalog, contact:

NCSA Documentation Orders
152 Computing Applications Building
605 East Springfield Avenue
Champaign, IL 61820
(217) 244-0072


--- HDF Vset ---

The HDF Vset (formerly "vgroup") structures and interface are now available in
a separate directory on anonymous ftp. We described this structure in the
previous newsletter. Here is the gist of what we said:

Vset provides two important new structures:

1. a general grouping structure that lets the user form groups out of any
set of HDF objects, including other Vgroups

2. a general structure made up of a set of record-like structures, each
record being made up of a set of fields. Fields can be user-defined or
predefined.

Vgroups are useful for a number of important scientific application areas,
including finite element and non-rectilinear mesh data, and 3-D polygonal
data.

The primary use that we have made so far of Vsets is in storing data for use
with our SGI-based PolyView program. For PolyView, we store 3-D vertices,
connectivity lists (polygons), and associated scalar data. See the
description of PolyView below.

Vset is currently in a separate library, but we plan to integrate it with the
regular HDF library with the next full release of HDF, planned for late Fall
of this year. You can find Vset in a separate directory on anonymous ftp, or
contact Jason Ng ((217)- 244-8524; [email protected]).



--- PolyView 1.0 (2-D and 3-D polygon viewer) ---

PolyView is an interactive visualization tool for HDF Vset data. PolyView
displays an HDF Vset of polygons or points that describes a two- or
three-dimensional, interactive image with optional annotation.
PolyView-produced images may be written to a RIS8 HDF format file. The
program also allows you to:

-- change display projection
-- render image as points, lines, or polygons
-- choose constant or Gouraud-shaded polygons
-- load and manipulate the colormap
-- animate a series of vdata sets
-- view a fly-by of the data using a script file

PolyView is only supported on the Silicon Graphics Personal IRIS 4D/20G
(24-bit color, 24-bit Z-buffer). It makes extensive use of Z-buffering.
Although it has not been ported to or tested on higher-end IRISes, it should
run on most recent models.

Sample code which demonstrates the creation of an HDF Vset file is included.

--- NCSA Image 3.0 (Macintosh): late September ---

This is the next upgrade for the Image program. It will support several new
features:

* Distributed computing capabilities
* HDF list windows (discussed below)
* Enhanced 3D support
* Bug fixes and minor upgrades
* Animation from a single file

The HDF list window allows one to view the tag/refs in an HDF file. In Image
three kinds of tag/refs are shown (RIG, SDG and IP8). The HDF list window
allows one to select any of the tag/refs and display its "contents." The list
window also allows one to annotate and label any of the tag/refs in the
window.

The code for the list window was written in a portable fashion for MPW C 3.0.
It should be easy to add the list window to most Mac programs. We have
already added it to Layout and plan on putting it into the other Mac tools
(DataScope and PalEdit).


--- Import2HDF (Macintosh): Beta release ---

Import2HDF is a program that allows HDF users to convert files in other
formats to HDF format. This program provides rudimentary display capability
for HDF file contents. When you open an HDF file it simply lists the contents
of the file in a window. From here, individual elements (RISs and SDSs) can be
displayed. Annotations for data groups can be added or changed.

This program is currently in BETA release form, and should be released in
early fall.


--- Reformat, a Unix-based file converter: Beta release ---

This utility provides a mechanism by which images can be transformed from one
storage format to another. It was initially developed to facilitate the
conversion of TIFF, FITS and GIF files to HDF files, so the conversion
routines are much the same as those used in the NCSA Macintosh tool,
Import2HDF.

The graphical user interface requires X version 11, release 4 and the Athena
Widget set. It has been tested with release 2 and higher servers.

There is also a command line interface that does not require X.


--- fptohdf enhancement coming ---

Bob Weaver at INEL recently sent me a greatly improved version of fptohdf. We
haven't installed this new version on anonymous ftp, but we hope to do so
soon. The current version accepts only 2-D data sets that are text or HDF SDS
files. The new version can accept 3-D scientific data sets, and also 32-bit
and 64-bit raw binary files.

We will put the new version on anonymous ftp soon. (We'll call it fp2hdf to
distinguish it from the old version.)



--------------------------- ** ---------------------------

HDF 4.0 in the works

HDF is currently undergoing a major overhaul. The primary goals of the
rewrite of HDF are

* to improve the underlying code structure, based on what we have learned
over the past two years
* to allow multiple file access
* to integrate the Vset interface into HDF
* to improve HDF's error handling facilities.

All of the current interfaces will be supported in HDF 4.0, but there will
also be new, corresponding interfaces that permit greater user control over
file access.

We will provide more details on HDF 4.0 in a future Newsletter.


--------------------------- ** ---------------------------

HDF and netCDF


NetCDF is an interface for data access produced by the Unidata Program Center
at the University Corporation for Atmospheric Research. It is an excellent
interface, providing a very effective data abstraction model for describing
scientific data. We discussed with Unidata the possibility of incorporating
the netCDF interface in HDF, and have concluded that it is something that we
should do. We are currently looking for funding to undertake this project,
which would be fairly substantial.

We will give more details about this effort in a future newsletter.

--------------------------- ** ---------------------------




