Documentation for POV 2.0 ray tracer.

Full Description of File


This archive contains the
documentation for the Persistence of
Vision Raytracer, POV-Ray V2.0,
formatted in plain ASCII text for the
IBM-PC and other systems. Also
contains standard include files and
some demo scenes. Create your
own \POVRAY2 directory and un-zip
this file using the -d option to
ensure that the proper sub-directories
are created. Other files are needed.
See POVINF.DOC from this library for
more information.


File POV2DOC.ZIP from The Programmer’s Corner in
Category Printer + Display Graphics
File Name File Size Zip Size Zip Type
DEMO 0 0 stored
AREALIT1.POV 1776 553 deflated
AREALIT2.POV 1783 562 deflated
AREALIT3.POV 1951 689 deflated
CHARS.POV 2848 601 deflated
COLORS.POV 9908 1390 deflated
DEMO.CAT 1793 647 deflated
NORMAL.POV 1056 415 deflated
PIGMENT.POV 1796 547 deflated
PLASMA3.GIF 11689 11029 deflated
PRIMITIV.POV 3465 1042 deflated
SHAPES.POV 2302 673 deflated
SHAPES2.POV 1798 564 deflated
SHOTXTR.INC 2712 551 deflated
STAGE1.INC 460 247 deflated
TEXTURE1.POV 1253 391 deflated
TEXTURE2.POV 1383 436 deflated
TEXTURE3.POV 1213 377 deflated
TEXTURE4.POV 1179 314 deflated
TEXTURE5.POV 1194 323 deflated
TEXTURE6.POV 1168 315 deflated
DOCS 0 0 stored
POVRAY.DOC 268700 81079 deflated
TEXTURES.DOC 16804 6097 deflated
FILE_ID.DIZ 450 292 deflated
INCLUDE 0 0 stored
CHARS.INC 15584 2206 deflated
COLORS.INC 8705 1908 deflated
FOV.INC 702 355 deflated
INCLUDE.CAT 1314 576 deflated
IOR.INC 398 247 deflated
ROUGH.GIF 39211 38572 deflated
SHAPES.INC 3387 1150 deflated
SHAPES.OLD 4107 1076 deflated
SHAPES2.INC 4768 1105 deflated
SHAPESQ.INC 11327 2911 deflated
STONES.INC 60161 7298 deflated
TEST.GIF 5080 5080 stored
TEXTURES.INC 32455 5925 deflated
POVDOC.CAT 590 325 deflated
POVINF.DOC 11630 4279 deflated
POVLEGAL.DOC 12596 4747 deflated
WHATSNEW.DOC 2924 1367 deflated


Contents of the POVRAY.DOC file


Persistence of Vision Ray Tracer (POV-Ray)

Version 2.0

User's Documentation


Copyright 1993 POV-Ray Team








Table of Contents


1.0 INTRODUCTION

2.0 ABOUT POV-Ray

2.1 PROGRAM DESCRIPTION -- WHAT IS RAY TRACING?

2.2 WHICH VERSION OF POV-Ray SHOULD YOU USE?

2.2.1 IBM-PC AND COMPATIBLES
2.2.2 APPLE MACINTOSH
2.2.3 COMMODORE AMIGA
2.2.4 UNIX AND OTHER SYSTEMS
2.2.5 ALL VERSIONS

2.3 WHERE TO FIND POV-Ray FILES

2.3.1 GRAPHICS DEVELOPER'S FORUM ON COMPUSERVE
2.3.2 PC GRAPHICS AREA ON AMERICA ON-LINE
2.3.3 YOU CAN CALL ME RAY BBS IN CHICAGO
2.3.4 THE GRAPHICS ALTERNATIVE BBS IN EL CERRITO, CA
2.3.5 PI SQUARED BBS MARYLAND
2.3.6 INTERNET

3.0 QUICK START

3.1 INSTALLING POV-Ray

3.2 USING SAMPLE SCENES

3.3 COMMAND LINE PARAMETERS

3.3.1 ANTI-ALIASING
3.3.2 BUFFERING
3.3.3 CONTINUING INTERRUPTED TRACE
3.3.4 DISPLAY PREVIEW IMAGE
3.3.5 RENDER PARTIAL IMAGE
3.3.6 FILE OUTPUT TYPE
3.3.7 HEIGHT AND WIDTH OF IMAGE
3.3.8 INPUT AND OUTPUT FILE NAMES
3.3.10 ANIMATION CLOCK VARIABLE
3.3.11 LIBRARY SEARCH PATH
3.3.12 BOUNDING SLABS CONTROL
3.3.13 SYMBOL TABLE SIZE
3.3.14 VERSION COMPATIBILITY MODE
3.3.15 PAUSE WHEN FINISHED
3.3.16 QUALITY SETTINGS
3.3.17 VERBOSE STATISTICS
3.3.18 ALLOW ABORTED RENDERING

3.4 DEFAULT PARAMETER FILE AND ENVIRONMENT VARIABLE

4.0 BEGINNING TUTORIAL

4.1 YOUR FIRST IMAGE

4.1.1 THE POV-Ray COORDINATE SYSTEM
4.1.2 ADDING STANDARD INCLUDE FILES
4.1.3 PLACING THE CAMERA
4.1.4 DESCRIBING AN OBJECT
4.1.5 ADDING TEXTURE TO AN OBJECT
4.1.6 DEFINING A LIGHT SOURCE

4.2 MORE TEXTURE OPTIONS

4.2.1 SURFACE FINISHES
4.2.2 ADDING BUMPINESS
4.2.3 CREATING COLOR PATTERNS
4.2.4 PRE-DEFINED TEXTURES

4.3 MORE SHAPES

4.3.1 PLANE OBJECT
4.3.2 BOX OBJECT
4.3.3 CONE OBJECT
4.3.4 CYLINDER OBJECT

5.0 SCENE DESCRIPTION LANGUAGE REFERENCE

5.1 LANGUAGE BASICS

5.1.1 IDENTIFIERS AND KEYWORDS
5.1.2 COMMENTS
5.1.3 INCLUDE FILES
5.1.4 FLOAT EXPRESSIONS
5.1.5 VECTOR EXPRESSIONS
5.1.6 TRANSFORMATIONS
5.1.6.1 Translate
5.1.6.2 Scale
5.1.6.3 Rotate
5.1.6.4 Transforming Textures and Objects
5.1.6.5 Transformation Order
5.1.7 DECLARE

5.2 OBJECTS

5.2.1 SOLID FINITE PRIMITIVES
5.2.1.1 Spheres
5.2.1.2 Boxes
5.2.1.3 Cylinders
5.2.1.4 Cones
5.2.1.5 Torus
5.2.1.6 Blob
5.2.1.7 Height Fields
5.2.2 FINITE PATCH PRIMITIVES
5.2.2.1 Triangle and Smooth_triangle
5.2.2.2 Bicubic_patch
5.2.2.3 Disc
5.2.3 INFINITE SOLID PRIMITIVES
5.2.3.1 Plane
5.2.3.2 Quadric
5.2.3.3 Poly, Cubic and Quartic.
5.2.4 CONSTRUCTIVE SOLID GEOMETRY (CSG)
5.2.4.1 About CSG
5.2.4.2 Inside and outside
5.2.4.3 Union
5.2.4.4 Intersection
5.2.4.5 Difference
5.2.4.6 Merge
5.2.5 LIGHT SOURCES
5.2.5.1 Point Lights
5.2.5.2 Spotlights
5.2.5.3 Area Lights
5.2.5.4 Looks_like

5.3 OBJECT MODIFIERS

5.3.1 CLIPPED_BY
5.3.2 BOUNDED_BY
5.3.3 NO_SHADOW

5.4 TEXTURES

5.4.1 PIGMENT
5.4.1.1 Color
5.4.1.2 Color List Patterns -- checker and hexagon
5.4.1.3 Color Mapped Patterns
5.4.1.3.1 Gradient
5.4.1.3.2 Color Maps
5.4.1.3.3 Marble
5.4.1.3.4 Wood
5.4.1.3.5 Onion
5.4.1.3.6 Leopard
5.4.1.3.7 Granite
5.4.1.3.8 Bozo
5.4.1.3.9 Spotted
5.4.1.3.10 Agate
5.4.1.3.11 Mandel
5.4.1.3.12 Radial
5.4.1.4 Image Maps
5.4.1.4.1 Specifying an image map.
5.4.1.4.2 The "once" option.
5.4.1.4.3 The "map_type" option.
5.4.1.4.4 The "filter" options.
5.4.1.4.5 The "interpolate" option.
5.4.1.5 Pigment Modifiers
5.4.1.5.1 Turbulence
5.4.1.5.2 Octaves
5.4.1.5.3 Omega
5.4.1.5.4 Lambda
5.4.1.5.5 Quick_color
5.4.1.5.6 Frequency and Phase
5.4.1.5.7 Transforming pigments
5.4.2 NORMAL
5.4.2.1 Bumps
5.4.2.2 Dents
5.4.2.3 Ripples
5.4.2.4 Waves
5.4.2.5 Wrinkles
5.4.2.6 Bump_map
5.4.2.6.1 Specifying a bump map.
5.4.2.6.2 Bump_size
5.4.2.6.3 Use_index & use_color
5.4.2.6.4 The "once" option.
5.4.2.6.5 The "map_type" option.
5.4.2.6.6 The "interpolate" option.
5.4.2.7 Normal Modifiers
5.4.2.7.1 Turbulence
5.4.2.7.2 Frequency and Phase
5.4.2.7.3 Transforming normals
5.4.3 FINISH
5.4.3.1 Diffuse Reflection Items
5.4.3.1.1 Diffuse
5.4.3.1.2 Brilliance
5.4.3.1.3 Crand Graininess
5.4.3.1.4 Ambient
5.4.3.2 Specular Reflection Items
5.4.3.3 Highlights
5.4.3.3.1 Phong Highlights
5.4.3.3.2 Specular Highlight
5.4.3.3.3 Metallic Highlight Modifier
5.4.3.4 Refraction
5.4.4 SPECIAL TEXTURES
5.4.4.1 Tiles
5.4.4.2 Material_Map
5.4.4.2.1 Specifying a material map.
5.4.4.2.2 Material_map options.
5.4.5 LAYERED TEXTURES
5.4.6 DEFAULT TEXTURE

5.5 CAMERA

5.5.1 LOCATION AND LOOK_AT
5.5.2 THE SKY VECTOR
5.5.3 THE DIRECTION VECTOR
5.5.4 UP AND RIGHT VECTORS
5.5.4.1 Aspect Ratio
5.5.4.2 Handedness
5.5.5 TRANSFORMING THE CAMERA
5.5.6 CAMERA IDENTIFIERS

5.6 MISC FEATURES

5.6.1 FOG
5.6.2 MAX_TRACE_LEVEL
5.6.3 MAX_INTERSECTIONS
5.6.4 BACKGROUND
5.6.5 THE #VERSION DIRECTIVE

APPENDIX A COMMON QUESTIONS AND ANSWERS

APPENDIX B TIPS AND HINTS

B.1 SCENE DESIGN
B.2 SCENE DEBUGGING TIPS
B.3 ANIMATION
B.4 TEXTURES
B.5 HEIGHT FIELDS
B.6 FIELD-OF-VIEW
B.7 CONVERTING "HANDEDNESS"

APPENDIX C SUGGESTED READING

APPENDIX D LEGAL INFORMATION

APPENDIX E CONTACTING THE AUTHORS



1.0 INTRODUCTION
==================

This document details the use of the Persistence of Vision Ray Tracer (POV-
Ray) and is broken down into several sections.

The first section describes the program POV-Ray, explains what ray tracing
is and also describes where to find the latest version of the POV-Ray
software.

The next section is a quick start that helps you quickly begin to use the
software.

After the quick start is a more in-depth tutorial for beginning POV-Ray
users.

Following the beginning tutorial is a scene description language reference
that describes the language used with POV-Ray to create an image.

The last sections include some tips and hints, suggested reading, and legal
information.

POV-Ray is based on DKBTrace 2.12 by David Buck & Aaron A. Collins.


2.0 ABOUT POV-Ray
===================

This section describes POV-Ray and explains what a ray tracer does. It
also describes where to find the latest version of the POV-Ray software.


2.1 PROGRAM DESCRIPTION -- WHAT IS RAY TRACING?
------------------------------------------------

The Persistence of Vision Ray Tracer (POV-Ray) is a copyrighted freeware
program that allows a user to easily create fantastic, three dimensional,
photo-realistic images on just about any computer. POV-Ray reads standard
ASCII text files that describe the shapes, colors, textures and lighting in
a scene and mathematically simulates the rays of light moving through the
scene to produce a photo-realistic image!

No traditional artistic or programming skills are required to use POV-Ray.
First, you describe a picture in POV-Ray's scene description language, then
POV-Ray takes your description and automatically creates an image from it
with near perfect shading, perspective, reflections and lighting.

The standard POV-Ray package also includes a collection of sample scene
files that illustrate the program's features. Additionally the POV-Ray
Team distributes several volumes of scenes that have been created by other
artists using the program. These scenes can be rendered and enjoyed even
before learning the scene description language. They can also be modified
to create new scenes.

Here are some highlights of POV-Ray's features:
* Easy to use scene description language
* Large library of stunning example scene files
* Standard include files that pre-define many shapes, colors and
textures
* Very high quality output image files (24-bit color.)
* 15 and 24 bit color display on IBM-PC's using appropriate hardware
* Create landscapes using smoothed height fields
* Spotlights for sophisticated lighting
* Phong and specular highlighting for more realistic-looking surfaces.
* Several image file output formats including Targa, dump and raw
* Wide range of shapes:
* Basic Shape Primitives such as... Sphere, Box, Quadric, Cylinder,
Cone, Triangle and Plane
* Advanced Shape Primitives such as... Torus (Donut), Hyperboloid,
Paraboloid, Bezier Patch, Height Fields (Mountains), Blobs,
Quartics, Smooth Triangles (Phong shaded)
* Shapes can easily be combined to create new complex shapes. This
feature is called Constructive Solid Geometry (CSG). POV-Ray
supports unions, merges, intersections and differences in CSG.
* Objects are assigned materials called textures. (A texture describes
the coloring and surface properties of a shape.)
* Built-in color patterns: Agate, Bozo, Checker, Granite, Gradient,
Leopard, Mandel, Marble, Onion, Spotted, Radial, Wood and image file
mapping.
* Built-in surface bump patterns: Bumps, Dents, Ripples, Waves,
Wrinkles and mapping.
* Users can create their own textures or use pre-defined textures such
as... Mirror, Metals like Chrome, Brass, Gold and Silver, Bright
Blue Sky with Clouds, Sunset with Clouds, Sapphire Agate, Jade,
Shiny, Brown Agate, Apocalypse, Blood Marble, Glass, Brown Onion,
Pine Wood, Cherry Wood
* Combine textures using layering of semi-transparent textures or tile
or material map files.
* Display preview of image while computing (not available on all
computers)
* Halt rendering when part way through
* Continue rendering a halted partial scene later


2.2 WHICH VERSION OF POV-Ray SHOULD YOU USE?
----------------------------------------------

There are specific versions of POV-Ray available for three different
computers, the IBM-PC, the Apple Macintosh, and the Commodore Amiga.


2.2.1 IBM-PC AND COMPATIBLES

The IBM-PC version is called POVRAY.EXE and is found in the self-extracting
archive POVIBM.EXE. It can be run on any IBM-PC with a 386 or 486 CPU and 2
megabytes of memory. A math co-processor is not required, but it is
recommended. This version of POV-Ray may be run under DOS, OS/2, and
Windows. It will not run under Desqview at this time. A version that runs
on IBM-PC's using the 286 CPU is also available in the self-extracting
archive POV286.EXE.


2.2.2 APPLE MACINTOSH

The Apple Macintosh version of POV-Ray can be found in the archive
POVMAC.SEA or POVMNF.SEA. POVMAC.SEA contains the preferred "high-
performance" executable for Macs with a floating point coprocessor (FPU).
POVMNF.SEA contains the slower more universal executable, which will run on
any 68020 or better Mac without an FPU.

The Macintosh version of POV-Ray needs a 68020 or better CPU (Mac II
series, SE/30, Quadras, some Powerbooks, etc.) It will run under System
6.0.4 or newer (System 7 preferred.) It also requires 32 bit Color
Quickdraw, which is built into System 7, and is an optional init in System
6. The init can be found on the System 6 System disk "Printing", under the
"Apple Color" folder. It should also be available from any authorized
Apple Service Center, or CompuServe or local Macintosh bulletin boards.
QuickTime 1.5 or newer is preferred but not required. If installed, it
will allow compression of the final PICT images. It will also allow adding
custom System 7 Thumbnail icons to the PICT files in the Finder. Of
course, a color monitor is preferred, but not required.

2.2.3 COMMODORE AMIGA

The Commodore Amiga version of POV-Ray can be found in the file POVAMI.LZH.
Two executables are supplied, one for computers with a math co-processor,
and one for computers without a math co-processor. This version will run on
Amiga 500, 1000, 2000, and 3000's and should work under AmigaDOS 1.3 or
2.xx. The Amiga version supports HAM mode as well as HAM-E and the
Firecracker.


2.2.4 UNIX AND OTHER SYSTEMS

POV-Ray is written in highly portable C source code and it can be compiled
and run on many different computers. There is specific source code in the
source archive for UNIX, X-Windows, VAX, and generic computers. If you have
one of these, you can use the C compiler included with your operating
system to compile a POV-Ray executable for your own use. This executable
may not be distributed except under the terms specified in the file
POVLEGAL.DOC. Users on high powered computers like Suns, SGI, RS-6000's,
Crays, and so on use this method to run POV-Ray.


2.2.5 ALL VERSIONS

All versions of the program share the same ray tracing features like
shapes, lighting and textures. In other words, an IBM-PC can create the
same pictures as a Cray supercomputer as long as it has enough memory.

The user will want to get the executable that best matches their computer
hardware. See the section "Where to find POV-Ray files" for where to find
these files. You can contact those sources to find out what the best
version is for you and your computer.


2.3 WHERE TO FIND POV-Ray FILES
---------------------------------

POV-Ray is a complex piece of software made up of many files. The POV-Ray
package is made up of several archives including executables,
documentation, and example scene files.

The average user will need an executable for their computer, the example
scene files and the documentation. The example scenes are invaluable for
learning about POV-Ray, and they include some exciting artwork.

Advanced users, developers, or the curious may want to download the C
source code as well.

There are also many different utilities for POV-Ray that generate scenes,
convert scene information from one format to another, create new materials,
and so on. You can find these files from the same sources as the other POV-
Ray files. No comprehensive list of these utilities is available at the
time of this writing.

The latest versions of the POV-Ray software are available from these
sources:


2.3.1 GRAPHICS DEVELOPER'S FORUM ON COMPUSERVE

POV-Ray headquarters are on the CompuServe Graphics Developer's Forum (GO
GRAPHDEV) in sections 8 (POV Sources) and 9 (POV Images). We meet there to share
info and graphics and discuss ray tracing. The forum is also home to
development projects on fractals, animation and morphing. It is the home
of the Stone Soup Group, developers of Fractint, a popular IBM-PC fractal
program. Everyone is welcome to join in on the action on CIS GraphDev. Hope
to see you there! You can get information on joining CompuServe by calling
(800)848-8990. CompuServe access is also available in Japan, Europe and
many other countries.


2.3.2 PC GRAPHICS AREA ON AMERICA ON-LINE

There's an area now on America On-Line dedicated to POV-Ray support and
information. You can find it in the PC Graphics section of AOL. Jump
keyword "PCGRAPHICS". This area includes the Apple Macintosh executables
also.


2.3.3 YOU CAN CALL ME RAY BBS IN CHICAGO

There is a ray trace specific BBS in the (708) Area Code (Chicago suburbia,
United States) for all you Traceaholics out there. The phone number of this
BBS is (708) 358-5611. Bill Minus is the sysop and Aaron Collins is co-
sysop of that board, and it's filled with interesting stuff.


2.3.4 THE GRAPHICS ALTERNATIVE BBS IN EL CERRITO, CA

For those on the West coast, you may want to find the POV-Ray files on The
Graphics Alternative BBS. It's a great graphics BBS run by Adam Shiffman.
TGA is high quality, active and progressive BBS system which offers both
quality messaging and files to its 1300+ users.

510-524-2780 (PM14400FXSA v.32bis 14.4k, Public)
510-524-2165 (USR DS v.32bis/HST 14.4k, Subscribers)

2.3.5 PI SQUARED BBS MARYLAND

For those on the East coast, you may want to try the Pi Squared BBS in
Maryland. The sysop, Alfonso Hermida (CIS: 72114,2060), is the creator of
POVCAD. He carries the latest POV files and utilities, plus supports his
software. Call (301) 725-9080 in Maryland, USA, running at 14.4K bps, 24 hours a day.


2.3.6 INTERNET

The POV-Ray files are also available over Internet by anonymous FTP from
alfred.ccs.carleton.ca (134.117.1.1).


3.0 QUICK START
=================

The next section describes how to quickly install POV-Ray and render a
sample scene on your computer.


3.1 INSTALLING POV-Ray
------------------------

Specific installation instructions are included with the executable program
for your computer. In general, there are two ways to install POV-Ray.

[ Note that the generic word "directory" is used throughout. Your
operating system may use another word (subdirectory, folder, etc.) ]

1-- The messy way: Create a directory called POVRAY and copy all POV-Ray
files into it. Edit and run all files and programs from this directory.
This method works, but is not recommended.

Or the preferred way:
2-- Create a directory called POVRAY and several subdirectories called
INCLUDE, DEMO, SCENES, UTIL. The self-extracting archives used in some
versions of the program will create subdirectories for you. If you create
your own, the file tree for this should look something like this:
\--
 |
 +POVRAY --
           |
           +INCLUDE
           |
           +DEMO
           |
           +SCENES
           |
           +UTIL

Copy the executable file and docs into the directory POVRAY. Copy the
standard include files into the subdirectory INCLUDE. Copy the sample scene
files into the subdirectory SCENES. And copy any POV-Ray related utility
programs and their related files into the subdirectory UTIL. Your own scene
files will go into the SCENES subdirectory. Also, you'll need to add the
directories \POVRAY and \POVRAY\UTIL to your "search path" so the
executable programs can be run from any directory.

Note that some operating systems don't have an equivalent to the
multi-path search command.
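
For example, on an IBM-PC under DOS this is typically done by adding the
POV-Ray directories to the PATH statement in AUTOEXEC.BAT. A sketch,
assuming POV-Ray was installed on drive C (the other entries are
placeholders for whatever is already in your path):

PATH=C:\DOS;C:\POVRAY;C:\POVRAY\UTIL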

The second method is a bit more difficult to set up, but is preferred.
There are many files associated with POV-Ray and they are far easier to
deal with when separated into several directories.


3.2 USING SAMPLE SCENES
-------------------------

This section describes how to render a sample scene file. You can use these
steps to render any of the sample scene files included in the sample scenes
archive.

A scene file is a standard ASCII text file that contains a description of a
three dimensional scene in the POV-Ray language. The scene file text
describes objects and lights in the scene, and a camera to view the scene.
Scene files have the file extension .POV and can be created by any word
processor or editor that can save in standard ASCII text format.

Quite a few example scenes are provided with this distribution in the
example scenes archive. The scenes in the standard archives are designed to
illustrate and teach you the features of the program. Additionally the
POV-Ray Team distributes several volumes of scenes in its ongoing series,
"The POV-Ray Scene Library." These scene files range from very simple to
very complex. They have been created by users of POV-Ray all over the
world, and were picked to give examples of the variety of features in POV-
Ray. Many of them are stunning in their own right.

The scenes were graciously donated by the artists because they wanted to
share what they had created with other users. Feel free to use these scenes
for any purpose. You can just marvel at them as-is, you can study the scene
files to learn the artists' techniques, or you can use them as a starting
point to create new scenes of your own.

Here's how to make these sample scenes into images you can view on your
computer. We'll use SIMPLE.POV as an example; just substitute another
filename to render a different image.

Note: The sequence of commands is not the same for
every version of POV-Ray. There should be a
document with the executable describing the
specific commands to render a file.

The file SIMPLE.POV was included with the standard scene files and should
now be in the DEMO directory. Make that the active directory, and then at
the command line, type:

POVRAY +Isimple.pov +V +W80 +H60

POVRAY is the name of your executable, +Ifilename.pov tells POV-Ray what
scene file it should use as input, and +V tells the program to output its
status to the text screen as it's working. +W and +H set the width and
height of the image in pixels. This image will be 80 pixels wide by 60
pixels high.

POV-Ray will read in the text file SIMPLE.POV and begin working to render
the image. It will write the image to a file called DATA.TGA. The file
DATA.TGA contains a 24 bit image of the scene file SIMPLE.POV. Because many
computers can't display a 24 bit image, you will probably have to convert
DATA.TGA to an 8 bit format before you can view it on your computer. The
docs included with your executable list the specific steps required to
convert a 24 bit file to an 8 bit file.


3.3 COMMAND LINE PARAMETERS
-----------------------------

The following section gives a detailed description of the command-line
options.

The command-line parameters may be specified in any order. Repeated
parameters overwrite the previous values except for the +L switch which
defines include file library paths. Up to 10 +L paths may be specified.
Default parameters may also be specified in a file called "povray.def" or
by the environment variable "POVRAYOPT".

Switches may be specified in upper or lower case. Switches must be
preceded by a + (plus) or - (minus). In switches which toggle a feature,
the plus turns it on and minus turns it off. For example +P turns on the
"pause for keypress when finished" option while -P turns it off. Other
switches are used to specify values and do not toggle a feature. Either
plus or minus may be used in that instance. For example +W320 sets the
width to 320 pixels. You could also use -W320 and get the same results.
More examples follow this table.

Table 1 Command Line Parameters
Parameter......|.....Range........|...Description........................
-------------------------------------------------------------------------
+Annn | 0.0 to 3.0 | Render picture with anti-aliasing,or
| | "smoothing", on. Lower values cause
| | more smoothing.
+A | | Use default 0.3 anti-aliasing
-A | | Turn anti-aliasing off (default)
+Bnnn or -Bnnn | Varies w/ sys | Output file buffer size.
+C | | Continue an aborted partial image.
-C | | Start rendering from first line.
+Dxxx | Varies w/sys | Display image graphically while
| | rendering (Not available on all vers).
+Enn or +ERnn | 1 to 32,767 | End row for tracing
| or 0.0 to 1.0 | a portion of a scene.
+ECnn | 1 to 32,767 | End column for tracing
| or 0.0 to 1.0 | a portion of a scene.
+FT | | Output Targa format file
+FD | | Output dump format file
+FR | | Output raw format file
-F | | Disable file output.
+Hnnn | 1 to 32,767 | Height of image in pixels.
+Ifilespec | Varies w/ sys | Input scene file name, generally ends
| | in .pov.
+Jnnn.nnn | 0.0 to 1.0 | Set amount of jitter for anti-aliasing
+J | | Use anti-aliasing jitter 1.0 (default)
-J | | Turn off anti-aliasing jitter
+Knnn.nnn | any real value | Set "clock" float value for animation
+Lpathspec | Varies w/ sys | Library path: POV-Ray will search for
| | files in the directory listed here.
| | Multiple lib paths may be specified.
-MB | | Turn off bounding slabs
+MBnnn | 0 to 32,767 | Use bounding slabs if more than nnn
| | objects in scene.
+MSnnn | 300 or more | Set symbol table size (default 1000)
+MVn.m | 1.0 or 2.0 | Set version compatibility mode
+Ofilespec | Varies w/ sys | Output image filename.
+P | | Pause and wait for keypress after
| | tracing image.
-P | | Don't pause
+Qn | 0 to 9 | Image quality: 9 highest(default) to
| | 0 lowest.
+Rn or -Rn | 1 to 9 | Use n*n rays for anti-aliasing. Default
| | of 3 gives 9 rays; 4 gives 16 rays etc.
+Snn or +SRnn | 1-32,768 | Start row for tracing
| or 0.0 to 1.0 | a portion of a scene.
+SCnn | 1-32,768 | Start column for tracing
| or 0.0 to 1.0 | a portion of a scene.
+Vnn | Varies w/sys | Display verbose image stats while
| | rendering.
-V | | No stats during rendering
+Wnnn | 1-32,768 | Width of image in pixels.
+X | | Allow abort with keypress.(IBM-PC).
-X | | Disable abort with keypress.(IBM-PC).
--------------------------------------------------------------


3.3.1 ANTI-ALIASING

+Annn Anti-alias with tolerance level nnn.
+A Anti-alias with tolerance level 0.3
-A Don't anti-alias (default)
+Jn.nn Scale factor for jittering
+J Jitter AA with scale 1.0 (default)
-J Turn off jittering
+Rn or -Rn Use n*n rays when anti-aliasing (default 3)

Anti-aliasing is a technique used to make the ray traced image look
smoother. Often the color difference between two objects creates a "jaggy"
appearance. When anti-aliasing is turned on, POV-Ray attempts to "smooth"
the jaggies by shooting more rays into the scene and averaging the results.
This technique can really improve the appearance of the final image. Be
forewarned though, anti-aliasing drastically increases the time required to
render a scene since it has to do many more calculations to "smooth" the
image. Lower numbers mean more anti-aliasing and also more time. Use anti-
aliasing for your final version of a picture, not the rough draft.

The +A option enables adaptive anti-aliasing. The number after the +A
option determines the threshold for the anti-aliasing.

If the color of a pixel differs from its neighbor (to the left or above) by
more than the threshold, then the pixel is subdivided and super-sampled. If
r1,g1,b1 and r2,g2,b2 are the rgb components of two pixels then the
difference between pixels is computed by:

diff=abs(r1-r2)+abs(g1-g2)+abs(b1-b2)

The rgb values are in the range 0.0 to 1.0 thus the most two pixels can
differ is 3.0. If the anti-aliasing threshold is 0.0, then every pixel is
super-sampled. If the threshold is 3.0, then no anti-aliasing is done.

The lower the contrast, the lower the threshold should be. Higher contrast
pictures can get away with higher tolerance values.

Good values seem to be around 0.2 to 0.4.

The super-samples are jittered to introduce noise and to eliminate moire
interference patterns. Note that the jittering "noise" is non-random and
repeatable in nature, based on an object's 3-D orientation in space. Thus,
it's okay to use anti-aliasing for animation sequences, as the anti-aliased
pixels won't vary and flicker annoyingly from frame to frame. The +Jnn.nn
switch scales down the amount of jitter from its default value 1.0. For
example +J0.5 uses half the normal jitter. Values over 1.0 jitter outside
the pixel bounds and are not recommended. Use -J to turn off jittering.

The +R switch controls the number of rows and columns of rays per pixel
with anti-aliasing. The default value 3 gives 3x3=9 rays per pixel.

The jittering and multiple rays are only used when +A is on.
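
For example, a command line like the following (the scene file name is just
a placeholder) renders with adaptive anti-aliasing at a threshold of 0.2,
half the default jitter, and a 3x3 grid of super-sample rays per pixel:

POVRAY +Iscene.pov +W320 +H200 +A0.2 +J0.5 +R3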


3.3.2 BUFFERING

+Bnnn Use an output file buffer of nnn kilobytes.
-Bnnn Same as +Bnnn

The +B option allows you to assign large buffers to the output file. This
reduces the amount of time spent writing to the disk. If this parameter is
not specified, then as each scanline is finished, the line is written to
the file and the file is flushed. On most systems, this operation ensures
that the file is written to the disk so that in the event of a system crash
or other catastrophic event, at least part of the picture has been stored
properly and is retrievable from disk. (See also the +C option below.) A value
of +B30 is a good value to use to speed up small renderings. A value of
+B0 defaults to a small system-dependent buffer size. Note neither +B0 nor
-B turns this feature off. Once a buffer is set, subsequent +B commands
can change its size but cannot turn it off.
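
For example, to render a small test image with a 30 kilobyte output buffer
(the scene file name is a placeholder):

POVRAY +Iscene.pov +W160 +H120 +B30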


3.3.3 CONTINUING INTERRUPTED TRACE

+C Continue partially complete rendering
-C Render from beginning (default)

If you abort a render while it's in progress or if you used the +E or +ER
options to end the render prematurely, you can use the +C option to
continue the render when you get back to it. This option reads in the
previously generated output file, displays the image to date on the screen,
then proceeds with the ray tracing. This option cannot be used if file
output is disabled with -F. It does not work with +S, +SR, +SC or +EC
switches.
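
For example (the scene name and sizes are placeholders), you could render
only the top half of an image, quit, and then finish the rest later:

POVRAY +Iscene.pov +W320 +H200 +ER100
POVRAY +Iscene.pov +W320 +H200 +C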


3.3.4 DISPLAY PREVIEW IMAGE

+D Use preview display
-D Turn preview display off (default)

If the +D option is used and your computer supports a graphic display, then
the image will be displayed while the program performs the ray tracing. On
most systems, the picture displayed is not as good as the one created by
the post-processor because it does not try to make optimum choices for the
color registers.

The +D parameters are system-dependent and are listed in the executable
documentation.


3.3.5 RENDER PARTIAL IMAGE

+Snnn or +SRnnn Start tracing at row number nnn.
+SCnnn Start tracing at column number nnn.
+Ennn or +ERnnn End tracing at row number nnn.
+ECnnn End tracing at column number nnn.

When doing test rendering it is often convenient to define a rectangular
section of the whole screen so you can quickly check out one area of the
image. The +S and +E switches let you define starting and ending rows and
columns for partial renderings.

The +S and +E options also allow you to begin and end the rendering of an
image at a specific scan line so you can render groups of scanlines on
different systems and concatenate them later.

WARNING: Image files created with different executables on the same or
different computers may not look exactly the same due to different random
number generators used in some textures. If you are merging output files
from different systems, make sure that the random number generators are the
same. If not, the textures from one will not blend in with the textures
from the other.

Note that if the number following +SR, +SC, +ER or +EC is greater than 1 then it is
interpreted as a number of pixels. If it is a decimal value between 0.0
and 1.0 then it is interpreted as a percent of the total width or height of
the image. For example: +SR0.75 +SC0.75 starts on a row 75% down from the
top at a column 75% from the left and thus renders only the lower-right 25%
of the image.
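
For example, to render only rows 101 through 200 of a 320 by 200 image (the
scene file name is a placeholder), perhaps for later concatenation with
rows rendered on another system:

POVRAY +Iscene.pov +W320 +H200 +SR101 +ER200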


3.3.6 FILE OUTPUT TYPE

+FT Uncompressed Targa-24 format (IBM-PC Default)
+FD Dump format (QRT-style)
+FR Raw format - one file each for Red, Green and Blue.
+F Use default file type for your system
-F Turn off file output

Normally, you don't need to specify any form of +F option. The default
setting will create the correct format image file for your computer. The
docs included with the executable specify which format is used.

You can disable image file output by using the command line option -F. This
is only useful if your computer has display options and should be used in
conjunction with the +P option. If you disable file output using -F, there
will be no record kept of the image generated. This option is not
normally used.

Unless file output is disabled (-F) POV-Ray will create an image file of
the picture. This output file describes each pixel with 24 bits of color
information. Currently, three output file formats are directly supported.
They are +FT - Uncompressed Targa-24 format (IBM-PC Default), +FD - Dump
format (QRT-style) and +FR - Raw format - one file each for Red, Green and
Blue.


3.3.7 HEIGHT AND WIDTH OF IMAGE

+Hnnn or -Hnnn Set height of image in pixels
+Wnnn or -Wnnn Set width of image in pixels

These switches set the height and width of the image in pixels. This
specifies the image size for file output. The preview display with the +D
option will generally attempt to pick a video mode to accommodate this size
but the +D settings do not in any way affect the resulting file output.


3.3.8 INPUT AND OUTPUT FILE NAMES

+Ifilename Set the input filename
+Ofilename Set output filename

The default input filename is "object.pov". The default output filename is
"data" and the suffix for your default file type. The +O switch has no
effect unless file output is turned on with +F

IBM-PC default file type is Targa, so the file is "data.tga".

Amiga uses dump format and the default outfile name is "data.dis".

Raw mode writes three files, "data.red", "data.grn" and "data.blu". On IBM-
PC's, the default extensions for raw mode are ".r8", ".g8", and ".b8" to
conform to Piclab's "raw" format. Piclab is a widely used free-ware image
processing program. Normally, Targa files are used with Piclab, not raw
files.


3.3.10 ANIMATION CLOCK VARIABLE

+Knnn or -Knnn Set the "clock" float value

The +K switch may be used to pass a single float value to the program for
basic animation. The value is stored in the float identifier "clock". If
an object had a "rotate <0,clock,0>" attached then you could rotate the

object by different amounts over different frames by setting +K10, +K20...
etc. on successive renderings.
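
As an illustrative sketch (the object identifier used here is hypothetical),
a scene might contain:

object { MyShip rotate <0, clock, 0> }  // MyShip: a previously declared object

Rendering the scene once with +K10 and again with +K20 would then give two
frames with the object rotated 10 and 20 degrees about the Y axis.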


3.3.11 LIBRARY SEARCH PATH

+Lpathspec Specify one of up to 10 library search paths

The +L option may be used to specify a "library" pathname to look in for
include, parameter and image files. Multiple uses of the +L switch do not
override previous settings. Up to ten +L options may be used to specify a
search path. The home (current) directory will be searched first followed
by the indicated library directories in order.
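
For example, using the directory layout suggested in section 3.1 (the paths
shown are placeholders for wherever you keep your files):

POVRAY +Iscene.pov +L\povray\include +L\povray\scenes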


3.3.12 BOUNDING SLABS CONTROL

-MB Turn off bounding slabs
+MBnnn Use bounding slabs if more than nnn objects in scene.

New in POV-Ray 2.0 is a spatial sub-division system called bounding slabs.
It compartmentalizes all of the objects in a scene into rectangular slabs
and computes which slabs a particular ray hits before testing the objects
within the slab. This can greatly improve rendering speed. However for
scenes with only a few objects the overhead of using slabs is not worth the
effort. The +MB switch sets the minimum number of objects before slabs are
used. The default is +MB25. The -MB switch turns off slabs completely.
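
For example (the scene file name is a placeholder), the following command
uses bounding slabs only if the scene contains more than 10 objects:

POVRAY +Iscene.pov +MB10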


3.3.13 SYMBOL TABLE SIZE

+MSnnn or -MSnnn Sets symbol table size (default 1000)

POV-Ray allocates a fixed number of spaces in its symbol table for declared
identifiers. The default of 1000 may be increased if you get a "Too many
symbols" error message.


3.3.14 VERSION COMPATIBILITY MODE

+MVn.n or -MVn.n Set version compatibility mode

While many language changes have been made for POV-Ray 2.0, most version
1.0 syntax still works. One new feature in 2.0 that is incompatible with
any 1.0 scene files is the parsing of float expressions. Setting +MV1.0
turns off expression parsing as well as many warning messages so that
nearly all 1.0 files will still work. The "#version" language directive
also can be used to change modes within scene files. The +MV switch
affects only the initial setting.
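
For example, an old scene could be rendered with +MV1.0 on the command
line, or the mode could be switched inside the scene file itself using the
#version directive (a sketch; the surrounding scene text is omitted):

#version 1.0    // parse the following using 1.0 rules
// ... old 1.0 syntax here ...
#version 2.0    // switch back to 2.0 rules for the rest of the file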


3.3.15 PAUSE WHEN FINISHED

+P Pause when image is complete so preview image can
be seen.
-P Do not pause. (default)

Normally when preview display is on you want to look at the image awhile
before continuing. The +P switch pauses and waits for you to press a key
before going on.


3.3.16 QUALITY SETTINGS

+Qn or -Qn Set rendering quality

The +Q option allows you to specify the image rendering quality, which is
useful for quickly rendering images for testing. You may also use -Q with no
difference. The parameter can range from 0 to 9. The values correspond to
the following quality levels:

0,1 Just show quick colors. Ambient lighting only.
Quick colors are used only at 5 or below.
2,3 Show Diffuse and Ambient light
4,5 Render shadows, use extended lights at 5 but not 4
6,7 Create surface textures
8,9 Compute reflected, refracted, and transmitted rays.

The default is +Q9 (maximum quality) if not specified.
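
For example (file names and sizes are placeholders), you might render quick
low-quality tests while designing a scene and use full quality only for the
final image:

POVRAY +Itest.pov +W160 +H120 +Q3
POVRAY +Itest.pov +W640 +H480 +Q9 +A0.2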


3.3.17 VERBOSE STATISTICS

+V Verbose statistics on
-V Verbose statistics off

When the +D option is not used, it is often desirable to monitor progress
of the rendering. The +V switch turns on verbose reporting while -V turns
it off. The format of the output is system dependent.


3.3.18 ALLOW ABORTED RENDERING

+X Allow abort with keypress
-X Disable abort with keypress

On the IBM-PC versions only, when you specify the +X switch then any
keypress will abort rendering. The -X switch disables this feature.


3.4 DEFAULT PARAMETER FILE AND ENVIRONMENT VARIABLE
-----------------------------------------------------

You may specify the default parameters by modifying the file "povray.def"
which contains the parameters in the above format. This file contains a
complete command line as though you had typed it in, and is processed
before any options supplied on the command line are recognized. You may put
commands on more than one line in the "povray.def" file.

Examples:

POVRAY +Ibox.pov +Obox.tga +V +X +W320 +H200

+Ibox.pov = Use the scene file box.pov for input
+Obox.tga = Output the image as a Targa file to box.tga
+V = Show line numbers while rendering.
+X = Allow key press to abort render.
+W320 = Set image width to 320 pixels
+H200 = Set image height to 200 pixels

Some of these parameters could have been put in the POVRAYOPT environment
variable to save typing:

SET POVRAYOPT = +V +X +W320 +H200

Then you could just type:

POVRAY +Ibox.pov +Obox.tga

Or, you could create a file called POVRAY.DEF in the same directory as the
scene file. If POVRAY.DEF contains "+V +X +W320 +H200" then you could also
type:

POVRAY +Ibox.pov +Obox.tga

With the same results. You could also create an option file with a
different name and specify it on the command line:

For example, if QUICK.DEF contains "+V +X +W80 +H60" then you could also
type:

POVRAY +Ibox.pov +Obox.tga QUICK.DEF

When POV-Ray sees QUICK.DEF, it will read it in just as if you typed it on
the command line.

The order in which the options are read in for the IBM-PC version is as
follows:

POVRAYOPT environment variable

POVRAY.DEF in current directory or,
if not found, in library path

Command line and command line option files

For example, +V in POVRAY.DEF would override -V in POVRAYOPT. +X on the
command line would override -X in POVRAY.DEF and so on.

Versions for other computers may read in the POVRAY.DEF file before the
POVRAYOPT environment variable. See the documentation for your version.


4.0 BEGINNING TUTORIAL
========================

This section describes how to create a scene using POV-Ray's scene
description language and how to render this scene.


4.1 YOUR FIRST IMAGE
----------------------

Let's create the scene file for a simple picture. Since ray tracers thrive
on spheres, that's what we'll render first.


4.1.1 THE POV-Ray COORDINATE SYSTEM

First, we have to tell POV-Ray where our camera is and where it's looking.
To do this, we use 3D coordinates. The usual coordinate system for POV-Ray
has the positive Y axis pointing up, the positive X axis pointing to the
right, and the positive Z axis pointing into the screen as follows:

      ^+Y
      |   /+Z
      |  /
      | /
      |/    +X
      |-------->


The negative values of the axes point the other direction, as follows:


              ^+Y
              |   /+Z
              |  /
              | /
    -X        |/       +X
    <---------|--------->
             /|
            / |
           /  |
      -Z  /   |
              v-Y


4.1.2 ADDING STANDARD INCLUDE FILES

Using your personal favorite text editor, create a file called
"picture1.pov". Now, type in the following (note: The input is case
sensitive, so be sure to get capital and lowercase letters correct):

#include "colors.inc" // The include files contain
#include "shapes.inc" // pre-defined scene elements
#include "textures.inc"

camera {
   location <0, 2, -3>
   look_at <0, 1, 2>
}

The first include statement reads in definitions for various useful colors.
The second and third include statements read in some useful shapes and
textures respectively. When you get a chance, have a look through them to
see but a few of the many possible shapes and textures available.

You may have as many include files as needed in a scene file. Include files
may themselves contain include files, but you are limited to declaring
includes nested only 10 "deep".

Filenames specified in the include statements will be searched for in the
home (current) directory first, and if not found, will then be searched for
in directories specified by any "+L" (library path) options active. This
would facilitate keeping all your "include" (.inc) files such as
shapes.inc, colors.inc, and textures.inc in an "include" subdirectory, and
giving a "+L" option on the command line pointing to where your library of
include files is kept.


4.1.3 PLACING THE CAMERA

The camera declaration describes where and how the camera sees the scene.
It gives X, Y, Z coordinates to indicate the position of the camera and
what part of the scene it is pointing at. You describe X, Y, Z coordinates
using a 3-part "vector". A vector is specified by putting 3 numeric values
between a pair of angle brackets and separating the values with commas.

Briefly, "location <0, 2, -3>" places the camera up two units and back
three units from the center of the ray tracing universe which is at <0, 0,
0>. Remember that by default +Z is into the screen and -Z is back out of
the screen.

Also "look_at <0, 1, 2>" rotates the camera to point at X, Y, Z coordinates
<0, 1, 2>. A point 5 units in front of and 1 unit lower than the camera.
The look_at point should be the center of attention of your image.


4.1.4 DESCRIBING AN OBJECT

Now that the camera is set up to record the scene, let's place a red sphere
into the scene. Type the following into your scene file:

sphere {
   <0, 1, 2>, 2
   texture {
      pigment {color Yellow}   // Yellow is pre-defined in COLORS.INC
   }
}

The first vector specifies the center of the sphere. In this example the X
coordinate is zero so it is centered left and right. It is also at Y=1 or
1 unit up from the origin. The Z coordinate is 2 which is 5 units in front
of the camera at Z=-3. After the center vector is a comma followed by the
radius which in this case is 2 units. Since the radius is 1/2 the width of
a sphere, the sphere is 4 units wide.


4.1.5 ADDING TEXTURE TO AN OBJECT

Now that we've defined the location and size of the sphere, we need to
describe the appearance of the surface. The texture {...} block specifies
these parameters. Texture blocks describe the color, bumpiness and finish
properties of an object. In this example we will specify the color only.
This is the minimum we must do. All other texture options except color
will use the default values.

The color you define is the way you want it to look if fully illuminated.
If you were painting a picture of a sphere you would use dark shades of a
color to indicate the shadowed side and bright shades on the illuminated
side. However ray tracing takes care of that for you. You pick the basic
color inherent in the object and POV-Ray brightens or darkens it depending
on the lighting in the scene. Because we are defining the basic color the
object actually IS rather than how it LOOKS, the parameter is called
"pigment".

Many types of color patterns are available for use in a pigment {...}
statement. The keyword "color" specifies that the whole object is to be
one solid color rather than some pattern of colors. The word "Yellow" is a
color identifier which was previously defined in the standard include file
"colors.inc".

If no standard color is available for your needs, you may define your own
color by using the color keyword followed by "red", "green" and "blue"
keywords specifying the amount of red, green and blue to be mixed. For
example a nice shade of pink can be specified by:

color red 1.0 green 0.8 blue 0.8

The values after each keyword should be in the range 0.0 to 1.0. Any of
the three components not specified will default to 0. A shortcut notation
may also be used. The following produces the same shade of pink:

color rgb <1.0, 0.8, 0.8>

Colors are explained in more detail later.


4.1.6 DEFINING A LIGHT SOURCE

One more detail is needed for our scene. We need a light source. Until you
create one, there is no light in this virtual world. Add the following
text to your scene file:

light_source { <2, 4, -3> color White}

The vector specifies the location of the light as 2 units to our right, 4
units above the origin and 3 units back from the origin. The light_source
itself is invisible; it only casts light, so no texture is needed.

That's it! Close the file and render a small picture of it using this
command:

POVRAY +W160 +H120 +P +X +D0 -V -Ipicture1.pov

If your computer does not use the command line, see the executable docs for
the correct command to render a scene.

You may set any other command line options you like, also. The scene is
output to the image file DATA.TGA (or some suffix other than TGA if your
computer uses a different file format). You can convert DATA.TGA to a GIF
image using the commands listed in the docs included with your executable.


4.2 MORE TEXTURE OPTIONS
--------------------------

You've now rendered your first picture but it is somewhat boring. Let's
add some fancy features to the texture.


4.2.1 SURFACE FINISHES

One of the main features of a ray tracer is its ability to do interesting
things with surface finishes such as highlights and reflection. Let's add
a nice little phong highlight (shiny spot) to the sphere. To do this you
need a "finish" parameter. Change the definition of the sphere to this:

sphere {
   <0, 1, 2>, 2
   texture {
      pigment {color Yellow}   // Yellow is pre-defined in COLORS.INC
      finish {phong 1}
   }
}

Now render this the same way you did before. The phong keyword adds a
highlight the same color as the light shining on the object. It adds a lot
of credibility to the picture and makes the object look smooth and shiny.
Lower values of phong will make the highlight less bright. Phong can be
between 0 and 1.


4.2.2 ADDING BUMPINESS

The highlight you've added illustrates how much of our perception depends
on the reflective properties of an object. Ray tracing can exploit this by
playing tricks on our perception to make us see complex details that aren't
really there.

Suppose you wanted a very bumpy surface on the object. It would be very
difficult to mathematically model lots of bumps. We can however simulate
the way bumps look by altering the way light reflects off of the surface.
Reflection calculations depend on a vector called a "surface normal"
vector. This is a vector which points away from the surface and is
perpendicular to it. By artificially modifying (or perturbing) this normal
vector you can simulate bumps. Change the scene to read as follows and
render it:

sphere {
   <0, 1, 2>, 2
   texture {
      pigment {color Yellow}
      normal {bumps 0.4 scale 0.2}
      finish {phong 1}
   }
}

This tells POV-Ray to use a bump pattern to modify the surface normal. The
value 0.4 controls the apparent depth of the bumps. Usually the bumps are
about 1 unit wide which doesn't work very well with a sphere of radius 2.
The scale makes the bumps 1/5th as wide but does not affect their depth.


4.2.3 CREATING COLOR PATTERNS

You can do more than assign a solid color to an object. You can create
complex patterns in the pigment block. Consider this example:

sphere {
   <0, 1, 2>, 2
   texture {
      pigment {
         wood
         color_map {
            [0.0 color DarkTan]
            [0.9 color DarkBrown]
            [1.0 color VeryDarkBrown]
         }
         turbulence 0.05
         scale <0.2, 0.3, 1>
      }
      finish {phong 1}
   }
}

The keyword "wood" specifies a pigment pattern of concentric rings like
rings in wood. The color_map specifies that the color of the wood should
blend from DarkTan to DarkBrown over the first 90% of the vein and from
DarkBrown to VeryDarkBrown over the remaining 10%. The turbulence slightly
stirs up the pattern so the veins aren't perfect circles and the scale
factor adjusts the size of the pattern.

Most of the patterns are set up by default to give you one "feature"
across a sphere of radius 1.0. A "feature" is very roughly defined as a
color transition. For example, a wood texture would have one band on a
sphere of radius 1.0. In this example we scale the pattern using the
"scale" keyword followed by a vector. In this case we scaled 0.2 in the x
direction, 0.3 in the y direction and left the z direction scaled by 1,
which leaves it unchanged. Scale values larger than 1 stretch an element,
values smaller than 1 squish it, and a scale of exactly 1 leaves it
unchanged.


4.2.4 PRE-DEFINED TEXTURES

POV-Ray has some very sophisticated textures pre-defined in the standard
include files "textures.inc" and "stones.inc". Some are entire textures
with pigment, normal and/or finish parameters already defined. Some are
just pigments or just finishes. Change the definition of our sphere to
the following and then re-render it:

sphere {
   <0, 1, 2>, 2
   texture {
      pigment {
         DMFWood4      // Pre-defined from textures.inc
         scale 4       // Scale by the same amount in all directions
      }
      finish {Shiny}   // This finish defined in textures.inc
   }
}

The pigment identifier DMFWood4 has already been scaled down quite small
when it was defined. For this example we want to scale the pattern larger.
Because we want to scale it uniformly we can put a single value after the
scale keyword rather than a vector of x,y,z scale factors.

Look through the file TEXTURES.INC to see what pigments and finishes are
defined and try them out. Just insert the name of the new pigment where
DMFWood4 is now or try a different finish in place of Shiny and re-render
your file.

Here is an example of using a complete texture identifier rather than just
the pieces.

sphere {
   <0, 1, 2>, 2
   texture { PinkAlabaster }
}


4.3 MORE SHAPES
-----------------

So far, we've just used the sphere shape. There are many other types of
shapes that can be rendered by POV-Ray. First let's make some room in the
image by changing the sphere from a radius of 2 to a radius of 1 like this:

sphere {
   <0, 1, 2>, 1
   texture { ... and so on.


4.3.1 PLANE OBJECT

Let's try out a computer graphics standard - "The Checkered Floor." Add
the following object to your .pov file:

plane {
   <0, 1, 0>, 0
   pigment {
      checker
      color Red
      color Blue
   }
}

The object defined here is an infinite plane. The vector <0, 1, 0> is the
surface normal of the plane (i.e., if you were standing on the surface, the
normal points straight up.) The number afterward is the distance that the
plane is displaced along the normal from the origin - in this case, the
floor is placed at Y=0 so that the sphere at Y=1, radius 1, is resting on
it.

Notice that there is no "texture{...}" statement. There really is an
implied texture there. You might find that continually typing statements
that are nested like "texture {pigment {...}}" can get to be tiresome, so
POV-Ray lets you leave out the "texture{...}" under many circumstances. In
general you only need the texture block surrounding a texture identifier
(like the PinkAlabaster example above), or when creating layered textures
(which are covered later).

This pigment uses the checker color pattern and specifies that the two
colors red and blue should be used.

Because the vectors <1,0,0>, <0,1,0> and <0,0,1> are used frequently, POV-
Ray has 3 built-in vector identifiers "x", "y", and "z" respectively that
can be used as shorthand. Thus the plane could be defined as:

plane {
   y, 0
   pigment {... etc.

Note that you do not use angle brackets around vector identifiers.

Looking at the floor, you'll notice that the ball casts a shadow on the
floor. The ray tracer calculates shadows very accurately, creating precise,
sharp shadows. In the real world, penumbral or "soft" shadows are
often seen. Later you'll learn how to use extended light sources to soften
the shadows.


4.3.2 BOX OBJECT

There are several other simple shapes available in POV-Ray. The most
common are the box, cylinder and cone. Try these examples in place of the
sphere:

box {
   <-1, 0,  -1>,     // Near lower left corner
   < 1, 0.5, 3>      // Far upper right corner
   pigment {
      DMFWood4       // Pre-defined from textures.inc
      scale 4        // Scale by the same amount in all directions
   }
   rotate y*20       // Equivalent to "rotate <0,20,0>"
}

In this example you can see that a box is defined by specifying the 3D
coordinates of opposite corners. The first vector must be the minimum
x,y,z coordinates and the 2nd vector must be the maximum x,y,z values. Box
objects can only be defined parallel to the axes. You can later rotate
them to any angle. Note that you can perform simple math on values and
vectors. In the rotate parameter we multiplied the vector identifier "y"
by 20. This is the same as "<0,1,0>*20" or "<0,20,0>".


4.3.3 CONE OBJECT

Here's another example:

cone {
   <0,1,0>, 0.3      // Center and radius of one end
   <1,2,3>, 1.0      // Center and radius of other end
   pigment {DMFWood4 scale 4}
   finish {Shiny}
}

The cone shape is defined by the center and radius of each end. In this
example one end is at location <0,1,0> and has radius of 0.3 while the
other end is centered at <1,2,3> with radius 1. If you want the cone to
come to a sharp point then use a 0 radius. The solid end caps are parallel
to each other and perpendicular to the cone axis. If you want a hollow
cone with no end caps then add the keyword "open" after the 2nd radius like
this:

cone {
   <0,1,0>, 0.3      // Center and radius of one end
   <1,2,3>, 1.0      // Center and radius of other end
   open              // Removes end caps
   pigment {DMFWood4 scale 4}
   finish {Shiny}
}


4.3.4 CYLINDER OBJECT

You may also define a cylinder like this:

cylinder {
   <0,1,0>,          // Center of one end
   <1,2,3>,          // Center of other end
   0.5               // Radius
   open              // Remove end caps
   pigment {DMFWood4 scale 4}
   finish {Shiny}
}

Finally the standard include file "shapes.inc" contains some pre-defined
shapes that are about the size of a sphere with a radius of one unit. You
can invoke them like this:

object {
   UnitBox
   pigment {DMFWood4 scale 4}
   finish {Shiny}
   scale 0.75
   rotate <-20, 25, 0>
   translate y
}

That's the end of our brief tutorial. We've only scratched the surface.
The rest of this document provides a reference to all of POV-Ray's
features.


5.0 SCENE DESCRIPTION LANGUAGE REFERENCE
==========================================

The Scene Description Language allows the user to describe the world in a
readable and convenient way. Files are created in plain ASCII text using
an editor of your choice. POV-Ray reads the file, processes it by creating
an internal model of the scene, and then renders the scene.


5.1 LANGUAGE BASICS
---------------------

The POV-Ray language consists of identifiers, reserved keywords, floating
point literals, string literals, special symbols and comments. The text of
a POV-Ray scene file is free format. You may put statements on separate
lines or on the same line as you desire. You may add blank lines, spaces
or indentations as long as you do not split any keywords or identifiers.


5.1.1 IDENTIFIERS AND KEYWORDS

POV-Ray allows you to define identifiers for later use in the file. An
identifier may be 1 to 40 characters long. It may consist of upper or
lower case letters, the digits 0 through 9 or an underscore character. The
first character must be an alphabetic character. The declaration of
identifiers is covered later.

POV-Ray has a number of reserved words which are used in the language. All
reserved words are fully lower case. Therefore it is recommended that your
identifiers contain at least one upper case character so they are sure to
avoid conflicts with reserved words.
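
For example, here is a small sketch using the #declare directive (covered
in section 5.1.7); the identifier names are arbitrary but follow the
mixed-case recommendation:

#declare Ball_Radius = 1.5          // a float identifier
#declare Shiny_Red = texture {      // a texture identifier
   pigment {color red 1.0}
   finish {phong 1}
}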

The following keywords are reserved in POV-Ray:

adaptive          height_field        rgbf
agate             hexagon             right
agate_turb        iff                 ripples
all               image_map           rotate
alpha             include             roughness
ambient           interpolate         scale
area_light        intersection        sky
background        inverse             smooth
bicubic_patch     ior                 smooth_triangle
blob              jitter              specular
blue              lambda              sphere
bounded_by        leopard             spotlight
box               light_source        spotted
bozo              location            sturm
brilliance        looks_like          texture
bumps             look_at             tga
bump_map          mandel              threshold
bump_size         map_type            tightness
camera            marble              tile2
checker           material_map        tiles
clipped_by        max_intersections   torus
clock             max_trace_level     translate
color             merge               triangle
color_map         metallic            turbulence
colour            normal              type
colour_map        no_shadow           union
component         object              up
composite         octaves             use_color
cone              omega               use_colour
crand             once                use_index
cubic             onion               u_steps
cylinder          open                version
declare           phase               v_steps
default           phong               water_level
dents             phong_size          waves
difference        pigment             wood
diffuse           plane               wrinkles
direction         point_at            x
disc              poly                y
distance          pot                 z
dump              quadric
falloff           quartic
filter            quick_color
finish            quick_colour
flatness          radial
fog               radius
frequency         raw
gif               red
gradient          reflection
granite           refraction
green             rgb


5.1.2 COMMENTS

Comments are text in the scene file included to make the scene file easier
to read or understand. They are ignored by the ray tracer and are there for
humans to read. There are two types of comments in POV-Ray.

Two slashes are used for single line comments. Anything on a line after a
double slash // is ignored by the ray tracer. For example:

// This line is ignored

You can have scene file information on the line in front of the comment, as
in:

object { FooBar } // this is an object

The other type of comment is used for multiple lines. This type of comment
starts with /* and ends with */. Everything in-between is ignored. For
example:

/* These lines
Are ignored
By the
Raytracer */

This can be useful if you want to temporarily remove elements from a scene
file. /*...*/ comments can "comment out" lines containing the other //
comments, and thus can be used to temporarily or permanently comment out
parts of a scene. /*...*/ comments can be nested; the following is legal:

/* This is a comment
// This too
/* This also */
*/

Use comments liberally and generously. Well used, they really improve the
readability of scene files.


5.1.3 INCLUDE FILES

The language allows include files to be specified by placing the line:

#include "filename.inc"

at any point in the input file. The filename must be enclosed in double
quotes and may be up to 40 characters long (or your computer's limit),
including the two double-quote (") characters.

The include file is read in as if it were inserted at that point in the
file. Using include is the same as actually cutting and pasting the entire
contents of that file into your scene.

Include files may be nested. You may have at most 10 nested include files.
There is no limit on un-nested include files.

Generally, include files have data for scenes, but are not scenes in
themselves. By convention scene files end in .pov and include files end
with .inc.
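
For example, a brief sketch using the standard include file COLORS.INC (the
sphere itself is only for illustration):

#include "colors.inc" // defines color identifiers such as Red

sphere { <0, 0, 0>, 1
pigment { color Red } // Red comes from colors.inc
}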


5.1.4 FLOAT EXPRESSIONS

Many parts of the POV-Ray language require you to specify one or more
floating point numbers. A floating point number is a number with a decimal
point. Float literals are represented by an optional sign (-), some
digits, an optional decimal point, and more digits. If the number is an
integer you may omit the decimal point and trailing zero. If it is all
fractional you may omit the leading zero. POV-Ray supports scientific
notation for very large or very small numbers. The following are all valid
float literals:

1.0 -2.0 -4 34 3.4e6 2e-5 .3 0.6

Float identifiers may be declared and used anywhere a float can be used.
See section 5.1.7 on declaring identifiers.

Complex float expressions can be created using + - * / ( ) with float
literals or identifiers. Assuming the identifiers have been previously
declared as floats, the following are valid float expressions:

1+2+3 2*5 1/3 Row*3 Col*5

(Offset-5)/2 This/That+Other*Thing

Expressions are evaluated left to right with the innermost parentheses
evaluated first, then unary + or -, then multiply or divide, then add or
subtract.
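
For example, applying those rules (the identifier names are only for
illustration):

#declare Col = 3
#declare Answer = 1 + Col*5 // multiply first: 1 + 15 is 16
#declare Other = (1 + Col)*5 // parentheses first: 4 * 5 is 20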

There are two built-in float identifiers. The identifier "version" is the
current setting of the version compatibility switch (See +MV under command-
line switches). This allows you to save and restore the previous version
switch. For example suppose MYSTUFF.INC is in version 1.0 format. At the
top of the file you could put:

#declare Temp_Vers = version // Save previous value
#version 1.0 // Change to 1.0 mode

... // Version 1.0 stuff goes here...

#version Temp_Vers // Restore previous version

The other float identifier is "clock". Its value is set by the +K command-
line switch. (See +K under command-line switches). This allows you to do
limited animation control. For example you could move an object using:

translate <0.1*clock,0,0>

and render successive frames with +K1, +K2, +K3 etc. In each frame the
object would move 1/10th of a unit.


5.1.5 VECTOR EXPRESSIONS

POV-Ray operates in a 3D x,y,z coordinate system. Often you will need to
specify x, y and z values. A "vector" is a set of three float values used
for such specification. Vectors consist of three float expressions that
are bracketed by angle brackets < and >. The three terms are separated by
commas. For example:

< 1.0, 3.2, -5.4578 >

The commas are necessary to keep the program from thinking that the 2nd
term is "3.2-5.4578" and that there is no 3rd term. If you see an error
message "Float expected but '>' found instead" it probably means two floats
were combined because a comma was missing.

The three values correspond to the x, y and z directions respectively. For
example, the vector <1,2,3> means the point that's 1 unit to the right, 2
units up, and 3 units in front of the center of the "universe" at <0,0,0>.
Vectors are not always points, though. They can also refer to an amount to
size, move, or rotate a scene element.

Vectors may also be combined in expressions the same as float values. For
example <1,2,3>+<4,5,6> evaluates as <5,7,9>. Subtraction, multiplication
and division are also performed on a term-by-term basis. You may also
combine floats with vectors. For example 5*<1,2,3> evaluates as <5,10,15>.

Sometimes POV-Ray requires you to specify floats and vectors side-by-side.
Thus commas are required separators whenever an ambiguity might arise. For
example <1,2,3>-4 evaluates as <-3,-2,-1> but <1,2,3>,-4 is a vector
followed by a float.

Vector identifiers may be declared and used anywhere a vector can be used.
See section 5.1.7 on declaring identifiers.

Because vectors almost always refer to the x, y and z coordinates, POV-Ray
has three built-in vector identifiers "x", "y" and "z". Like all POV-Ray
keywords they must be lower case. The vector identifier x is equivalent to
the vector <1,0,0>. Similarly y is <0,1,0> and z is <0,0,1>.

Thus an expression like 5*x evaluates to 5*<1,0,0> or <5,0,0>. The use of
these identifiers can make the scene file easier to read.
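
For example, this small illustrative fragment moves a sphere without ever
writing out a full vector:

sphere { <0, 0, 0>, 1
pigment { color Red }
translate 3*y - 2*x // same as translate <-2, 3, 0>
}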


5.1.6 TRANSFORMATIONS

Vectors are used not only as a notation for a point in space but are used
in the transformations scale, rotate, and translate. Scale sizes a texture
or object. Translate moves a texture or object. And rotate turns a texture
or object.


5.1.6.1 Translate

An object or texture pattern may be moved by adding a "translate"
parameter. It consists of the keyword "translate" followed by a vector.
The terms of the vector specify the number of units to move in each of the
x, y, and z directions. Translate moves the element relative to its
current position. For example,

sphere { <10, 10, 10>, 1
pigment { Green }
translate <-5, 2, 1>
}

Will move the sphere from <10, 10, 10> to <5, 12, 11>. It does not move it
to absolute location <5, 2, 1>. Translating by zero will leave the element
unchanged on that axis. For example,

sphere { <10, 10, 10>, 1
pigment { Green }
translate <0, 0, 0>
}

Will not move the sphere at all.


5.1.6.2 Scale

You may change the size of an object or texture pattern by adding a "scale"
parameter. It consists of the keyword "scale" followed by a vector or a
single float value. If a vector is used, terms of the vector specify the
amount of scaling in each of the x, y, and z directions. If a float value
is used, the item is uniformly scaled by the same amount in all directions.

Scale is used to "stretch" or "squish" an element. Values larger than 1
stretch the element on that axis. Values smaller than one are used to
squish the element on that axis. Scale is relative to the current element
size. If the element has been previously re-sized using scale, then scale
will size relative to the new size. Multiple scale values may be used.
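
For example, in this sketch (the values are arbitrary) the second scale is
applied on top of the first:

sphere { <0, 0, 0>, 1
pigment { color Red }
scale <2, 1, 1> // stretch into an ellipsoid along x
scale 3 // then triple everything: net scale is <6, 3, 3>
}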


5.1.6.3 Rotate

You may change the orientation of an object or texture pattern by adding a
"rotate" parameter. It consists of the keyword "rotate" followed by a
vector. The three terms of the vector specify the number of degrees to
rotate about each of the x, y, and z axes.

Note that the order of the rotations does matter. Rotations occur about
the x axis first, then the y axis, then the z axis. If you are not sure if
this is what you want then you should use multiple rotation statements to
get a correct rotation. You should only rotate on one axis at a time. As
in,

rotate <0, 30, 0> // 30 degrees around Y axis then,
rotate <-20, 0, 0> // -20 degrees around X axis then,
rotate <0, 0, 10> // 10 degrees around Z axis.

Rotation is always performed relative to the axis. Thus if an object is
some distance from the axis of rotation, it will not only rotate but it
will "orbit" about the axis as though it was swinging around on an
invisible string.

To work out the rotation directions, you must perform the famous "Computer
Graphics Aerobics" exercise. Hold up your left hand. Point your thumb in
the positive direction of the axis of rotation. Your fingers will curl in
the positive direction of rotation. Similarly if you point your thumb in
the negative direction of the axis your fingers will curl in the negative
direction of rotation. This is the famous "left-hand coordinate system".

^
+Y| +Z/ _
| /_| |_ _
| _| | | |/ \
| | | | | | |
| /| | | | | V
-X |/ | | | | | +X
<----------+--|-|-|-|-|------>
/| | \____
/ | | ___|
/ | \ /
/ | | /
-Z/ -Y|
/ |

In this illustration, the left hand is curling around the X axis. The thumb
points in the positive X direction and the fingers curl over in the
positive rotation direction.

If you want to use a right hand system, as some CAD systems such as AutoCAD
do, the "right" vector in the camera specification needs to be changed. See
the detailed description of the camera. In a right handed system you use
your right hand for the "Aerobics".


5.1.6.4 Transforming Textures and Objects

When an object is transformed, all textures attached to the object AT THAT
TIME are transformed as well. This means that if you have a translate,
rotate, or scale in an object BEFORE a texture, the texture will not be
transformed. If the scale, translate, or rotate is AFTER the texture then
the texture will be transformed with the object. If the transformation is
INSIDE the "texture { }" statement then ONLY THE TEXTURE is affected. The
shape remains the same. For example:

sphere { <0, 0, 0>, 1
texture { White_Marble } // texture identifier from TEXTURES.INC
scale 3 // This scale affects both the
// shape and texture
}

sphere { <0, 0, 0>, 1
scale 3 // This scale affects the shape only
texture { White_Marble }
}

sphere { <0, 0, 0>, 1
texture {
White_Marble
scale 3 // This scale affects the texture only
}
}

Transformations may also be independently applied to pigment patterns and
surface normal (bump) patterns. Note scaling a normal pattern affects only
the width and spacing. It does not affect the height or depth. For
example:

box { <0, 0, 0>, <1, 1, 1>
texture {
pigment {
checker color Red color White
scale 0.25 // This affects only the color pattern
}
normal {
bumps 0.3 // This specifies apparent height of bumps
scale 0.2 // Scales diameter and space between bumps but
// not the height. Has no effect on color pattern.
}
rotate y*45 // This affects the entire texture but not
} // the object.
}


5.1.6.5 Transformation Order

Because rotations are always relative to the axis and scaling is relative
to the origin, you will generally want to create an object at the origin
and scale and rotate it first. Then you may translate it into its proper
position. It is a common mistake to carefully position an object and then
to decide to rotate it. Because a rotation of an object causes it to orbit
the axis, the position of the object may change so much that it orbits out
of the field of view of the camera!

Similarly scaling after translation also moves an object unexpectedly. If
you scale after you translate, the scale will multiply the translate
amount. For example:

translate <5, 6, 7>
scale 4

Will translate to 20, 24, 28 instead of 5, 6, 7. Be careful when
transforming to get the order correct for your purposes.
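
Putting this together, a safe pattern is to scale and rotate the object
while it sits at the origin and translate it last (illustrative values):

box { <-1, -1, -1>, <1, 1, 1>
pigment { color Red }
scale <2, 0.5, 1> // size it while centered at the origin
rotate y*30 // then orient it about the axes
translate <4, 0, 23> // move it into position last
}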


5.1.7 DECLARE

The parameters used to describe the scene elements can be tedious to use at
times. Some parameters are often repeated and it seems wasteful to have to
type them over and over again. To make this task easier, the program allows
users to create identifiers as synonyms for a pre-defined set of parameters
and use them anywhere the parameters would normally be used. For example,
the color white is defined in the POV-Ray language as:

color red 1 green 1 blue 1

This can be pre-defined in the scene as:

#declare White = color red 1 green 1 blue 1

and then substituted for the full description in the scene file, for
example:

sphere {
<0, 0, 0>, 1
pigment { color red 1 green 1 blue 1 }
}

becomes:

#declare White = color red 1 green 1 blue 1

sphere {
<0, 0, 0>, 1
pigment { color White }
}

This is much easier to type and to read. The pre-defined element may be
used many times in a scene.

You use the keyword "declare" to pre-define a scene element and give it a
one-word identifier. This pre-defined scene element is not used in the
scene until you invoke its identifier. Textures, objects, colors, numbers
and more can be predefined.

In most cases when you invoke an identifier you simply use the form
"keyword{identifier}" where the keyword used is the type of statement that
was declared. For example:

#declare Shiny = finish {phong 0.8 phong_size 50 reflection 0.2}

sphere {
<0, 0, 0>, 1
pigment { color White }
finish { Shiny }
}

The identifier "Shiny" was declared as a "finish" and is invoked by placing
it inside a "finish { }" statement.

One exception is object identifiers. If you declare any object of any kind
such as sphere, box, union, intersection etc. you should invoke it by
placing it in an "object { }" statement. Thus you might have:

#declare Thing = intersection {...}

object {Thing} // not "intersection{Thing}"

Pre-defined elements may be modified when they are used, for example:

#declare Mickey = // Pre-define a union object called Mickey
union {
sphere { < 0, 0, 0>, 2 }
sphere { <-2, 2, 0>, 1 }
sphere { < 2, 2, 0>, 1 }
}

// Use Mickey
object{ // Note use of "object", not "union" keyword
Mickey
scale 3
rotate y*20
translate <0, 8, 10>
pigment {color red 1}
finish {phong .7}
}

This scene will contain only one "Mickey"; the declaration itself does not
place anything in the scene. Notice that Mickey is scaled, rotated,
translated, and a texture is added to it. The Mickey identifier could be
used many times in a scene file, and each could have a different size,
position, orientation, and texture.

Declare is especially powerful when used to create a complex object. Each
part of the object is defined separately using declare. These parts can be
tested, rotated, sized, positioned, and textured separately then combined
in one shape or object for the final sizing, positioning, etc. For example,
you could define all the parts of a car like this:

#declare Wheel = object {...}
#declare Seat = object {...}
#declare Body = object {...}
#declare Engine = object {...}
#declare Steering_Wheel = object {...}

#declare Car =
union {
object { Wheel translate < 1, 1, 2>}
object { Wheel translate <-1, 1, 2>}
object { Wheel translate < 1, 1,-2>}
object { Wheel translate <-1, 1,-2>}
object { Seat translate < .5, 1.4, 1>}
object { Seat translate <-.5, 1.4, 1>}
object { Steering_Wheel translate <-.5, 1.6, 1.3>}
object { Body texture { Brushed_Steel } }
object { Engine translate <0, 1.5, 1.5> }
}

and then use it like this:

// Here is a car
object {
Car
translate <4, 0, 23>
}

Notice that the Wheel and Seat are used more than once. A declared element
can be used as many times as you need. Declared elements may be placed in
"include" files so they can be used with more than one scene.

There are several files included with POV-Ray that use declare to pre-
define many shapes, colors, and textures. See the archive INCLUDE for more
info.

NOTE: Declare is not the same as the C language's define. Declare creates
an internal object of the type specified that POV-Ray can copy for later
use. The "define" used in C creates a text substitution macro.

Here's a list of what can be declared, how to declare the element, and how
to use the declaration. See the reference section for element syntax.

Objects: (Any type may be declared, sphere, box, height_field, blob, etc.)
#declare Tree = union {...}
#declare Ball = sphere {...}
#declare Crate= box {...}

object {
Tree
(OBJECT_MODIFIERS...)
}

object {
Ball
(OBJECT_MODIFIERS...)
}

object {
Crate
(OBJECT_MODIFIERS...)
}

Textures:
#declare Fred = texture {...}

sphere { <0, 0, 0>, 1
texture {
Fred
(texture_modifiers)
}
}

Layered textures:
#declare Fred =
texture {...}
texture {...}
texture {...} (etc.)

sphere { <0, 0, 0>, 1
texture {
Fred
(texture_modifiers)
}
}

Pigment:
#declare Fred = pigment {checker color Red color White}

sphere { <0, 0, 0>, 1
pigment {
Fred
(pigment_modifiers)
}
}

Normal:
#declare Fred = normal {bumps 0.5}

sphere { <0, 0, 0>, 1
pigment {White}
normal {
Fred
(normal_modifiers)
}
}

Finish:
#declare Fred = finish {phong 0.7 reflection 0.2}

sphere { <0, 0, 0>, 1
pigment {White}
finish {
Fred
(finish_items)
}
}

Colors:
#declare Fred = color red 1 green 1 blue 1

sphere { <0, 0, 0>, 1
pigment { color Fred }
}

Color_map:
#declare Rainbow =
color_map {
[0.0 color Cyan]
[1/3 color Yellow]
[2/3 color Magenta]
[1.0 color Cyan]
}

sphere { <0, 0, 0>, 1
pigment { radial color_map{Rainbow} rotate -90*x}
}

Float Values:
#declare Fred = 3.45
#declare Fred2 = .02
#declare Fred3 = .5

// Use the numeric value identifier
// anywhere a number would go
sphere { <-Fred, 2, Fred>, Fred
pigment { color red 1 }
finish { phong Fred3 }
}

Camera:
#declare Fred = camera {..}

camera { Fred }

Vectors:
#declare Fred = <9, 3, 2>
#declare Fred2 = <4, 1, 4>

sphere { Fred, 1 // Note do not put < > brackets
scale Fred2 // around vector identifiers
}


5.2 OBJECTS
-------------

Objects are the building blocks of your scene. There are 20 different
types of objects supported by POV-Ray. Seven of them are finite solid
primitives, 4 are finite patch primitives, 5 are infinite solid polynomial
primitives, 3 are Constructive Solid Geometry types and one is a
specialized object that is a light source.

The basic syntax of an object is a keyword describing its type, some
floats, vectors or other parameters which further define its location
and/or shape and some optional object modifiers such as texture, pigment,
normal, finish, bounding, clipping or transformations.

The texture describes what the object looks like, i.e. its material.
Textures are combinations of pigments, normals and finishes. Pigment is
the color or pattern of colors inherent in the material. Normal is a
method of simulating various patterns of bumps, dents, ripples or waves by
modifying the surface normal vector. Finish describes the reflective and
refractive properties of a material.

Bounding shapes are finite, invisible shapes which wrap around complex,
slow rendering shapes in order to speed up rendering time. Clipping shapes
are used to cut away parts of shapes to expose a hollow interior.
Transformations tell the ray tracer how to move, size or rotate the shape
and/or the texture in the scene.


5.2.1 SOLID FINITE PRIMITIVES

There are 7 different solid finite primitive shapes: blob, box, cone,
cylinder, height_field, sphere, and torus. These have a well-defined
"inside" and can be used in Constructive Solid Geometry. Because these
types are finite, POV-Ray can use automatic bounding on them to speed up
rendering time.


5.2.1.1 Spheres

Since spheres are so common in ray traced graphics, POV-Ray has a highly
optimized sphere primitive which renders much more quickly than the
corresponding polynomial quadric shape. The syntax is:

sphere { <CENTER>, RADIUS }

Where <CENTER> is a vector specifying the x,y,z coordinates of the center
of the sphere and RADIUS is a float value specifying the radius. You can
also add translations, rotations, and scaling to the sphere. For example,
the following two objects are identical:

sphere { <0, 25, 0>, 10
pigment {Blue}
}

sphere { <0, 0, 0>, 1.0
pigment {Blue}
scale 10
translate y*25
}

Note that spheres may be scaled unevenly, giving an ellipsoid shape.

Because spheres are highly optimized they make good bounding shapes.
Because they are finite they respond to automatic bounding. As with all
shapes, they can be translated, rotated and scaled.


5.2.1.2 Boxes

A simple box can be defined by listing two corners of the box like this:

box { <CORNER1>, <CORNER2> }

Where <CORNER1> and <CORNER2> are vectors defining the x,y,z coordinates of
opposite corners of the box. For example:

box { <0, 0, 0>, <1, 1, 1> }

Note that all boxes are defined with their faces parallel to the coordinate
axes. They may later be rotated to any orientation using a rotate
parameter.

Each element of <CORNER1> should always be less than the corresponding
element in <CORNER2>. If any element of <CORNER1> is larger than the
corresponding element of <CORNER2>, the box will not appear in the scene.

Boxes are calculated efficiently and make good bounding shapes. Because
they are finite they respond to automatic bounding. As with all
shapes, they can be translated, rotated and scaled.


5.2.1.3 Cylinders

A finite length cylinder with parallel end caps may be defined by:

cylinder { <END1>, <END2>, RADIUS }

Where <END1> and <END2> are vectors defining the x,y,z coordinates of the
center of each end of the cylinder and RADIUS is a float value for the
radius. For example:

cylinder { <0,0,0>, <3,0,0>, 2}

is a cylinder 3 units long lying along the x axis from the origin to x=3
with a radius of 2.

Normally the ends of a cylinder are closed by flat planes which are
parallel to each other and perpendicular to the length of the cylinder.
Adding the optional keyword "open" after the radius will remove the end
caps and results in a hollow tube.

Because they are finite they respond to automatic bounding. As with all
shapes, they can be translated, rotated and scaled.


5.2.1.4 Cones

A finite length cone or a frustum (a cone with the point cut off) may be
defined by:

cone { <END1>, RADIUS1, <END2>, RADIUS2 }

Where <END1> and <END2> are vectors defining the x,y,z coordinates of the
center of each end of the cone and RADIUS1 and RADIUS2 are float values for
the radius of those ends. For example:

cone { <0,0,0>, 2, <0,3,0>, 0 }

is a cone 3 units tall pointing up the y axis from the origin to y=3. The
base has a radius of 2. The other end has a radius of 0 which means it
comes to a sharp point. If neither radius is zero then the results look
like a tapered cylinder or a cone with the point cut off.

Like a cylinder, normally the ends of a cone are closed by flat planes
which are parallel to each other and perpendicular to the length of the
cone. Adding the optional keyword "open" after RADIUS2 will remove the end
caps and results in a tapered hollow tube like a megaphone or funnel.

Because they are finite they respond to automatic bounding. As with all
shapes, they can be translated, rotated and scaled.


5.2.1.5 Torus

A torus is a 4th order quartic polynomial shape that looks like a donut or
inner tube. Because this shape is so useful and quartics are difficult to
define, POV-Ray lets you take a short-cut and define a torus by:

torus { MAJOR, MINOR }

where MAJOR is a float value giving the major radius and MINOR is a float
specifying the minor radius. The major radius extends from the center of
the hole to the mid-line of the rim while the minor radius is the radius of
the cross-section of the rim. The torus is centered at the origin and lies
in the X-Z plane with the Y-axis sticking through the hole.

----------- - - - - - - - ---------- +Y
/ \ / \ |
/ \ / \ |
| | | |<-B-->| -X---|---+X
\ / \ / |
\__________/_ _ _ _ _ _ _ \__________/ |
|<-----A----->| -Y

A = Major Radius
B = Minor Radius

Internally the torus is computed the same as any other quartic or 4th order
polynomial; however, a torus defined this way will respond to automatic
bounding while a quartic must be manually bound if at all. As with all
shapes, a torus can be translated, rotated and scaled. Calculations for
all higher order polynomials must be very accurate. If this shape renders
improperly you may add the keyword "sturm" after the MINOR value to use
POV-Ray's slower-yet-more-accurate Sturmian root solver.
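
For example, a torus with illustrative radii, stood on edge like a wheel
(the "sturm" keyword is optional as described above):

torus { 4.0, 1.0 // major radius 4, minor radius 1
sturm // slower but more accurate root solver
pigment { color Red }
rotate x*90 // rotate into the X-Y plane
translate y*5 // lift it so it rests on the y=0 plane
}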


5.2.1.6 Blob

Blobs are an interesting shape type. Their components are "flexible"
spheres that attract or repel each other creating a "blobby" organic
looking shape. The spheres' surfaces actually stretch out smoothly and
connect, as if coated in silly putty (honey? glop?) and pulled apart.

Picture each blob component as a point floating in space. Each point has a
field around it that starts very strong at the center point and drops off
to zero at some radius. POV-Ray adds together the field strength of each
component and looks for the places that the strength of the field is
exactly the same as the "threshold" value that was specified. Points with
a total field strength greater than the threshold are considered inside the
blob. Those less than the threshold are outside. Points equal to the
threshold are on the surface of the blob.

A blob is defined as follows:

blob {
threshold THRESHOLD_VALUE
component STRENGTH, RADIUS, <CENTER>
component STRENGTH, RADIUS, <CENTER> // Repeat for any number
component STRENGTH, RADIUS, <CENTER> // of components
}

The keyword "threshold" is followed by a float THRESHOLD_VALUE. Each
component begins with the keyword "component". STRENGTH is a float value
specifying the field strength at its center. The strength may be positive
or negative. A positive value will make that component attract other
components. Negative strength will make that component repel other
components. Components in different, separate blob shapes do not affect
each other. The strength tapers off to zero at the value specified by the
float RADIUS. The vector <CENTER> specifies the x,y,z coordinates of the
component. For example:

blob {
threshold 0.6
component 1.0, 1.0, <.75, 0, 0>
component 1.0, 1.0, <-.375, .64952, 0>
component 1.0, 1.0, <-.375, -.64952, 0>
scale 2
}

If you have a single blob component then the surface you see will look just
like a sphere, with the radius of the surface being somewhere inside the
"radius" value you specified for the component. The exact radius of this
sphere-like surface can be determined from the blob equation listed below
(you will probably never need to know this, blobs are more for visual
appeal than for exact modeling).

If you have a number of blob components, then their fields add together at
every point in space - this means that if the blob components are close
together the resulting surface will smoothly flow around the components.

The various numbers that you specify in the blob declaration interact in
several ways. The meaning of each can be roughly stated as:

THRESHOLD:
This is the total density value that POV-Ray is looking for. By
following the ray out into space and looking at how each blob component
affects the ray, POV-Ray will find the points in space where the density is
equal to the "threshold" value.

1) "threshold" must be greater than 0. POV-Ray only looks for positive
densities.
2) If "threshold" is greater than the strength of a component, then
the component will disappear.
3) As "threshold" gets larger the surface you see gets closer to the
centers of the components.
4) As "threshold" gets smaller, the surface you see gets closer to the
spheres at a distance of "radius" from the centers of the components.

STRENGTH:
Each component has a strength value - this defines the density of the
component at the center of the component. Changing this value will usually
have only a subtle effect.

1) "strength" may be positive or negative. Zero is a bad value, as the
net result is that no density was added - you might just as well have not
used this component.
2) If "strength" is positive, then POV-Ray will add its density to the
space around the center of the component. If this adds enough density to be
greater than "threshold" you will see a surface.
3) If "strength" is negative, then POV-Ray will subtract its density
from the space around the center of the component. This will only do
something if there happen to be positive components nearby. What happens is
that the surface around any nearby positive components will be dented away
from the center of the negative component.
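
A small sketch of that denting effect (the values are chosen only for
illustration):

blob {
threshold 0.5
component 1.0, 2.0, < 0.0, 0, 0> // positive component
component -0.8, 1.5, < 1.2, 0, 0> // negative component dents the right side
pigment { color Green }
}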

RADIUS:
Each component has a radius of influence. The component can only
affect space within "radius" of its center. This means that if all of the
components are farther than "radius" from each other, you will only see a
bunch of spheres. If a component is within the radius of another
component, then the two components start to affect each other. At first
there is only a small bulge outwards on each of the two components, as they
get closer they bulge more and more until they attach along a smooth neck.
If the components are very close (i.e. their centers are on top of each
other) then you will only see a sphere. This is just like having a single
component of greater strength; the resulting surface is bigger than the one
either component would produce on its own.

1) "radius" must be bigger than 0.
2) As "radius" increases the apparent size of the component will
increase.

CENTER:
This is simply a point in space. It defines the center of a blob
component. By changing the x/y/z values of the center you move the
component around.

THE FORMULA
For the more mathematically minded, here's the formula used internally
by POV-Ray to create blobs. You don't need to understand this to use blobs.

The formula used for a single blob component is:

density = strength * (1 - (distance/radius)^2)^2

This formula has the nice property that it is exactly equal to "strength" at
the center of the component and drops off to exactly 0 at a distance of
"radius" from the center of the component. The density formula for more
than one blob component is just the sum of the individual component
densities:

density = density1 + density2 + ...

Blobs can be used in CSG shapes and they can be scaled, rotated and
translated. Because they are finite they respond to automatic bounding.
The calculations for blobs must be very accurate. If this shape renders
improperly you may add the keyword "sturm" after the last component to use
POV-Ray's slower-yet-more-accurate Sturmian root solver.


5.2.1.7 Height Fields

Height fields are fast, efficient objects that are generally used to create
mountains or other raised surfaces out of hundreds of triangles in a mesh.

A height field is essentially a 1 unit wide by 1 unit long box with a
mountainous surface on top. The height of the mountain at each point is
taken from the color number (palette index) of the pixels in a graphic
image file.


________ <---- image index 255
/ /|
+1y ---------- |
| | |
| | |+1z <- Image upper-right
| | /
0,0,0---------- +1x
^
|____ Image lower-left


NOTE: Image resolution is irrelevant to the scale of the height field.

The mesh of triangles corresponds directly to the pixels in the image file.
In fact, there are two small triangles for every pixel in the image file.
The Y (height) component of the triangles is determined by the palette
index number stored at each location in the image file. The higher the
number, the higher the triangle. The maximum height of an un-scaled height
field is 1 unit.

The higher the resolution of the image file used to create the height
field, the smoother the height field will look. A 640 X 480 GIF will create
a smoother height field than a 320 x 200 GIF. The size/resolution of the
image does not affect the size of the height field. The un-scaled height
field size will always be 1x1. Higher resolution image files will create
smaller triangles, not larger height fields.

There are three types of files which can define a height field, as follows:

height_field { gif "filename.gif" }
height_field { tga "filename.tga" }
height_field { pot "filename.pot" }

The image file used to create a height field can be a GIF, TGA or POT
format file. The GIF format is the only one that can be created using a
standard paint program.

In a GIF file, the color number is the palette index at a given point. Use
a paint program to look at the palette of a GIF image. The first color is
palette index zero, the second is index 1, the third is index 2, and so on.
The last palette entry is index 255. Portions of the image that use low
palette entries will be lower on the height field. Portions of the image
that use higher palette entries will be higher on the height field. For
example, an image that was completely made up of entry 0 would be a flat
1x1 square. An image that was completely made up of entry 255 would be a
1x1x1 cube.

The maximum number of colors in a GIF is 256, so a GIF height field can
have any number of triangles, but they will have only 256 different height
values.

The color of the palette entry does not affect the height of the pixel.
Color entry 0 could be red, blue, black, or orange, but the height of any
pixel that uses color entry 0 will always be 0. Color entry 255 could be
indigo, hot pink, white, or sky blue, but the height of any pixel that uses
color entry 255 will always be 1.

You can create height field GIF images with a paint program or a fractal
program like "Fractint". If you have access to an IBM-PC, you can get
Fractint from most of the same sources as POV-Ray.

A POT file is essentially a GIF file with a 16 bit palette. The maximum
number of colors in a POT file is greater than 32,000. This means a POT
height field can have over 32,000 possible height values. This makes it
possible to have much smoother height fields. Note that the maximum height
of the field is still 1 even though more intermediate values are possible.

At the time of this writing, the only program that created POT files was a
freeware IBM-PC program called Fractint. POT files generated with this
fractal program create fantastic landscapes. If you have access to an IBM-
PC, you can get Fractint from most of the same sources as POV-Ray.

The TGA file format may be used as a storage device for 16 bit numbers
rather than an image file. The TGA format uses the red and green bytes of
each pixel to store the high and low bytes of a height value. TGA files are
as smooth as POT files, but they must be generated with special custom-made
programs. Currently, this format is of most use to programmers, though you
may see TGA height field generator programs arriving soon. There is
example C source code included with the POV-Ray source archive to create a
TGA file for use with a height field.

It is nearly impossible to take advantage of the 16 bits of resolution
offered by the use of tga files in height fields when the tga file is
created in a paint program. A gif file is a better choice for paint
created height fields in 8 bits. Also see Appendix B.5 for a tip on
creating tga files for height fields.

An optional "water_level" parameter may be added after the file name. It
consists of the keyword "water_level" followed by a float value that tells
the program not to look for the height field below that value. The default
value is 0 and legal values are between 0 and 1. For example, "water_level .5"
tells POV-Ray to only render the top half of the height field. The other
half is "below the water" and couldn't be seen anyway. This term comes from
the popular use of height fields to render landscapes. A height field would
be used to create islands and another shape would be used to simulate water
around the islands. A large portion of the height field would be obscured
by the "water" so the "water_level" parameter was introduced to allow the
ray-tracer to ignore the unseen parts of the height field. Water_level is
also used to "cut away" unwanted lower values in a height field. For
example, if you have an image of a fractal on a solid colored background,
where the background color is palette entry 0, you can remove the
background in the height field by specifying "water_level .001".

Normally height fields have a rough, jagged look because they are made of
lots of flat triangles. Adding the keyword "smooth" causes POV-Ray to
modify the surface normal vectors of the triangles in such a way that the
lighting and shading of the triangles will give a smooth look. This may
allow you to use a lower resolution file for your height field than would
otherwise be needed.
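
Putting the pieces together, a landscape sketch might look like this (the
file name "hills.gif" is hypothetical and the values are illustrative):

height_field {
gif "hills.gif" // palette indexes in this image give the heights
water_level 0.05 // ignore the lowest part of the field
smooth // smooth the triangle shading
pigment { color Green }
scale <100, 10, 100> // stretch the 1x1x1 field into a broad, low landscape
translate <-50, 0, -50> // center it on the origin
}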

Height fields can be used in CSG shapes and they can be scaled, rotated and
translated. Because they are finite they respond to automatic bounding.

Here are some notes and helpful hints on height fields from their creator,
Doug Muir:

The height field is mapped to the x-z plane, with its lower left corner
sitting at the origin. It extends to 1 in the positive x direction and to 1
in the positive z direction. It is maximum 1 unit high in the y direction.
You can translate it, scale it, and rotate it to your heart's content.

When deciding on what water_level to use, remember, this applies to the un-
transformed height field. If you are a Fractint user, the water_level
should be used just like the water_level parameter for 3d projections in
Fractint.

Here's a detailed explanation of how the ray-tracer creates the height
field. You can skip this if you aren't interested in the technical side of
ray-tracing. This information is not needed to create or use height fields.

To find an intersection with the height field, the ray tracer first checks
to see if the ray intersects the box which surrounds the height field.
Before any transformations, this box's two opposite vertexes are at (0,
water_level, 0) and (1, 1, 1). If the box is intersected, the ray tracer
figures out where, and then follows the line from where the ray enters the
box to where it leaves the box, checking each pixel it crosses for an
intersection.

It checks the pixel by dividing it up into two triangles. The height vertex
of the triangle is determined by the color index at the corresponding
position in the GIF, POT, or TGA file.

If your image file uses the color map randomly, your height field is going
to look pretty chaotic, with tall, thin spikes shooting up all over the
place. Not every GIF will make a good height field.

If you want to get an idea of what your height field will look like, I
recommend using the IBM-PC program Fractint's 3d projection features to do
a sort of preview. If it doesn't look good there, the ray tracer isn't
going to fix it. For those of you who can't use Fractint, convert the image
palette to a gray scale from black at entry 0 to white at entry 255 with
smooth steps of gray in-between. The dark parts will be lower than the
brighter parts, so you can get a feel for how the image will look as a
height field.


5.2.2 FINITE PATCH PRIMITIVES

There are 4 totally thin, finite objects which have NO well-defined inside.
They may be combined in CSG union but cannot be used in other types of CSG.
They are bicubic_patch, disc, smooth_triangle and triangle. Because these
types are finite, POV-Ray can use automatic bounding on them to speed up
rendering time.


5.2.2.1 Triangle and Smooth_triangle

The triangle primitive is available in order to make more complex objects
than the built-in shapes will permit. Triangles are usually not created by
hand, but are converted from other files or generated by utilities.

A triangle is defined by:

triangle { <CORNER1>, <CORNER2>, <CORNER3> }

where <CORNER1>, <CORNER2> and <CORNER3> are vectors defining the x,y,z
coordinates of each corner of the triangle.
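
For example, a single flat triangle (the coordinates are only illustrative):

triangle { <0, 0, 0>, <2, 0, 0>, <0, 3, 0>
pigment { color Blue }
}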

Because triangles are perfectly flat surfaces it would require extremely
large numbers of very small triangles to approximate a smooth, curved
surface. However much of our perception of smooth surfaces is dependent
upon the way light and shading is done. By artificially modifying the
surface normals we can simulate a smooth surface and hide the sharp-edged
seams between individual triangles.

The smooth_triangle primitive is used for just such purposes. The
smooth_triangles use a formula called Phong normal interpolation to
calculate the surface normal for any point on the triangle based on normal
vectors which you define for the three corners. This makes the triangle
appear to be a smooth curved surface. A smooth_triangle is defined by:

smooth_triangle {
<CORNER1>, <NORMAL1>,
<CORNER2>, <NORMAL2>,
<CORNER3>, <NORMAL3>
}

where the corners are defined as in regular triangles and each <NORMALn> is a
vector describing the direction of the surface normal at each corner.

These normal vectors are prohibitively difficult to compute by hand.
Therefore smooth_triangles are almost always generated by utility programs.
To achieve smooth results, any triangles which share a common vertex should
have the same normal vector at that vertex. Generally the smoothed normal
should be the average of all the actual normals of the triangles which
share that point.
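
As an illustration only (real normals would normally come from a utility
program), here is a flat triangle whose corner normals are tilted slightly
apart to fake a gentle curve:

smooth_triangle {
<0, 0, 0>, <-0.2, 1, -0.2>,
<2, 0, 0>, < 0.2, 1, -0.2>,
<1, 0, 2>, < 0.0, 1, 0.3>
pigment { color Blue }
}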


5.2.2.2 Bicubic_patch

A bicubic patch is a 3D curved surface created from a mesh of triangles.
POV-Ray supports a type of bicubic patch called a Bezier patch. A bicubic
patch is defined as follows:

bicubic_patch {
type PATCH_TYPE
flatness FLATNESS_VALUE
u_steps NUM_U_STEPS
v_steps NUM_V_STEPS
<CP1>,  <CP2>,  <CP3>,  <CP4>,
<CP5>,  <CP6>,  <CP7>,  <CP8>,
<CP9>,  <CP10>, <CP11>, <CP12>,
<CP13>, <CP14>, <CP15>, <CP16>
}

The keyword "type" is followed by a float PATCH_TYPE which currently must
be either 0 or 1. For type 0 only the control points are retained within
POV-Ray. This means that a minimal amount of memory is needed, but POV-Ray
will need to perform many extra calculations when trying to render the
patch. Type 1 preprocesses the patch into many subpatches. This results
in a significant speedup in rendering, at the cost of memory.

These 4 parameters: type, flatness, u_steps & v_steps, may appear in any
order. They are followed by 16 vectors that define the x,y,z coordinates
of the 16 control points which define the patch. The patch touches the 4
corner points <CP1>, <CP4>, <CP13> and <CP16> while the other 12 points
pull and stretch the patch into shape.

The keywords "u_steps" and "v_steps" are each followed by float values
which tell how many rows and columns of triangles are the minimum to use to
create the surface. The maximum number of individual pieces of the patch
that are tested by POV-Ray can be calculated from the following:

sub-pieces = 2^u_steps * 2^v_steps

This means that you really should keep "u_steps" and "v_steps" under 4 or
5. Most patches look just fine with "u_steps 3" and "v_steps 3", which
translates to 64 subpatches (128 smooth triangles).

As POV-Ray processes the Bezier patch, it makes a test of the current piece
of the patch to see if it is flat enough to just pretend it is a rectangle.
The statement that controls this test is: "flatness xxx". Typical flatness
values range from 0 to 1 (the lower the slower).

If the value for flatness is 0, then POV-Ray will always subdivide the
patch to the extent specified by u_steps and v_steps. If flatness is
greater than 0, then every time the patch is split, POV-Ray will check to
see if there is any need to split further.

There are both advantages and disadvantages to using a non-zero flatness.
The advantages include:

If the patch isn't very curved, then this will be detected and POV-Ray
won't waste a lot of time looking at the wrong pieces.

If the patch is only highly curved in a couple of places, POV-Ray will
keep subdividing there and concentrate its efforts on the hard part.

The biggest disadvantage is that if POV-Ray stops subdividing at a
particular level on one part of the patch and at a different level on an
adjacent part of the patch, there is the potential for "cracking". This is
typically visible as spots within the patch where you can see through. How
bad this appears depends very highly on the angle at which you are viewing
the patch.

Like triangles, the bicubic patch is not meant to be generated by hand.
These shapes should be created by a special utility. You may be able to
acquire utilities to generate these shapes from the same source from which
you obtained POV-Ray.

Example:
bicubic_patch {
type 1
flatness 0.01
u_steps 4
v_steps 4
<0, 0, 2>, <1, 0, 0>, <2, 0, 0>, <3, 0,-2>,
<0, 1, 0>, <1, 1, 0>, <2, 1, 0>, <3, 1, 0>,
<0, 2, 0>, <1, 2, 0>, <2, 2, 0>, <3, 2, 0>,
<0, 3, 2>, <1, 3, 0>, <2, 3, 0>, <3, 3, -2>
}

The triangles in a POV-Ray bicubic_patch are automatically smoothed using
normal interpolation but it is up to the user (or the user's utility
program) to create control points which smoothly stitch together groups of
patches.

As with the other shapes, bicubic_patch objects can be translated, rotated,
and scaled. Because they are finite they respond to automatic bounding.
Since it's made from triangles, a bicubic_patch cannot be used in CSG
intersection or difference types or inside a clipped_by modifier because
triangles have no clear "inside". The CSG union type works acceptably.


5.2.2.3 Disc

One other flat, finite object type is available with POV-Ray. Note that a
disc is infinitely thin. It has no thickness. If you want a disc with
true thickness you should use a very short cylinder. A disc shape may be
defined by:

disc { <CENTER>, <NORMAL>, RADIUS }

or

disc { <CENTER>, <NORMAL>, RADIUS, HOLE_RADIUS }

The vector <CENTER> defines the x,y,z coordinates of the center of the
disc. The <NORMAL> vector describes its orientation by describing its
surface normal vector. This is followed by a float specifying the RADIUS.
This may be optionally followed by another float specifying the radius of a
hole to be cut from the center of the disc.

Example:
disc {
<-2,-0.5, 0>, //center location
<0, 1, 0>, //normal vector
2 //radius
pigment { color Cyan }
}

disc {
<0, 1, 0>, //center location
<-1, 3, -2>, //normal vector
1.5, //radius
0.5 //hole radius (optional)
pigment { color Yellow }
}

As with the other shapes, discs can be translated, rotated, and scaled.
Because they are finite they respond to automatic bounding. Disc cannot be
used in CSG intersection or difference types or inside a clipped_by
modifier because it has no clear "inside". The CSG union type works
acceptably.


5.2.3 INFINITE SOLID PRIMITIVES

There are 5 polynomial primitive shapes that are possibly infinite and do
not respond to automatic bounding. They do have a well defined inside and
may be used in CSG. They are plane, cubic, poly, quadric, and quartic.


5.2.3.1 Plane

The plane primitive is a fast, efficient way to define an infinite flat
surface. The plane is specified as follows:

plane { <NORMAL>, DISTANCE }

The vector <NORMAL> defines the surface normal of the plane. A surface
normal is a vector which points up from the surface at a 90 degree angle.
This is followed by a float value that gives the distance along the normal
that the plane is from the origin. For example:

plane { <0,1,0>,4 }

This is a plane where "straight up" is defined in the positive y direction.
The plane is 4 units in that direction away from the origin. Because most
planes are defined with surface normals in the direction of an axis, you
will often see planes defined using the "x", "y", or "z" built-in vector
identifiers. The example above could be specified as:

plane { y,4 }

The plane extends infinitely in the x and z directions. It effectively
divides the world into two pieces. By definition the normal vector points
to the outside of the plane while any points away from the vector are
defined as inside. This inside/outside distinction is only important when
using planes in CSG.

As with the other shapes, planes can be translated, rotated, and scaled.
Because they are infinite they do not respond to automatic bounding. Plane
can be used freely in CSG because it has a clear defined "inside".
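
A very common use of the plane is a simple floor, for example (the checker
pattern is described in the texture section):

plane { y, -1 // a floor one unit below the origin
pigment { checker color Red color White }
}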

A plane is called a "polynomial" shape because it is defined by a first
order polynomial equation. Given a plane:

plane { <A, B, C>, D }

it can be represented by the formula:

A*x + B*y + C*z = D

Therefore our example "plane {y,4}" is actually the polynomial equation
"y=4". You can think of this as a set of all x,y,z points where all have y
values equal to 4, regardless of the x or z values.

This equation is a "first order" polynomial because each term contains only
single powers of x, y or z. A second order equation has terms like x^2,
y^2, z^2, xy, xz and yz. Another name for a 2nd order equation is a
quadric equation. Third order polys are called cubics. A 4th order
equation is a quartic. Such shapes are described in the sections below.


5.2.3.2 Quadric

Quadric surfaces can produce shapes like ellipsoids, spheres, cones,
cylinders, paraboloids (dish shapes), and hyperboloids (saddle or hourglass
shapes). NOTE: Do not confuse "quaDRic" with "quaRTic". A quadric is a
2nd order polynomial while a quartic is 4th order.

A quadric is defined in POV-Ray by:

quadric { <A,B,C>, <D,E,F>, <G,H,I>, J }

where A through J are float expressions.

This defines a surface of x,y,z points which satisfy the equation:

A x^2 + B y^2 + C z^2
+ D xy + E xz + F yz
+ G x + H y + I z + J = 0

Different values of A,B,C,...J will give different shapes. So, if you take
any three dimensional point and use its x, y, and z coordinates in the
above equation, the answer will be 0 if the point is on the surface of the
object. The answer will be negative if the point is inside the object and
positive if the point is outside the object. Here are some examples:

X^2 + Y^2 + Z^2 - 1 = 0 Sphere
X^2 + Y^2 - 1 = 0 Infinitely long cylinder along the Z axis
X^2 + Y^2 - Z^2 = 0 Infinitely long cone along the Z axis

The easiest way to use these shapes is to include the standard file
"SHAPES.INC" into your program. It contains several pre-defined quadrics
and you can transform these pre-defined shapes (using translate, rotate,
and scale) into the ones you want.

You can invoke them by using the syntax,

object { Quadric_Name }

The pre-defined quadrics are centered about the origin <0, 0, 0> and have a
radius of 1. Don't confuse radius with width. The radius is half the
diameter or width making the standard quadrics 2 units wide.

Some of the pre-defined quadrics are,

Ellipsoid
Cylinder_X, Cylinder_Y, Cylinder_Z
QCone_X, QCone_Y, QCone_Z
Paraboloid_X, Paraboloid_Y, Paraboloid_Z

For a complete list, see the file SHAPES.INC.
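
For example, one of those pre-defined quadrics could be used like this (the
transformations and pigment are only illustrative):

object { Paraboloid_Y // dish shape opening up along +y, from SHAPES.INC
scale <2, 1, 2> // widen the dish
translate y*1 // lift it off the origin
pigment { color Yellow }
}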


5.2.3.3 Poly, Cubic and Quartic.

Higher order polynomial surfaces may be defined by the use of a poly shape.
The syntax is:

poly { ORDER, <T1, T2, T3,.... Tm> }

Where ORDER is a whole number from 2 to 7 inclusively that specifies the
order of the equation. T1, T2... Tm are float values for the coefficients
of the equation. There are "m" such terms where

m=((ORDER+1)*(ORDER+2)*(ORDER+3))/6

An alternate way to specify 3rd order polys is:

cubic { <T1, T2, T3,... T20> }

Also 4th order equations may be specified with:

quartic { <T1, T2, T3,... T35> }

Here's a more mathematical description of quartics for those who are
interested. Quartic surfaces are 4th order surfaces, and can be used to
describe a large class of shapes including the torus, the lemniscate, etc.
The general equation for a quartic equation in three variables is (hold
onto your hat):

a00 x^4 + a01 x^3 y + a02 x^3 z+ a03 x^3 + a04 x^2 y^2+
a05 x^2 y z+ a06 x^2 y + a07 x^2 z^2+a08 x^2 z+a09 x^2+
a10 x y^3+a11 x y^2 z+ a12 x y^2+a13 x y z^2+a14 x y z+
a15 x y + a16 x z^3 + a17 x z^2 + a18 x z + a19 x+
a20 y^4 + a21 y^3 z + a22 y^3+ a23 y^2 z^2 +a24 y^2 z+
a25 y^2 + a26 y z^3 + a27 y z^2 + a28 y z + a29 y+
a30 z^4 + a31 z^3 + a32 z^2 + a33 z + a34

To declare a quartic surface requires that each of the coefficients (a0 ->
a34) be placed in order into a single long vector of 35 terms.

As an example let's define a torus the hard way. A Torus can be
represented by the equation:

x^4 + y^4 + z^4 + 2 x^2 y^2 + 2 x^2 z^2 + 2 y^2 z^2
-2 (r0^2 + r1^2) x^2 + 2 (r0^2 - r1^2) y^2
-2 (r0^2 + r1^2) z^2 + (r0^2 - r1^2)^2 = 0

Where r0 is the "major" radius of the torus - the distance from the hole of
the donut to the middle of the ring of the donut, and r1 is the "minor"
radius of the torus - the distance from the middle of the ring of the donut
to the outer surface. The following object declaration is for a torus
having major radius 6.3 minor radius 3.5 (Making the maximum width just
under 10).

//Torus having major radius sqrt(40), minor radius sqrt(12)

quartic {
< 1, 0, 0, 0, 2, 0, 0, 2, 0,
-104, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 2, 0, 56, 0,
0, 0, 0, 1, 0, -104, 0, 784 >
sturm
bounded_by { // bounded_by speeds up the render,
// see bounded_by
// explanation later
// in docs for more info.
sphere { <0, 0, 0>, 10 }
}
}

Poly, cubic and quartics are just like quadrics in that you don't have to
understand what one is to use one. The file SHAPESQ.INC has plenty of pre-
defined quartics for you to play with. The most common one is the torus or
donut. The syntax for using a pre-defined quartic is:

object { Quartic_Name }

As with the other shapes, these shapes can be translated, rotated, and
scaled. Because they are infinite they do not respond to automatic
bounding. They can be used freely in CSG because they have a clear defined
"inside".

Polys use highly complex computations and will not always render perfectly.
If the surface is not smooth, has dropouts, or extra random pixels, try
using the optional keyword "sturm" in the definition. This will cause a
slower, but more accurate calculation method to be used. Usually, but not
always, this will solve the problem. If sturm doesn't work, try rotating,
or translating the shape by some small amount. See the sub-directory MATH
for examples of polys in scenes.

There are really so many different quartic shapes, we can't even begin to
list or describe them all. If you are interested and mathematically
inclined, an excellent reference book for curves and surfaces where you'll
find more quartic shape formulas is:

"The CRC Handbook of Mathematical Curves and Surfaces"
David von Seggern
CRC Press
1990


5.2.4 CONSTRUCTIVE SOLID GEOMETRY (CSG)

POV-Ray supports Constructive Solid Geometry (also called Boolean
operations) in order to make the shape definition abilities more powerful.


5.2.4.1 About CSG

The simple shapes used so far are nice, but not terribly useful on their
own for making realistic scenes. It's hard to make interesting objects when
you're limited to spheres, boxes, cylinders, planes, and so forth.

Constructive Solid Geometry (CSG) is a technique for taking these simple
building blocks and combining them together. You can use a cylinder to bore
a hole through a sphere. You can start with solid blocks and carve away
pieces. Objects may be combined in groups and treated as though they were
single objects.

Constructive Solid Geometry allows you to define shapes which are the
union, intersection, or difference of other shapes. Additionally you may
clip sections of objects revealing their hollow interiors.

Unions superimpose two or more shapes. This has the same effect as defining
two or more separate objects, but is simpler to create and/or manipulate.
In POV-Ray 2.0 the union keyword may be used anyplace composite was used in
previous versions of POV-Ray. Also a new type of union called "merge" can
eliminate internal surfaces on transparent or clipped objects.

Intersections define the space where the two or more surfaces overlap.

Differences allow you to cut one object out of another.

CSG intersections, unions, and differences can consist of two or more
shapes. For example:

union {
object{O1}
object{O2}
object{O3} // any number of objects
texture{T1}
}

CSG shapes may be used in CSG shapes. In fact, CSG shapes may be used
anyplace that a standard shape is used.

The order of the component shapes within the CSG doesn't matter except in a
difference shape. For CSG differences, the first shape is visible and the
remaining shapes are cut out of the first.

Constructive solid geometry shapes may be translated, rotated, or scaled in
the same way as any shape. The shapes making up the CSG shape may be
individually translated, rotated, and scaled as well.

When using CSG, it is often useful to invert a shape so that it's inside-
out. The appearance of the shape is not changed, just the way that POV-Ray
perceives it. The inverse keyword can be used to do this for any shape.
When inverse is used, the "inside" of the shape is flipped to become the
"outside". For planes, "inside" is defined to be "in the opposite direction
to the "normal" or "up" direction.

Note that performing an intersection between a shape and some other inverse
shapes is the same as performing a difference. In fact, the difference is
actually implemented in this way in the code.
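
For example, these two objects describe the same shape (the geometry is only
illustrative):

difference {
sphere { <0, 0, 0>, 1 }
box { <0, -2, -2>, <2, 2, 2> } // cut away the right half of the sphere
}

intersection {
sphere { <0, 0, 0>, 1 }
box { <0, -2, -2>, <2, 2, 2> inverse } // same result using inverse
}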


5.2.4.2 Inside and outside

Most shape primitives, like spheres, boxes, and blobs, divide the world
into two regions. One region is inside the surface and one is outside.
(The exceptions to this rule are triangles, disc and bezier patches - we'll
talk about this later.)

Given any point in space, you can say it's either inside or outside any
particular primitive object (well, it could be exactly on the surface, but
numerical inaccuracies will put it to one side or the other).

Even planes have an inside and an outside. By definition, the surface
normal of the plane points towards the outside of the plane. (For a simple
floor, for example, the space above the floor is "outside" and the space
below the floor is "inside". For simple floors this is unimportant, but for
planes used as parts of CSG objects it becomes much more important.) CSG uses the
concepts of inside and outside to combine shapes together. Take the
following situation:

Note: The diagrams shown here demonstrate the concepts in 2D and are
intended only as an analogy to the 3D case.

Note that triangles and triangle-based shapes cannot be used as solid
objects in CSG since they have no clear inside and outside.

In this diagram, point 1 is inside object A only. Point 2 is inside B
only. Point 3 is inside both A and B while point 0 is outside everything.

* = Object A
% = Object B

* 0
* * %
* * % %
* *% %
* 1 %* %
* % * 2 %
* % 3 * %
*******%******* %
% %
%%%%%%%%%%%%%%%%%


Complex shapes may be created by combining other shapes using a technique
called "Constructive Solid Geometry" (or CSG for short). The CSG shapes
are difference, intersection, and union. The following gives a simple 2D
overview of how these functions work.

5.2.4.3 Union

Unions are simply "glue", used to bind two or more shapes into a single entity
that can be manipulated as a single object. The diagram above shows the
union of A and B. The new object created by the union operation can then
be scaled, translated, and rotated as a single shape. The entire union can
share a single texture, but each object contained in the union may also
have its own texture, which will override any matching texture statements
in the parent object:

union {
    sphere { <0, 0.5, 0> 1 pigment { Red } }
    sphere { <0, 0.0, 0> 1 }
    sphere { <0,-0.5, 0> 1 }
    pigment { Blue }
    finish { Shiny }
}

This union will contain three spheres. The first sphere is explicitly
colored Red while the other two will be shiny blue. Note that the shiny
finish does NOT apply to the first sphere. This is because the
"pigment{Red}" is actually shorthand for "texture{pigment{Red}}". It
attaches an entire texture with default normals and finish. The textures
or pieces of textures attached to the union apply ONLY to components with
no textures. These texturing rules also apply to intersection, difference
and merge.

Earlier versions of POV-Ray placed restrictions on unions so you often had
to combine objects with composite statements. Those earlier restrictions
have been lifted so composite is no longer needed. Composite is still
supported for backwards compatibility but it is recommended that union now
be used in its place since future support for the composite keyword is not
guaranteed.


5.2.4.4 Intersection

A point is inside the intersection if it's inside both A AND B. This
logically "ANDs" the shapes and gets the common part, which is most useful
for "cutting" infinite shapes off. The diagram below consists of only those
parts common to A and B.


%*
% *
% 3 *
%*******

For example:

intersection {
    sphere {<-0.75, 0, 0>, 1}
    sphere {< 0.75, 0, 0>, 1}
    pigment {Yellow}
}


5.2.4.5 Difference

A point is inside the difference if it's inside A but not inside B. The
result is a "subtraction" of the 2nd shape from the first shape:

*
* *
* *
* *
* 1 %
* %
* %
*******%

For example:

difference {
    sphere {<-0.75, 0, 0>, 1}
    sphere {< 0.75, 0, -0.25>, 1}
    pigment {Yellow}
}


5.2.4.6 Merge

As can be seen in the diagram for union, the inner surfaces where the
objects overlap are still present. On transparent or clipped objects these
inner surfaces cause problems. A merge object works just like union but it
eliminates the inner surfaces like this:

*
* * %
* * % %
* *% %
* %
* %
* %
*******% %
% %
%%%%%%%%%%%%%%%%%



5.2.5 LIGHT SOURCES

The last object we'll cover is the light source. Light sources have no
visible shape of their own. They are just points or areas which emit
light.


5.2.5.1 Point Lights

Most light sources are infinitely small points which emit light. Point
light sources are treated like shapes, but they are invisible points from
which light rays stream out. They light objects and create shadows and
highlights. Because of the way ray tracing works, lights do not reflect
from a surface. You can use many light sources in a scene, but each light
source used will increase rendering time. The brightness of a light is
determined by its color. A bright color is a bright light, a dark color, a
dark one. White is the brightest possible light, Black is completely dark
and Gray is somewhere in the middle.

The syntax for a light source is:

light_source { <X, Y, Z> color red #, green #, blue # }

Where X, Y and Z are the coordinates of the location and "color" is any
color or color identifier. For example,

light_source { <3, 5, -6> color Gray50}

is a 50% Gray light at X=3, Y=5, Z=-6.

Point light sources in POV-Ray do not attenuate, or get dimmer, with
distance.


5.2.5.2 Spotlights

A spotlight is a point light source where the rays of light are constrained
by a cone. The light is bright in the center of the spotlight and falls
off/darkens to soft shadows at the edges of the circle.

The syntax is:

light_source {
    <X, Y, Z> color red #, green #, blue #
    spotlight
    point_at <X, Y, Z>
    radius #
    falloff #
    tightness #
}

A spotlight is positioned using two vectors. The first vector is the usual
vector that you would use to position a point light source.

The second vector is the point_at <X, Y, Z>, the position of the point the
light is pointing at, similar to the look_at in a camera description.

The following illustrations will be helpful in understanding how these
values relate to each other:


(+) Spotlight

/ \
/ \
/ \
/ \
/ \
/ \
+-----*-----+
^ point_at

The center is specified the same way as a normal point light_source.

The point_at vector is the location that the cone of light is aiming at.

Spotlights also have three other parameters: radius, falloff, and
tightness.

If you think of a spotlight as two nested cones, the inner cone would be
specified by the radius parameter, and would be fully lit. The outer cone
would be the falloff cone and beyond it would be totally unlit. The values
for these two parameters are specified in degrees of the half angle at the
peak of each cone:


(+) Spotlight

|\ <----- angle measured here
| \
|| \
|| \ shaded area = radius cone
||| \ outer line = falloff cone
|||| \
||||| \
+-------+

The radius# is the radius, in degrees, of the bright circular hotspot at
the center of the spotlight's area of effect.

The falloff# is the falloff angle of the radius of the total spotlight
area, in degrees. This is the value where the light "falls off" to zero
brightness. Falloff should be larger than the radius. Both values should
be between 1 and 180.

The tightness value specifies how quickly the light dims, or falls off, in
the region between the radius (full brightness) cone and the falloff (full
darkness) cone. The default value for tightness is 10. Lower tightness
values will make the spot have very soft edges. High values will make the
edges sharper, the spot "tighter". Values from 1 to 100 are acceptable.

Spotlights may be used anyplace that a normal light source is used. Like
normal light sources, they are invisible points. They are treated as shapes
and may be included in CSG shapes. They may also be used in conjunction
with area_lights.

Example:
// This is the spotlight.
light_source {
    <10, 10, 0>
    color red 1, green 1, blue 0.5
    spotlight
    point_at <0, 1, 0>
    tightness 50
    radius 11
    falloff 25
}



5.2.5.3 Area Lights

Regular light sources in POV-Ray are modeled as point light sources, that
is, they emit light from a single point in space. Because of this the
shadows created by these lights have the characteristic sharp edges that
most of us are used to seeing in ray traced images. The reason for the
distinct edges is that a point light source is either fully in view or it
is fully blocked by an object. A point source can never be partially
blocked.

Area lights on the other hand occupy a finite area of space. Since it is
possible for an area light to be partially blocked by an object the shadows
created will have soft or "fuzzy" edges. The softness of the edge is
dependent on the dimensions of the light source and its distance from the
object casting the shadow.

The area lights used in POV-Ray are rectangular in shape, sort of like a
flat panel light. Rather than performing the complex calculations that
would be required to model a true area light, POV-Ray approximates an area
light as an array of "point" light sources spread out over the area
occupied by the light. The intensity of each individual point light in the
array is dimmed so that the total amount of light emitted by the light is
equal to the light color specified in the declaration.


Syntax:

light_source {
    <X, Y, Z> color red # green # blue #

    area_light <AXIS1>, <AXIS2>, N1, N2
    adaptive #
    jitter

    [optional spotlight parameters]
}

The light's location and color are specified in the same way as a
regular light source.

The area_light command defines the size and orientation of the area light
as well as the number of lights in the light source array. The vectors
<AXIS1> and <AXIS2> specify the lengths and directions of the edges of the
light. Since the area lights are rectangular in shape these vectors should
be perpendicular to each other. The larger the size of the light the
thicker the soft part of the shadow will be. The numbers N1 and N2
specify the dimensions of the array of point lights. The larger the number
of lights you use the smoother your shadows will be but the longer they
will take to render.

The adaptive command is used to enable adaptive sampling of the light
source. By default POV-Ray calculates the amount of light that reaches a
surface from an area light by shooting a test ray at every point light
within the array. As you can imagine this is VERY slow. Adaptive sampling
on the other hand attempts to approximate the same calculation by using a
minimum number of test rays. The number specified after the keyword
controls how much adaptive sampling is used. The higher the number the more
accurate your shadows will be but the longer they will take to render. If
you're not sure what value to use a good starting point is 'adaptive 1'.
The adaptive command only accepts integer values and cannot be set lower
than 0. Adaptive sampling is explained in more detail later.

The jitter command is optional. When used it causes the positions of the
point lights in the array to be randomly jittered to eliminate any shadow
banding that may occur. The jittering is completely random from render to
render and should not be used when generating animations.

Note: It's possible to specify spotlight parameters along with area_light
parameters to create "area spotlights." Using area spotlights is a good way
to speed up scenes that use area lights since you can confine the lengthy
soft shadow calculations to only the parts of your scene that need them.
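
A hedged sketch of such an area spotlight (the numbers here are only
illustrative) might look like this:

    light_source {
        <20, 40, -20> color White
        spotlight
        point_at <0, 0, 0>
        radius 15
        falloff 30
        tightness 10
        area_light <5, 0, 0>, <0, 0, 5>, 3, 3
        adaptive 1
        jitter
    }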


Example:

light_source {
    <0, 50, 0> color White

    area_light <5, 0, 0>, <0, 0, 10>, 5, 5
    adaptive 1
    jitter
}

This defines an area light that extends 5 units along the x axis and 10
units along the z axis and is centered at the location <0,50,0>. The light
consists of a 5 by 5 jittered array of point sources for a total of 25
point lights. A minimum of 9 shadow rays will be used each time this light
is tested.

/ * * * * *
/ * * * * * Y
<0,0,10> / * * * * * | Z
/ * * * * * | /
/ * * * * * | /
+-----------> +------X
<5,0,0>


An interesting effect that can be created using area lights is a linear
light. Rather than having a rectangular shape, a linear light stretches
along a line sort of like a thin fluorescent tube. To create a linear light
just create an area light with one of the array dimensions set to 1.

Example:

light_source {
    <0, 50, 0> color White

    area_light <40, 0, 0>, <0, 0, 1>, 100, 1
    adaptive 4
    jitter
}

This defines a linear light that extends from <-20, 50, 0> to <20, 50, 0>
and consists of 100 point sources along its length. The vector <0,0,1> is
ignored in this case since a linear light has no width. Note: If the linear
light is fairly long you'll usually need to set the adaptive parameter
fairly high as in the above example.

When performing adaptive sampling POV-Ray starts by shooting a test ray at
each of the four corners of the area light. If the amount of light received
from all four corners is approximately the same then the area light is
assumed to be either fully in view or fully blocked. The light intensity is
then calculated as the average intensity of the light received from the
four corners. However, if the light intensity from the four corners
differs significantly then the area light is partially blocked. The light
is then split into four quarters and each section is sampled as described
above. This allows POV-Ray to rapidly approximate how much of the area
light is in view without having to shoot a test ray at every light in the
array.

While the adaptive sampling method is fast (relatively speaking) it can
sometimes produce inaccurate shadows. The solution is to reduce the amount
of adaptive sampling without completely turning it off. The number after
the adaptive keyword adjusts the number of times that the area light will
be split before the adaptive phase begins. For example if you use "adaptive
0" a minimum of 4 rays will be shot at the light. If you use "adaptive 1" a
minimum of 9 rays will be shot (adaptive 2 = 25 rays, adaptive 3 = 81 rays,
etc). Obviously the more shadow rays you shoot the slower the rendering
will be so you should use the lowest value that gives acceptable results.

The number of rays never exceeds the values you specify for rows and
columns of points. For example: area_light x,y,4,4 specifies a 4 by 4
array of lights. If you specify adaptive 3 the minimum would be a 9 by 9
array of rays. Since that exceeds the 4 by 4 array specified, no adaptive
sampling is done and the full 4 by 4 array is used.


5.2.5.4 Looks_like

Normally the light source itself has no visible shape. The light simply
radiates from an invisible point or area. You may give a light source any
shape by adding a "looks_like{OBJECT}" statement. For example:

light_source {
    <100, 200, -300> color White
    looks_like {sphere {<0,0,0>, 1 texture {T1}}}
}

This creates a visible sphere which is automatically translated to the
light's location <100,200,-300> even though the sphere has <0,0,0> as its
center. There is an implied "no_shadow" also attached to the sphere so
that light is not blocked by the sphere. Without the automatic no_shadow,
the light inside the sphere would not escape. The sphere would, in effect,
cast a shadow over everything.

If you want the attached object to block light then you should attach it
with a union and not a looks_like as follows:

union {
    light_source {<100, 200, -300> color White}
    object {My_Lamp_Shade}
}

Presumably parts of the lamp shade are open to let SOME light out.


5.3 OBJECT MODIFIERS
----------------------

A variety of modifiers may be attached to objects. Transformations such as
translate, rotate and scale have already been discussed. Textures are in a
section of their own below. Here are three other important modifiers:
clipped_by, bounded_by and no_shadow. Although the examples below use
object statements and object identifiers, these modifiers may be used on
any type of object such as sphere, box etc.


5.3.1 CLIPPED_BY

The "clipped_by" statement is technically an object modifier but it
provides a type of CSG similar to CSG intersection. You attach a clipping
object like this:

object {
    My_Thing
    clipped_by {plane {y, 0}}
}

Every part of the object "My_Thing" that is inside the plane is retained
while the remaining part is clipped off and discarded. In an intersection
object, the hole is closed off. With clipped_by it leaves an opening. For
example this diagram shows our object "A" being clipped_by a plane{y,0}.



* *
* *
* *
***************

Clipped_by may be used to slice off portions of any shape. In many cases it
will also result in faster rendering times than other methods of altering a
shape.

Often you will want to use the clipped_by and bounded_by options with the
same object. The following shortcut saves typing and uses less memory.

object {
    My_Thing
    bounded_by {box {<0,0,0>, <1,1,1>}}
    clipped_by {bounded_by}
}

This tells POV-Ray to use the same box as a clip that was used as a bounds.


5.3.2 BOUNDED_BY

The calculations necessary to test if a ray hits an object can be quite
time consuming. Each ray has to be tested against every object in the
scene. POV-Ray attempts to speed up the process by building a set of
invisible boxes, called bounding slabs, which cluster the objects together.
This way a ray that travels in one part of the scene doesn't have to be
tested against objects in another far away part of the scene. When a large
number of objects are present the slabs are nested inside each other. POV-Ray
can use slabs on any finite object. However infinite objects such as
plane, quadric, quartic, cubic & poly cannot be automatically bound. Also
CSG objects cannot be efficiently bound by automatic methods. By attaching
a bounded_by statement to such shapes you can speed up the testing of the
shape and make it capable of using bounding slabs.

If you use bounding shapes around any complex objects you can speed up the
rendering. Bounding shapes tell the ray tracer that the object is totally
enclosed by a simple shape. When tracing rays, the ray is first tested
against the simple bounding shape. If it strikes the bounding shape, then
the ray is further tested against the more complicated object inside.
Otherwise the entire complex shape is skipped, which greatly speeds
rendering.

To use bounding shapes, simply include the following lines in the
declaration of your object:

bounded_by {
    object { ... }
}

An example of a Bounding Shape:

intersection {
    sphere {<0,0,0>, 2}
    plane {<0,1,0>, 0}
    plane {<1,0,0>, 0}
    bounded_by {sphere {<0,0,0>, 2}}
}

The best bounding shape is a sphere or a box since these shapes are highly
optimized, although any shape may be used. If the bounding shape is
itself a finite shape which responds to bounding slabs then the object
which it encloses will also be used in the slab system.

CSG shapes can benefit from bounding slabs without a bounded_by statement;
however they may do so inefficiently in intersection, difference and merge.
In these three CSG types the automatic bound used covers all of the
component objects in their entirety. However the result of these
operations may be a smaller object. Compare the sizes of the
illustrations for union and intersection in the CSG section above. It is
possible to draw a much smaller box around the intersection of A and B than
the union of A and B yet the automatic bounds are the size of union{A B}
regardless of the kind of CSG specified.

While it is almost always a good idea to manually add a bounded_by to
intersection, difference and merge, it is often best to NOT bound a union.
If a union has no bounded_by and no clipped_by then POV-Ray can internally
split apart the components of a union and apply automatic bounding slabs to
any of its finite parts. Note that some utilities such as RAW2POV may be
able to generate bounds more efficiently than POV-Ray's current system.
However most unions you create yourself can be easily bounded by the
automatic system. For technical reasons POV-Ray cannot split a merge
object. It is probably best to hand bound a merge, especially if it is
very complex.
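
For example, a merge of two overlapping spheres might be hand bounded like
this (a sketch; the box is sized just large enough to enclose both spheres):

    merge {
        sphere {<-0.75, 0, 0>, 1}
        sphere {< 0.75, 0, 0>, 1}
        pigment {color Yellow filter 0.5}
        bounded_by {box {<-1.75, -1, -1>, <1.75, 1, 1>}}
    }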

Note that if the bounding shape is too small or positioned incorrectly, it may
clip the object in undefined ways or the object may not appear at all. To
do true clipping, use clipped_by as explained above. Often you will want to
use the clipped_by and bounded_by options with the same object. The
following shortcut saves typing and uses less memory.

object {
    My_Thing
    clipped_by {box {<0,0,0>, <1,1,1>}}
    bounded_by {clipped_by}
}

This tells POV-Ray to use the same box as a bounds that was used as a clip.

5.3.3 NO_SHADOW

You may specify the no_shadow keyword in an object and that object will not
cast a shadow. This is useful for special effects and for creating the
illusion that a light source actually is visible. This keyword was
necessary in earlier versions of POV-Ray which did not have the
"looks_like" statement. Now it is useful for creating things like laser
beams or other unreal effects.

Simply attach the keyword as follows:

object {
    My_Thing
    no_shadow
}


5.4 TEXTURES
--------------

Textures are the materials from which the objects in POV-Ray are made. They
specifically describe the surface coloring, shading, and properties like
transparency and reflection.

You can create your own textures using the parameters described below, or
you can use the many pre-defined high quality textures that have been
provided in the files TEXTURES.INC and STONES.INC. The tutorial in section
4 above introduces the basics of defining textures and attaching them to
objects. It explains how textures are made up of three portions, a color
pattern called "pigment", a bump pattern called "normal", and surface
properties called "finish".

The most complete form for defining a texture is as follows:

texture {
    TEXTURE_IDENTIFIER
    pigment {...}
    normal {...}
    finish {...}
    TRANSFORMATIONS...
}

Each of the items in a texture are optional but if they are present, the
identifier must be first and the transformations must be last. The
pigment, normal and finish parameters modify any pigment, normal and finish
already specified in the TEXTURE_IDENTIFIER. If no texture identifier is
specified then the pigment, normal and finish statements modify the current
default values. TRANSFORMATIONs are translate, rotate and scale
statements. They should be specified last.
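
For example, assuming a texture identifier T_Stone has already been declared
(the name is hypothetical), the following sketch starts from that texture,
overrides only its finish and then rotates the result:

    texture {
        T_Stone
        finish {phong 1}
        rotate y*30
    }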

The sections below describe all of the options available in pigments,
normals and finishes.


5.4.1 PIGMENT

The color or pattern of colors for an object is defined by a pigment
statement. A pigment statement is part of a texture specification.
However it can be tedious to type "texture{pigment{...}}" just to add a
color to an object. Therefore you may attach a pigment directly to an
object without explicitly specifying it as part of a texture. For
example...

this...

object {
    My_Object
    texture {
        pigment {color Purple}
    }
}

...can be shortened to this...

object {
    My_Object
    pigment {color Purple}
}

The color you define is the way you want it to look if fully illuminated.
You pick the basic color inherent in the object and POV-Ray brightens or
darkens it depending on the lighting in the scene. The parameter is called
"pigment" because we are defining the basic color the object actually IS
rather than how it LOOKS.

The most complete form for defining a pigment is as follows:

pigment {
    PIGMENT_IDENTIFIER
    PATTERN_TYPE
    PIGMENT_MODIFIERS
    TRANSFORMATIONS...
}

Each of the items in a pigment are optional but if they are present, they
should be in the order shown above to insure that the results are as
expected. Any items after the PIGMENT_IDENTIFIER modify or override
settings given in the IDENTIFIER. If no identifier is specified then the
items modify the pigment values in the current default texture.
TRANSFORMATIONs are translate, rotate and scale statements. They apply
only to the pigment and not to other parts of the texture. They should be
specified last.

The various PATTERN_TYPEs fall into roughly 4 categories. Each category is
discussed below. They are solid color, color list patterns, color mapped
patterns and image maps.


5.4.1.1 Color

The simplest type of pigment is a solid color. To specify a solid color
you simply put a color specification inside a pigment. For example...

pigment {color Orange}

A color specification consists of the keyword "color" followed by a color
identifier or by a specification of the amount of red, green, blue and
transparency in the surface. For example:

color red 0.5 green 0.2 blue 1.0

The float values between 0.0 and 1.0 are used to specify the intensity of
each primary color of light. Note that we use additive color primaries
like the color phosphors on a color computer monitor or TV. Thus...

color red 1.0 green 1.0 blue 1.0

...specifies full intensity of all primary colors which is white light.
The primaries may be given in any order and if any primary is unspecified
its value defaults to zero.

In addition to the primary colors a 4th value called "filter" specifies the
amount of transparency. For example a piece of red tinted cellophane might
have...

color red 1.0 filter 1.0

Lowering the filter value would let less light through. The default value
if no filter is specified is 0.0 or no transparency. Note that the example
has an implied "green 0.0 blue 0.0" which means that no green or blue
light can pass through. Often users mistakenly specify a clear object
by...

color filter 1.0

but this has implied red, green and blue values of zero. You've just
specified a totally black filter so no light passes through. The correct
way is...

color red 1.0 green 1.0 blue 1.0 filter 1.0

Note in earlier versions of POV-Ray the keyword "alpha" was used for
transparency. However common usage of "alpha" in this context usually
means that light passes through unaffected. In POV-Ray however, light is
filtered when it passes through a colored surface. The program works the
same as it always did but the keyword has been changed to make its meaning
clearer.

A short-cut way to specify a color is...

color rgb<0.2, 0.5, 0.9>

or

color rgbf<0.2, 0.8, 1.0, 0.7>

Color specifications are used elsewhere in POV-Ray. Unless stated
otherwise, all of the above information on color specs applies to any
color spec.

Color identifiers may be declared. For examples see COLORS.INC. A color
identifier contains red, blue, green and filter values even if they are not
explicitly specified. For example:

color filter 1.0 My_Color // here My_Color overwrites the filter

color My_Color filter 1.0 // this changes My_Color's filter value to 1.0

When using a color specification to give an object a solid color pigment,
the keyword "color" may be omitted. For example...

pigment {red 1 blue 0.5}
or
pigment {My_Color}

are legal.
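
A color identifier is created with #declare. As a minimal sketch (the name
Soft_Pink is hypothetical; COLORS.INC contains many similar declarations):

    #declare Soft_Pink = color red 1.0 green 0.6 blue 0.7

    sphere {
        <0, 1, 2>, 2
        pigment {Soft_Pink}
    }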



5.4.1.2 Color List Patterns -- checker and hexagon

Two of the simplest color patterns available are the checker and hexagon
patterns. These patterns take a simple list of colors one after the other.
For example a checker pattern is specified by...

pigment {checker color C1 color C2}

This produces a checkered pattern consisting of alternating squares of
color C1 and C2. If no colors are specified then default blue and green
colors are used.

All color patterns in POV-Ray are 3 dimensional. For every x,y,z point in
space, the pattern has a unique color. In the case of a checker pattern it
is actually a series of cubes that are one unit in size. Imagine a bunch
of 1 inch cubes made from two different colors of modeling clay. Now
imagine arranging the cubes in an alternating check pattern and stacking
them in layer after layer so that the colors still alternated in every
direction. Eventually you would have a larger cube. The pattern of checks
on each side is what the POV-Ray checker pattern produces when applied to a
box object. Finally imagine cutting away at the cube until it is carved
into a smooth sphere or any other shape. This is what the checker pattern
would look like on an object of any kind.

Color patterns do not wrap around the surfaces like putting wallpaper on an
object. The patterns exist in 3-d and the objects are carved from them
like carving stacked colored cubes. In a later section we describe wood
and marble patterns for example. The wood grain or stone swirls exist
through the whole object but they appear only at the surface.

Another pattern that uses a list of colors is the hexagon pattern. A
hexagon pattern is specified by...

pigment {hexagon color C1 color C2 color C3}

The hexagon pattern generates a repeating pattern of hexagons in the XZ
plane. In this instance imagine tall rods that are hexagonal in shape, are
parallel to the Y axis and grouped in bundles like this...

_____

/ \
/ C2 \_____
|\ / \
| \_____/ C3 \
| / \ /|
/ C1 \_____/ |
|\ /| | |
| \_____/ | | |
| | | | | |
| | | | | |
| | | | | |
| | | | | |
| | | | |
| | | | |
| |
| |


The three colors will repeat the pattern shown above with hexagon C1
centered at the origin. Each side of the hexagon is one unit long. The
hexagonal "rods" of color extend infinitely in the +Y and -Y directions.
If no colors are specified then default blue, green, and red colors are
used.


5.4.1.3 Color Mapped Patterns

Most of the color patterns do not use abrupt color changes of just two or
three colors like those in the checker or hexagon patterns. They instead
use smooth transitions of many colors that gradually change from one point
to the next. The colors are defined in a color map that describes how the
pattern blends from one color to the next.


5.4.1.3.1 Gradient

This simplest such pattern is the "gradient" pattern. It is specified as
follows...

pigment {gradient VECTOR}

where VECTOR is a vector pointing in the direction that the colors blend.
For example:

sphere {
    <0, 1, 2>, 2
    pigment { gradient x }   // bands of color vary as you move
                             // along the "x" direction.
}

This produces a series of smooth bands of color that look like layers of
color next to each other. Points at x=0 are black. As the X location
increases it smoothly turns to white at x=1. Then it starts over with
black and gradually turns white at x=2. The pattern reverses for negative
values of X. Using "gradient y" or "gradient z" makes the colors blend
along the y or z axis. Any vector may be used but x, y and z are most
common.


5.4.1.3.2 Color Maps

The gray scale default colors of the gradient pattern aren't a very
interesting sight. The real power comes from specifying a color map to
define how the colors should blend.

Each of the various pattern types available is in fact a mathematical
function that takes any x,y,z location and turns it into a number between
0.0 and 1.0. That number is used to specify what mix of colors to use from
the color map.

A color map is specified by...

color_map {
    [ NUM_1 color C1]
    [ NUM_2 color C2]
    [ NUM_3 color C3]
    ...
}

Where NUM_1, NUM_2... are float values between 0.0 and 1.0 inclusive. C1,
C2 ... are color specifications. NOTE: the [] brackets are part of the
actual statement. They are not notational symbols denoting optional parts.
The brackets surround each entry in the color map. There may be from 2 to
20 entries in the map.

For example,

sphere {
    <0, 1, 2>, 2
    pigment {
        gradient x

        color_map {
            [0.1 color Red]
            [0.3 color Yellow]
            [0.6 color Blue]
            [0.6 color Green]
            [0.8 color Cyan]
        }
    }
}

The pattern function is evaluated and the result is a value from 0.0 to
1.0. If the value is less than the first entry (in this case 0.1) then the
first color (Red) is used. Values from 0.1 to 0.3 use a blend of red and
yellow using linear interpolation of the two colors. Similarly values from
0.3 to 0.6 blend from yellow to blue. Note that the 3rd and 4th entries
both have values of 0.6. This causes an immediate abrupt shift of color
from blue to green. Specifically a value that is less than 0.6 will be
blue but exactly equal to 0.6 will be green. Moving along, values from 0.6
to 0.8 will be a blend of green and cyan. Finally any value greater than
or equal to 0.8 will be cyan.

If you want areas of unchanging color you simply specify the same color for
two adjacent entries. For example:

color_map {
    [0.1 color Red]
    [0.3 color Yellow]
    [0.6 color Yellow]
    [0.8 color Green]
}

In this case any value from 0.3 to 0.6 will be pure yellow.


5.4.1.3.3 Marble

A "gradient x" pattern uses colors from the color map from 0.0 up to 1.0 at
location x=1 but then jumps back to the first color for x=1.00000001 (or
some tiny fraction above 1.0) and repeats the pattern again and again. The
marble pattern is similar except that it uses the color map from 0 to 1 but
then it reverses the map and blends from 1 back to zero. For example:

pigment {
    gradient x
    color_map {
        [0.0 color Yellow]
        [1.0 color Cyan]
    }
}

This blends from yellow to cyan and then it abruptly changes back to yellow
and repeats. However replacing "gradient x" with "marble" smoothly blends
from yellow to cyan as the x coordinate goes from 0.0 to 0.5 and then
smoothly blends back from cyan to yellow by x=1.0.

When used with a "turbulence" modifier and an appropriate color map, this
pattern looks like veins of color of real marble, jade or other types of
stone. By default, marble has no turbulence.
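
For example, a minimal sketch of a turbulent marble (the colors are only
illustrative):

    pigment {
        marble
        turbulence 1.0
        color_map {
            [0.0 color White]
            [0.8 color White]
            [1.0 color Gray50]
        }
    }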


5.4.1.3.4 Wood

Wood uses the color map to create concentric cylindrical bands of color
centered on the Z axis. These bands look like the growth rings and veins
in real wood. Small amounts of turbulence should be added to make it look
more realistic. By default, wood has no turbulence.

Like marble, wood uses color map values 0 to 1 then repeats the colors in
reverse order from 1 to 0.
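
For example, a small sketch of a wood pigment (the color values and scale
are only illustrative):

    pigment {
        wood
        turbulence 0.05
        color_map {
            [0.0 color red 0.85 green 0.55 blue 0.30]
            [0.6 color red 0.85 green 0.55 blue 0.30]
            [1.0 color red 0.55 green 0.30 blue 0.15]
        }
        scale <0.2, 0.2, 1>
    }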


5.4.1.3.5 Onion

Onion is a pattern of concentric spheres like the layers of an onion. It
uses colors from a color map from 0 to 1, 0 to 1 etc without reversing.


5.4.1.3.6 Leopard

Leopard creates a regular geometric pattern of circular spots. It uses
colors from a color map from 0 to 1, 0 to 1 etc without reversing.


5.4.1.3.7 Granite

This pattern uses a simple 1/f fractal noise function to give a pretty darn
good granite pattern. Typically used with small scaling values (2.0 to
5.0). This pattern is used with creative color maps in STONES.INC to
create some gorgeous layered stone textures. By default, granite has no
turbulence. It uses colors from a color map from 0 to 1, 0 to 1 etc
without reversing.
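
A simple gray granite might be sketched like this (the values are only
illustrative; see STONES.INC for far more elaborate examples):

    pigment {
        granite
        color_map {
            [0.0 color White]
            [0.5 color Gray50]
            [1.0 color Black]
        }
        scale 3
    }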


5.4.1.3.8 Bozo

The bozo color pattern takes a noise function and maps it onto the surface
of an object. It uses colors from a color map from 0 to 1, 0 to 1 etc
without reversing.

Noise in ray tracing is sort of like a random number generator, but it has
the following properties:

1) It's defined over 3D space i.e., it takes x, y, and z and returns the
noise value there.
2) If two points are far apart, the noise values at those points are
relatively random.
3) If two points are close together, the noise values at those points are
close to each other.

You can visualize this as having a large room and a thermometer that ranges
from 0.0 to 1.0. Each point in the room has a temperature. Points that are
far apart have relatively random temperatures. Points that are close
together have close temperatures. The temperature changes smoothly, but
randomly as we move through the room.

Now, let's place an object into this room along with an artist. The artist
measures the temperature at each point on the object and paints that point
a different color depending on the temperature. What do we get? A POV-Ray
bozo texture!
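
For example, a minimal bozo sketch that gives a simple cloud-like pattern
(the colors are only illustrative):

    pigment {
        bozo
        turbulence 0.5
        color_map {
            [0.0 color White]
            [0.5 color White]
            [1.0 color red 0.3 green 0.4 blue 1.0]
        }
        scale 2
    }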


5.4.1.3.9 Spotted

This uses the same noise pattern as bozo but it is unaffected by
turbulence. It uses colors from a color map from 0 to 1, 0 to 1 etc
without reversing.


5.4.1.3.10 Agate

This pattern is very beautiful and similar to marble, but uses a different
turbulence function. The turbulence keyword has no effect on agate; the
pattern is always very turbulent. You may control the amount of the built-in
turbulence by adding the "agate_turb" keyword followed by a float value.
For example:

pigment {
    agate
    agate_turb 0.5
    color_map {
        ...
    }
}


5.4.1.3.11 Mandel

The mandel pattern computes the standard Mandelbrot fractal pattern and
projects it onto the X-Y plane. It uses the X and Y coordinates to compute
the Mandelbrot set. The pattern is specified with the keyword mandel
followed by an integer number. This number is the maximum number of
iterations to be used to compute the set. Typical values range from 10 up
to 256 but any positive integer may be used. For example:

sphere {
    <0, 0, 0>, 1
    pigment {
        mandel 25
        color_map {
            [0.0 color Cyan]
            [0.3 color Yellow]
            [0.6 color Magenta]
            [1.0 color Cyan]
        }
        scale .5
    }
}

The value passed to the color map is computed by the formula:

value = number_of_iterations / max_iterations

The color extends infinitely in the Z direction similar to a planar image
map.


5.4.1.3.12 Radial

The radial pattern is a radial blend that wraps around the +Y axis. The
color for value 0.0 starts at the +X direction and wraps the color map
around from east to west with 0.25 in the -Z direction, 0.5 in -X, 0.75 at
+Z and back to 1.0 at +X. See the "frequency" and "phase" pigment
modifiers below for examples.


5.4.1.4 Image Maps

When all else fails and none of the above pigment pattern types meets your
needs, you can use an image map to wrap a 2-D bit-mapped image around your
3-D objects.


5.4.1.4.1 Specifying an image map.

The syntax for image_map is...

pigment {
    image_map {
        FILE_TYPE "filename"
        MODIFIERS...
    }
}

Where FILE_TYPE is one of the following keywords "gif", "tga", "iff" or
"dump". This is followed by the name of the file in quotes. Several
optional modifiers may follow the file specification. The modifiers are
described below. Note: Earlier versions of POV-Ray allowed some modifiers
before the FILE_TYPE but that syntax is being phased out in favor of the
syntax described here.

Filenames specified in the image_map statements will be searched for in the
home (current) directory first, and if not found, will then be searched for
in directories specified by any "-L" (library path) options active. This
makes it easy to keep all your image map files in a separate subdirectory
and give an "-L" option on the command line pointing to where your library
of image maps is kept.

By default, the image is mapped onto the X-Y plane. The image is
"projected" onto the object as though there were a slide projector
somewhere in the -Z direction. The image exactly fills the square area
from x,y coordinates (0,0) to (1,1) regardless of the image's original size
in pixels. If you would like to change this default, you may translate,
rotate or scale the pigment or texture to map it onto the object's surface
as desired.
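
For example, this sketch scales the default unit square projection so that
the image "mypic.gif" covers the 2 unit by 2 unit front face of a box:

    box {
        <0, 0, 0>, <2, 2, 0.5>
        pigment {
            image_map {gif "mypic.gif" once}
            scale <2, 2, 1>   // stretch the unit square to cover the 2x2 face
        }
    }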

In the section 5.4.1.2 above when we explained checker pigment patterns, we
described the checks as solid cubes of colored clay from which objects are
carved. With image maps you should imagine that each pixel is a long,
thin, square, colored rod that extends parallel to the Z axis. The image
is made from rows and columns of these rods bundled together and the object
is then carved from the bundle.


5.4.1.4.2 The "once" option.

Normally there are an infinite number of repeating images created over
every unit square of the X-Y plane like tiles. By adding the keyword
"once" after a file name, you can eliminate all other copies of the image
except the one at (0,0) to (1,1). Areas outside this unit square are
treated as fully transparent.

Note: The "once" keyword may also be used with bump_map and material_map
statements.


5.4.1.4.3 The "map_type" option.

The default projection of the image onto the X-Y plane is called a "planar
map type". This option may be changed by adding the "map_type" keyword
followed by a number specifying the way to wrap the image around the
object.

A "map_type 0" gives the default planar mapping already described.

A "map_type 1" is a spherical mapping. It assumes that the object is a
sphere of any size sitting at the origin. The Y axis is the north/south
pole of the spherical mapping. The top and bottom edges of the image just
touch the pole regardless of any scaling. The left edge of the image
begins at the positive X axis and wraps the image around the sphere from
"west" to "east" in a -Y rotation. The image covers the sphere exactly
once. The "once" keyword has no meaning for this type.

With "map_type 2" you get a cylindrical mapping. It assumes that a
cylinder of any diameter lies along the Y axis. The image wraps around the
cylinder just like the spherical map but the image remains 1 unit tall from
y=0 to y=1. This band of color is repeated at all heights unless the
"once" keyword is applied.

Finally "map_type 5" is a torus or donut shaped mapping. It assumes that a
torus of major radius 1 sits at the origin in the X-Z plane. The image is
wrapped around similar to spherical or cylindrical maps. However the top
and bottom edges of the map wrap over and under the torus where they meet
each other on the inner rim.

Types 3 and 4 are still under development.

Note: The "map_type" option may also be applied to bump_map and
material_map statements.


5.4.1.4.4 The "filter" options.

To make all or part of an image map transparent, you can specify filter
values for the color palette/registers of GIF or IFF pictures (at least for
the modes that use palettes/color maps). You can do this by adding the
keyword "filter" following the filename. The keyword is followed by two
numbers. The first number is the palette/register number value and the 2nd
is the amount of transparency. The values should be separated by a comma. For
example:

image_map {
    gif "mypic.gif"
    map_type 0
    filter 0, 0.5   // Make color 0 50% transparent
    filter 5, 1.0   // Make color 5 100% transparent
    filter 8, 0.3   // Make color 8 30% transparent
}

You can give the entire image a filter value using "filter all VALUE". For
example:

image_map {
    gif "stnglass.gif"
    map_type 0
    filter all 0.9
}

NOTE: Transparency works by filtering light by its original color. Adding
"filter" to the color black still leaves you with black no matter how high
the filter value is. If you want a color to be clear, add filter 1 to the
color white.


5.4.1.4.5 The "interpolate" option.

Adding the "interpolate" keyword can smooth the jagged look of an image or
bump map. When POV-Ray asks for a color or bump amount for an image or bump
map, it often asks for a point that is not directly on top of one pixel,
but sort of between several different colored pixels. Interpolation
returns an "in-between" value so that the steps between the pixels in the
image or bump map will look smoother.

There are currently two types of interpolation:

Normalized Distance -- interpolate 4
Bilinear -- interpolate 2

Default is no interpolation. Normalized distance is the slightly faster of
the two; bilinear does a better job of picking the in-between color.
Normally, bilinear is used.

If your bump or image map looks jaggy, try using interpolation instead of
going to a higher resolution image. The results can be very good. For
example:

image_map {
    gif "mypic.gif"
    map_type 0
    interpolate 2
}


5.4.1.5 Pigment Modifiers

After specifying the pigment type such as marble, wood etc and adding an
optional color map, you may add any of several modifiers.


5.4.1.5.1 Turbulence

The keyword "turbulence" followed by a float or vector may be used to stir
up the color pattern. Typical values range from the default 0.0 which is
no turbulence to 1.0 which is very turbulent. If a vector is specified
then different amounts of turbulence are applied in the x, y and z
directions. For example "turbulence <1.0, 0.6, 0.1>" has much turbulence
in the x direction, a moderate amount in the y direction and a small amount
in the z direction.

Turbulence uses a noise function called DNoise. This is sort of like noise
used in the bozo pattern except that instead of giving a single value it
gives a direction. You can think of it as the direction that the wind is
blowing at that spot.

Turbulence uses DNoise to push a point around a few times. We locate the
point we want to color (P), then push it around a bit using turbulence to
get to a final point (Q), then look up the color of point Q in our ordinary
boring textures. That's the color that's used for the point P.

It in effect says "Don't give me the color at this spot... take a few
random steps in a different direction and give me that color." Each step
is typically half as long as the one before. For example:

P ------------------------->
First Move /
/
/
/Second
/ Move
/
______/
\
\
Q - Final point.


The magnitude of these steps is controlled by the turbulence value.


5.4.1.5.2 Octaves

The number of steps used by turbulence is controlled by the "octaves"
value. The values may range from 1 up to 10. The default value of
"octaves 6" is fairly close to the upper limit; you won't see much change
by setting it to a higher value because the extra steps are too small. You
can achieve some very interesting wavy effects by specifying lower values.
Setting octaves higher can slow down rendering because more steps are
computed.


5.4.1.5.3 Omega

The keyword "omega" followed by a float value may be added to change the
turbulence calculations. Each successive octave of turbulence is
multiplied by the omega value. The default "omega 0.5" means that each
octave is 1/2 the size of the previous one. Higher omega values mean that
2nd, 3rd, 4th and up octaves contribute more turbulence giving a sharper,
"krinkly" look while smaller omegas give a fuzzy kind of turbulence that
gets blury in places.


5.4.1.5.4 Lambda

The lambda parameter controls how statistically different the random move
of an octave is compared to its previous octave. The default value for
this is "lambda 2". Values close to lambda 1.0 will straighten out the
randomness of the path in the diagram above. Higher values can look more
"swirly" under some circumstances. More tinkering by POV-Ray users may
lead us to discover ways to make good use of this parameter. For now just
tinker and enjoy.


5.4.1.5.5 Quick_color

When developing POV-Ray scenes it's often useful to do low quality test
runs that render faster. The +Q command line switch can be used to turn
off some time consuming color pattern and lighting calculations to speed
things up. However all settings of +Q5 or lower turn off pigment
calculations and create gray objects.

By adding a "quick_color" to a pigment you tell POV-Ray what solid color to
use for quick renders instead of a patterned pigment. For example:

pigment {
    gradient x
    color_map {
        [0 color Yellow] [0.3 color Cyan] [0.6 color Magenta] [1 color Cyan]
    }
    turbulence 0.5 lambda 1.5 omega 0.75 octaves 8
    quick_color Neon_Pink
}

This tells POV-Ray to use solid Neon_Pink for test runs at quality +Q5 or
lower but to use the turbulent gradient pattern for rendering at +Q6 and
higher.

Note that solid color pigments such as:

pigment {color Magenta}

automatically set the quick_color to that value. You may override this if
you want. Suppose you have 10 spheres on the screen and all are Yellow.
If you want to identify them individually you could give each a different
quick_color like this:

sphere {<1,2,3>,4 pigment {color Yellow quick_color Red}}

sphere {<-1,-2,-3>,4 pigment {color Yellow quick_color Blue}}

...and so on. At +Q6 or higher they will all be Yellow but at +Q5 or
lower each would be different colors so you could identify them.


5.4.1.5.6 Frequency and Phase

The frequency and phase keywords were originally intended for the normal
patterns ripples and waves discussed in the next section. With version 2.0
they were extended to pigments to make the radial and mandel pigment
pattern easier to use. As it turned out it was simple to make them apply
to any color map pattern.

The frequency keyword adjusts the number of times that a color map repeats
over one cycle of a pattern. For example gradient x covers color map
values 0 to 1 over the range x=0 to x=1. By adding "frequency 2" the color
map repeats twice over that same range. The same effect can be achieved
using "scale x*0.5" so the frequency keyword isn't that useful for patterns
like gradient.

However the radial pattern wraps the color map around the +Y axis once. If
you wanted two copies of the map (or 3 or 10 or 100) you'd have to build a
bigger map. Adding "frequency 2" causes the color map to be used twice per
revolution. Try this:

sphere {
    <0,0,0>, 3
    pigment {
        radial
        color_map {[0.5 color Red] [0.5 color White]}
        frequency 6
    }
    rotate -x*90
}

The result is 6 sets of red and white radial stripes evenly spaced around
the sphere.

Note "frequency -1" reverses the entries in a color_map.

The phase keyword takes values from 0.0 to 1.0 and rotates the color map
entries. In the example above if you render successive frames at phase 0
then phase 0.1, phase 0.2 etc you could create an animation that rotates
the stripes. The same effect can be easily achieved by rotating the radial
pigment using "rotate y*Angle" but there are other uses where phase can be
handy.

Sometimes you create a great looking gradient or wood color map but you
want the grain slightly adjusted in or out. You could re-order the color
map entries but that's a pain. A phase adjustment will shift everything
but keep the same scale. Try animating a mandel pigment for a color
palette rotation effect.


5.4.1.5.7 Transforming pigments

You may modify pigment patterns with "translate", "rotate" and "scale"
commands. Note that placing these transforms inside the texture but
outside the pigment will transform the entire texture. However placing
them inside the pigment transforms just the pigment. For example:

sphere {
    <0,0,0>, 3
    texture {
        pigment {
            checker color Red color White
            scale <2,1,3>    // affects pigment only... not normal
        }
        normal {
            bumps 0.3
            scale 0.4        // affects bump normal only... not pigment
        }
        finish {Shiny}
        translate 5*x        // affects entire texture
    }
    translate y*2            // affects object and texture
}

Note that transforms affect the entire pigment regardless of the ordering
of other parameters. For example:

This...

pigment {
    bozo
    turbulence 0.3
    scale 2
}

...is the same as this...

pigment {
    bozo
    scale 2
    turbulence 0.3
}

The scaling before or after turbulence makes no difference. In general it
is best to put all transformations last for the sake of clarity.


5.4.2 NORMAL

Ray tracing is known for the dramatic way it depicts reflection, refraction
and lighting effects. Much of our perception depends on the reflective
properties of an object. Ray tracing can exploit this by playing tricks on
our perception to make us see complex details that aren't really there.

Suppose you wanted a very bumpy surface on the object. It would be very
difficult to mathematically model lots of bumps. We can however simulate
the way bumps look by altering the way light reflects off of the surface.
Reflection calculations depend on a vector called a "surface normal"
vector. This is a vector which points away from the surface and is
perpendicular to it. By artificially modifying (or perturbing) this normal
vector you can simulate bumps.

The "normal {...}" statement is the part of a texture which defines the
pattern of normal perturbations to be applied to an object. Like the
pigment statement, you can omit the surrounding texture block to save
typing. Do not forget however that there is a texture implied. For
example...

this...

object {
    My_Object
    texture {
        pigment {color Purple}
        normal {bumps 0.3}
    }
}

...can be shortened to this...

object {
    My_Object
    pigment {color Purple}
    normal {bumps 0.3}
}

Note that attaching a normal pattern does not really modify the surface.
It only affects the way light reflects or refracts at the surface so that
it looks bumpy.

The most complete form for defining a normal is as follows:

normal {
    NORMAL_IDENTIFIER
    NORMAL_PATTERN_TYPE
    NORMAL_MODIFIERS
    TRANSFORMATIONS...
}

Each of the items in a normal are optional but if they are present, they
should be in the order shown above to insure that the results are as
expected. Any items after the NORMAL_IDENTIFIER modify or override
settings given in the IDENTIFIER. If no identifier is specified then the
items modify the normal values in the current default texture.
TRANSFORMATIONs are translate, rotate and scale statements. They apply
only to the normal and not to other parts of the texture. They should be
specified last.

There are 6 different NORMAL_PATTERN_TYPEs discussed below. They are
bumps, dents, ripples, waves, wrinkles and bump_map.


5.4.2.1 Bumps

A smoothly rolling random pattern of bumps can be created with the "bumps"
normal pattern. Bumps uses the same random noise function as the bozo and
spotted pigment patterns, but uses the derived value to perturb the surface
normal or, in other words, make the surface look bumpy. This gives the
impression of a "bumpy" surface, random and irregular, sort of like an
orange.

After the bumps keyword, you supply a single floating point value for the
amount of surface perturbation. Values typically range from 0.0 (No Bumps)
to 1.0 or greater (Extremely Bumpy). For example:

sphere {
    <0, 1, 2>, 2
    texture {
        pigment {color Yellow}
        normal {bumps 0.4 scale 0.2}
        finish {phong 1}
    }
}

This tells POV-Ray to use a bump pattern to modify the surface normal. The
value 0.4 controls the apparent depth of the bumps. Usually the bumps are
about 1 unit wide which doesn't work very well with a sphere of radius 2.
The scale makes the bumps 1/5th as wide but does not affect their depth.


5.4.2.2 Dents

The "dents" pattern is especially interesting when used with metallic
textures, it gives impressions into the metal surface that look like dents
have been beaten into the surface with a hammer. A single value is supplied
after the dents keyword to indicate the amount of denting required. Values
range from 0.0 (Showroom New) to 1.0 (Insurance Wreck). Scale the pattern
to make the pitting more or less frequent.
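
For example, a hedged sketch of a dented, shiny sphere:

    sphere {
        <0, 1, 2>, 2
        texture {
            pigment {color Gray50}
            normal {dents 0.6 scale 0.3}
            finish {Shiny}
        }
    }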


5.4.2.3 Ripples

The ripples bump pattern makes a surface look like ripples of water. The
ripples option requires a value to determine how deep the ripples are.
Values range from 0.0 to 1.0 or more. The ripples radiate from 10 random
locations inside the unit cube area <0,0,0> to <1,1,1>. Scale the pattern
to make the centers closer or farther apart.

The frequency keyword changes the spacing between ripples. The phase
keyword can be used to move the ripples outwards for realistic animation.
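
For example, a water-like surface might be sketched as:

    plane {
        y, 0
        pigment {color Blue filter 0.7}
        normal {ripples 0.4 frequency 2 scale 5}
        finish {reflection 0.3}
    }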


5.4.2.4 Waves

This works in a similar way to ripples except that it makes waves with
different frequencies. The effect is to make waves that look more like deep
ocean waves. The waves option requires a value to determine how deep the
waves are. Values range from 0.0 to 1.0 or more. The waves radiate from
10 random locations inside the unit cube area <0,0,0> to <1,1,1>. Scale
the pattern to make the centers closer or farther apart.

The frequency keyword changes the spacing between waves. The phase keyword
can be used to move the waves outwards for realistic animation.


5.4.2.5 Wrinkles

This is sort of a 3-D bumpy granite. It uses a similar 1/f fractal noise
function to perturb the surface normal in 3-D space. With a transparent
color pattern, it could look like wrinkled cellophane. It requires a single
value after the wrinkles keyword to indicate the amount of wrinkling desired.
Values from 0.0 (No Wrinkles) to 1.0 (Very Wrinkled) are typical.
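
For example, a sketch of transparent "wrinkled cellophane" wrapped around a
sphere:

    sphere {
        <0, 1, 2>, 2
        texture {
            pigment {color red 1.0 filter 0.8}
            normal {wrinkles 0.6 scale 0.3}
            finish {phong 0.8}
        }
    }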


5.4.2.6 Bump_map

When all else fails and none of the above normal pattern types meets your
needs, you can use a bump map to wrap a 2-D bit-mapped bump pattern around
your 3-D objects.

Instead of placing the color of the image on the shape like an image_map,
bump_map perturbs the surface normal based on the color of the image at
that point. The result looks like the image has been embossed into the
surface. By default, bump_map uses the brightness of the actual color of
the pixel. Colors are converted to gray scale internally before calculating
height. Black is a low spot, white is a high spot. The image's index
values may be used instead (see use_index below).


5.4.2.6.1 Specifying a bump map.

The syntax for bump_map is...

normal {
    bump_map {
        FILE_TYPE "filename"
        MODIFIERS...
    }
}

Where FILE_TYPE is one of the following keywords "gif", "tga", "iff" or
"dump". This is followed by the name of the file in quotes. Several
optional modifiers may follow the file specification. The modifiers are
described below. Note: Earlier versions of POV-Ray allowed some modifiers
before the FILE_TYPE but that syntax is being phased out in favor of the
syntax described here.

Filenames specified in the bump_map statements will be searched for in the
home (current) directory first, and if not found, will then be searched for
in directories specified by any "-L" (library path) options active. This
makes it easy to keep all your bump map files in a separate subdirectory
and give an "-L" option on the command line pointing to where your library
of bump maps is kept.

By default, the bump is mapped onto the X-Y plane. The bump is "projected"
onto the object as though there were a slide projector somewhere in the -Z
direction. The bump exactly fills the square area from x,y coordinates
(0,0) to (1,1) regardless of the bump's original size in pixels. If you
would like to change this default, you may translate, rotate or scale the
normal or texture to map it onto the object's surface as desired.


5.4.2.6.2 Bump_size

The relative size of the bumps can be scaled using the bump_size modifier.
The bump_size number can be any number other than 0. Valid numbers are 2,
.5, -33, 1000, etc. For example:

normal {
bump_map {
gif "stuff.gif"
bump_size 5
}
}


5.4.2.6.3 Use_index & use_color

Usually the bump_map converts the color of the pixel in the map to a
grayscale intensity value in the range 0.0 to 1.0 and calculates the bumps
based on that value. If you specify use_index, bump_map uses the color's
palette index as the height of the bump at that point. So color #0 would
be low and color #255 would be high. The actual color of the pixels
doesn't matter when using the index. The "use_color" keyword may be
specified to explicitly note that the color method should be used instead.
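
For example, re-using the "stuff.gif" file from the bump_size example
above:

normal {
bump_map {
gif "stuff.gif"
use_index // heights come from palette indices, not brightness
}
}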


5.4.2.6.4 The "once" option.

Normally there are an infinite number of repeating bump_maps created over
every unit square of the X-Y plane like tiles. By adding the keyword
"once" after a file name, you can eliminate all other copies of the
bump_map except the one at (0,0) to (1,1). Areas outside this unit square
are treated as fully transparent.

Note: The "once" keyword may also be used with image_map and material_map
statements.


5.4.2.6.5 The "map_type" option.

The default projection of the bump onto the X-Y plane is called a "planar
map type". This option may be changed by adding the "map_type" keyword
followed by a number specifying the way to wrap the bump around the object.

A "map_type 0" gives the default planar mapping already described.

A "map_type 1" is a spherical mapping. It assumes that the object is a
sphere of any size sitting at the origin. The Y axis is the north/south
pole of the spherical mapping. The top and bottom edges of the bump_map
just touch the pole regardless of any scaling. The left edge of the
bump_map begins at the positive X axis and wraps the pattern around the
sphere from "west" to "east" in a -Y rotation. The pattern covers the
sphere exactly once. The "once" keyword has no meaning for this type.

With "map_type 2" you get a cylindrical mapping. It assumes that a
cylinder of any diameter lies along the Y axis. The bump pattern wraps
around the cylinder just like the spherical map but remains 1 unit tall
from y=0 to y=1. This band of bumps is repeated at all heights unless the
"once" keyword is applied.

Finally "map_type 5" is a torus or donut shaped mapping. It assumes that a
torus of major radius 1 sits at the origin in the X-Z plane. The bump
pattern is wrapped around similar to spherical or cylindrical maps.
However the top and bottom edges of the map wrap over and under the torus
where they meet each other on the inner rim.

Types 3 and 4 are still under development.

Note: The "map_type" option may also be applied to image_map and
material_map statements.
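
For example, a cylindrical wrap might look something like this (again
re-using the "stuff.gif" file from the earlier examples):

normal {
bump_map {
gif "stuff.gif"
map_type 2 // wrap around a cylinder lying along the Y axis
once // a single band of bumps from y=0 to y=1
}
}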


5.4.2.6.6 The "interpolate" option.

Adding the "interpolate" keyword can smooths the jagged look of a bump map.
When POV-Ray asks bump amount for a bump map, it often asks for a point
that is not directly on top of one pixel, but sort of between several
different colored pixels. Interpolations returns an "in-between" value so
that the steps between the pixels in the bump map will look smoother.

There are currently two types of interpolation:

Normalized Distance -- interpolate 4
Bilinear -- interpolate 2

Default is no interpolation. Normalized distance is the slightly faster of
the two, while bilinear does a better job of picking the in-between value.
Normally, bilinear is used.

If your bump map looks jaggy, try using interpolation instead of going to a
higher resolution image. The results can be very good.
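
For example, adding bilinear interpolation to the earlier bump_map sketch:

normal {
bump_map {
gif "stuff.gif"
bump_size 5
interpolate 2 // bilinear smoothing between pixels
}
}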


5.4.2.7 Normal Modifiers

After specifying the normal type such as bumps, dents, etc., you may add
any of several modifiers.


5.4.2.7.1 Turbulence

The keyword "turbulence" followed by a float or vector may be used to stir
up the color pattern. Typical values range from the default 0.0 which is
no turbulence to 1.0 which is very turbulent. If a vector is specified
then different amounts of turbulence are applied in the x, y and z
directions. For example "turbulence <1.0, 0.6, 0.1>" has much turbulence
in the x direction, a moderate amount in the y direction and a small amount
in the z direction.
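
For example, a turbulated bumps pattern might be sketched like this (the
amounts are only illustrative):

normal {
bumps 0.4
turbulence <1.0, 0.6, 0.1> // much stirring in x, less in y and z
}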

A complete discussion of turbulence is given under Pigment Modifiers in
section 5.4.1.5 above. The "octaves", "omega", and "lambda" options are
also available as normal modifiers. They are discussed in that section as
well.


5.4.2.7.2 Frequency and Phase

Both waves and ripples respond to a parameter called phase. The phase
option allows you to create animations in which the water seems to move.
This is done by making the phase increment slowly between frames. The range
from 0.0 to 1.0 gives one complete cycle of a wave.

The waves and ripples textures also respond to a parameter called
frequency. If you increase the frequency of the waves, they get closer
together. If you decrease it, they get farther apart.

Bumps, dents, wrinkles and bump_map do not respond to frequency or phase.
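
For example, an animated ripple might be sketched like this, using the
clock value described under the animation command line options (the other
values are only illustrative):

normal {
ripples 0.35
frequency 5.0
phase clock // advances from frame to frame to move the ripples
}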


5.4.2.7.3 Transforming normals

You may modify normal patterns with "translate", "rotate" and "scale"
commands. Note that placing these transforms inside the texture but
outside the normal will transform the entire texture. However placing them
inside the normal transforms just the normal. See section 5.4.1.5.7
Transforming Pigments for examples.
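
For example, the following sketch scales only the dents pattern but
rotates the entire texture (the values are only illustrative):

texture {
pigment {color Purple}
normal {
dents 0.5
scale 0.2 // transforms just the dents pattern
}
rotate <0, 45, 0> // outside the normal: transforms the whole texture
}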


5.4.3 FINISH

The finish properties of a surface can greatly affect its appearance. How
does light reflect? What happens when light passes through? What kind of
highlights are visible? To answer these questions you need a finish
statement.

The "finish {...}" statement is the part of a texture which defines the
various finish properties to be applied to an object. Like the pigment or
normal statement, you can omit the surrounding texture block to save
typing. Do not forget however that there is a texture implied. For
example...

this...

object {
My_Object
texture {
pigment {color Purple}
finish {phong 0.3}
}
}

can be shortened to this...

object {
My_Object
pigment {color Purple}
finish {phong 0.3}
}

The most complete form for defining a finish is as follows:

finish {
FINISH_IDENTIFIER
FINISH_ITEMS...
}

The FINISH_IDENTIFIER is optional but should precede all other items. Any
items after the FINISH_IDENTIFIER modify or override settings given in the
IDENTIFIER. If no identifier is specified then the items modify the finish
values in the current default texture. Note that transformations are not
allowed inside a finish because finish items cover the entire surface
uniformly.


5.4.3.1 Diffuse Reflection Items

When light reflects off of a surface, the laws of physics say that it
should leave the surface at the exact same angle it came in. This is
similar to the way a billiard ball bounces off a bumper of a pool table.
This perfect reflection is called "specular" reflection. However only very
smooth polished surfaces reflect light in this way. Most of the time,
light reflects and is scattered in all directions by the roughness of the
surface. This scattering is called "diffuse reflection" because the light
diffuses or spreads in a variety of directions. It accounts for the
majority of the reflected light we see.

In the real world, light may come from any of three possible sources:

1) It can come directly from actual light sources which are illuminating
an object.

2) It can bounce from other objects such as mirrors via specular
reflection. For example, shine a flashlight onto a mirror.

3) It can bounce from other objects via diffuse reflection. Look at some
dark area under a desk or in a corner. Even though a lamp may not directly
illuminate that spot, you can still see a little bit because light comes
from diffuse reflection off of nearby objects.


5.4.3.1.1 Diffuse

POV-Ray and most other ray tracers can directly simulate only one of these
three types of illumination: the light which comes directly from a light
source and diffuses in all directions. The keyword "diffuse" is used in a
finish statement to control how much of this direct light is reflected via
diffuse reflection. For example:

finish {diffuse 0.7}

means that 70% of the light seen comes from direct illumination from light
sources. The default value is diffuse 0.6.


5.4.3.1.2 Brilliance

The amount of direct light that diffuses from an object depends upon the
angle at which it hits the surface. When light hits at a shallow angle it
illuminates less. When it is directly above a surface it illuminates more.
The "brilliance" keyword can be used in a finish statement to vary the way
light falls off depending upon the angle of incidence. This controls the
tightness of the basic diffuse illumination on objects and slightly adjusts
the appearance of surface shininess. Objects may appear more metallic by
increasing their brilliance. The default value is 1.0. Higher values from
3.0 to about 10.0 cause the light to fall off less at medium to low angles.
There are no limits to the brilliance value. Experiment to see what works
best for a particular situation. This is best used in concert with
highlighting.
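
For example, a slightly metallic-looking finish might be sketched as
follows (the values are only illustrative):

finish {
diffuse 0.7
brilliance 4.0 // less fall-off at medium to low angles
}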


5.4.3.1.3 Crand Graininess

Very rough surfaces, such as concrete or sand, exhibit a dark graininess in
their apparent color. This is caused by the shadows of the pits or holes
in the surface. The "crand" keyword can be added to cause a minor random
darkening of the diffuse reflection of direct illumination. Typical values
range from "crand 0.01" to "crand 0.5" or higher. The default value is 0.
For example:

finish {crand 0.05}

The grain or noise introduced by this feature is applied on a pixel-by-
pixel basis. This means that it will look the same on far away objects as
on close objects. The effect also looks different depending upon the
resolution you are using for the rendering. For these reasons it is not a
very accurate way to model the rough surface effect, but some objects still
look better with a little crand thrown in.

In previous versions of POV-Ray there was no "crand" keyword. Any lone
float value found inside a texture{...} which was not preceded by a keyword
was interpreted as a randomness value.

NOTE: This should not be used when rendering animations. This is one of
the few truly random features in POV-Ray, and it will produce an annoying
flicker of flying pixels on any textures animated with a "crand" value.


5.4.3.1.4 Ambient

The light you see in dark shadowed areas comes from diffuse reflection off
of other objects. This light cannot be directly modeled using ray tracing.
However we can use a trick called "ambient lighting" to simulate the light
inside a shadowed area.

Ambient light is light that is scattered everywhere in the room. It bounces
all over the place and manages to light objects up a bit even where no
light is directly shining. Computing real ambient light would take far too
much time, so we simulate ambient light by adding a small amount of white
light to each texture whether or not a light is actually shining on that
texture.

This means that the portions of a shape that are completely in shadow will
still have a little bit of their surface color. It's almost as if the
texture glows, though the ambient light in a texture only affects the shape
it is used on.

The default value is very little ambient light (0.1). The value can range
from 0.0 to 1.0. Ambient light affects both shadowed and non-shadowed
areas so if you turn up the ambient value you may want to turn down the
diffuse value.
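
For example, a sketch that brightens the shadowed areas while compensating
with a lower diffuse value:

finish {
ambient 0.3 // brighter shadowed areas
diffuse 0.5 // turned down to compensate
}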

Note that this method doesn't account for the color of surrounding objects.
If you walk into a room that has red walls, floor and ceiling then your
white clothing will look pink from the reflected light. POV-Ray's ambient
shortcut doesn't account for this. There is also no way to model specular
reflected indirect illumination such as the flashlight shining in a mirror.


5.4.3.2 Specular Reflection Items

When light does not diffuse and it DOES reflect at the same angle as it
hits an object, it is called "specular reflection". Such mirror-like
reflection is controlled by the "reflection" keyword in a finish statement.
For example:

finish {reflection 1.0 ambient 0 diffuse 0}

This gives the object a mirrored finish. It will reflect all other
elements in the scene. The value can range from 0.0 to 1.0. By default
there is no reflection.

Adding reflection to a texture makes it take longer to render because an
additional ray must be traced.

NOTE: Although such reflection is called "specular" it is not controlled by
the POV-Ray "specular" keyword. That keyword controls a "specular"
highlight.


5.4.3.3 Highlights

Highlights are the bright spots that appear when a light source reflects
off of a smooth object. They are a blend of specular reflection and
diffuse reflection. They are specular-like because they depend upon
viewing angle and illumination angle. However they are diffuse-like
because some scattering occurs. In order to exactly model a highlight you
would have to calculate specular reflection off of thousands of microscopic
bumps called micro facets. The more that micro facets are facing the
viewer, the shinier the object appears, and the tighter the highlights
become. POV-Ray uses two different models to simulate highlights without
calculating micro facets. They are the specular and phong models.

Note that specular and phong highlights are NOT mutually exclusive. It is
possible to specify both and they will both take effect. Normally, however,
you will only specify one or the other.


5.4.3.3.1 Phong Highlights

The "phong" keyword controls the amount of Phong highlighting on the
object. It causes bright shiny spots on the object that are the color of
the light source being reflected.

The Phong method measures the average of the facets facing in the mirror
direction from the light sources to the viewer.

Phong's value is typically from 0.0 to 1.0, where 1.0 causes complete
saturation to the light source's color at the brightest area (center) of
the highlight. The default phong 0.0 gives no highlight.

The size of the highlight spot is defined by the phong_size value. The
larger the phong_size, the tighter, or smaller, the highlight and the
shinier the appearance. The smaller the phong_size, the looser, or larger,
the highlight and the less glossy the appearance.

Typical values range from 1.0 (Very Dull) to 250 (Highly Polished) though
any values may be used. Default phong_size is 40 (plastic) if phong_size is
not specified. For example:

finish {phong 0.9 phong_size 60}

If "phong" is not specified then "phong_size" has no effect.


5.4.3.3.2 Specular Highlight

A specular highlight is very similar to Phong highlighting, but uses a
slightly different model. The specular model more closely resembles real
specular reflection and provides a more credible spreading of the
highlights that occur near the object horizons.

Specular's value is typically from 0.0 to 1.0, where 1.0 causes complete
saturation to the light source's color at the brightest area (center) of
the highlight. The default specular 0.0 gives no highlight.

The size of the spot is defined by the value given for roughness. Typical
values range from 1.0 (Very Rough -- large highlight) to 0.0005 (Very
Smooth -- small highlight). The default value, if roughness is not
specified, is 0.05 (Plastic).

It is possible to specify "wrong" values for roughness that will generate
an error when you try to render the file. Don't use 0 and if you get
errors, check to see if you are using a very, very small roughness value
that may be causing the error. For example:

finish {specular 0.9 roughness 0.02}

If "specular" is not specified then "roughness" has no effect.


5.4.3.3.3 Metallic Highlight Modifier

The keyword "metallic" may be used with phong or specular highlights. This
keyword indicates that the color of the highlights will be filtered by the
surface color instead of directly using the light_source color. Note that
the keyword has no numeric value after it. You either have it or you
don't. For example:

finish {phong 0.9 phong_size 60 metallic}

If "phong" or "specular" is not specified then "metallic" has no effect.


5.4.3.4 Refraction

When light passes through a surface either into or out of a dense medium,
the path of the ray of light is bent. Such bending is called refraction.
Normally transparent or semi-transparent surfaces in POV-Ray do not refract
light. Adding "refraction 1.0" to the finish statement turns on
refraction.

Note: It is recommended that you only use "refraction 0" or "refraction 1".
Values in between will darken the refracted light in ways that do not
correspond to any physical property. Many POV-Ray scenes were created with
intermediate refraction values before this "bug" was discovered so the
"feature" has been maintained. A more appropriate way to reduce the
brightness of refracted light is to change the "filter" value in the colors
specified in the pigment statement. Note also that "refraction" does not
cause the object to be transparent. Transparency only occurs if there
is a non-zero "filter" value in the color.

The amount of bending or refracting of light depends upon the density of
the material. Air, water, crystal, diamonds all have different density and
thus refract differently. The "index of refraction" or "ior" value is used
by scientists to describe the relative density of substances. The "ior"
keyword is used in POV-Ray to specify the value. For example:

texture {
pigment { White filter 0.9 }
finish {
refraction 1
ior 1.5
}
}

The default ior value of 1.0 will give no refraction. The index of
refraction for air is 1.0, water is 1.33, glass is 1.5, and diamond is 2.4.
The file IOR.INC pre-defines several useful values for ior.

NOTE: If a texture has a filter component and no values for refraction and
ior are supplied, the renderer will simply transmit the ray through the
surface with no bending. In layered textures, the refraction and ior
keywords MUST be in the last texture, otherwise they will not take effect.


5.4.4 SPECIAL TEXTURES

Most textures consist of a single pigment, normal and finish specification
which applies to the entire surface. However two special textures have
been implemented that extend the "checker" and "image_map" concepts to
cover not just pigment but the entire texture.


5.4.4.1 Tiles

This first special texture is the "tiles" texture. It works just like the
"checker" pigment pattern except it colors the blocks with entire textures
rather than solid colors.

The syntax is:

texture{
tiles {
texture {... put in a texture here ... }
tile2
texture {... this is the second tile texture }
}
// Optionally put translate, rotate or scale here
}

For example:

texture{
tiles {
texture { Jade }
tile2
texture { Red_Marble }
}
}

The textures used in each tile may be any type of texture including more
tiles or regular textures made from pigment, normal and finish statements.
Note that no other pigment, normal or finish statements may be added to the
texture. This is illegal:

texture {
tiles {
texture {T1}
tile2
texture {T2}
}
finish {phong 1.0}
}

The finish must be individually added to each texture.

Note that earlier versions of POV-Ray used only the pigment parts of the
textures in the tiles. Normals and finish were ignored. Also layered
textures were not supported. In order to correct these problems the above
restrictions on syntax were necessary. This means some POV-Ray 1.0 scenes
using tiles may need minor modifications that cannot be done automatically
with the version compatibility mode.

The textures within a tiles texture may be layered but tiles textures do
not work as part of a layered texture.


5.4.4.2 Material_Map

The "material_map" special texture extends the concept of "image_map" to
apply to entire textures rather than solid colors. A material_map allows
you to wrap a 2-D bit-mapped texture pattern around your 3-D objects.

Instead of placing a solid color of the image on the shape like an
image_map, an entire texture is specified based on the index or color of
the image at that point. You must specify a list of textures to be used
like a "texture palette" rather than the usual color palette.

When used with mapped file types such as GIF, the index of the pixel is
used as an index into the list of textures you supply. For unmapped file
types such as TGA, the 8 bit value of the red component in the range 0-255
is used as an index.

If the index of a pixel is greater than the number of textures in your list
then the index is taken modulo N where N is the length of your list of
textures.


5.4.4.2.1 Specifying a material map.

The syntax for material_map is...

texture {
material_map {
FILE_TYPE "filename"
MODIFIERS...
texture {...} // First used for index 0
texture {...} // Second texture used for index 1
texture {...} // Third texture used for index 2
texture {...} // Fourth texture used for index 3
// and so on for however many used.
}
TRANSFORMATION...
}

If particular index values are not used in an image then it may be
necessary to supply dummy textures. It may be necessary to use a paint
program or other utility to examine the map file's palette to determine how
to arrange the texture list.
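
For example, the following sketch assumes "matmap.gif" is a mapped GIF
whose palette uses only indices 0 through 3, and that the standard include
files are used for the named textures. A pixel with index 7 would wrap
around to 7 mod 4 = 3 and use the last texture in the list.

texture {
material_map {
gif "matmap.gif"
texture { Jade } // used where the pixel index is 0
texture { Red_Marble } // index 1
texture { pigment {color Purple} } // index 2
texture { pigment {color Purple} finish {phong 0.9} } // index 3
}
scale 2
}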

In the syntax above, FILE_TYPE is one of the following keywords "gif",
"tga", "iff" or "dump". This is followed by the name of the file in
quotes. Several optional modifiers may follow the file specification. The
modifiers are described below. Note: Earlier versions of POV-Ray allowed
some modifiers before the FILE_TYPE but that syntax is being phased out in
favor of the syntax described here.

Filenames specified in the material_map statements will be searched for in
the home (current) directory first, and if not found, will then be searched
for in directories specified by any "-L" (library path) options active.
This would facilitate keeping all your material map files in a separate
subdirectory, and giving an "-L" option on the command line pointing to the
directory where your library of material maps is kept.

By default, the material is mapped onto the X-Y plane. The material is
"projected" onto the object as though there were a slide projector
somewhere in the -Z direction. The material exactly fills the square area
from x,y coordinates (0,0) to (1,1) regardless of the material's original
size in pixels. If you would like to change this default, you may
translate, rotate or scale the normal or texture to map it onto the
object's surface as desired.

Note that no other pigment, normal or finish statements may be added to the
texture outside the material_map. This is illegal:

texture {
material_map {
gif "matmap.gif"
texture {T1}
texture {T2}
texture {T3}
}
finish {phong 1.0}
}

The finish must be individually added to each texture.

Note that earlier versions of POV-Ray allowed such specifications but they
were ignored. The above restrictions on syntax were necessary for various
bug fixes. This means some POV-Ray 1.0 scenes using material_maps may
need minor modifications that cannot be done automatically with the version
compatibility mode.

The textures within a material_map texture may be layered but material_map
textures do not work as part of a layered texture. To use a layered
texture inside a material_map you must declare it as a texture identifier
and invoke it in the texture list.


5.4.4.2.2 Material_map options.

The "once" and "map_type" options may be used with material_maps exactly
like image_map or bump_map. The "interpolate" keyword is also allowed but
it interpolates the map indices rather than the colors. In most cases this
results in a worse image instead of a better image. Future versions will
fix this problem.


5.4.5 LAYERED TEXTURES

It is possible to create a variety of special effects using layered
textures. A layered texture is one where several textures that are
partially transparent are laid one on top of the other to create a more
complex texture. The different texture layers show through the transparent
portions to create the appearance of one texture that is a combination of
several textures.

You create layered textures by listing two or more textures one right after
the other. The last texture listed will be the top layer, the first one
listed will be the bottom layer. All textures in a layered texture other
than the bottom layer should have some transparency. For example:

object {
My_Object
texture {T1} // the bottom layer
texture {T2} // a semi-transparent layer
texture {T3} // the top semi-transparent layer
}

In this example T2 shows only where T3 is transparent and T1 shows only
where T2 and T3 are transparent.

The color of underlying layers is filtered by upper layers but the results
do not look exactly like a series of transparent surfaces. If you had a
stack of surfaces with the textures applied to each, the light would be
filtered twice: once on the way in as the lower layers are illuminated by
filtered light and once on the way out. Layered textures do not filter the
illumination on the way in. Other parts of the lighting calculations work
differently as well. The results look great and allow for fantastic looking
textures but they are simply different from multiple surfaces. See
STONES.INC in the standard include files for some magnificent layered
textures.

Note that layered textures must use the "texture{...}" statement wrapped
around any pigment, normal or finish statements. Do not use multiple
pigment, normal or finish statements without putting them inside the
texture statement.

Layered textures may be declared. For example:

#declare Layered_Examp=
texture {T1}
texture {T2}
texture {T3}

Then invoke it as follows:

object {
My_Object
texture {
Layered_Examp
// Any pigment, normal or finish here
// modifies the bottom layer only.
}
}


5.4.6 DEFAULT TEXTURE

POV-Ray creates a default texture when it begins processing. You may
change those defaults as described below. Every time you specify a
"texture{...}" statement, POV-Ray creates a copy of the default texture.
Anything items you put in the texture statement override the default
settings. If you attach a pigment, normal or finish to an object without
any texture statement then POV-Ray checks to see if a texture has already
been attached. If it has a texture then the pigment, normal or finish will
modify that existing texture. If no texture has yet been attached to the
object then the default texture is copied and the pigment, normal or finish
will modify that texture.

You may change the default texture, pigment, normal or finish using the
language directive "#default {...}" as follows:

#default {
texture {
pigment {...}
normal {...}
finish {...}
}
}

Or you may change just part of it like this:

#default {
pigment {...}
}

This still changes the pigment of the default texture. At any time there
is only one default texture made from the default pigment, normal and
finish. The example above does not make a separate default for pigments
alone. Note: Special textures tiles and material_map may not be used as
defaults.

You may change the defaults several times throughout a scene as you wish.
Subsequent #default statements begin with the defaults that were in effect
at the time. If you wish to reset to the original POV-Ray defaults then
you should first save them as follows:

//At top of file
#declare Original_Default = texture {}

later after changing defaults you may restore it with...

#default {texture {Original_Default}}

If you do not specify a texture for an object then the default texture is
attached when the object appears in the scene. It is not attached when an
object is declared. For example:

#declare My_Object=
sphere{<0,0,0>,1} // Default texture not applied

object{My_Object} // Default texture added here

You may force a default texture to be added by using an empty texture
statement as follows:

#declare My_Thing=
sphere{<0,0,0>,1 texture{}} // Default texture applied

The original POV-Ray defaults for all items are given throughout the
documentation under each appropriate section.


5.5 CAMERA
------------

Every scene in POV-Ray has a camera defined. If you do not specify a
camera then a default camera is used. The camera definition describes the
position, angle and properties of the camera viewing the scene. POV-Ray
uses this definition to do a simulation of the camera in the ray tracing
universe and "take a picture" of your scene.

The camera simulated in POV-Ray is a pinhole camera. Pinhole cameras have a
fixed focus so all elements of the scene will always be perfectly in focus.
The pinhole camera is not able to do soft focus or depth of field effects.

A total of 6 vectors may be specified to define the camera but only a few
of those are needed in most cases. Here is an introduction to simple
camera placement.


5.5.1 LOCATION AND LOOK_AT

Under many circumstances just two vectors in the camera statement are all
you need: location and look_at. For example:

camera {
location <3,5,-10>
look_at <0,2,1>
}

The location is simply the X, Y, Z coordinates of the camera. The camera
can be located anywhere in the ray tracing universe. The default location
is <0,0,0>. The look_at vector tells POV-Ray to pan and tilt the camera
until it is looking at the specified X, Y, Z coordinate. By default the
camera looks at a point one unit in the +Z direction from the location.

The look_at specification should almost always be the LAST item in the
camera statement. If other camera items are placed after the look_at
vector then the camera may not continue to look at the specified point.


5.5.2 THE SKY VECTOR

Normally POV-Ray pans left or right by rotating about the Y axis until it
lines up with the look_at point and then tilts straight up or down until
the point is met exactly. However you may want to slant the camera
sideways like an airplane making a banked turn. You may change the tilt of
the camera using the "sky" vector. For example:

camera {
location <3,5,-10>
sky <1,1,0>
look_at <0,2,1>
}

This tells POV-Ray to roll the camera until the top of the camera is in
line with the sky vector. Imagine that the sky vector is an antenna
pointing out of the top of the camera. Then it uses the "sky" vector as
the axis of rotation left or right and then to tilt up or down in line with
the "sky" vector. In effect you're telling POV-Ray to assume that the sky
isn't straight up. Note that the sky vector must appear before the look_at
vector. The sky vector does nothing on its own. It only modifies the way
the look_at vector turns the camera. The default value for sky is <0,1,0>.


5.5.3 THE DIRECTION VECTOR

The "direction" vector serves two purposes. It tells POV-Ray the initial
direction to point the camera before moving it with look_at or rotate
vectors. It also controls the field of view.

Note that this is only the initial direction. Normally, you will use the
look_at keyword, not the direction vector to point the camera in its actual
direction.

The length of the direction vector tells POV-Ray to use a telephoto or
wide-angle view. It is the distance from the camera location to the
imaginary "view window" that you are looking through. A short direction
vector gives a wide angle view while a long direction gives a narrow,
telephoto view.

This figure illustrates the effect:

|\ |\
| \ | \
| \ | \
| \ | \
Location | | Location | |
*------------> | *--------------------------> |
Direction| | | |
| | | |
| | | |
\ | \ |
\ | \ |
\| \|


Short direction gives wide view... long direction narrows view.

The default value is "direction <0,0,1>".

Be careful with short direction vector lengths like 1.0 and less. You may
experience distortion on the edges of your images. Objects will appear to
be shaped strangely. If this happens, move the location back and make the
direction vector longer.

Wide angle example:
camera {
location <3,5,-10>
direction <0,0,1>
look_at <0,2,1>
}

Zoomed in telephoto example:
camera {
location <3,5,-10>
direction <0,0,8>
look_at <0,2,1>
}


5.5.4 UP AND RIGHT VECTORS

The "up" vector defines the height of the view window. The "right" vector
defines the width of the view window. This figure illustrates the
relationship of these vectors:

--------------------------
| ^ |
| up <0,1,0>| |
| | |
| | |
| | |
| | |
| | |
|------------------------->|
| right<1.33,0,0> |
| | |
| | |
| | |
| | |
| | |
| | |
--------------------------


5.5.4.1 Aspect Ratio

Together these vectors define the "aspect ratio" (height to width ratio) of
the resulting image. The default values "up <0,1,0>" and "right
<1.33,0,0>" results in an aspect ratio of about 4 to 3. This is the aspect
ratio of a typical computer monitor. If you wanted a tall skinny image or
a short wide panoramic image or a perfectly square image then you should
adjust the up and right vectors to the appropriate proportions.

Most computer video modes and graphics printers use perfectly square
pixels. For example Macintosh displays and IBM S-VGA modes 640x480,
800x600 and 1024x768 all use square pixels. When your intended viewing
method uses square pixels then the width and height you set with the +W and
+H switches should also have the same ratio as the right and up vectors.
Note that 640/480=4/3 so the ratio is proper for this square pixel mode.

Not all display modes use square pixels however. For example IBM VGA mode
320x200 and Amiga 320x400 modes do not use square pixels. These two modes
still produce a 4/3 aspect ratio image. Therefore images intended to be
viewed on such hardware should still use 4/3 ratio on their up & right
vectors but the +W and +H settings will not be 4/3.

For example:
camera {
location <3,5,-10>
up <0,1,0>
right <1,0,0>
look_at <0,2,1>
}

This specifies a perfectly square image. On a square pixel display like
SVGA you would use +W and +H settings such as +W480 +H480 or +W600 +H600.
However on the non-square pixel Amiga 320x400 mode you would want to use
values of +W240 +H400 to render a square image.


5.5.4.2 Handedness

The "right" vector also describes the direction to the right of the camera.
It tells POV-Ray where the right side of your screen is. The sign of the
right vector also determines the "handedness" of the coordinate system in
use. The default right statement is:

right <1.33, 0, 0>

This means that the +X direction is to the right. It is called a "left-
handed" system because you can use your left hand to keep track of the
axes. Hold out your left hand with your palm facing to your right. Stick
your thumb up. Point straight ahead with your index finger. Point your
other fingers to the right. Your bent fingers are pointing to the +X
direction. Your thumb now points +Y. Your index finger points +Z.

To use a right-handed coordinate system, as is popular in some CAD programs
and other ray tracers, make the same shape using your right hand. Your
thumb still points up in the +Y direction and your index finger still
points forward in the +Z direction but your other fingers now say the +X is
to the left. That means that the "right" side of your screen is now in the
-X direction. To tell POV-Ray to compensate for this you should use a
negative X value in the "right" vector like this:

right <-1.33, 0, 0>

Some CAD systems, like AutoCAD, also have the assumption that the Z axis is
the "elevation" and is the "up" direction instead of the Y axis. If this is
the case you will want to change your "up" and "direction" as well. Note
that the up, right, and direction vectors must always remain perpendicular
to each other or the image will be distorted.


5.5.5 TRANSFORMING THE CAMERA

The "translate" and "rotate" commands can re-position the camera once
you've defined it.

For example:
camera {
location < 0, 0, 0>
direction < 0, 0, 1>
up < 0, 1, 0>
right < 1, 0, 0>
rotate <30, 60, 30>
translate < 5, 3, 4>
}

In this example, the camera is created, then rotated by 30 degrees about
the X axis, 60 degrees about the Y axis, and 30 degrees about the Z axis,
then translated to another point in space.


5.5.6 CAMERA IDENTIFIERS

You may declare several camera identifiers if you wish. This makes it easy
to quickly change cameras. For example:

#declare Long_Lens=
camera {
location -z*100
direction z*50
}
#declare Short_Lens=
camera {
location -z*50
direction z*10
}

camera {
Long_Lens //edit this line to change lenses
look_at Here
}


5.6 MISC FEATURES
-------------------

Here are a variety of other topics about POV-Ray features.


5.6.1 FOG

POV-Ray includes the ability to render fog. To add fog to a scene, place
the following declaration outside of any object definitions:

fog {
color Gray70 // the fog color
distance 200.0 // distance for 100% fog color
}


The fog color is then blended into the current pixel color at a rate
calculated as:

1-exp(-depth/distance) =
1-exp(-200/200) =
1-exp(-1) =
1-.37... =
0.63...

So at depth 0, the color is pure (1.0) with no fog (0.0). At the fog
distance, you'll get 63% of the color from the object's color and 37% from
the fog color.

Subtle use of fog can add considerable realism and depth cuing to a scene
without adding appreciably to the overall rendering times. A black or very
dark gray fog can be used to simulate attenuated lighting by darkening
distant objects.


5.6.2 MAX_TRACE_LEVEL

The "#max_trace_level" directive sets a variable that defines how many
levels that POV-Ray will trace a ray. This is used when a ray is reflected
or is passing through a transparent object. When a ray hits a reflective
surface, it spawns another ray to see what that point reflects, that's
trace level 1. If it hits another reflective surface, then another ray is
spawned and it goes to trace level 2. The maximum level by default is 5.

If max trace level is reached before a non-reflecting surface is found,
then the color is returned as black. Raise max_trace_level if you see black
in a reflective surface where there should be a color.

The other symptom you could see is with transparent objects. For instance,
try making a union of concentric spheres with the Cloud_Sky texture on
them. Make ten of them in the union with radii from 1 to 10, then render
the scene. The image will show the first few spheres correctly, then black.
This is because a new level is used every time you pass through a
transparent surface. Raise max_trace_level to fix this problem. For
example:

#max_trace_level 20

Note: Raising max_trace_level will use more memory and time and it could
cause the program to crash with a stack overflow error. Values for
max_trace_level are not restricted, so it can be set to any number as long
as you have the time and memory.


5.6.3 MAX_INTERSECTIONS

POV-Ray uses a set of internal stacks to collect ray/object intersection
points. The usual maximum number of entries in these "I-Stacks" is 64.
Complex scenes may cause these stacks to overflow. POV-Ray doesn't stop
but it may incorrectly render your scene. When POV-Ray finishes rendering,
a number of statistics are displayed. If you see "I-Stack Overflows"
reported in the statistics, you should increase the stack size. Add a
directive to your scene as follows:

#max_intersections 200

If the "I-Stack Overflows" remain, increase this value until they stop.


5.6.4 BACKGROUND

A background color can be specified if desired. Any ray that doesn't hit
an object will be colored with this color. The default background is
black. The syntax for background is:

background { color SkyBlue }

Using a colored background takes up no extra time for the ray tracer,
making it a very economical, although limited, feature. Only solid colors
can be specified for a background. Textures cannot be used. No shadows
will be cast on it, which makes it very useful, but at the same time, it
has no "roundness", or shading, and can sometimes cause a scene to look
"flat". Use background with restraint. It's often better, although a bit
slower, to use a "sky sphere", but there are times when a solid background
is just what you need.


5.6.5 THE #VERSION DIRECTIVE

Although POV-Ray 2.0 has had significant changes to the language over POV-
Ray 1.0, almost all 1.0 scenes will still work if the compatibility mode is
set to 1.0. The +MV switch, described earlier, sets the initial mode. The
default is +MV2.0.

Inside a scene file you may turn compatibility off or on using the
"#version" directive. For example:

#version 1.0
// Put some version 1.0 statements here

#version 2.0
// Put some version 2.0 statements here

Note you may not change versions inside an object or declaration.


The primary purpose of the switch is to turn off float and expression
parsing so that commas are not needed. It also turns off some warning
messages.

Note some changes in tiles and material_maps cannot be fixed by turning the
version compatibility on. It may require hand editing of those statements.
See the special texture section for details.

Future versions of POV-Ray may not continue to maintain full backward
compatibility. We strongly encourage you to phase in 2.0 syntax as much as
possible.


APPENDIX A COMMON QUESTIONS AND ANSWERS
========================================

Q: I get a floating point error on certain pictures. What's wrong?

A: The ray tracer performs many thousands of floating point operations when
tracing a scene. If checks were added to each one for overflow or
underflow, the program would be much slower. If you get this problem, first
look through your scene file to make sure you're not doing something like:

- Scaling something by 0 in any dimension.
Ex: scale <34, 2, 0> will generate a warning.
- Making the look_at point the same as the location in the camera
- Looking straight down at the look_at point
- Defining triangles with two points the same (or nearly the same)
- Using a roughness value of zero (0).

If it doesn't seem to be one of these problems, please let us know. If you
do have such troubles, you can try to isolate the problem in the input
scene file by commenting out objects or groups of objects until you narrow
it down to a particular section that fails. Then try commenting out the
individual characteristics of the offending object.


Q: Are planes 2D objects or are they 3D but infinitely thin?

A: Neither. Planes are 3D objects that divide the world into two half-
spaces. The space in the direction of the surface normal is considered
outside and the other space is inside. In other words, planes are 3D
objects that are infinitely thick. For the plane, plane { y, 0 }, every
point with a positive Y value is outside and every point with a negative Y
value is inside.
^
|
|
| Outside
_______|_______
Inside

Q: I'd like to go through the program and hand-optimize the assembly code
in places to make it faster. What should I optimize?

A: Don't bother. With hand optimization, you'd spend a lot of time to get
perhaps a 5-10% speed improvement at the cost of total loss of portability.
If you use a better ray-surface intersection algorithm, you should be able
to get an order of magnitude or more improvement. Check out some books and
papers on ray tracing for useful techniques. Specifically, check out
"Spatial Subdivision" and "Ray Coherence" techniques.


Q: Objects on the edges of the screen seem to be distorted. Why?

A: If the direction vector of the camera is not very long, you may get
distortion at the edges of the screen. Try moving the location back and
raising the value of the direction vector.


Q: How do you position planar image maps without a lot of trial and error?

A: By default, images will be mapped onto the range 0,0 to 1,1 in the
appropriate plane. You should be able to translate, rotate, and scale the
image from there.


Q: How do you calculate the surface normals for smooth triangles?

A: There are two ways of getting another program to calculate them for
you. There are now several utilities to help with this.

1) Depending on the type of input to the program, you may be able to
calculate the surface normals directly. For example, if you have a program
that converts B-Spline or Bezier Spline surfaces into POV-Ray format files,
you can calculate the surface normals from the surface equations.

2) If your original data was a polygon or triangle mesh, then it's not
quite so simple. You have to first calculate the surface normals of all the
triangles. This is easy to do - you just use the vector cross-product of
two sides (make sure you get the vectors in the right order). Then, for
every vertex, you average the surface normals of the triangles that meet at
that vertex. These are the normals you use for smooth triangles. Look for
the utilities such as RAW2POV. RAW2POV has an excellent bounding scheme
and the ability to specify a smoothing threshold.


Q: When I render parts of a picture on different systems, the textures
don't match when I put them together. Why?

A: The appearance of a texture depends on the particular random number
generator used on your system. POV-Ray seeds the random number generator
with a fixed value when it starts, so the textures will be consistent from
one run to another or from one frame to another so long as you use the same
executables. Once you change executables, you will likely change the random
number generator and, hence, the appearance of the texture. There is an
example of a standard ANSI random number generator provided in IBM.C;
include it in your machine-specific code if you are having consistency
problems.


Q: I created an object that passes through its bounding volume. At times, I
can see the parts of the object that are outside the bounding volume. Why
does this happen?

A: Bounding volumes are not designed to change the shape of the object.
They are strictly a rendering-speed improvement feature. The ray tracer
trusts you when you say that the object is enclosed by a bounding volume.
The way it uses bounding volumes is very simple: if the ray hits the
bounding volume (or the ray's origin is inside the bounding volume), then
the object is tested against that ray. Otherwise, we ignore the object. If
the object
extends beyond the bounding volume, anything goes. The results are
undefined. It's quite possible that you could see the object outside the
bounding volume and it's also possible that it could be invisible. It all
depends on the geometry of the scene. If you want this effect use a
clipped_by volume instead of bounded_by or use clipped_by { bounded_by } if
you wish to clip and bound with the same object.


APPENDIX B TIPS AND HINTS
==========================

B.1 SCENE DESIGN
------------------

There are a number of excellent shareware CAD style modelers available on
the DOS platform now that will create POV-Ray scene files. The online
systems mentioned elsewhere in this document are the best places to find
these.

Hundreds of special-purpose utilities have been written for POV-Ray; data
conversion programs, object generators, shell-style "launchers", and more.
It would not be possible to list them all here, but again, the online
systems listed will carry most of them. Most, following the POV-Ray spirit,
are freeware or inexpensive shareware.

Some extremely elaborate scenes have been designed by drafting on graph
paper. Raytracer Mike Miller recommends graph paper with a grid divided in
tenths, allowing natural decimal conversions.

Start out with a "boilerplate" scene file, such as a copy of BASICVUE.POV,
and edit that. In general, place your objects near and around the "origin"
(0, 0, 0) with the camera in the negative z direction, looking at the
origin. Naturally, you will break from this rule many times, but when
starting out, keep things simple.

For basic, boring, but dependable lighting, place a light source at or near
the position of the camera. Objects will look flat, but at least you will
see them. From there, you can move it slowly into a better position.


B.2 SCENE DEBUGGING TIPS
--------------------------

To see a quick version of your picture, render it very small. With fewer
pixels to calculate the ray tracer can finish more quickly. -w160 -h100 is
a good size.

Use the +Q "quality" switch when appropriate.

If there is a particular area of your picture that you need to see in high
resolution, perhaps with anti-aliasing on (perhaps a fine-grained wood
texture), use the +SC, +EC, +SR, and +ER switches to isolate a "window".

If your image contains a lot of inter-reflections, set max_trace_level to a
low value such as 1 or 2. Don't forget to put it back up when you're
finished!

"Turn off" any unnecessary lights. Comment out extended light and
spotlight keywords when not needed for debugging. Again, don't forget to
put them back in before you retire for the night with a final render
running!

If you've run into an error that is eluding you by visual examination, it's
time to start bracketing your file. Use the block comment characters (
/* ... */ ) to disable most of your scene and try to render again. If you
no longer get an error, the problem naturally lies somewhere within the
disabled area. Slow and methodical testing like this will eventually get
you to a point where you will either be able to spot the bug, or go quietly
insane. Maybe both.

If you seem to have "lost" yourself or an object (a common experience for
beginners) there are a few tricks that can sometimes help:

1) Move your camera way back to provide a long range view.
This may not help with very small objects which tend to
be less visible at a distance, but it's a nice trick to keep
up your sleeve.

2) Try setting the ambient value to 1.0 if you suspect that
the object may simply be hidden from the lights. This will
make it self-illuminated and you'll be able to see it even
with no lights in the scene.

3) Replace the object with a larger, more obvious "stand-in"
object like a large sphere or box. Be sure that all the
same transformations are applied to this new shape so that
it ends up in the same spot.


B.3 ANIMATION
---------------

When animating objects with solid textures, the textures must move with the
object, i.e. apply the same rotate or translate functions to the texture as
to the object itself. This is now done automatically if the transformations
are placed _after_ the texture block.

Example:
shape { ...
pigment { ... }
scale < ... >
}
Will scale the shape and pigment texture by the same amount.

While:
shape { ...
scale < ... >
pigment { ... }
}
Will scale the shape, but not the pigment.

Constants can be declared for most of the data types in the program
including floats and vectors. By writing these to #include files, you can
easily separate the parameters for an animation into a separate file.

Some examples of declared constants would be:
#declare Y_Rotation = 5.0 * clock
#declare ObjectRotation = <0, Y_Rotation, 0>
#declare MySphere = sphere { <0, 0, 0>, 1.1234 }

Other examples can be found scattered throughout the sample scene files.

DOS users: Get ahold of DTA.EXE (Dave's Targa Animator) for
creating .FLI/.FLC animations. AAPLAY.EXE and PLAY.EXE are common viewers
for this type of file.

When moving the camera in an animation (or placing one in a still image,
for that matter) avoid placing the camera directly over the origin. This
will cause very strange errors. Instead, move off center slightly and
avoid hovering directly over the scene.


B.4 TEXTURES
--------------

Wood is designed like a "log", with growth rings aligned along the z axis.
Generally these will look best when scaled down by about a tenth (to a
unit-sized object). Start out with a rather small value for the turbulence,
too (around 0.05 is good for starters).

The marble texture is designed around a pigment primitive that is much like
an x-gradient. When turbulated, the effect is different when viewed from
the "side" or from the "end". Try rotating it by 90 degrees on the y axis
to see the difference.

You cannot get specular highlights on a totally black object. Try using a
very dark gray, say Gray10 or Gray15, instead.


B.5 HEIGHT FIELDS
-------------------

Try using POV-Ray itself to create images for height_fields:

camera { location <0, 0, -2> }
plane { z, 0
finish { ambient 1 } // needs no light sources
pigment { bozo } // or whatever. Experiment.
}

That's all you'll need to create a .tga file that can then be used as a
height field in another image!


B.6 FIELD-OF-VIEW
-------------------

By making the direction vector in the camera longer, you can achieve the
effect of a tele-photo lens. Shorter direction vectors will give a kind of
wide-angle effect, but you may see distortion at the edges of the image.
See the file "fov.inc" in the \POVRAY\INCLUDE directory for some predefined
field-of-view values.

If your spheres and circles aren't round, try increasing the direction
vector slightly. Often a value of 1.5 works better than the 1.0 default
when spheres appear near the edge of the screen.


B.7 CONVERTING "HANDEDNESS"
-----------------------------

If you are importing images from other systems, you may find that the
shapes are backwards (left-to-right inverted) and no rotation can make them
correct.

Often, all you have to do is negate the terms in the right vector of the
camera to flip the camera left-to-right (use the "right-hand" coordinate
system). Some programs seem to interpret the coordinate systems
differently, however, so you may need to experiment with other camera
transformations if you want the y and z vectors to work as POV-Ray does.



APPENDIX C SUGGESTED READING
=============================

First, a shameless plug for two books that are specifically about POV-Ray:

The Waite Group's Ray Tracing Creations
By Drew Wells & Chris Young
ISBN 1-878739-27-1
Waite Group Press
1993
and
The Waite Group's Image Lab
By Tim Wegner
ISBN 1-878739-11-5
Waite Group Press
1992

Image Lab by Tim Wegner contains a chapter about POV-Ray. Tim is the co-
author of the best selling book, Fractal Creations, also from the Waite
Group.

Ray Tracing Creations by Drew Wells and Chris Young is an entire book about
ray tracing with POV-Ray.

This section lists several good books or periodicals that you should be
able to locate in your local computer book store or your local university
library.

"An Introduction to Ray tracing"
Andrew S. Glassner (editor)
ISBN 0-12-286160-4
Academic Press
1989

"3D Artist" Newsletter
("The Only Newsletter about Affordable
PC 3D Tools and Techniques")
Publisher: Bill Allen
P.O. Box 4787
Santa Fe, NM 87502-4787
(505) 982-3532

"Image Synthesis: Theory and Practice"
Nadia Magnenat-Thalman and Daniel Thalmann
Springer-Verlag
1987

"The RenderMan Companion"
Steve Upstill
Addison Wesley
1989

"Graphics Gems"
Andrew S. Glassner (editor)
Academic Press
1990

"Fundamentals of Interactive Computer Graphics"
J. D. Foley and A. Van Dam
ISBN 0-201-14468-9
Addison-Wesley
1983

"Computer Graphics: Principles and Practice (2nd Ed.)"
J. D. Foley, A. van Dam, J. F. Hughes
ISBN 0-201-12110-7
Addison-Wesley,
1990

"Computers, Pattern, Chaos, and Beauty"
Clifford Pickover
St. Martin's Press

"SIGGRAPH Conference Proceedings"
Association for Computing Machinery
Special Interest Group on Computer Graphics

"IEEE Computer Graphics and Applications"
The Computer Society
10662, Los Vaqueros Circle
Los Alamitos, CA 90720

"The CRC Handbook of Mathematical Curves and Surfaces"
David von Seggern
CRC Press
1990

"The CRC Handbook of Standard Mathematical Tables"
CRC Press
The Beginning of Time


APPENDIX D LEGAL INFORMATION
=============================

The following is legal information pertaining to the use of the Persistence
of Vision Ray Tracer a.k.a POV-Ray. It applies to all POV-Ray source files,
executable (binary) files, scene files, documentation files contained in
the official POV archives. (Certain portions refer to custom versions of
the software, there are specific rules listed below for these versions
also.) All of these are referred to here as "the software".

THIS NOTICE MUST ACCOMPANY ALL OFFICIAL OR CUSTOM PERSISTENCE OF VISION
FILES. IT MAY NOT BE REMOVED OR MODIFIED. THIS INFORMATION PERTAINS TO ALL
USE OF THE PACKAGE WORLDWIDE. THIS DOCUMENT SUPERSEDES ALL PREVIOUS
LICENSES OR DISTRIBUTION POLICIES.


IMPORTANT LEGAL INFORMATION

Permission is granted to the user to use the Persistence of Vision
Raytracer and all associated files in this package to create and render
images. The use of this software for the purpose of creating images is
free. The creator of a scene file and the image created from the scene
file, retains all rights to the image and scene file they created and may
use them for any purpose commercial or non-commercial.

The user is also granted the right to use the scene files and include
files distributed in the INCLUDE and DEMO sub-directories of the POVDOC
archive when creating their own scenes. Such permission does not extend to
files in the POVSCN archive. POVSCN files are for your enjoyment and
education but may not be the basis of any derivative works.

This software package and all of the files in this archive are copyrighted
and may only be distributed and/or modified according to the guidelines
listed below. The spirit of these guidelines is to promote POV-Ray as a
standard ray tracer, to provide the full POV-Ray package freely to as many
users as possible, to prevent POV-Ray users and developers from being
taken advantage of, and to enhance the quality of life of those who come
in contact with POV-Ray. This legal document was created so these goals
could be realized.
You are legally bound to follow these rules, but we hope you will follow
them as a matter of ethics, rather than fear of litigation.

No portion of this package may be separated from the package and
distributed separately other than under the conditions specified in the
guidelines below.

This software may be bundled in other software packages according to the
conditions specified in the guidelines below.

This software may be included in software-only compilations using media
such as, but not limited to, floppy disk, CD-ROM, tape backup, optical
disks, hard disks, or memory cards. There are specific rules and
guidelines listed below for the provider to follow in order to legally
offer POV-Ray with a software compilation.

The user is granted the privilege to modify and compile the source for
their own personal use in any fashion they see fit. What you do with the
software in your own home is your business.

If the user wishes to distribute a modified version of the software
(hereafter referred to as a "custom version"), they must follow the guidelines
listed below. These guidelines have been established to promote the growth
of POV-Ray and prevent difficulties for users and developers alike. Please
follow them carefully for the benefit of all concerned when creating a
custom version.

You may not incorporate any portion of the POV-Ray source code in any
software other than a custom version of POV-Ray. However authors who
contribute source to POV-Ray may still retain all rights to use their
contributed code for any purpose as described below.

The user is encouraged to send enhancements and bug fixes to the POV-Team,
but the team is in no way required to utilize these enhancements or fixes.
By sending material to the POV-Team, the contributor asserts that he owns
the materials or has the right to distribute these materials. He
authorizes the POV-Team to use the materials any way they like. The
contributor still retains rights to the donated material, but by donating
you grant equal rights to the POV-Team. The POV-Team doesn't have to use
the material, but if we do, you do not acquire any rights related to POV-
Ray. We will give you credit if applicable.


GENERAL RULES FOR ALL DISTRIBUTION

The permission to distribute this package under certain very specific
conditions is granted in advance, provided that the above and following
conditions are met.

These archives must not be re-archived using a different method without the
explicit permission of the POV-Team. You may rename the archives only to
meet the file name conventions of your system or to avoid file name
duplications, but we ask that you try to keep file names as similar to the
originals as possible. (For example: POVDOC.ZIP to POVDOC20.ZIP)

You must distribute a full package of archives as described in the next
section.

Non-commercial distribution (such as a user copying the software for a
personal friend or colleague and not charging money or services for that
copy) has no other restrictions. This does not include non-profit
organizations or computer clubs. These groups should use the
Shareware/Freeware distribution company rules below.

The POV-Team reserves the right to withdraw distribution privileges from
any group, individual, or organization for any reason.


DEFINITION OF "FULL PACKAGE"

POV-Ray is contained in four archives for each hardware platform: 1) an
executable archive, 2) a documentation archive, 3) sample scene archives,
and 4) a source code archive.

A "full package" is defined as one of the following bundle options:
1 All archives (executable, docs, scenes, source)
2 User archives (executable, docs, scenes but no source)
3 Programmer archives (source, docs, scenes but no executable)

POV-Ray is officially distributed for IBM-PC compatibles running MS-DOS,
the Apple Macintosh, and the Commodore Amiga. Other systems may be added
in the future.

Distributors need not support all platforms, but for each platform you
support you must distribute a full package. For example, an IBM-only BBS
need not distribute the Mac versions.


CONDITIONS FOR DISTRIBUTION OF CUSTOM VERSIONS

You may distribute custom compiled versions only if you comply with the
following conditions.

1) Mark your version clearly on all modified files, stating it to be a
   modified and unofficial version.
2) Make all of your modifications to POV-Ray freely and publicly
   available.
3) You must provide all POV-Ray support for all users who use your custom
   version. The POV-Ray Team is not obligated to provide you or your
   users any technical support.
4) You must provide documentation for any and all modifications that you
   have made to the program you are distributing.
5) Include clear and obvious information on how to obtain the official
   POV-Ray.
6) Include contact and support information for your version. Include
   this information in the DISTRIBUTION_MESSAGE macros in the source file
   FRAME.H and ensure that the program prominently displays this
   information.
7) Include all credits and credit screens for the official version.
8) Include a copy of this document.


CONDITIONS FOR COMMERCIAL BUNDLING

Vendors wishing to bundle POV-Ray with commercial software or with
publications must first obtain explicit permission from the POV-Ray Team.
This includes any commercial software or publications, such as, but not
limited to, magazines, books, newspapers, or newsletters in print or
machine readable form.

The POV-Ray Team will decide if such distribution will be allowed on a
case-by-case basis and may impose certain restrictions as it sees fit. The
minimum terms are given below. Other conditions may be imposed.

1) Purchasers of your product must not be led to believe that they are
   paying for POV-Ray. Any mention of the POV-Ray bundle on the box, in
   advertising, or in instruction manuals must be clearly marked with a
   disclaimer that POV-Ray is free software and can be obtained for free
   or nominal cost from various sources.
2) Include clear and obvious information on how to obtain the official
   POV-Ray.
3) Include a copy of this document.
4) You must provide all POV-Ray support for all users who acquired
   POV-Ray through your product. The POV-Ray Team is not obligated to
   provide you or your customers any technical support.
5) Include a credit page or pages in your documentation for POV-Ray.
6) If you modify any portion of POV-Ray for use with your hardware or
   software, you must follow the custom version rules in addition to
   these rules.
7) Include contact and support information for your product.
8) You must include the official documentation with your product.


CONDITIONS FOR SHAREWARE/FREEWARE DISTRIBUTION COMPANIES

Shareware and freeware distribution companies may distribute the archives
under the conditions following this section.

You must notify us that you are distributing POV-Ray and must provide us
with information on how to contact you should any support issues arise.

No more than five U.S. dollars ($5) can be charged per disk for the copying
of this software and the media it is provided on. Space on each disk must
be used completely. The company may not put each archive on a separate disk
and charge for three disks if all three archives will fit on one disk. If
more than one disk is needed to store the archives, then more than one disk
may be used and charged for.

Distribution on high-volume media such as backup tape or CD-ROM is
permitted if the total cost to the user is no more than $0.10 per megabyte
of data. For example, a CD-ROM holding 600 megabytes could cost no more
than $60.00.


CONDITIONS FOR ON-LINE SERVICES AND BBS'S

On-line services and BBS's may distribute the POV-Ray archives under the
conditions following this section.

The archives must all be easily available on the service and should be
grouped together in a similar on-line area.

It is strongly requested that BBS operators remove prior versions of POV-
Ray to avoid user confusion and simplify or minimize our support efforts.

The on-line service or BBS may only charge standard usage rates for the
downloading of this software. A premium may not be charged for this
package. That is, CompuServe or America On-Line may make these archives
available to their users, but they may only charge regular usage rates for
the time required to download them. They must also make all of the
archives available in the same forum, so they can be easily located by a
user.

DISCLAIMER

This software is provided as is without any guarantees or warranty.
Although the authors have attempted to find and correct any bugs in the
package, they are not responsible for any damage or losses of any kind
caused by the use or misuse of the package. The authors are under no
obligation to provide service, corrections, or upgrades to this package.


APPENDIX E CONTACTING THE AUTHORS
==================================

We love to hear about how you're using and enjoying the program. We will
also do our best to solve any problems you have with POV-Ray and to
incorporate good suggestions into the program.

If you have a question regarding commercial use of, distribution of, or
anything particularly sticky, please contact Chris Young, the development
team coordinator. Otherwise, spread the mail around. We all love to hear
from you!

For most of us, the best method of contact is e-mail through CompuServe.
America On-Line and Internet users can also send mail to CompuServe; just
use the Internet address and the mail will be forwarded to CompuServe,
where we read our mail daily.

Please do not send large files to us through e-mail without asking
first. We pay for each minute on CompuServe and large files can get
expensive. Send a query before you send the file, thanks!

Chris Young
(Team Coordinator. Worked on everything.)
CIS: 76702,1655
Internet: [email protected]
US Mail:
3119 Cossell Drive
Indianapolis, IN 46224 U.S.A.


Drew Wells
(Former team leader. Worked on everything.)
CIS: 73767,1244
Internet: [email protected]
AOL: Drew Wells
Prodigy: SXNX74A (Not used often)


Other authors and contributors in alphabetical order:
-----------------------------------------------------
David Buck
(Original author of DKBTrace)
(Primary developer, quadrics, docs)
Internet (preferred): [email protected]
CIS: 70521,1371

Aaron Collins
(Co-author of DKBTrace 2.12)
(Primary developer, IBM-PC display code, Phong)
CIS: 70324,3200

Alexander Enzmann
(Primary developer, Blobs, quartics, boxes, spotlights)
CIS: 70323,2461
INTERNET: [email protected]

Dan Farmer
(Primary developer, docs, scene files)
CIS: 70703,1632

Douglas Muir
(Bump maps and height fields)
CIS: 76207,662
Internet: [email protected]

Bill Pulver
(Time code and IBM-PC compile)
CIS: 70405,1152

Charles Marslette
(IBM-PC display code)
CIS: 75300,1636

Mike Miller
(Artist, scene files, stones.inc)
CIS: 70353,100

Jim Nitchals
(Mac version, scene files)
CIS: 73117,3020
AppleLink: jimn8
Internet: [email protected]

Eduard Schwan
(Mac version, docs)
CIS: 71513,2161
AppleLink: JL.Tech
Internet: [email protected]

Randy Antler
(IBM-PC display code enhancements)
CIS: 71511,1015

David Harr
(Mac balloon help)
CIS: 72117,1704

Scott Taylor
(Leopard and Onion textures)
CIS: 72401,410

Chris Cason
(colour X-Windows display code)
CIS: 100032,1644

Dave Park
(Amiga support; added AGA video code)
CIS: 70004,1764

