Setting up C94

Download and explode the two tar files into data and source directories
The distribution consists of two large gzip'd tar files, one containing the
source and the other containing the atomic data that the code must access.
The two files must be uncompressed with gzip, WinZip, or a compatible
program. The resulting tar files are exploded with the command
tar -xvf name.tar
where name is the name of the file. The files will explode into the
directory you are in when the tar command is given, so it is best to give
each its own subdirectory, perhaps putting all the data into one called
data and the source into source. The source directory will contain a
readme.htm file.
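The unpacking steps can be sketched as follows. This stand-in demonstration first builds a tiny example archive, since the real archive names will differ from the name.tar placeholder used here:

```shell
# stand-in demonstration of the unpack step; the real file names will differ
mkdir -p source data
printf 'demo\n' > source/readme.htm       # stand-in for the real contents
tar -cf name.tar -C source readme.htm     # build a small example archive
gzip name.tar                             # distributed form: name.tar.gz
gunzip name.tar.gz                        # step 1: uncompress
tar -xvf name.tar -C data                 # step 2: explode into data/
ls data                                   # the files land in data/
```

With the real distribution, run the gunzip and tar -xvf steps once in each of the two subdirectories.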

Edit path.c to point to where the data files will live
There are vast amounts of atomic data that the code must reference in order
to compute a model. These are the *.dat files that come in the data tar
file. The code can be executed from any location as long as it knows where
to find the data files.
Edit the routine path.c and change the line
char chDataPath[81]="c:\\projects\\cloudy\\c94\\data\\" ;
to the location of the data files. The string must be in double quotes.
The double backslash is only needed on Windows (NT) systems; on Unix the
separators are single forward slashes. The string also MUST end with the
proper directory mark, a "/" in Unix, a "\" in Windows, and a "]" in VMS.
Note that the line
must end in a semicolon (because this is C). If the string is longer than
80 characters increase the length of the variable chDataPath. (Remember
that in C a string must have one extra byte for storing the end-of-string
sentinel).
NB - If this step is not done it will always be necessary to tell the code
where the data live by using the set path
command. If you store your *.ini initialization files in the same
directory as the data files it will never be necessary to set the path; the
ini files will be found automatically.

Compile options should be set so that the machine throws an exception on
division by zero or overflow. Underflow is inevitable and should not be
trapped. The code must be compiled and linked with the __STDC__ flag set,
since it checks to make sure that the compiler is an ANSI C compiler. This
can be done on various machines as follows:

GNU
Note: versions of the gcc C library glibc before 2.1 have problems
with floating point precision on Intel processors. Make sure that you have
version 2.1 or later if you plan to work on an Intel box. There is a
major bug in the math library that comes with gcc on Dec Alphas.
If you are running Linux on an Alpha please read this, and tell me
how to fix the problem.
Even more important note: MAKE SURE YOU DON'T HAVE
VERSION 2.96 of gcc. You can check the version with the command "gcc -v".
This version is a mistake by RedHat. If you have version 2.96 then you
probably also have kgcc, a version of gcc that can compile the Linux kernel.
Use that instead, or complain to your system administrator.
debug compile
gcc -ansi -Wall -g -c -O2 *.c
gcc -o cloudy.exe *.o -lm
fast compile
gcc -ansi -c -O3 -funroll-loops -Wall *.c
gcc -o cloudy.exe *.o -lm
Note that these commands will, by themselves, produce code that does not
crash on divide by zero or overflow under Linux; the floating point
environment must be set separately, as described next.
Update on setting the floating point environment (FPE):
Version C94.01 has a commented-out section of code that will set the proper
floating point environment for gcc under Linux on an i386. Edit routine
setfpenv.c and remove the two lines that are indicated by the comments.
(There are a pair of #if 0 / #endif lines.) If your system has the header
file fpu_control.h then it will compile OK and set the proper floating point
environment. Not all distributions of gcc include fpu_control.h (the
Cygwin distribution that I use does not) so
this piece of code is not enabled by default. (All of this suggests, as
Robin Williams points out, that the problem is in the math libraries and not gcc
itself).
Update to the update on the floating point environment.
Peter van Hoof developed code to set the proper FPE, and this is included in the
distribution for C96. The revised version of setfpenv.c is
here. Try replacing the version of
setfpenv.c in the distribution with this version. Thanks to Julian Pittard
for advice.

Dec Alpha
debug compile
cc -g -std1 -trapuv -c *.c
cc -o cloudy.exe *.o -lm
This will trap uninitialized variables and floating point exceptions.
optimized compile:
cc -std1 -c -fast *.c
cc -o cloudy.exe *.o -lm

Sparc
debug compile:
cc -v -Xc -lm -g -c *.c
cc -g -o cloudy.exe *.o -lm
optimized compile:
cc -v -Xc -lm -fast -c *.c
cc -fast -o cloudy.exe *.o -lm
The -Xc option tells the compiler to expect "maximally conformant ANSI C
code". There are reports that some older versions of the C libraries have
a non-standard sprintf function that returns a pointer to the string buffer
instead of an int containing the number of bytes written. If your C
library falls into this category then prtcomment.c will fail. Update the
compiler, or use gcc.

SGI
debug compile:
cc -ansi -c -w -g -DEBUG:trap_uninitialized -TARG:exc_min=OZV *.c
cc -IPA -o cloudy.exe *.o -lm
To include array bounds checking, change the DEBUG statement to the
following:
-DEBUG:trap_uninitialized:subscript_check
optimized compile:
cc -ansi -c -Ofast -w -TARG:exc_max=OZV -TARG:exc_min=OZV *.c
cc -o cloudy.exe -IPA *.o -lm

HP SPP Exemplar
debug compile:
cc -c -g -Aa +e +z *.c
cc +FPZO -o cloudy.exe *.o -lm
optimized compile
cc -Aa -O -c +z +e *.c
cc +FPZO -o cloudy.exe *.o -lm

If for some reason you cannot set compiler options that define the __STDC__
flag, the code that checks for this is located in cdinit.c. Cloudy will
probably produce incorrect results if a K&R compiler is used. If your
system does not already have an ANSI C compiler, the GNU gcc compiler is
free and very Cloudy-friendly.

Run the simple test
Execute the code with an input stream consisting of the single command
test
and then examine the last line of output. If it says that Cloudy ended
OK then things are set up properly. This test performs many internal
sanity checks.
Check that it crashes on errors
During initial testing the code should be set to crash on divide by zero,
overflow, using NaN, and a failed assert. Compile as directed above, and
run the following four tests:
title confirm it crashes on overflow
crash overflow
title confirm it crashes on divide by zero
crash zero
title confirm it crashes when using NaN
crash NaN
title confirm it crashes with a failed assert
crash assert
Note that the failed assert will only occur when the code is compiled in
debug mode - asserts do not exist in optimized code. If the code does not
crash on each of these tests there is a problem, since your system is quite
happy to do bad floating point math.
Run the test cases
The code uses extensive self-checking to ensure that the results are valid.
Many of these tests include assert commands in the input stream which allow the
code to determine whether it has found the correct answer. The current
version of the test suite can be downloaded from
here.
If anything goes wrong the code will announce this at the end of the
calculation.
The test distribution includes Perl scripts to run all the test cases and
then check for problems. These are the files you will find by listing *.pl.
Many of these checks are not performed when the ANSI standard macro NDEBUG is
true. This is explicitly set true with the option -DNDEBUG on the compile
command line, and probably implicitly set true when optimizer options are used.
To make the strongest validation of the code you should leave this flag
undefined (compile and test with the debug mode) and run the test cases that
come with the distribution. Once validated, the optimized compile can be
done and the code will run faster.

From the command line the code would be executed as follows, if the
executable is called cloudy.exe:
cloudy.exe < input_file > output_file
Commands are read in from the file input_file, and results are sent to
output_file. A typical input file is a series of commands written
one per line in free format:
title typical input stream
blackbody 120,000K
luminosity 37
radius 17
hden 4
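For example, the input stream above could be placed in a file (here called typical.in, an arbitrary name) like this:

```shell
# write the example input stream to a file, one command per line
cat > typical.in <<'EOF'
title typical input stream
blackbody 120,000K
luminosity 37
radius 17
hden 4
EOF
cat typical.in    # confirm the input stream
```

The model is then computed with cloudy.exe < typical.in > typical.out.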

Often the most insight is gained by computing a large number of models with
varying input parameters, to see how the predicted quantities respond. To do
this you will want to write your own main program and delete the one that
comes with the distribution.
Delete the old main program:
In the distribution this is the file maincl.c. Delete this file (or
rename it to something like maincl.old), and also delete maincl.o if you
compiled the entire distribution.
Compile the new main program and link it with the rest of Cloudy.
This is done with the compile options described above. You will need to
compile all the rest of the code, generating *.o files, then compile the new
main program, and link it all together. Note that in C the main program
must be called main, but it can live in a file with any name. All of the
routines you need to access are declared in the header file cddrive.h.
Include this file with your main. That header also describes how the
various driving routines should be called.
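A minimal sketch of such a main program follows. It assumes the driving routines cdInit, cdRead, and cdDrive declared in cddrive.h; consult that header for the actual interface and calling details before relying on the specifics shown here. It cannot be compiled on its own, only when linked against the rest of the Cloudy *.o files:

```c
/* sketch of a replacement main program that loops over a grid of models;
   the routine names are from cddrive.h - check that header for details */
#include <stdio.h>
#include "cddrive.h"

int main(void)
{
    double hden;

    /* vary the hydrogen density, computing one model per value */
    for( hden = 2.; hden <= 6.; hden += 2. )
    {
        char chLine[100];

        cdInit();                      /* initialize before every model */

        sprintf( chLine, "hden %g", hden );
        cdRead( chLine );              /* feed commands one at a time */
        cdRead( "blackbody 120,000K" );
        cdRead( "luminosity 37" );
        cdRead( "radius 17" );

        if( cdDrive() )                /* compute; non-zero means trouble */
            printf( "problem with hden = %g!\n", hden );
    }
    return 0;
}
```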



Last changed 04/05/03.
Copyright 1978-2003 Gary J. Ferland