"qnodes | grep -B2 researcj|less" to get a list of available nodes
"qstat" to see current status of torque job queue
Create directory for a new grid run on Thor
Copy three files to the directory:
- cloudy input file from "cloudy-agn/scripts/cloudy/", rename to "mpi_grid.in"
- torque submission script "cloudy.pbs" from "cloudy-agn/scripts/thor"
- SED table from "cloudy-agn/sed/"
Edit "cloudy.pbs" to update the working directory and the name of the run
Edit "mpi_grid.in" as relevant to this run
"qsub cloudy.pbs" submits the script to the scheduler
"qstat" to see status of your run, or "watch qstat" to watch the status (ctrl-c to exit watch)
Wait for run to complete.
Copy mpi_grid.out to a directory of your choice on your local machine
Run "cloudy-agn/scripts/operations/package_tables.sh <directory>" where the command line argument <directory> points at the directory where you stored mpi_grid.out
This should generate the flux tables in that directory under the subdirectory "fortfiles", with a .tar.gz archive in the main directory.
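For reference, "cloudy.pbs" contains something along the lines of the sketch below. This is not the actual script from "cloudy-agn/scripts/thor"; the resource request, walltime, and cloudy/mpirun invocation are placeholders for whatever the Thor setup uses. The two lines you normally edit are the job name and the working directory.

    #!/bin/bash
    #PBS -N cldn_24.00                     # name of the run; edit this
    #PBS -l nodes=1:ppn=16                 # resource request (placeholder)
    #PBS -l walltime=48:00:00              # placeholder
    #PBS -j oe
    cd /path/to/this/grid/run              # working directory with mpi_grid.in and the SED table; edit this
    mpirun -np 16 /path/to/cloudy.exe -r mpi_grid   # reads mpi_grid.in, writes mpi_grid.out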
Generally, the software is intended to be run by a single command while working in a directory that contains several cloudy grid output files in subdirectories. This command is the script "scripts/meta/process_gridoutputs.sh". You can also use the "scripts/operations/package_tables.sh" script, described below. The cloudy output files must be named "mpi_grid.out". For instance, the structure I used looks like this:
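(illustrative layout only; the directory names follow the archive-name example below, with a "grids" level included to show a directory the script ignores)

    ./mehdipour/solar/4thdex/grids/cldn_22.00/
    ./mehdipour/solar/4thdex/grids/cldn_23.00/
    ./mehdipour/solar/4thdex/grids/cldn_24.00/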
where each directory "cldn_##.##" contains a file called "mpi_grid.out". The script doesn't care what your directory structure looks like, and will use it to name the fortfile archives if it can.
If I ran process_outputs.sh from "." in the structure above, it would find all of the mpi_grid.out files and attempt to produce tables from each using the line list contained in "linelist.c17" in the "reference" directory. It archives the results into .tar.gz files, using the directory structure to create the filenames but ignoring directories named "grids", so this example would produce, e.g., "fortfiles_mehdipour.solar.4thdex.cldn_24.00.tar.gz", and so on.
As of the first version of c17, this approach works quite well. New cloudy versions often produce changes in the output file, so this approach will become less reliable as time goes on. Ultimately, it would be more reliable to use the internal workings of cloudy to produce what we want, and moving forward I will attempt to do that. As it stands, the tables are collated from the intrinsic line list, and the emergent line list is ignored. The last entry found in the emission line output is used for any particular emission line, to avoid conflicts with default continuum bins.
Currently, the software reads the line list from the file called "linelist.c17" in the directory called "reference". The program ignores comment lines beginning with # and blank lines; otherwise it interprets each line as containing an emission line identifier that should be read from the cloudy output file. The identifier can be copied and pasted directly from the cloudy output file or from a line label file output during a cloudy run. The program knows to only look at the relevant number of characters, so anything after that on the line can be interpreted as a user comment. For example,
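an entry like this (the spacing and comment text here are illustrative):

    FeKa 1.78000A    total intensity of the Fe K-alpha line
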
tells the program to compile a flux table for the emission line identified by "FeKa 1.78000A", and the fact that it is the total intensity of the K-alpha line is just information meant for the user.
"operations" contains scripts that do bulk operations, and g
"file" contains scripts that operate on various files, such as tables stored in fortfiles or a cloudy output file. Generally, these scripts aren't intented to be run by the user directly, but are setup this way for troubleshooting. Most of these are obsolete now and will probably get removed, soon, because they've been baked into my primary c++ program.
This is the main script for creating fort files from cloudy grid output files. Its use is described at the beginning of the document. Go to a directory that contains at least one subdirectory with a file called "mpi_grid.out"; the script will attempt to compile flux tables and save them as a set of fort files. It takes no command line arguments: just cd to the appropriate directory, then run the script from its location in the scripts/meta directory.
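For example, assuming the repository is checked out in your home directory (both paths here are illustrative):

    cd /path/to/my/grid/runs
    ~/cloudy-agn/scripts/meta/process_gridoutputs.sh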
This is the script that packages fortfiles for a particular cloudy grid output file. It isn't intended for the user to run directly, but I sometimes use it when troubleshooting. The syntax is:
package_tables.sh <dir>
where <dir> is a directory containing a cloudy grid output file called "mpi_grid.out".
The script will create a directory called "fortfiles", under which it puts "raw" fortfiles and "interpolated" fort files, the latter after calling the interpolation C++ program from the "bin" directory, if that program exists. The raw files have been sufficient since c17, since most of the convergence issues seem to have been solved.
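For example, when troubleshooting a single run (the directory name is illustrative, matching the structure shown earlier):

    scripts/operations/package_tables.sh mehdipour/solar/4thdex/grids/cldn_24.00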
all files found under "src". A standard GCC c++ compiler should be sufficient to compile all of these programs, although the build script looks for additional libraries. All of these programs have debugging output that is toggled by changing the "debug" booleans at the top of agn.hpp before compiling. Sorry, I know that's terrible, but it was quick and easy!
The primary header for the project. This file defines containers and methods for storing cloudy grid results. The routine for reading the cloudy file, read_cloudy_grid(), is defined here, and it's a mess. This is because the most efficient approach was to read the entire grid at once, collating lines as we go. That made things faster, but it's ugly, and I'm not sure when I'll have time to rewrite it. This will surely complicate troubleshooting.
This header contains definitions and utilities for storing and operating on emission lines and flux tables. The bulk of the code run by "create_fort_files.cpp" is included in this file, besides reading the cloudy file itself.
The operational program for creating fort files from a cloudy grid output file. The syntax is:
./create_fort_files <cloudy output file> <line list file>
This reads the line list from the line list file, then compiles tables for each line from the cloudy output file. The fort files will be created in your current working directory. This program is rarely intended to be run on its own, and is called by the "scripts/operations/package_tables.sh" script, which is in turn called by the "scripts/meta/process_outputs.sh" script.
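For example, from inside a grid run directory (the relative paths here are illustrative):

    ./create_fort_files mpi_grid.out ../reference/linelist.c17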
This program runs a cubic spline-based smoothing operation along lines of constant hydrogen density over a flux table file. It returns another table containing the smoothed values. The syntax is:
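(the binary name below is a placeholder; check "bin" for the actual name. By analogy with the other programs, it takes the flux table file as its argument)

    ./<smoothing program> <flux table file>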
OLD. Used to extract slices along constant hden or phi from a flux table, mostly for examination during debugging. Lots of easy ways to do this in graphing software and what have you. This may not even work anymore and I'll probably remove it soon.