
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Quick Reference
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

───────────────────────────────────────────────────────────────────
basic work-flow to create a thor run and generate flux tables
───────────────────────────────────────────────────────────────────

    This is covered in detail in "cloudy-agn/notes/thor_guide"

    "ssh -X kkorista@thor.cs.wmich.edu" to connect

    "qnodes | grep -B2 research | less" to get a list of available nodes

    "qstat" to see current status of torque job queue

    Create directory for a new grid run on Thor

    Copy three files to the directory:
        - cloudy input file from "cloudy-agn/scripts/cloudy/", rename to "mpi_grid.in"
        - torque submission script "cloudy.pbs" from "cloudy-agn/scripts/thor"
        - SED table from "cloudy-agn/sed/"

    Edit "cloudy.pbs" to update the working directory and the name of the run

    Edit "mpi_grid.in" as relevant to this run

    "qsub cloudy.pbs" submits the script to the scheduler

    "qstat" to see status of your run, or "watch qstat" to watch the status (ctrl-c to exit watch)

    Wait for run to complete.

    Copy mpi_grid.out to a directory of your choice on your local machine

    Run "cloudy-agn/scripts/operations/package_tables.sh <directory>" where the command line argument <directory> points at the directory where you stored mpi_grid.out

    This should generate the flux tables in that directory under the subdirectory "fortfiles", with a .tar.gz archive in the main directory.
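The steps above can be sketched as a shell session. Everything here is a dry run: the "run" helper prints each command instead of executing it, and the run directory, input file, and SED file names are hypothetical examples, not paths from this project.

```shell
# Dry-run sketch of the workflow above. Remove the "run " prefixes (and the
# helper) to perform the steps for real. Directory and file names below are
# hypothetical placeholders.
run() { printf '%s\n' "$*"; }                            # print, don't execute

RUNDIR=runs/solar_4thdex_cldn_22.00                      # hypothetical run dir
run ssh -X kkorista@thor.cs.wmich.edu
run mkdir -p "$RUNDIR"
run cp cloudy-agn/scripts/cloudy/some_grid.in "$RUNDIR/mpi_grid.in"  # rename on copy
run cp cloudy-agn/scripts/thor/cloudy.pbs "$RUNDIR/"
run cp cloudy-agn/sed/some_sed.tab "$RUNDIR/"            # SED table for this run
# ...edit cloudy.pbs (working dir, run name) and mpi_grid.in here...
run qsub cloudy.pbs
run qstat
```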


───────────────────────────────────────────────────────────────────
compiling cloudy on Thor
───────────────────────────────────────────────────────────────────
    "module avail" to check available modules
    "module list" to list loaded modules
    "module load <module>" to load a module

    Load one of the openmpi modules before compiling the mpi version of cloudy

    move to the relevant source directory, e.g., "c17.00/source/sys_mpi_gcc"

    "make" to build the program, which compiles to cloudy.exe


───────────────────────────────────────────────────────────────────
getting the latest version of my software
───────────────────────────────────────────────────────────────────

rsync -aac kkorista@159.203.46.10:~/cloudy.agn <local dir>

    The <local dir> command line argument is where you want to save the project on your local machine; use "." for the current directory

───────────────────────────────────────────────────────────────────
use SSH keys for authentication
───────────────────────────────────────────────────────────────────
    This creates a public key and a private key and sends the public key to the remote server, which is sufficient for secure authentication.

    "ssh-keygen" to create and save a key
    "ssh-copy-id kkorista@thor.cs.wmich.edu" to set up key authentication on the remote server





━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Program Philosophy
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Generally, the software is intended to be run by a single command while working in a directory that contains several cloudy grid output files in subdirectories. This command is the script "scripts/meta/process_gridoutputs.sh". You can also use the "scripts/operations/package_tables.sh" script, described below. The cloudy output files must be named "mpi_grid.out". For instance, the structure I used looks like this:
.
├── magdziarz
│   └── grids
│       ├── halfsolar
│       │   └── 4thdex
│       │       ├── cldn_22.00
│       │       ├── cldn_23.00
│       │       ├── cldn_24.00
│       │       └── nonitrogen
│       └── solar
│           └── 4thdex
│               ├── cldn_22.00
│               ├── cldn_22.50
│               ├── cldn_23.00
│               ├── cldn_23.50
│               └── cldn_24.00
└── mehdipour
    └── grids
        └── solar
            └── 4thdex
                ├── cldn_22.00
                ├── cldn_22.50
                ├── cldn_23.00
                ├── cldn_23.50
                ├── cldn_24.00
                └── newcldn22.59

where each directory "cldn_##.##" contains a file called "mpi_grid.out". The script doesn't care what your directory structure looks like, and will use it to name the fortfile archives if it can.

If I ran process_gridoutputs.sh in "." above, it would find all mpi_grid.out files in that structure, and would attempt to produce tables from each using the line list contained in "linelist.c17" in the "reference" directory. It will archive them into .tar.gz files by using the directory structure to create the filenames, but it ignores directories named "grids", so this would produce, e.g., "fortfiles_mehdipour.solar.4thdex.cldn_24.00.tar.gz", and so on.
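As a sketch of that naming rule, using one path from the tree above (this mirrors the described behavior; the script's actual implementation may differ):

```shell
# Derive an archive name from the path to one mpi_grid.out directory:
# drop any "grids" component and join the remaining components with dots.
path=mehdipour/grids/solar/4thdex/cldn_24.00
name=$(printf '%s\n' "$path" | tr '/' '\n' | grep -vx grids | paste -sd. -)
printf 'fortfiles_%s.tar.gz\n' "$name"
# -> fortfiles_mehdipour.solar.4thdex.cldn_24.00.tar.gz
```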

As of the first version of c17, this approach works quite well. New cloudy versions often produce changes in the output file, so this approach will become less reliable as time goes on. Ultimately, it would be more reliable to use the internal workings of cloudy to produce what we want, and moving forward I will attempt to do that. As it stands, the tables are collated from the intrinsic line list, and the emergent line list is ignored. The last entry found in the emission line output is used for any particular emission line, to avoid conflicts with default continuum bins.

───────────────────────────────────────────────────────────────────
The Line List
───────────────────────────────────────────────────────────────────

Currently, the software reads the line list from the file called "linelist.c17" in the directory called "reference". The program ignores comment lines beginning with # and blank lines, but otherwise interprets each line as containing an emission line identifier that should be read from the cloudy output file. The identifier can be copied and pasted directly from the cloudy output file or from a line label file output during a cloudy run. The program knows to only look at the relevant number of characters, so anything after that on the line can be interpreted as a user comment. For example,

FeKa      1.78000A              total intensity of K-alpha line

tells the program to compile a flux table for the emission line identified by "FeKa      1.78000A", and the fact that it is the total intensity of the K-alpha line is just information meant for the user.
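That parsing rule can be sketched as follows, assuming (based on the example entry above, and only as an assumption) that the identifier occupies the first 18 characters, i.e., the 4-character label plus the wavelength field:

```shell
# Split a line-list entry into identifier and user comment. The 18-character
# identifier width is an assumption inferred from the example entry above.
entry='FeKa      1.78000A              total intensity of K-alpha line'
id=${entry:0:18}        # bash substring: the line identifier
comment=${entry:18}     # everything after that is a user comment
printf '[%s]\n' "$id"
```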


━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Explanations of Files
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

───────────────────────────────────────────────────────────────────
scripts
───────────────────────────────────────────────────────────────────
all files found under subdirectories of "scripts"

scripts/
├── analysis
├── cloudy
├── file
├── meta
├── operations
├── thor
├── util
└── xmgrace

─────────────
script subdirs
─────────────
"meta" contains scripts that are intended to be run by the user

"operations" contains scripts that do bulk operations

"file" contains scripts that operate on various files, such as tables stored in fortfiles or a cloudy output file. Generally, these scripts aren't intended to be run by the user directly, but are set up this way for troubleshooting. Most of these are obsolete now and will probably get removed soon, because they've been baked into my primary c++ program.

"cloudy" contains the input scripts used for our grid runs

"thor" contains scripts used to submit jobs to thor with the torque scheduling system

"util" contains scripts that run procedures used by other scripts and are not intended to be run by the user

"xmgrace" contains parameter scripts for plotting various quantities

"analysis" contains scripts that give information about emission lines and the continuum, but these are not updated for c17, yet.

"sed" contains various formats of the SEDs we've analyzed, and the .tab files can be used as cloudy input files for the "table sed" command

─────────────
process_gridoutputs.sh
─────────────
in scripts/meta

This is the main script for creating fort files from cloudy grid output files. Its use is described at the beginning of this document. Go to a directory that contains at least one subdirectory with a file called "mpi_grid.out"; the script will attempt to compile flux tables and save them as a set of fort files. It takes no command line arguments: just cd to the appropriate directory, then run the script from its location in the scripts/meta directory.
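The discovery step amounts to something like the following (the directory names here are a toy example, and the script's internals may differ):

```shell
# Build a toy tree and find every grid output beneath the current directory,
# which is the structure process_gridoutputs.sh expects to see when it runs.
mkdir -p demo/solar/4thdex/cldn_22.00
touch demo/solar/4thdex/cldn_22.00/mpi_grid.out
found=$(find demo -name mpi_grid.out)
printf '%s\n' "$found"
```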


─────────────
package_tables.sh
─────────────
in scripts/operations

This is the script that packages fortfiles for a particular cloudy grid output file. This isn't intended for the user to run directly, but sometimes I use it directly when troubleshooting. The syntax is:

package_tables.sh <dir>

where <dir> is a directory containing a cloudy grid output file called "mpi_grid.out".

The script will create a directory called "fortfiles" under which it will put "raw" fortfiles and "interpolated" fort files, after calling the interpolation c++ program from the "bin" directory, if it exists. The raw files have been sufficient since c17, because most of the convergence issues have been solved, it seems.


───────────────────────────────────────────────────────────────────
source files
───────────────────────────────────────────────────────────────────
all files found under "src". A standard GCC c++ compiler should be sufficient to compile all of these programs, although the build script looks for additional libraries. All of these programs have debugging output that is toggled by changing the "debug" booleans at the top of agn.hpp before compiling. Sorry, I know that's terrible, but it was quick and easy!

To build one of these programs, cd to the "src" directory and use the "build" script, e.g.,

> cd src
> ./build create_fort_files

This will compile create_fort_files.cpp and move the binary to the "bin" directory, where the various scripts look for binaries.

─────────────
agn.hpp
─────────────
The primary header for the project. This file defines containers and methods for storing cloudy grid results. The routine for reading the cloudy file, read_cloudy_grid(), is defined here, and it's a mess: the most efficient approach was to read the entire grid in one pass, collating lines as we go, but the result is ugly and I'm not sure when I'll have time to rewrite it. This will surely complicate troubleshooting.

─────────────
spectral_lines.hpp
─────────────
This header contains definitions and utilities for storing and operating on emission lines and flux tables. The bulk of the code run by "create_fort_files.cpp" is included in this file, besides reading the cloudy file itself.

─────────────
sed.hpp
─────────────
This header contains definitions and utilities for generating an SED, typically output to a .tab file for use in a cloudy run.

─────────────
create_fort_files.cpp
─────────────
The operational program for creating fort files from a cloudy grid output file. The syntax is:

./create_fort_files <cloudy output file> <line list file>

This reads the line list from the line list file, then compiles tables for each line from the cloudy output file. The fort files will be created in your current working directory. This program is rarely intended to be run on its own, and is called by the "scripts/operations/package_tables.sh" script, which in turn is called by the "scripts/meta/process_gridoutputs.sh" script.

─────────────
interpolation_fix.cpp
─────────────
This program runs a cubic spline-based smoothing operation along lines of constant hydrogen density over a flux table file. It returns another table with the fixed values. The syntax is:

interpolation_fix <input file> <output file>

─────────────
subtract_fortfiles.cpp
─────────────
This program takes two flux tables and writes a new table of their difference to standard output, which can be redirected to a file. The syntax is:

subtract_fortfiles <file1> <file2> <4 char header>

where the difference will be file1 - file2, and the new table's header will start with the 4 characters in the third command line argument.
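A hypothetical invocation (the file names and the "DIFF" tag below are examples, not files from this project), again shown as a dry run since the binary must be built first:

```shell
run() { printf '%s\n' "$*"; }   # dry-run helper: print, don't execute
# hypothetical table files; "DIFF" becomes the first 4 chars of the header
run ./bin/subtract_fortfiles fort_solar.dat fort_halfsolar.dat DIFF
# to keep the result, redirect standard output:
#   ./bin/subtract_fortfiles fort_solar.dat fort_halfsolar.dat DIFF > diff_table.dat
```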

─────────────
table_powerlaw_spline_sed.cpp
─────────────
Saves an SED in an ASCII table using several powerlaw regions connected with cubic splines, including empirically sampled points. The syntax is:

generate_spline_sed <sample table> <powerlaw coords table> <output filename>

<sample table> should be a table of samples in eV and νFν, e.g.:

0.001   1e26
10e1    1e45
...     ...

<powerlaw table> must include 6 coordinates, in this order, with high and low points relative to the energy axis:
    ir_high_point
    uv_low_point
    uv_high_point
    xray_low_point
    xray_high_point
    gamma_low_point

The slopes are hardcoded at either end of the continuum. This approach allows the user to define both the x and y breadth of each power law.

<output filename> is anything, such as "magdziarz1997.tab"
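Putting it together, a hypothetical invocation. The sample values are the ones shown above; the powerlaw coordinate values are placeholders for illustration, not a physically meaningful SED:

```shell
# Toy sample table of (eV, nuFnu) pairs, as described above:
cat > samples.tab <<'EOF'
0.001   1e26
10e1    1e45
EOF
# Powerlaw coordinate table with the six points listed above, in order
# (ir_high, uv_low, uv_high, xray_low, xray_high, gamma_low); the numbers
# here are placeholders only:
cat > powerlaw_coords.tab <<'EOF'
0.1     1e43
1.0     1e44
10      1e45
100     1e44
1000    1e43
10000   1e42
EOF
# then, where the binary has been built:
#   ./bin/generate_spline_sed samples.tab powerlaw_coords.tab magdziarz1997.tab
```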

─────────────
generate_powlaw_sed.cpp
─────────────
Generates a power law SED with the built-in parameters from 2015.

─────────────
convert_sed_ryd_to_ev.cpp
─────────────
OLD. A simple program for converting SEDs from rydbergs to eV. Cloudy can do this on its own, so this program is redundant.

─────────────
save_table2d_slice.cpp
─────────────
OLD. Used to extract slices along constant hden or phi from a flux table, mostly for examination during debugging. Lots of easy ways to do this in graphing software and what have you. This may not even work anymore and I'll probably remove it soon.