SPCTH

What we think we know about SPCTH (Spy) file formats

  • Brian Wylie, David Karelitz, Andy Cedilnik
  • These are updates for version 102

Overview:

SPCTH is the format written out during CTH simulation runs for the purpose of doing visualization at runtime. SPCTH can store rectilinear grids with regions of adaptive resolution (AMR), and it supports the storage of temporally varying simulation results (i.e. it stores many time steps). The topological information is implicit (as expected), and the format supports compression through a custom run-length encoding. Obviously the run-length encoding scheme needs to be documented before a reader can read in the scalar field data; hopefully that is coming soon. The general structure of an SPCTH file consists of a file header, group headers, dump headers, tracers, and then data blocks.

Issues:

All values that are not strings are stored in big-endian byte order, so any reader will need to take this into account. The information below was acquired by looking at some existing source code that reads spcth files for spyplot, so some incorrect interpretation may have occurred.
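
To make the sketches below concrete, here is a minimal set of big-endian read helpers in Python; the read_* names are our own and do not come from the spyplot source. The later sketches reuse these helpers.

 import struct
 
 # All non-string values in an SPCTH file are big-endian ('>' in struct).
 def read_int(f):
     return struct.unpack('>i', f.read(4))[0]
 
 def read_double(f):
     return struct.unpack('>d', f.read(8))[0]
 
 def read_ints(f, n):
     return list(struct.unpack('>%di' % n, f.read(4 * n)))
 
 def read_doubles(f, n):
     return list(struct.unpack('>%dd' % n, f.read(8 * n)))
 
 def read_string(f, n):
     # Fixed-width field; keep everything up to the first NUL terminator.
     return f.read(n).split(b'\0', 1)[0].decode('ascii', 'replace')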

Nomenclature:

Dump = time step
Cycles = computational cycles (indirectly a measure of time)
Group header = information about all the dumps
Dump header = information about a particular ‘dump’/time step.

File Header:

The file header contains geometric information, variable names, and some other miscellaneous information.

(#) indicates the byte offset into the file

(0-7) (char*8) The first 8 bytes are the magic string “spydata”, with \0 as the string terminator in the eighth byte.
(8-135) (char*128) The next 128 bytes are the file title
(136) (int) File version
if ( File version >= 102 ) {
  (140) (int) Size of offsets (default is 32, but version 102 adds support for 64)
}
(140) (int) Compression flag (for version >= 102 files, this and every following offset is shifted by 4 bytes)
(144) (int) Processor id
(148) (int) Number of processors
(152) (int) IGM (I have no idea)
(156) (int) Ndim (Assuming dimensions of the dataset)
(160) (int) Nmat (Assuming number of materials)
(164) (int) MaxMat (Assuming maximum number of materials)
(168) (double*3) Gmin (Gmin and Gmax are the ‘bounds’ of the data)
(192) (double*3) Gmax (Gmin and Gmax are the ‘bounds’ of the data)
(216) (int) n_blocks (Assuming number of blocks)
(220) (int) max_level (Assuming the max level of adaption?)
(224) (int) num_cells (Number of cell fields)
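
As a concrete illustration of the layout above, a minimal Python sketch of reading the fixed header, using the read_* helpers from the Issues section. The dictionary keys are our own labels, and because it reads sequentially, the version-102 offset shift is handled automatically.

 def read_file_header(f):
     # Parse the fixed-size portion of an SPCTH file header.
     hdr = {}
     magic = f.read(8)                          # bytes 0-7: "spydata" + NUL
     assert magic.startswith(b'spydata')
     hdr['title'] = read_string(f, 128)         # bytes 8-135
     hdr['version'] = read_int(f)               # byte 136
     if hdr['version'] >= 102:
         hdr['offset_size'] = read_int(f)       # 32 or 64
     hdr['compression'] = read_int(f)
     hdr['proc_id'] = read_int(f)
     hdr['num_procs'] = read_int(f)
     hdr['igm'] = read_int(f)
     hdr['ndim'] = read_int(f)
     hdr['nmat'] = read_int(f)
     hdr['max_mat'] = read_int(f)
     hdr['gmin'] = read_doubles(f, 3)           # bounds of the data
     hdr['gmax'] = read_doubles(f, 3)
     hdr['n_blocks'] = read_int(f)
     hdr['max_level'] = read_int(f)
     hdr['num_cell_fields'] = read_int(f)
     return hdr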

Okay, from here on the byte offsets will differ depending on the number of cell fields, so the remaining file description will not include byte offsets.

For the number_of_cell_fields {
 (char*30) Field id
 (char*80) Field comment
 (int) Field_int (only for file_version > 101)
}
(int) Number of material fields
For the number_of_mat_fields {
 (char*30) Mat_Field id
 (char*80) Mat_Field comment
 (int) Mat_Field_int (only for file_version > 101)
}
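
A sketch of reading either descriptor list, with the same helpers as above; the function name is hypothetical:

 def read_field_descriptors(f, count, version):
     # Each descriptor is a 30-byte id, an 80-byte comment, and
     # (for file versions above 101) one extra int.
     fields = []
     for _ in range(count):
         field_id = read_string(f, 30)
         comment = read_string(f, 80)
         field_int = read_int(f) if version > 101 else None
         fields.append((field_id, comment, field_int))
     return fields

Call it once with num_cells for the cell fields, then read the material field count and call it again.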

Group Headers:

The group header contains some metadata about all of the ‘dumps’, including the time, cycle, and file pointer into the data block for each time step. Note: MAX_DUMPS = 100.

(double) File Offset (why ‘double’??)  (step 1)
 Now jump to file offset
 (int) num_dumps (Number of Dumps)
 (int* MAX_DUMPS) DumpCycle
 (double* MAX_DUMPS) DumpTime
 if ( File version >= 102 ) {
   (double* MAX_DUMPS) DumpDT (no idea what this is)
 }
 (double* MAX_DUMPS) DumpOffset
Okay if (num_dumps == MAX_DUMPS) then loop back to step 1
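
A sketch of following this chain; we are assuming that the next chain offset (step 1) sits immediately after the DumpOffset array of a full group, which is our reading of “loop back to step 1”:

 MAX_DUMPS = 100
 
 def read_group_headers(f, version):
     # Collect cycle/time/offset entries from the chain of group headers.
     # Caller positions f at the first File Offset (just past the file header).
     cycles, times, offsets = [], [], []
     while True:
         group_offset = read_double(f)          # file offset stored as a double
         f.seek(int(group_offset))
         num_dumps = read_int(f)
         cyc = read_ints(f, MAX_DUMPS)
         tim = read_doubles(f, MAX_DUMPS)
         if version >= 102:
             read_doubles(f, MAX_DUMPS)         # DumpDT, purpose unknown
         off = read_doubles(f, MAX_DUMPS)
         cycles += cyc[:num_dumps]
         times += tim[:num_dumps]
         offsets += off[:num_dumps]
         if num_dumps < MAX_DUMPS:              # only a full group chains onward
             break
     return cycles, times, offsets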


Note: You have to read in a certain number of bytes even if you aren’t interested in the data because the ‘explicit’ offsets are not stored anywhere.

So for each dump, loop around and read these four things (even if you don’t need them):

For the number of dumps 
{
 Dump Headers (you have a file index for this (dump offset from above))
 Tracers 
 Histograms
 Data Blocks
}
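
A skeleton of that loop; each read_* function is sketched in the sections that follow:

 def read_dumps(f, dump_offsets, version):
     # Per-dump read order: header, tracers, histograms, data blocks.
     for offset in dump_offsets:
         f.seek(int(offset))
         read_dump_header(f)
         read_tracers(f)
         read_histograms(f)
         read_data_blocks(f)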

Dump Headers:

The dump header contains information specific to each dump.

For the number of dumps {
 Jump to dump offset
 (int) num_vars (Number of saved variables)
 (int * num_vars) Saved variables
 (double * num_vars) Saved variables offsets
}
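
A minimal sketch of one iteration of that loop, again using the helpers from above:

 def read_dump_header(f):
     # One dump header: the saved-variable ids and their data offsets.
     num_vars = read_int(f)
     var_ids = read_ints(f, num_vars)
     var_offsets = read_doubles(f, num_vars)    # offsets stored as doubles
     return var_ids, var_offsets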

Tracers:

 (int) num_tracers;
 if num_tracers > 0 {
   For 7 {
      (int) block_size;
      (char * block_size)block;
    }
 }
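
Since the tracer block contents are undocumented, all a reader can do is skip over them; a sketch:

 def read_tracers(f):
     # Seven length-prefixed blocks, present only if there are tracers.
     num_tracers = read_int(f)
     if num_tracers > 0:
         for _ in range(7):
             block_size = read_int(f)
             f.read(block_size)                 # contents unknown, skip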

Histograms:

 (int)num_indicators;
 if num_indicators > 0 {
   (int)some_number;
   For num_indicators {
     (int)some_number;
     (int)num_bins;
     if num_bins > 0 {
       (int)some_size;
       (char * some_size)some_data;
     }
   }
 }
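
Likewise, a skip-only sketch for the histograms; the unnamed ints above are read and discarded because their meaning is unknown:

 def read_histograms(f):
     num_indicators = read_int(f)
     if num_indicators > 0:
         read_int(f)                            # some_number, meaning unknown
         for _ in range(num_indicators):
             read_int(f)                        # some_number, meaning unknown
             num_bins = read_int(f)
             if num_bins > 0:
                 some_size = read_int(f)
                 f.read(some_size)              # bin data, format unknown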

Data Blocks:

The blocks actually hold the data (both geometric and scalar).

(int) num_blocks (number of blocks for this dump)
For num_blocks {
 (int) Nx (x dimension of the block)
 (int) Ny (y dimension of the block)
 (int) Nz (z dimension of the block)
 (int) allocated (is the block allocated)
 (int) active (is the block active)
 (int) level (which refinement level?)
}
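
A sketch of reading these per-dump block descriptors (this covers only the descriptors above, not the field data itself):

 def read_data_blocks(f):
     num_blocks = read_int(f)
     blocks = []
     for _ in range(num_blocks):
         blocks.append({
             'nx': read_int(f), 'ny': read_int(f), 'nz': read_int(f),
             'allocated': read_int(f), 'active': read_int(f),
             'level': read_int(f),
         })
     return blocks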


(Or like this, per Andy, 16:02, 17 Jan 2006 (EST)):

For the number of dumps {
 For the number_of_saved_vars {
   Jump to saved variable offset
   (int) number_of_bytes (length of the array)
   (number_of_bytes) data
 }
}
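
Under Andy’s reading, each saved variable is a length-prefixed byte array located at its stored offset; a sketch that returns the raw bytes, since the run-length encoding is not yet documented:

 def read_saved_variable(f, var_offset):
     f.seek(int(var_offset))
     num_bytes = read_int(f)                    # length of the array
     return f.read(num_bytes)                   # still RLE-compressed raw bytes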