Hi Ken,<br><br>I'm having some performance issues with a fairly large NetCDF file using the vtkNetCDFCFReader. Its dimensions are 768 lat, 1152 lon, and 9855 time steps (no elevation dimension). It has one float variable with these dimensions -- pr(time, lat, lon). This results in a file around 33 GB. I'm running on hopper with small numbers of processes (at most 24, which is the number of cores per node), and the run time seems to increase dramatically as I add more processes. The tests I did read in the first 2 time steps and did nothing else. The results are below but weren't gathered too rigorously:<br>
<br>numprocs -- time<br>1 -- 1:22<br>2 -- 1:52<br>4 -- 7:52<br>8 -- 5:34<br>16 -- 10:46<br>22 -- 10:37<br>24 -- didn't complete on hopper's "regular" nodes with 32 GB of memory, but I was able to run it in a reasonable amount of time on hopper's big-memory nodes with 64 GB of memory.<br>
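<br>As a sanity check on the quoted file size (a sketch assuming one uncompressed 4-byte float per cell and negligible metadata, which may not exactly match the file on disk):<br>

```python
# Sanity check on the ~33 GB figure: one float variable pr(time, lat, lon),
# assuming 4 bytes per value, no compression, and negligible metadata.
nlat, nlon, ntime = 768, 1152, 9855
nbytes = nlat * nlon * ntime * 4
print(f"{nbytes / 2**30:.1f} GiB")  # 32.5 GiB, i.e. "around 33 GB"
```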
<br>I have the data in a reasonable place on hopper. I'm still playing around with settings (things get a bit better if I set DVS_MAXNODES -- <a href="http://www.nersc.gov/users/computational-systems/hopper/performance-and-optimization/hopperdvs/">http://www.nersc.gov/users/computational-systems/hopper/performance-and-optimization/hopperdvs/</a>) but this seems a bit weird as I'm not having any problems like this on a data set that has spatial dimensions of 17*768*1152 with 324 time steps.<br>
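<br>One difference between the two datasets worth noting (again assuming 4-byte floats): the problematic file is many small time steps, while the well-behaved one is far fewer, much larger time steps, so per-step read sizes differ by more than an order of magnitude:<br>

```python
# Per-time-step read size for the slow dataset (768 x 1152, 9855 steps)
# vs. the dataset that reads fine (17 x 768 x 1152, 324 steps),
# assuming 4 bytes per value.
step_slow = 768 * 1152 * 4
step_ok = 17 * 768 * 1152 * 4
print(f"slow dataset: {step_slow / 2**20:.1f} MiB/step over 9855 steps")
print(f"ok dataset:   {step_ok / 2**20:.1f} MiB/step over 324 steps")
```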
<br>Any quick thoughts on this? I'm still investigating but was hoping you could point out if I'm doing anything stupid.<br><br>Thanks,<br>Andy<br><br><br>