<html>
  <head>
    <meta content="text/html; charset=ISO-8859-1"
      http-equiv="Content-Type">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <div class="moz-cite-prefix">Hi Andy,<br>
      <br>
      Do you have a strong reason for using the global scratch file
      system? If not, you may have better luck with hopper's dedicated
      Lustre scratch; the spec sheet quotes more than 2x the
      bandwidth[*]. In reality I'm sure it depends on how many users are
      hammering it at the time, but the Lustre scratch may help while
      you're working on parallelizing the netCDF readers.<br>
      <br>
      Burlen<br>
      <br>
      * <a
href="http://www.nersc.gov/users/computational-systems/hopper/file-storage-and-i-o/">http://www.nersc.gov/users/computational-systems/hopper/file-storage-and-i-o/</a><br>
      <br>
      On 02/06/2013 03:35 PM, Andy Bauer wrote:<br>
    </div>
    <blockquote
cite="mid:CAMaOp+FxA37RbdRPZ4-k5poHHxomaqhCiSK52v4Q3j7_t7hq6w@mail.gmail.com"
      type="cite">Hi Ken,<br>
      <br>
      I think it's more than just a file contention issue. On
      hopper@nersc I did set DVS_MAXNODES to 14 and that helped a lot.
      Even without it set I was able to run with 480 processes
      accessing the same data file for the 17*768*1152 data set with
      324 time steps, but with the "bad" one (768*1152 with 9855 time
      steps) I had problems with just 24 processes.<br>
      <br>
      I have some things I want to try out, but I think you're right
      that using a parallel netCDF library should help a lot, if it
      doesn't cause conflicts.<br>
      <br>
      Thanks,<br>
      Andy<br>
      <br>
      <div class="gmail_quote">
        On Wed, Feb 6, 2013 at 5:20 PM, Moreland, Kenneth <span
          dir="ltr">&lt;<a moz-do-not-send="true"
            href="mailto:kmorel@sandia.gov" target="_blank">kmorel@sandia.gov</a>&gt;</span>
        wrote:<br>
        <blockquote class="gmail_quote" style="margin:0 0 0
          .8ex;border-left:1px #ccc solid;padding-left:1ex">
          <div
style="font-size:14px;font-family:Calibri,sans-serif;word-wrap:break-word">
            <div>
              <div>
                <div>This does not surprise me. The current version of
                  the netCDF reader only uses the basic interface for
                  accessing files, which is essentially a serial
                  interface. You are probably getting a lot of file
                  request contention.</div>
                <div><br>
                </div>
                <div>At the time I wrote the netCDF reader, parallel
                  versions were just coming online. I think it would be
                  relatively straightforward to update the reader to use
                  collective parallel calls from a parallel netCDF
                  library. Unfortunately, I have lost track of the
                  status of the parallel netCDF libraries and file
                  formats. Last I looked, there were actually two
                  parallel netCDF libraries and formats. One version
                  directly added collective parallel calls to the
                  library. The other changed the format to use HDF5
                  under the covers and uses the parallel calls therein.
                  The two libraries use different file formats, and I
                  don't think they are compatible with each other.
                  Also, it might be the case for one or both libraries
                  that you cannot read the data in parallel if it was
                  not written in parallel or was written with an older
                  version of netCDF.</div>
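                <div><br>
                </div>
                <div>For concreteness, here is a rough sketch
                  (illustrative only, not code from the current reader)
                  of what a collective read of one time step of pr
                  through the netCDF-4/HDF5 parallel interface could
                  look like; the other library, PnetCDF, is analogous
                  but uses the ncmpi_* calls and the classic file
                  format:</div>
                <pre>
/* A minimal sketch, not code from the actual reader: a collective read of
 * one time step of pr(time, lat, lon) through the netCDF-4/HDF5 parallel
 * interface.  It assumes a netCDF build with parallel I/O enabled; error
 * checking is omitted for brevity. */
#include &lt;mpi.h&gt;
#include &lt;netcdf.h&gt;
#include &lt;netcdf_par.h&gt;
#include &lt;stdlib.h&gt;

int main(int argc, char **argv)
{
  MPI_Init(&amp;argc, &amp;argv);
  int rank, nprocs;
  MPI_Comm_rank(MPI_COMM_WORLD, &amp;rank);
  MPI_Comm_size(MPI_COMM_WORLD, &amp;nprocs);

  /* All ranks open the file together through MPI-IO instead of each
     process issuing independent reads against the same file. */
  int ncid, varid;
  nc_open_par("pr.nc", NC_NOWRITE | NC_MPIIO,
              MPI_COMM_WORLD, MPI_INFO_NULL, &amp;ncid);
  nc_inq_varid(ncid, "pr", &amp;varid);
  nc_var_par_access(ncid, varid, NC_COLLECTIVE);

  /* Split the 768 latitude rows across ranks; each rank reads its own
     contiguous slab of time step 0 in a single collective call. */
  size_t nlat = 768, nlon = 1152;
  size_t rows = nlat / nprocs;
  size_t start[3] = { 0, rank * rows, 0 };
  size_t count[3] = { 1,
                      (rank == nprocs - 1) ? nlat - rank * rows : rows,
                      nlon };
  float *slab = malloc(count[1] * nlon * sizeof *slab);
  nc_get_vara_float(ncid, varid, start, count, slab);

  nc_close(ncid);
  free(slab);
  MPI_Finalize();
  return 0;
}
                </pre>
                <div>The gist is that every rank shares one MPI-IO file
                  handle and issues a single collective hyperslab read,
                  rather than many processes making independent
                  requests against the same file.</div>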
                <div><br>
                </div>
                <div>-Ken</div>
                <div>
                  <div><br>
                  </div>
                </div>
              </div>
            </div>
            <span>
              <div style="border-right:medium
                none;padding-right:0in;padding-left:0in;padding-top:3pt;text-align:left;font-size:11pt;border-bottom:medium
                none;font-family:Calibri;border-top:#b5c4df 1pt
                solid;padding-bottom:0in;border-left:medium none">
                <span style="font-weight:bold">From: </span>Andy Bauer
                &lt;<a moz-do-not-send="true"
                  href="mailto:andy.bauer@kitware.com" target="_blank">andy.bauer@kitware.com</a>&gt;<br>
                <span style="font-weight:bold">Date: </span>Wednesday,
                February 6, 2013 10:38 AM<br>
                <span style="font-weight:bold">To: </span>"<a
                  moz-do-not-send="true"
                  href="mailto:paraview@paraview.org" target="_blank">paraview@paraview.org</a>"
                &lt;<a moz-do-not-send="true"
                  href="mailto:paraview@paraview.org" target="_blank">paraview@paraview.org</a>&gt;,
                Kenneth Moreland &lt;<a moz-do-not-send="true"
                  href="mailto:kmorel@sandia.gov" target="_blank">kmorel@sandia.gov</a>&gt;<br>
                <span style="font-weight:bold">Subject: </span>[EXTERNAL]
                vtkNetCDFCFReader parallel performance<br>
              </div>
              <div>
                <div class="h5">
                  <div><br>
                  </div>
                  <blockquote style="BORDER-LEFT:#b5c4df 5
                    solid;PADDING:0 0 0 5;MARGIN:0 0 0 5">
                    <div>
                      <div>Hi Ken,<br>
                        <br>
                        I'm having some performance issues with a fairly
                        large NetCDF file using the vtkNetCDFCFReader.
                        The dimensions of it are 768 lat, 1152 lon and
                        9855 time steps (no elevation dimension). It has
                        one float variable with these dimensions --
                        pr(time, lat, lon). This results in a file
                        around 33 GB. I'm running on hopper with small
                        numbers of processes (at most 24, which is the
                        number of cores per node), and the run time
                        seems to increase dramatically as I add more
                        processes. The tests I did read in the first 2
                        time steps and did nothing else. The results are
                        below, though they weren't gathered too
                        rigorously:<br>
                        <br>
                        numprocs -- time<br>
                        1&nbsp; -- 1:22<br>
                        2 -- 1:52<br>
                        4 -- 7:52<br>
                        8 -- 5:34<br>
                        16 -- 10:46<br>
                        22 -- 10:37<br>
                        24 -- didn't complete on hopper's "regular"
                        nodes with 32 GB of memory, but ran in a
                        reasonable amount of time on hopper's big-memory
                        nodes with 64 GB of memory.<br>
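                        <br>
                        (A rough back-of-the-envelope check, assuming
                        4-byte floats: one time step of pr is 768 * 1152
                        * 4 bytes, about 3.4 MB, and 9855 time steps
                        give roughly 33 GB, which matches the file size;
                        so the two time steps these tests actually read
                        amount to only about 7 MB of data in total.)<br>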
                        <br>
                        I have the data in a reasonable place on hopper.
                        I'm still playing around with settings (things
                        get a bit better if I set DVS_MAXNODES --
                        <a moz-do-not-send="true"
href="http://www.nersc.gov/users/computational-systems/hopper/performance-and-optimization/hopperdvs/"
                          target="_blank">
http://www.nersc.gov/users/computational-systems/hopper/performance-and-optimization/hopperdvs/</a>)
                        but this seems odd, since I'm not having any
                        problems like this on a data set with spatial
                        dimensions of 17*768*1152 and 324 time
                        steps.<br>
                        <br>
                        Any quick thoughts on this? I'm still
                        investigating but was hoping you could point out
                        if I'm doing anything stupid.<br>
                        <br>
                        Thanks,<br>
                        Andy<br>
                        <br>
                        <br>
                      </div>
                    </div>
                  </blockquote>
                </div>
              </div>
            </span>
          </div>
        </blockquote>
      </div>
      <br>
      <br>
    </blockquote>
    <br>
  </body>
</html>