<div dir="ltr">I see. Thanks for the info.</div><div class="gmail_extra"><br><br><div class="gmail_quote">On Mon, May 5, 2014 at 3:04 PM, Burlen Loring <span dir="ltr"><<a href="mailto:burlen.loring@gmail.com" target="_blank">burlen.loring@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF">

If you care at all about performance, you'll want to switch to appended, unencoded mode. It is the fastest, most efficient mode and generally gives smaller files than the others.
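
A minimal sketch of what that looks like with VTK's writer classes, assuming the VTK 6-era API and a vtkUnstructuredGrid assembled elsewhere; SetDataModeToAppended() and EncodeAppendedDataOff() are the calls that select this mode:

```cpp
// Sketch: write a .vtu in appended, unencoded (raw) mode.
// Assumes the VTK 6-era API; the grid is built elsewhere.
#include <vtkSmartPointer.h>
#include <vtkUnstructuredGrid.h>
#include <vtkXMLUnstructuredGridWriter.h>

void WriteAppendedRaw(vtkUnstructuredGrid* grid, const char* fileName)
{
  vtkSmartPointer<vtkXMLUnstructuredGridWriter> writer =
      vtkSmartPointer<vtkXMLUnstructuredGridWriter>::New();
  writer->SetFileName(fileName);
  writer->SetInputData(grid);
  writer->SetDataModeToAppended();  // all arrays go into one <AppendedData> block
  writer->EncodeAppendedDataOff();  // raw bytes rather than base64 text
  writer->Write();
}
```

For comparison, SetDataModeToBinary() keeps the arrays inline but base64-encodes them (roughly a third larger), and SetDataModeToAscii() writes plain text; both cost more to write and to parse than raw appended data.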

On 05/05/2014 02:41 PM, Mohammad Mirzadeh wrote:
<div dir="ltr">Thanks for the reference. What exactly is the
benefit of having the data in appended mode? I guess right now
I'm using binary mode</div>
<div class="gmail_extra"><br>
<br>
<div class="gmail_quote">On Mon, May 5, 2014 at 11:46 AM, Burlen
Loring <span dir="ltr"><<a href="mailto:bloring@lbl.gov" target="_blank">bloring@lbl.gov</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF"> Ahh, right, I assumed
you were using VTK's classes to write the data, although
if you're writing them yourself, you'll still want to
emulate the "unencoded appended" format to get the best
performance. see SetModeTo* and EncodeAppendedData* funcs
(<a href="http://www.vtk.org/doc/nightly/html/classvtkXMLWriter.html" target="_blank">http://www.vtk.org/doc/nightly/html/classvtkXMLWriter.html</a>)<br>
<br>
There's also some essential info in the file format doc( <a href="http://www.vtk.org/VTK/img/file-formats.pdf" target="_blank">http://www.vtk.org/VTK/img/file-formats.pdf</a>).
Search for "Appended".<br>

One way to get a handle on what's happening with the various modes and options is to examine the files you can produce in ParaView itself. For example, open ParaView, create a sphere source (or some unstructured data, if you prefer), save the data from the File menu choosing one of the pvt* options, and compare the files produced in binary mode, appended mode, and so on.

On 05/05/2014 11:34 AM, Mohammad Mirzadeh wrote:

Burlen,

Thanks a lot for your comments.
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF"> This
is not an answer to your question but
there is a usage caveat w/ VTK XML files
that I want to make sure you're aware of.
When you use that format make sure you set
mode to "appended" and "encode" off. This
is the combination to produce binary files
which are going to be faster and very
likely smaller too. You probably already
know that, but just in case ...<br>
<br>
</div>
</blockquote>

I write the data itself as binary inside the .vtu file. Is this what you mean by appended mode? I cannot see any 'mode' keyword in the XML file, and the same goes for 'encode'; I don't have it in the XML file.
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF"> now
to get to your question:
<div><br>
<blockquote type="cite">1) Go with a
single parallel HDF5 file that
includes data for all time-steps. This
makes it all nice and portable except
there are two issues. i) It looks like
doing MPI-IO might not be as efficient
as separate POSIX IO, especially on
large number of processors. ii)
ParaView does not seem to be able to
read HDF5 files in parallel</blockquote>
>
> Comment: if I were you, I'd avoid putting all time steps in a single
> file, or any solution where files get too big. Once files occupy more
> than ~80% of a tape drive, you'll have a very hard time getting them on
> and off archive systems; see
> http://www.nersc.gov/users/data-and-file-systems/hpss/storing-and-retrieving-data/mistakes-to-avoid/
> My comment assumes that you actually use such systems, but you probably
> will need to if you generate large datasets at common HPC centers.

That's actually a very good point I was not thinking of! Thanks for sharing.
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF"> I've
seen some AMR codes get elaborate in their
HDF5 formats and run into serious
performance issues as a result. So my
comment here is that if you go with HDF5,
keep the format as simple as possible! and
of course file sizes small enough to be
archived ;-)<span><font color="#888888"><br>
<br>
Burlen</font></span>
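
One minimal sketch of what "simple" can mean here, under stated assumptions (a hypothetical dataset name, one file per time step so each file stays archivable, flat contiguous datasets, the serial HDF5 C API):

```cpp
// Sketch: a deliberately plain HDF5 layout -- one file per time step,
// flat datasets, no deep group hierarchies or exotic features.
// The "/pressure" dataset name is a hypothetical placeholder.
#include <hdf5.h>
#include <cstdio>
#include <vector>

void WriteStep(int step, const std::vector<double>& pressure)
{
  char name[64];
  std::snprintf(name, sizeof(name), "solution_%06d.h5", step);

  hid_t file  = H5Fcreate(name, H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
  hsize_t dims[1] = {pressure.size()};
  hid_t space = H5Screate_simple(1, dims, NULL);
  hid_t dset  = H5Dcreate2(file, "/pressure", H5T_NATIVE_DOUBLE, space,
                           H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

  H5Dwrite(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL, H5P_DEFAULT,
           pressure.data());

  H5Dclose(dset);
  H5Sclose(space);
  H5Fclose(file);
}
```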

> On 05/05/2014 10:48 AM, Mohammad Mirzadeh wrote:
<blockquote type="cite">
<div dir="ltr">They are represented
as unstructured grid. As a sample
run, a 100M grid point on 256 proc
produces almost 8.5G file. We
intent to push the limits close to
1B at most at this time with #
processors up to a few thousands.
However, it would be good to have
something that could scale to
larger problems as well</div>
<div class="gmail_extra"><br>
<br>
<div class="gmail_quote">On Sat,
May 3, 2014 at 1:28 AM, Stephen
Wornom <span dir="ltr"><<a href="mailto:stephen.wornom@inria.fr" target="_blank">stephen.wornom@inria.fr</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Mohammad
Mirzadeh wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div>
<div> Hi I am at a
critical point in
deciding I/O format for
my application. So far
my conclusion is to use
parallel HDF5 for
restart files as they
are quite flexible and
portable across systems.<br>
<br>
When it comes to
visualization, however,
i'm not quite sure. Up
until now I've been
using pvtu along with
vtu files and although
they generally work
fine, one easily gets in
trouble when running big
simulations on large
number of processors as
the number of files can
easily get out of
control and even
simplest utility
commands (e.g. ls) takes
minutes to finish!<br>
<br>
After many thinking I've
come to a point to
decide between two
strategies:<br>
<br>
1) Go with a single
parallel HDF5 file that
includes data for all
time-steps. This makes
it all nice and portable
except there are two
issues. i) It looks like
doing MPI-IO might not
be as efficient as
separate POSIX IO,
especially on large
number of processors.
ii) ParaView does not
seem to be able to read
HDF5 files in parallel<br>
<br>
2) Go with the same
pvtu+vtu strategy except
take precautions to
avoid file explosions. I
can think of two
strategies here: i) use
nested folders to
separate vtu files from
pvtu and also each time
step ii) create an IO
group communicator with
much less processors
that do the actual IO.<br>
<br>
My questions are 1) Is
the second approach
necessarily more
efficient than MPI-IO
used in HDF5? and 2) Is
there any plan to
support parallel IO for
HDF5 files in paraview?<br>
<br>
<br>
</div>
</div>
>>>
>>> Are your meshes structured or unstructured? How many vertices are in
>>> your meshes?
>>>
>>> Stephen
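
The I/O-group idea from strategy 2.ii above, as a minimal sketch under stated assumptions: the WRITERS_PER_FILE group size and the payload are hypothetical placeholders, and the actual per-group file writing is left as a comment.

```cpp
// Sketch: split COMM_WORLD into I/O groups; each group gathers onto its
// root rank, which writes one file, so a 4096-rank run can make 64 files
// instead of 4096.
#include <mpi.h>
#include <vector>

int main(int argc, char** argv)
{
  MPI_Init(&argc, &argv);
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  const int WRITERS_PER_FILE = 64;       // hypothetical group size
  const int color = rank / WRITERS_PER_FILE;

  MPI_Comm ioComm;
  MPI_Comm_split(MPI_COMM_WORLD, color, rank, &ioComm);

  int ioRank, ioSize;
  MPI_Comm_rank(ioComm, &ioRank);
  MPI_Comm_size(ioComm, &ioSize);

  std::vector<double> local(100, rank);  // placeholder payload
  int n = static_cast<int>(local.size());

  // Gather variable-length contributions onto each group's root.
  std::vector<int> counts(ioSize), displs(ioSize);
  MPI_Gather(&n, 1, MPI_INT, counts.data(), 1, MPI_INT, 0, ioComm);

  std::vector<double> gathered;
  if (ioRank == 0) {
    int total = 0;
    for (int i = 0; i < ioSize; ++i) { displs[i] = total; total += counts[i]; }
    gathered.resize(total);
  }
  MPI_Gatherv(local.data(), n, MPI_DOUBLE,
              gathered.data(), counts.data(), displs.data(), MPI_DOUBLE,
              0, ioComm);

  if (ioRank == 0) {
    // Write one .vtu (or HDF5) file named by "color" here.
  }

  MPI_Comm_free(&ioComm);
  MPI_Finalize();
  return 0;
}
```

Whether this beats collective MPI-IO into a single HDF5 file depends heavily on the file system and scale; it is worth benchmarking both before committing.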