It seems that you're writing parallel geometry in a serial way. As far as I know, a partitioned grid must be stored as a spatial collection in XDMF format. In other words, each processor stores its own grid portion (a pair of XDMF/HDF5 files) and rank 0, for instance, writes an XDMF file describing how to "glue" the grid pieces together using a spatial collection clause.
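Just to illustrate the idea, the master file written by rank 0 could look roughly like the sketch below. This is not your actual layout: the piece names, the HDF5 file names (piece_0.h5, piece_1.h5), the dataset path /scalar, the attribute name and the extents/origins are all made-up placeholders for a structured grid split into pieces.

<?xml version="1.0" ?>
<Xdmf Version="2.0">
  <Domain>
    <!-- top-level grid that "glues" the per-process pieces together -->
    <Grid Name="Mesh" GridType="Collection" CollectionType="Spatial">

      <!-- piece written by rank 0 (extents, file name and attribute name are placeholders) -->
      <Grid Name="Piece0" GridType="Uniform">
        <Topology TopologyType="3DCoRectMesh" Dimensions="300 800 200"/>
        <Geometry GeometryType="ORIGIN_DXDYDZ">
          <DataItem Name="Origin" Format="XML" Dimensions="3">0.0 0.0 0.0</DataItem>
          <DataItem Name="Spacing" Format="XML" Dimensions="3">1.0 1.0 1.0</DataItem>
        </Geometry>
        <Attribute Name="scalar" Center="Node">
          <DataItem Format="HDF" Dimensions="300 800 200">piece_0.h5:/scalar</DataItem>
        </Attribute>
      </Grid>

      <!-- piece written by rank 1; each rank references only its own HDF5 file -->
      <Grid Name="Piece1" GridType="Uniform">
        <Topology TopologyType="3DCoRectMesh" Dimensions="300 800 200"/>
        <Geometry GeometryType="ORIGIN_DXDYDZ">
          <DataItem Name="Origin" Format="XML" Dimensions="3">0.0 0.0 200.0</DataItem>
          <DataItem Name="Spacing" Format="XML" Dimensions="3">1.0 1.0 1.0</DataItem>
        </Geometry>
        <Attribute Name="scalar" Center="Node">
          <DataItem Format="HDF" Dimensions="300 800 200">piece_1.h5:/scalar</DataItem>
        </Attribute>
      </Grid>

      <!-- ...and so on for the remaining ranks -->

    </Grid>
  </Domain>
</Xdmf>

The heavy data stays in each rank's own HDF5 file; rank 0 only writes this light XML description, so no single process ever has to hold the whole grid while writing.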
[]'s

Renato.

2009/9/2 weaponfire2005 <weaponfire2005@163.com>:
Hi:
Well, I've posted my .xmf file (see the attachment). In fact, the structure of the .xmf file is simple: it only contains a series of <Grid> elements, which I write out in a "for()" loop. The raw data is in HDF5 format. Is something wrong?
Thanks for the help.
2009-09-01, "Berk Geveci" <berk.geveci@kitware.com>:
>Can you post an example dataset? Just the xmf file would be sufficient.
>
>-berk
>
> Re:Re: [Paraview] Paraview store all data in every server node?
>> Hi:
>> I used vtkXdmfReader to read my data (ParaView version 3.4.0), and I
>> also found some information about my problem (XDMF bad memory use) in
>> the mailing list archives, but I didn't find a final resolution for it.
>> My problem is the following:
>> run: mpirun -np 4 ./pvserver
>> after reading the data and rendering an isosurface, each
>> node (process) consumes 2 GB of memory;
>> and run: mpirun -np 16 ./pvserver
>> after reading the data and rendering an isosurface, each
>> node (process) also consumes 2 GB of memory.
>> Because of this I can't visualize a large-scale dataset: if a
>> dataset can be visualized by one node, it can also be visualized by 16
>> nodes, but if it can't be visualized by one node, it can't be visualized
>> by 16 nodes (or more) either.
>> Maybe I should try ParaView 3.6.1. I will cry if the problem still exists
>> in version 3.6.1...
>>
>> 2009-08-30, "Andy Bauer" <andy.bauer@kitware.com>:
>>
>> The problem may be the reader that you're using. Some of the readers in VTK
>> are not parallel, so even if you run multiple processes, each process will
>> still try to load the entire dataset in that case.
>>
>> Andy
>>
>> 2009/8/30 weaponfire2005 <weaponfire2005@163.com>:
>>>
>>> Hi all:
>>> First, thanks for Dave Demarle's help.
>>> I use "mpirun -np 4 ./pvserver" to launch a server with four
>>> nodes, visualizing an 800*800*300 grid. It seems that each process consumes
>>> too much memory. So I extended the number of nodes from 4 to 8 using "mpirun
>>> -np 8 ./pvserver", but each process on the 8-node server consumes the same
>>> amount of memory as on the 4-node server. The situation with 16 processes
>>> (16 nodes) is the same as above.
>>> I thought that as the number of processes grows, each process on the
>>> server would hold a smaller share of the raw data, so the amount of memory
>>> each process consumes would decline. It turned out I was wrong. Could someone
>>> tell me the reason?
>>> Thanks for your help:)
>>> Pan Wang
>>>
>>>
--
Renato N. Elias
===================================
High Performance Computing Center (NACAD)
Federal University of Rio de Janeiro (UFRJ)
Rio de Janeiro, Brazil
Sent from Rio De Janeiro, RJ, Brazil