<div>Hi Berk,</div><div><br></div><div>I thought D3 wouldn't repartition the data, but that was my mistake. I ran some tests here and D3 worked properly. However, I expected that it would be possible to remove the internal surfaces produced by D3 with the "Clean to Grid" filter. I also tried playing with the "Boundary Mode" options, with no success. The internal faces are still there.</div>
<div><br></div><div>Some further comments:</div><div><br></div><div>I used an Xdmf reader on a temporal collection (12 time steps) of spatial collections (4 partitions) of a small unstructured mesh. The reader loads everything as a single multiblock. It will surely blow up the memory for a larger number of time steps and/or spatial partitions (I'm copying this message to the Xdmf list).</div>
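For concreteness, the layout described above can be sketched as an Xdmf file along these lines (the grid names, time values, and partition contents below are made-up placeholders, not my actual dataset):

```xml
<!-- Hypothetical sketch: a temporal collection of spatial collections,
     one spatial collection per time step, one Uniform grid per partition. -->
<Xdmf Version="2.0">
  <Domain>
    <Grid Name="TimeSeries" GridType="Collection" CollectionType="Temporal">
      <Grid Name="Step0" GridType="Collection" CollectionType="Spatial">
        <Time Value="0.0"/>
        <Grid Name="Partition0" GridType="Uniform">
          <!-- Topology / Geometry / Attribute entries for partition 0 -->
        </Grid>
        <!-- ... Partition1 through Partition3 ... -->
      </Grid>
      <!-- ... Step1 through Step11, each a Spatial collection ... -->
    </Grid>
  </Domain>
</Xdmf>
```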
<div><br></div><div>ParaView 3.10.0-RC1 does not seem robust (yet) :-( </div><div><br></div><div>It crashes all of a sudden and stops responding during random actions (I'll try to list some scenarios and make my dataset available to reproduce the errors).</div>
<div><br></div><div>The client machine is a Windows 7 x64 box connected to an offscreen MPI session on an Altix-ICE server (Linux x86_64). </div><div><br></div><div>[]'s</div><div><br></div><div>Renato.</div><br><div class="gmail_quote">
On Wed, Mar 2, 2011 at 12:58 PM, Berk Geveci <span dir="ltr"><<a href="mailto:berk.geveci@kitware.com">berk.geveci@kitware.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
I am not sure that I follow. The output of D3 should be a dataset<br>
re-partitioned to be load balanced whether the input is distributed or<br>
not. Are you saying that the output of D3 is different based on<br>
whether the input is distributed?<br>
<br>
As expected, D3 will probably produce a different partitioning than<br>
the input. But that shouldn't be a problem, right?<br>
<div><div></div><div class="h5"><br>
On Wed, Mar 2, 2011 at 7:54 AM, Renato Elias <<a href="mailto:rnelias@gmail.com">rnelias@gmail.com</a>> wrote:<br>
> Hi Berk,<br>
> I already did such test. It really works but the dataset must be serial and<br>
> loaded in a parallel session. In this case, D3 will take care of the data<br>
> distribution, load balance and ghost information. However, if the dataset<br>
> read is already partitioned, D3 only creates a new partition and leaves the<br>
> original distribution (in my case, performed by Metis) untouched. D3 is not<br>
> able to (re)partition unstructured grids (at least in my tests... or I'm<br>
> doing something wrong).<br>
> []'s<br>
> Renato.<br>
><br>
> On Tue, Mar 1, 2011 at 1:09 PM, Berk Geveci <<a href="mailto:berk.geveci@kitware.com">berk.geveci@kitware.com</a>> wrote:<br>
>><br>
>> Great. Ping me in 2-3 months - we should have started making changes<br>
>> to the ghost level stuff by then. Until then, you should be able to<br>
>> use D3 to redistribute data and generate (cell) ghost levels as<br>
>> needed. So the following should work<br>
>><br>
>> reader -> D3 -> extract surface<br>
>><br>
>> -berk<br>
>><br>
>> On Tue, Mar 1, 2011 at 10:57 AM, Renato Elias <<a href="mailto:rnelias@gmail.com">rnelias@gmail.com</a>> wrote:<br>
>> >> Berk: Do you have the ability to mark a node as "owned" by one<br>
>> >> partition and as "ghost" on other partitions?<br>
>> > Yes! We classify processes as masters and slaves according to their rank<br>
>> > numbers. After that, we can assign each shared node to a master (which<br>
>> > will take care of shared computations) and tell the slave process that<br>
>> > the node is shared with a master (so the slave process will treat it as<br>
>> > a "ghost" for computations).<br>
>> > The ideas we used in our parallel solver were taken from the following<br>
>> > article:<br>
>> > Karanam, A. K., Jansen, K. E. and Whiting, C. H., "Geometry Based<br>
>> > Pre-processor for Parallel Fluid Dynamic Simulations Using a<br>
>> > Hierarchical Basis", Engineering with Computers (24):17-26, 2008.<br>
>> > This article was made available by the author<br>
>> > at <a href="http://www.scorec.rpi.edu/REPORTS/2007-3.pdf" target="_blank">http://www.scorec.rpi.edu/REPORTS/2007-3.pdf</a> (Figures 7 to 9<br>
>> > explain the communication method between processes).<br>
>> > Regards<br>
>> > Renato.<br>
>> ><br>
>> > On Tue, Mar 1, 2011 at 12:24 PM, Berk Geveci <<a href="mailto:berk.geveci@kitware.com">berk.geveci@kitware.com</a>><br>
>> > wrote:<br>
>> >><br>
>> >> Hi Renato,<br>
>> >><br>
>> >> > I think I'm missing something.... you said cells only?!<br>
>> >> > If I understood this subject correctly, a cell should be considered<br>
>> >> > ghost if it's held by more than one partition/process, isn't it?! In<br>
>> >> > this case, there'll be an overlapping layer of elements. The problem is<br>
>> >> > that my MPI solver does not make use of this overlapping layer of<br>
>> >> > cells/elements.<br>
>> >><br>
>> >> Yep. You understood correctly. Ghost cells are very common for finite<br>
>> >> difference calculations but not as common for finite elements.<br>
>> >><br>
>> >> > It only has nodes/points that are shared by processes. This explains<br>
>> >> > why I asked about a ghost node ("shared node" would be more appropriate<br>
>> >> > to describe such a node).<br>
>> >> > Can I consider a cell as ghost if it touches the parallel interface<br>
>> >> > (without overlapping)? Would it work?<br>
>> >><br>
>> >> Nope. Then you'd start seeing gaps. The right thing to do is for<br>
>> >> ParaView to support ghost points (nodes) better. However, this is<br>
>> >> non-trivial in some cases. For removing internal interfaces, it is<br>
>> >> sufficient to mark points as ghosts. However, for accurately<br>
>> >> performing statistics, you need to make sure that you count all points<br>
>> >> only once, which requires assigning ghost nodes to processes. So a<br>
>> >> replicated node would be marked as ghost (a better word is shared) and<br>
>> >> also owned by a particular process. We are going to improve VTK's<br>
>> >> ghost level support. However, it will<br>
>> >> be up to the simulation to produce the right output. Do you have the<br>
>> >> ability to mark a node as "owned" by one partition and as "ghost" on<br>
>> >> other partitions?<br>
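[The ownership scheme described in the paragraph above can be sketched in plain Python. The partition layout, node IDs, and values below are made-up illustration data, and "owner = lowest rank holding the node" is just one possible assignment rule:]

```python
# Hypothetical sketch: a node replicated on several partitions is flagged
# "ghost" everywhere except on its owner (here: the lowest-rank partition
# holding it), so a global statistic counts each node exactly once.

# partition rank -> {node_id: nodal value}; nodes 2 and 3 are shared
partitions = {
    0: {0: 1.0, 1: 2.0, 2: 3.0},
    1: {2: 3.0, 3: 4.0},
    2: {3: 4.0, 4: 5.0},
}

def owner_of(node_id, partitions):
    """Assign each node to the lowest-rank partition that holds it."""
    return min(rank for rank, nodes in partitions.items() if node_id in nodes)

def ghost_flags(partitions):
    """Per partition, flag a node as ghost (1) if another rank owns it."""
    return {
        rank: {nid: int(owner_of(nid, partitions) != rank) for nid in nodes}
        for rank, nodes in partitions.items()
    }

def global_sum(partitions):
    """Sum nodal values, skipping ghost copies so each node counts once."""
    flags = ghost_flags(partitions)
    return sum(
        value
        for rank, nodes in partitions.items()
        for nid, value in nodes.items()
        if flags[rank][nid] == 0
    )
```

[With the toy data above, node 2 is owned by rank 0 and ghost on rank 1, node 3 is owned by rank 1 and ghost on rank 2, and the global sum counts each of the five nodes once.]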
>> >><br>
>> >> Best,<br>
>> >> -berk<br>
>> ><br>
>> ><br>
>> ><br>
>> > --<br>
>> > Renato N. Elias<br>
>> > =============================================<br>
>> > Professor at Technology and Multilanguage Department (DTL)<br>
>> > Federal Rural University of Rio de Janeiro (UFRRJ)<br>
>> > Nova Iguaçu, RJ - Brazil<br>
>> > =============================================<br>
>> > Researcher at High Performance Computing Center (NACAD)<br>
>> > Federal University of Rio de Janeiro (UFRJ)<br>
>> > Rio de Janeiro, Brazil<br>
>> ><br>
>> ><br>
><br>
><br>
><br>
> --<br>
> Renato N. Elias<br>
> =============================================<br>
> Professor at Technology and Multilanguage Department (DTL)<br>
> Federal Rural University of Rio de Janeiro (UFRRJ)<br>
> Nova Iguaçu, RJ - Brazil<br>
> =============================================<br>
> Researcher at High Performance Computing Center (NACAD)<br>
> Federal University of Rio de Janeiro (UFRJ)<br>
> Rio de Janeiro, Brazil<br>
><br>
><br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br>Renato N. Elias<div>=============================================</div><div>Professor at Technology and Multilanguage Department (DTL)</div><div>Federal Rural University of Rio de Janeiro (UFRRJ)</div>
<div>Nova Iguaçu, RJ - Brazil</div><div><br>=============================================<br>Researcher at High Performance Computing Center (NACAD)<br>Federal University of Rio de Janeiro (UFRJ)<br>Rio de Janeiro, Brazil<br>
</div><br>