<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
Hi,<br>
<br>
I found a solution to this problem: I had to insert an
UpdatePipeline() call on the last filter just before updating the writer:<br>
<blockquote><i>myLastFilter.UpdatePipeline()<br>
writer.UpdatePipeline()</i><br>
<br>
</blockquote>
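In the sample script from my first message (quoted below), the end of
the script then looks like this; ts, nbTs and w are the names used there:<br>
<pre>
# ... pipeline built exactly as in the sample script quoted below ...

w = servermanager.writers.XMLMultiBlockDataWriter()
w.Input = ts[nbTs - 1]
w.FileName = "output.vtm"

# workaround: explicitly update the last filter before the writer,
# otherwise the run misbehaves under MPI (MPICH2 here)
ts[nbTs - 1].UpdatePipeline()
w.UpdatePipeline()
</pre>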
This is strange behavior, maybe due to MPICH2 usage? I don't
know... However, it runs well like that.<br>
<br>
Thanks for your help<br>
<br>
Yves<br>
<blockquote><br>
On Tuesday, January 29, 2013 at 15:56:32, Utkarsh Ayachit wrote:<br>
</blockquote>
<blockquote type="cite"><br>
Yves,<br>
<br>
1. I don't see the oddity with the block number and BlockID when<br>
running in parallel, as you describe. For me, the BlockID is indeed<br>
the index of the block + 1 whether I run in parallel or not.<br>
2. Running your python script as a single process, I get 1:18 minutes,<br>
while in parallel (2 procs), I get 1:22 minutes. Given that your<br>
dataset is very small and the computation also very trivial (you are<br>
transforming 50 points!), it's not surprising that the overheads of<br>
parallel processing (interprocess communication for the various steps<br>
in the pipeline execution) make things slower. Try doing the same with<br>
a multiblock having thousands of points in each block and then you'll<br>
see the difference.<br>
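For example, something like this (an untested sketch; the Sphere<br>
resolution and the block count are arbitrary) would generate a test<br>
multiblock with a few thousand points in each block:<br>
<pre>
from paraview.simple import *

# untested sketch: one sphere (~4000 points) per block, 50 blocks
spheres = [Sphere(ThetaResolution=64, PhiResolution=64)
           for i in range(50)]
group = GroupDatasets(Input=spheres)  # one multiblock of 50 blocks

w = servermanager.writers.XMLMultiBlockDataWriter()
w.Input = group
w.FileName = "big_input.vtm"
w.UpdatePipeline()
</pre>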
<br>
Utkarsh<br>
<br>
<br>
<br>
On Tue, Jan 15, 2013 at 11:17 AM, Yves Rogez<br>
<a class="moz-txt-link-rfc2396E" href="mailto:yves.rogez@obs.ujf-grenoble.fr"><yves.rogez@obs.ujf-grenoble.fr></a> wrote:<br>
<blockquote type="cite"><br>
Ok, trying the ProcessIdScalars filter obliged me to look at the<br>
results in more detail (with the spreadsheet view).<br>
<br>
My input: 50 blocks, each containing 1 polydata made of 1 point and<br>
1 vertex cell.<br>
<br>
With a single process:<br>
I get 50 blocks, each containing 1 point and 1 vertex cell, with<br>
PID = 0 and block number = (index of the block + 1).<br>
So it is OK.<br>
<br>
With MPI (2 processes):<br>
I get 50 blocks, each containing 2 points and 2 vertex cells, with<br>
strange block numbers:<br>
block id 0 -> pt 1 has BN=2, pt 2 has BN=3<br>
block id 1 -> BN=(5,6)<br>
block id 2 -> BN=(8,9)<br>
block id 3 -> BN=(11,12) and so on...<br>
It seems:<br>
BN for point 1 = ((BlockID + 1) * 3) - 1<br>
BN for point 2 = (BlockID + 1) * 3<br>
<br>
Point 1 of a block always has PID=0 and point 2 has PID=1.<br>
<br>
Maybe I did something wrong when generating my input (please find a<br>
zip attached)?<br>
<br>
Yves Rogez<br>
<br>
IPAG<br>
Institut de Planétologie et d'Astrophysique de Grenoble<br>
Bat D de Physique - BP. 53 - 38041 Grenoble - FRANCE<br>
<br>
tel : +33 (0)4 76 63 52 80<br>
lab : +33 (0)4 76 63 52 89<br>
On 15/01/2013 16:25, Utkarsh Ayachit wrote:<br>
<br>
Just to make sure, your ParaView is built with MPI support enabled,<br>
right? XMLMultiBlockDataReader does distribute the blocks to read<br>
among the processes. Try applying a "ProcessIdScalars" filter in the<br>
middle and then look at the ProcessId assigned to the blocks in the<br>
data. That should show how the blocks were distributed.<br>
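Something along these lines (a sketch, assuming the filter is exposed<br>
as ProcessIdScalars in servermanager.filters, following the same<br>
naming pattern as the proxies in your script):<br>
<pre>
from paraview.simple import *

r = servermanager.sources.XMLMultiBlockDataReader()
r.FileName = "input.vtm"

# tag each point with the rank of the MPI process that owns its block
pids = servermanager.filters.ProcessIdScalars()
pids.Input = r
pids.UpdatePipeline()
</pre>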
<br>
Utkarsh<br>
<br>
On Tue, Jan 15, 2013 at 7:20 AM, Yves Rogez<br>
<a class="moz-txt-link-rfc2396E" href="mailto:yves.rogez@obs.ujf-grenoble.fr"><yves.rogez@obs.ujf-grenoble.fr></a> wrote:<br>
<br>
Hello,<br>
<br>
I'm trying to parallelize a process using pvbatch and MPI, with a<br>
MultiBlock data set, thus using the VTK composite pipeline.<br>
I made a sample python program that is representative of what I have<br>
to do:<br>
<br>
<pre>
--------------------------------------------------------------------------------------------------

from paraview.simple import *

r = servermanager.sources.XMLMultiBlockDataReader()
r.FileName = "input.vtm"

# Defining a sample fake data processing
nbTs = 1000
ts = {}
for tIndex in range( 0, nbTs ):
    ts[tIndex] = servermanager.filters.Transform()
    if tIndex == 0:
        ts[tIndex].Input = r
    else:
        ts[tIndex].Input = ts[tIndex - 1]
    ts[tIndex].Transform.Scale = [1.01, 1.01, 1.01]

w = servermanager.writers.XMLMultiBlockDataWriter()
w.Input = ts[nbTs - 1]
w.FileName = "output.vtm"

w.UpdatePipeline()

--------------------------------------------------------------------------------------------------
</pre>
<br>
I launch that using "mpiexec -np 4 pvbatch myscript.py".<br>
Everything runs well, but it takes longer with MPI than with only<br>
"pvbatch myscript.py".<br>
<br>
By monitoring RAM, I noticed that the data seems to be loaded once by<br>
each MPI process, and that (maybe) all the MPI processes do exactly<br>
the same job, computing all the data four times.<br>
<br>
Why aren't the blocks of my MultiBlock data set dispatched over the<br>
MPI processes? What am I doing wrong?<br>
<br>
Many thanks for any help,<br>
<br>
Yves<br>
<br>
<br>
<br>
<pre class="moz-signature" cols="72">--
Yves Rogez
IPAG / CNRS
</pre>
</blockquote>
</blockquote>
</body>
</html>