View Issue Details
ID: 0006035
Project: ParaView
Category: (No Category)
View Status: public
Date Submitted: 2007-11-09 14:28
Last Update: 2009-05-13 13:59
Reporter: David Karelitz
Assigned To: Berk Geveci
Priority: high
Severity: crash
Reproducibility: always
Status: closed
Resolution: duplicate
Target Version: 3.4
Fixed in Version: 3.4
Summary: 0006035: (MARS-immediate) 0th process using too much memory
Description: When opening and performing operations on a large CTH AMR dataset, the 0th node uses far too much memory: on the order of 3-4 GB on the 0th node versus 640 MB on the other nodes. The pipeline contained a CTH reader, CellToPoint, Contour, Calculator, TimeToText, and TextSource, and we tried to save a movie and disconnect.
Tags: No tags attached.

 Relationships
duplicate of 0006773 (closed, Utkarsh Ayachit): Use Broadcast to propagate commands from root server node to other server nodes

  Notes
(0009690)
David Karelitz (reporter)
2007-11-15 16:45

There appear to be two issues with respect to memory usage. These observations come from a separate run from the one in the description.

1. When I clip by scalar with an incorrect value (one that produces more elements than the correct value does), the memory used by that first incorrect clip is never released.

2. When I choose save-and-disconnect, memory usage on the 0th node jumps from 33% to 85% and grows by an additional 2-4% per frame. This is on an 8 GB machine.

The dataset is AMR CTH, running on 120 nodes.
(0009691)
David Karelitz (reporter)
2007-11-15 16:50

Correction: it's 4 GB of memory per node, not 8 GB.
(0010104)
David Karelitz (reporter)
2008-01-10 11:11

When we switched the 0th node to a machine with 12 GB of memory, performance was an order of magnitude faster. The rest of the nodes were the same as before.
(0011159)
Ken Moreland (manager)
2008-04-08 13:02

This problem is really caused by a bug in OpenMPI and the way that messages are passed from node 0 to the rest of the nodes. I opened a new bug (0006773) that more specifically addresses the problem.
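
The fix tracked there replaces the root's per-node command sends with a collective broadcast. The sketch below shows the idea in plain MPI, assuming an illustrative fixed-size command buffer (not ParaView's actual code): a single MPI_Bcast lets the library fan the message out in a tree rather than queuing one point-to-point send per satellite on rank 0.

    #include <mpi.h>
    #include <vector>

    int main(int argc, char** argv)
    {
      MPI_Init(&argc, &argv);
      int rank;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      // Illustrative fixed-size command buffer; rank 0 fills it in.
      std::vector<char> command(1024, 0);

      // One collective call delivers the command to every rank. MPI can
      // route it through a tree instead of N point-to-point sends from
      // rank 0, so the root never queues per-node messages.
      MPI_Bcast(command.data(), static_cast<int>(command.size()), MPI_CHAR,
                0, MPI_COMM_WORLD);

      MPI_Finalize();
      return 0;
    }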
(0011188)
Alan Scott (manager)
2008-04-08 17:56

I agree with Ken. This is an OpenMPI bug, not related to the bug description listed above. The workaround is to change MPI_Send to MPI_Ssend until the bug in the OpenMPI library is fixed. I believe the file is MPICommunication.cxx or something like that.
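
In outline, the MPI_Send-to-MPI_Ssend swap looks like the sketch below. MPI_Ssend does not complete until the matching receive has started, so rank 0 cannot run ahead of slow satellite nodes and pile up messages in OpenMPI's internal buffers. This is a minimal, self-contained illustration; the command buffer, tag, and send loop are placeholders, not ParaView's actual code.

    #include <mpi.h>
    #include <vector>

    int main(int argc, char** argv)
    {
      MPI_Init(&argc, &argv);
      int rank, size;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      std::vector<char> command(1024, 0); // illustrative command buffer

      if (rank == 0)
      {
        for (int dest = 1; dest < size; ++dest)
        {
          // MPI_Send may return as soon as the library has buffered the
          // message, letting rank 0 queue unbounded data ahead of slow
          // receivers. MPI_Ssend blocks until the matching receive has
          // started, which caps the root's memory use.
          MPI_Ssend(command.data(), static_cast<int>(command.size()),
                    MPI_CHAR, dest, 0, MPI_COMM_WORLD);
        }
      }
      else
      {
        MPI_Recv(command.data(), static_cast<int>(command.size()),
                 MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
      }

      MPI_Finalize();
      return 0;
    }

Built with mpicxx and run under mpirun, every rank receives the same command buffer; the only behavioral difference from MPI_Send is the synchronous completion semantics.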

 Issue History
Date Modified Username Field Change
2007-11-09 14:28 David Karelitz New Issue
2007-11-15 16:45 David Karelitz Note Added: 0009690
2007-11-15 16:50 David Karelitz Note Added: 0009691
2007-11-19 18:19 David Karelitz Summary 0th process using too much memory => (MARS-immediate) 0th process using too much memory
2007-12-05 13:13 Alan Scott Status backlog => tabled
2007-12-05 13:13 Alan Scott Assigned To => Berk Geveci
2008-01-10 11:11 David Karelitz Note Added: 0010104
2008-03-07 07:56 Berk Geveci Target Version => MARS
2008-03-07 08:03 Berk Geveci Category => 3.4
2008-04-08 13:01 Ken Moreland Relationship added duplicate of 0006773
2008-04-08 13:02 Ken Moreland Duplicate ID 0 => 6773
2008-04-08 13:02 Ken Moreland Status tabled => resolved
2008-04-08 13:02 Ken Moreland Resolution open => duplicate
2008-04-08 13:02 Ken Moreland Note Added: 0011159
2008-04-08 17:56 Alan Scott Status resolved => closed
2008-04-08 17:56 Alan Scott Note Added: 0011188
2009-05-13 13:58 Utkarsh Ayachit Target Version MARS => 3.4
2009-05-13 13:59 Utkarsh Ayachit Fixed in Version => 3.4
2011-06-16 13:10 Zack Galbreath Category => (No Category)

