MantisBT - ParaView
View Issue Details
ID: 0012168   Project: ParaView   Category: Bug   View Status: public   Date Submitted: 2011-05-06 19:37   Last Update: 2011-08-17 20:28
Reporter: Alan Scott
Assigned To: Robert Maynard
Priority: normal   Severity: minor   Reproducibility: always
Status: closed   Resolution: fixed
Target Version: 3.12   Fixed in Version: 3.12
Project: Sandia
Topic Name: 12168-Cleanup-ResetCache-ExodusII
Type: incorrect functionality
Summary: 0012168: Cleanup ResetCache for Exodus reader
Description: We need to understand and clean up the Exodus reader ResetCache code. This code is problematic for ParaView, and when it is turned off, it has been problematic for Pat at Sandia. We really should figure out what this code does and turn it off in a way that doesn't hurt Pat.
Tags: No tags attached.
Relationships: related to 0011982 (closed, Robert Maynard): 3.10.0 has a fatal memory leak
Issue History
2011-05-06 19:37  Alan Scott       New Issue
2011-07-21 10:32  Ken Moreland     Relationship added: related to 0011982
2011-07-21 10:35  Ken Moreland     Note Added: 0027036
2011-07-28 13:25  Alan Scott       Project => Sandia
2011-07-28 13:25  Alan Scott       Type => incorrect functionality
2011-07-28 13:25  Alan Scott       Status: backlog => todo
2011-07-28 14:06  Alan Scott       Note Added: 0027093
2011-07-28 14:07  Alan Scott       Note Edited: 0027093 (bug_revision_view_page.php?bugnote_id=27093#r368)
2011-07-28 14:54  Ken Moreland     Note Added: 0027095
2011-08-03 09:00  Utkarsh Ayachit  Assigned To => Robert Maynard
2011-08-03 09:00  Utkarsh Ayachit  Target Version => 3.12
2011-08-03 09:04  Robert Maynard   Status: todo => active development
2011-08-03 13:51  Robert Maynard   Note Added: 0027216
2011-08-03 14:03  Ken Moreland     Note Added: 0027217
2011-08-04 11:32  Robert Maynard   Note Added: 0027225
2011-08-10 12:57  Robert Maynard   Topic Name => 12168-Cleanup-ResetCache-ExodusII
2011-08-10 12:57  Robert Maynard   Status: active development => gatekeeper review
2011-08-10 12:57  Robert Maynard   Fixed in Version => 3.12
2011-08-10 12:57  Robert Maynard   Resolution: open => fixed
2011-08-11 15:11  Robert Maynard   Note Added: 0027342
2011-08-12 14:23  Utkarsh Ayachit  Status: gatekeeper review => customer review
2011-08-12 14:23  Utkarsh Ayachit  Note Added: 0027351
2011-08-17 20:28  Alan Scott       Note Added: 0027386
2011-08-17 20:28  Alan Scott       Status: customer review => closed

Notes
(0027036)
Ken Moreland   
2011-07-21 10:35   
Implicit in this bug is that we should not re-introduce the memory footprint problems of bug 0011982, which may happen if we simply blindly turn the cache back on. Before closing this bug it should be checked that 0011982 still cannot be replicated.
(0027093)
Alan Scott   
2011-07-28 14:06   
(edited on: 2011-07-28 14:07)
The other problem (reverse of Ken's note above) is that we have reintroduced bug number 0006770. In other words, reading of datasets with large numbers of blocks is slow.

Please use many.e to test. It has been pushed to the VTKLargeData repository.

(0027095)
Ken Moreland   
2011-07-28 14:54   
Also, please make a regression test that loads many.e so we don't reintroduce the problem yet again.
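The kind of regression test being asked for might look roughly like the following sketch; the file path and the timing threshold are placeholders, and this is not the test that was actually added to VTK.

#include "vtkExodusIIReader.h"
#include "vtkSmartPointer.h"
#include "vtkTimerLog.h"

// Sketch of a possible regression test for the many-blocks slowdown
// (0006770); the path and the time threshold below are placeholders,
// not project values.
int main()
{
  vtkSmartPointer<vtkExodusIIReader> reader =
    vtkSmartPointer<vtkExodusIIReader>::New();
  reader->SetFileName("many.e"); // placeholder: resolve via VTKLargeData

  const double start = vtkTimerLog::GetUniversalTime();
  reader->Update();
  const double elapsed = vtkTimerLog::GetUniversalTime() - start;

  // Fail if reading many.e takes unreasonably long, which would indicate
  // the block-related slowdown has come back.
  return elapsed < 60.0 ? 0 : 1;
}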
(0027216)
Robert Maynard   
2011-08-03 13:51   
My proposal is to do the following:

Allow vtkPExodusIIReader to specify a cache limit per MPI process and then divide that cache size equally among all of its vtkExodusIIReaders. We then expose the cache limit as a GUI setting on the reader.

So if the user sets a cache limit of 1 GB and we have 10 readers per process, each reader would have a cache size of 102.4 MB.
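A minimal sketch of the arithmetic behind this proposal, as a self-contained C++ program; the function and its name are illustrative only, not the actual vtkPExodusIIReader API.

#include <cstdio>

// Sketch only, not the vtkPExodusIIReader API: divide a per-MPI-process
// cache budget evenly among the serial vtkExodusIIReader instances that
// the parallel reader owns.
static double PerReaderCacheMiB(double processLimitMiB, int numReaders)
{
  return numReaders > 0 ? processLimitMiB / numReaders : processLimitMiB;
}

int main()
{
  // The example from the note: a 1 GB (1024 MiB) budget split across
  // 10 readers gives each reader 102.4 MiB of cache.
  std::printf("%.1f MiB per reader\n", PerReaderCacheMiB(1024.0, 10));
  return 0;
}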
(0027217)
Ken Moreland   
2011-08-03 14:03   
It was my understanding (and I may be wrong) that the cache is used mostly during the RequestData pass and is pretty useless after the fact. In some rare circumstances, like when the user adds or removes a block, the cached fields could be reused. But when something like the time step changes, all the caches become invalid anyway. I think that's why we weren't seeing a big effect when turning off the cache for single-block, many-time-step files.

If that is the case, allow me to make a counter proposal. Rather than split the cache among the vtkExodusIIReaders, let only one have a cache at any one time. To do this, have the vtkExodusIIReader free its cache once it's finished reading its file (right before it returns from its own RequestData or whatever the execute method is called).

The vtkExodusIIReaders are run in sequential order, right? So the first one would be called. It would use all the allotted cache. It would read its file. It would free its cache. Then the next reader would be called which could again use all the cache.

This would have two advantages. Advantage 1: Every reader gets all the cache. Advantage 2: When vtkPExodusIIReader returns from RequestData, none of these caches are hanging around.
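A minimal model of this read-then-free pattern, using a made-up SketchReader type rather than the real VTK classes, just to show that with sequential execution at most one cache is resident at a time.

#include <cstdio>
#include <vector>

// Illustrative model of the counter-proposal (the struct below is made
// up, not the VTK API): the serial readers run one after another, each
// fills the full cache budget while it reads and frees the cache before
// returning, so at most one reader's cache is resident at any time.
struct SketchReader
{
  double cacheUsedMiB = 0.0;

  void RequestData(double cacheBudgetMiB)
  {
    cacheUsedMiB = cacheBudgetMiB; // read the file, using the whole budget
    std::printf("read with %.1f MiB of cache\n", cacheUsedMiB);
    cacheUsedMiB = 0.0;            // free the cache right before returning
  }
};

int main()
{
  const double budgetMiB = 1024.0;       // per-process cache budget
  std::vector<SketchReader> readers(10); // one serial reader per file piece
  for (SketchReader& reader : readers)
  {
    reader.RequestData(budgetMiB);       // every reader gets the full budget
  }
  return 0;
}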
(0027225)
Robert Maynard   
2011-08-04 11:32   
So after your comment, Ken, I went back to see how the cache is used. We are both right. In essence, the cache stores information based on the time step. The issue is that a lot of information, like GLOBAL_ELEMENT_ID, GLOBAL_NODE_ID, ELEM_MAP, and FACE_MAP, is stored at time step -1 because it is constant across all time steps, while NODAL_COORDS and properties are stored per time step on which they change. So if NODAL_COORDS never changes over the entire time range, it will only have a single cache entry.
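A rough model of the keying scheme described here, assuming a simple (array name, time step) key with -1 as the sentinel for time-invariant arrays; the cache inside the Exodus reader has its own key and value types, so this is only an illustration.

#include <map>
#include <string>
#include <utility>
#include <vector>

// Illustrative model of the caching behavior described above, not the
// reader's actual cache. Entries are keyed by array name plus time step;
// time-invariant arrays such as GLOBAL_NODE_ID or ELEM_MAP live under the
// sentinel time step -1, so a single entry serves every time step, while
// NODAL_COORDS gets one entry per time step on which it actually changes.
using CacheKey = std::pair<std::string, int>; // (array name, time step)

std::map<CacheKey, std::vector<double>> cache;

const std::vector<double>* LookupCachedArray(const std::string& name,
                                             int timeStep, bool timeInvariant)
{
  const CacheKey key{name, timeInvariant ? -1 : timeStep};
  const auto it = cache.find(key);
  return it == cache.end() ? nullptr : &it->second;
}

int main()
{
  cache[{"GLOBAL_NODE_ID", -1}] = {1, 2, 3}; // constant for all time steps
  cache[{"NODAL_COORDS", 4}] = {0.0, 0.5};   // stored for the step it changed on
  // A lookup at any time step finds the time-invariant entry under -1.
  return LookupCachedArray("GLOBAL_NODE_ID", 7, true) != nullptr ? 0 : 1;
}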
(0027342)
Robert Maynard   
2011-08-11 15:11   
I have updated the code based on our discussion during the ParaView meeting.
(0027351)
Utkarsh Ayachit   
2011-08-12 14:23   
Merged into git-master
(0027386)
Alan Scott   
2011-08-17 20:28   
I will accept this as good enough for now. I did observe the following behavior:
* many.e loaded within about 5 seconds. Excellent.
* With the Whipple shield dataset, memory grew by about 100 MB when I toggled variables on and off. However, it then became stable.
* With the Whipple shield dataset, memory grew by about 300 MB when I ran through all time steps. Upon returning to time step 0, this memory growth did not go away (thus, I assume it is caching). However, after about 5 or so time steps, it stopped growing. Further, the next time I ran through time, it did not grow.

Tested Linux, development, local server.