Talk:Time Support

Revision as of 13:42, 24 July 2006

OK, here is my approach to this.

First, finalize buy-in on the basic approach. There are two main approaches. The first is that a request for multiple time steps is handled by returning a multiblock dataset, one block for each time step. The second approach is to only fulfill requests for one time step at a time. The first approach would require changes to the multiblock executive. It would also tend to use a larger memory footprint (many time steps in memory at once). The second approach is simpler in that it can work with minor changes to the current pipeline. My suggestion is to go with the second approach.
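For illustration, here is a minimal sketch of what the second approach looks like from the consuming side, written against the current pipeline classes. The UPDATE_TIME_INDEX key is the one discussed in the tasks below and is assumed here to live on vtkStreamingDemandDrivenPipeline; it is not in the current headers.

<pre>
#include "vtkAlgorithm.h"
#include "vtkExecutive.h"
#include "vtkInformation.h"
#include "vtkStreamingDemandDrivenPipeline.h"

// Pull every time step through the pipeline, one Update() per step.
void ProcessAllTimeSteps(vtkAlgorithm* reader)
{
  reader->UpdateInformation();
  vtkInformation* outInfo = reader->GetExecutive()->GetOutputInformation(0);
  int numSteps =
    outInfo->Length(vtkStreamingDemandDrivenPipeline::TIME_STEPS());
  for (int i = 0; i < numSteps; ++i)
  {
    // Assumed key (see the tasks below); request exactly one time step.
    outInfo->Set(vtkStreamingDemandDrivenPipeline::UPDATE_TIME_INDEX(), i);
    reader->Update();
    // ... hand reader->GetOutputDataObject(0) to the next stage ...
  }
}
</pre>

Only one time step is resident at a time, which is the whole point of the second approach.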

To that end the following are tasks to be completed:

Add time step reporting and querying, using the pipeline mechanisms, to the critical readers as defined by Sandia if they currently lack that support. Right now it looks like the key readers do report their available time steps using TIME_STEPS, and it appears they also honor UPDATE_TIME_INDEX. (mostly there)
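For readers that still need this, a sketch of both halves is below. vtkMyReader, TimeValues, and NumberOfTimeSteps are placeholder names; UPDATE_TIME_INDEX is assumed as above.

<pre>
// Advertise the available time values during the information pass.
int vtkMyReader::RequestInformation(vtkInformation*,
  vtkInformationVector**, vtkInformationVector* outputVector)
{
  vtkInformation* outInfo = outputVector->GetInformationObject(0);
  outInfo->Set(vtkStreamingDemandDrivenPipeline::TIME_STEPS(),
               this->TimeValues, this->NumberOfTimeSteps);
  return 1;
}

// Honor the requested time index during the data pass.
int vtkMyReader::RequestData(vtkInformation*,
  vtkInformationVector**, vtkInformationVector* outputVector)
{
  vtkInformation* outInfo = outputVector->GetInformationObject(0);
  int timeIndex = 0;
  if (outInfo->Has(vtkStreamingDemandDrivenPipeline::UPDATE_TIME_INDEX()))
  {
    timeIndex = outInfo->Get(
      vtkStreamingDemandDrivenPipeline::UPDATE_TIME_INDEX());
  }
  // ... read only the data for this->TimeValues[timeIndex] ...
  return 1;
}
</pre>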

Add a method in Algorithm to specify the UpdateTimeIndex so that time can be pulled from the end of a pipeline. (easy)
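One possible shape for that method, again assuming the UPDATE_TIME_INDEX key. With it, an application can call lastFilter->SetUpdateTimeIndex(0, i) followed by lastFilter->Update() to pull step i through the whole pipeline.

<pre>
// Record the requested time index on the given output port so the request
// propagates upstream on the next Update().
void vtkAlgorithm::SetUpdateTimeIndex(int port, int index)
{
  vtkInformation* outInfo = this->GetExecutive()->GetOutputInformation(port);
  outInfo->Set(vtkStreamingDemandDrivenPipeline::UPDATE_TIME_INDEX(), index);
}
</pre>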

Create a time interpolating filter. You specify the start time, end time, and number of time steps. The filter will retrieve input data as needed to produce the interpolated output data. Additionally, it would have time shift and scale ivars so that two time-varying datasets could be aligned even though their time units may have different origins or scales. (moderate)
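The core of that filter might look like the following sketch, which assumes the two bracketing input steps have identical topology and linearly blends one point-data array; the function name and signature are illustrative.

<pre>
#include "vtkDoubleArray.h"

// Blend the arrays from the two bracketing time steps. timeShift/timeScale
// map the requested output time into the input's native time units.
void InterpolateArray(double requestedTime,
                      double timeShift, double timeScale,
                      double t0, double t1,   // bracketing input times
                      vtkDoubleArray* a0, vtkDoubleArray* a1,
                      vtkDoubleArray* result)
{
  double native = (requestedTime - timeShift) * timeScale;
  double w = (native - t0) / (t1 - t0);  // interpolation weight in [0, 1]
  vtkIdType n = a0->GetNumberOfTuples();
  result->SetNumberOfTuples(n);
  for (vtkIdType i = 0; i < n; ++i)
  {
    result->SetValue(i, (1.0 - w) * a0->GetValue(i) + w * a1->GetValue(i));
  }
}
</pre>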

Add an ability to request a single cell. This will be a third form of update extent. This will be useful for getting one cell across many time steps. (could be nasty, lots of interactions with the pipeline and existing readers and extent types)
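Purely as a strawman, the reader side could branch on a new key before falling back to the existing extent handling. UPDATE_CELL_ID is an invented key name, shown unqualified, for illustration only.

<pre>
vtkInformation* outInfo = outputVector->GetInformationObject(0);
if (outInfo->Has(UPDATE_CELL_ID()))  // hypothetical key
{
  int cellId = outInfo->Get(UPDATE_CELL_ID());
  // ... read just this cell (and its points) for the requested time ...
}
else
{
  // ... existing piece / structured-extent handling ...
}
</pre>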

If needed, provide a pipeline hint that many time step requests will be coming. Basically, a hint that readers can use to preload the data once. (easy to add the hint; smart readers have to reference it)
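The reader side of that hint might be no more than the following, with MANY_TIME_STEPS_HINT an invented key name and LoadAllTimeSteps a placeholder for whatever bulk read the reader can do.

<pre>
// On the first request carrying the hint, read the whole file once so the
// remaining per-step requests are served from memory.
if (outInfo->Has(MANY_TIME_STEPS_HINT())  // hypothetical key
    && !this->AllStepsLoaded)
{
  this->LoadAllTimeSteps();
  this->AllStepsLoaded = true;
}
</pre>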

Could add a time data cache that holds the last N time/data requests. (fairly easy)
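A sketch of such a cache, keyed by time index and evicting the oldest entry past N; the class and member names are illustrative.

<pre>
#include <list>
#include <map>
#include "vtkDataObject.h"
#include "vtkSmartPointer.h"

class TimeDataCache
{
public:
  TimeDataCache(unsigned int maxEntries) : MaxEntries(maxEntries) {}

  // Return the cached dataset for this time index, or 0 on a miss.
  vtkDataObject* Find(int timeIndex)
  {
    std::map<int, vtkSmartPointer<vtkDataObject> >::iterator it =
      this->Entries.find(timeIndex);
    return it == this->Entries.end() ? 0 : it->second.GetPointer();
  }

  // Store a dataset, evicting the oldest entry once N are held.
  void Insert(int timeIndex, vtkDataObject* data)
  {
    if (this->Entries.count(timeIndex))
    {
      this->Entries[timeIndex] = data;  // refresh an existing entry
      return;
    }
    if (this->Entries.size() >= this->MaxEntries)
    {
      this->Entries.erase(this->Order.front());
      this->Order.pop_front();
    }
    this->Entries[timeIndex] = data;
    this->Order.push_back(timeIndex);
  }

private:
  unsigned int MaxEntries;
  std::map<int, vtkSmartPointer<vtkDataObject> > Entries;
  std::list<int> Order;  // insertion order, for eviction
};
</pre>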