ParaView Summit, June 4-5, 2007
Day 1: June 4
Time | Length | Item | Who | Slides |
---|---|---|---|---|
8:30 | 0.5 hr | Coffee and knock-knock jokes | All | |
9:00 | 1.5 hr | InfoVis | Wylie | |
10:30 | 0.5 hr | Review of representation/view architecture | Ayachit | |
11:00 | 1.5 hr | Lunch | All | |
12:30 | 1.0 hr | More Time Support | Geveci | |
1:30 | 1.0 hr | Rollout strategy | Rogers | |
2:30 | 1.0 hr | Making an impact | Rogers | |
3:30 | | Party Time! | All | |
Day 2: June 5
Time | Length | Item | Who | Slides |
---|---|---|---|---|
8:30 | 0.5 hr | Coffee and knock-knock jokes | All | |
9:00 | 0.5 hr | Software methodology | Greenfield | |
9:30 | 0.5 hr | Data Centric Scripting Interface | Geveci | |
10:00 | 1.0 hr | User focus | ParaView Users | |
11:00 | 1.5 hr | Lunch | All | |
12:30 | 2.0 hr | Monthly deliverables for 3.2 and 3.4 | Greenfield | |
2:30 | 0.5 hr | Porting to Red Storm | Karelitz | |
3:00 | 0.5 hr | Exodus multi-block | Geveci | |
3:30 | 0.5 hr | Browser Icons | Moreland | |
4:00 | 0.5 hr | Default Color Map | Moreland | |
Fast Path for Temporal Data
We had a conversation about a fast path for grabbing temporal plots from the Exodus reader. The issue is that the current plot-over-time filter does the most generic thing: it requests time steps sequentially from upstream filters, which load the entire data set each time. The Exodus reader, however, is capable of reading the data for all time steps of a single point or cell variable at once. The current method for getting data over time is therefore hugely wasteful. We need some special mechanism for delivering this type of request to the Exodus reader and the result back to the filter. Several approaches were considered.
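To make the waste concrete, here is a minimal cost model of the two strategies. All names (MockExodusReader, plot_over_time_naive, and so on) are illustrative stand-ins, not the ParaView or VTK API; the point is only the ratio of data read.

```python
class MockExodusReader:
    """Models a reader holding n_points values for each of n_steps timesteps."""

    def __init__(self, n_points, n_steps):
        self.n_points = n_points
        self.n_steps = n_steps
        self.values_read = 0  # counts how much data the reader has to load

    def read_timestep(self, t):
        # Generic request: the whole mesh for one timestep is loaded.
        self.values_read += self.n_points
        return [float(t)] * self.n_points

    def read_point_over_time(self, point_id):
        # Fast path: one value per timestep, for a single point only.
        self.values_read += self.n_steps
        return [float(t) for t in range(self.n_steps)]


def plot_over_time_naive(reader, point_id):
    # Current filter behavior: loop over timesteps, re-reading everything.
    return [reader.read_timestep(t)[point_id] for t in range(reader.n_steps)]


reader = MockExodusReader(n_points=1_000_000, n_steps=100)
naive = plot_over_time_naive(reader, point_id=42)
naive_cost = reader.values_read          # 100 steps x 1,000,000 points

reader.values_read = 0
fast = reader.read_point_over_time(42)
fast_cost = reader.values_read           # 100 values, one per step

assert naive == fast                     # identical plot either way
assert naive_cost == 100_000_000
assert fast_cost == 100
```

Both paths produce the same plot; the naive path simply reads a million times more data to get it.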
Small Spatial Requests
Right now, spatial extents tend to be large and sometimes vague chunks (piece 1 of n), while temporal extents tend to be small and specific (timestep x). We could define a different type of extent in which you specify point x or cell y together with a range of timesteps. In that case the Exodus reader could, say, output a rectilinear grid containing the data over time.
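A sketch of the two extent flavors and how the reader would dispatch on them. The class names, fields, and output dictionaries here are hypothetical, not actual VTK extent keys or data objects:

```python
from dataclasses import dataclass


@dataclass
class SpatialExtent:
    """The usual request: a big, vague spatial chunk at one timestep."""
    piece: int          # "piece 1 of n" style request
    num_pieces: int
    timestep: int       # one specific timestep


@dataclass
class TemporalExtent:
    """The proposed request: one point (or cell), many timesteps."""
    point_id: int
    timestep_range: tuple  # (first, last), inclusive


def exodus_update(extent):
    """Dispatch on the extent type, as the reader would."""
    if isinstance(extent, SpatialExtent):
        # Normal case: an unstructured grid for one timestep.
        return {"type": "unstructured_grid", "timestep": extent.timestep}
    # Fast-path case: a small rectilinear grid of values over time.
    first, last = extent.timestep_range
    return {"type": "rectilinear_grid",
            "values": [0.1 * t for t in range(first, last + 1)]}


assert exodus_update(SpatialExtent(1, 4, 7))["type"] == "unstructured_grid"
over_time = exodus_update(TemporalExtent(42, (0, 9)))
assert over_time["type"] == "rectilinear_grid"
assert len(over_time["values"]) == 10
```

The thrashing problem below follows directly from this dispatch: the same reader output port alternates between the two result types.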
The issue with this approach is that it could cause thrashing in the pipeline. The output of the Exodus reader will inevitably also be sent to filters that use its unstructured grid spatial output. When the plot filter requests a rectilinear grid, the reader must re-read and re-execute, invalidating the unstructured grid filters. Once those filters re-execute, a change to the plot causes another re-read/re-execute, and so forth.
Second Pipeline
One idea is to build a second pipeline when the special temporal plot filter is added. This second pipeline would mimic the first, running the data through the same filter types. From the GUI, it would simply look like a single filter.
The issue with this approach is that it places a heavy burden on the GUI code. The GUI has to identify the pipeline sequences that support this special fast-path temporal plot, construct the second pipeline, and then keep the two pipelines in sync.
Pipeline Extensions
Now that we have the new executive pipeline mechanism, we should be able to implement this with pipeline extensions. A reader such as the Exodus reader that supports this special point/cell-value-over-time query can advertise the capability by placing a special key in its output or request object during the update-extent phase of pipeline execution. Filters that can pass this type of data correctly (such as the calculator and perhaps the programmable filter) propagate the key, whereas by default it stops at a filter. The extract-over-time filter can then see this key and send a special request upstream for the data over time.
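The key-propagation rule can be sketched as follows. The key name, the `passes_fast_path` flag, and the filter classes are all hypothetical; the real mechanism would use VTK information keys, but the stop-by-default behavior is the same:

```python
FAST_PATH_KEY = "FAST_PATH_FOR_TEMPORAL_DATA"  # hypothetical key name


class Filter:
    passes_fast_path = False  # by default, the key stops at a filter

    def __init__(self, upstream=None):
        self.upstream = upstream

    def advertised_keys(self):
        # A filter advertises the key only if its upstream does AND it
        # knows how to pass this type of data through correctly.
        if self.upstream is None:
            return set()
        keys = self.upstream.advertised_keys()
        return keys if self.passes_fast_path else set()


class ExodusReader(Filter):
    def advertised_keys(self):
        # The reader originates the capability.
        return {FAST_PATH_KEY}


class Calculator(Filter):
    passes_fast_path = True   # point-wise: safe to pass the key


class GenericFilter(Filter):
    pass                      # e.g. a geometric filter: key stops here


# reader -> calculator: the key survives, fast path available.
good = Calculator(ExodusReader())
assert FAST_PATH_KEY in good.advertised_keys()

# reader -> generic filter -> calculator: the key is stopped.
blocked = Calculator(GenericFilter(ExodusReader()))
assert FAST_PATH_KEY not in blocked.advertised_keys()
```

The extract-over-time filter would check `advertised_keys()` on its input and only issue the special upstream request when the key made it through.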
The only question is how the actual data gets passed back down the pipeline. It could be shoved into the field data so that it is easily delivered by the existing request-data pipeline pass. Alternatively, we could change the executive so that there is a special update phase for this other type of data.
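A minimal sketch of the field-data option, with dictionaries standing in for data objects and every name (`reader_request_data`, `OverTime_temp`) invented for illustration:

```python
def reader_request_data(fast_path_request=None):
    # Normal spatial output for the current timestep.
    output = {"point_data": {"temp": [1.0, 2.0]},
              "field_data": {}}
    if fast_path_request is not None:
        # Fast path: tuck the whole time series for the requested point
        # into field data, so the existing request-data pass carries it
        # downstream with no executive changes. (Values fabricated here.)
        point_id = fast_path_request["point_id"]
        output["field_data"]["OverTime_temp"] = [0.1 * t for t in range(10)]
    return output


def extract_over_time(output):
    # The extract-over-time filter just pulls the series out of field data.
    return output["field_data"].get("OverTime_temp")


out = reader_request_data({"point_id": 7})
assert extract_over_time(out) is not None           # series delivered
assert extract_over_time(reader_request_data()) is None  # no fast path, no series
```

The appeal of this route is that field data already flows through most filters untouched, so no new update phase is needed; the cost is that the time series rides along on an output that nominally represents a single timestep.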
The advantage of this approach is that once it is in place, it just works, without special hacking in the GUI code or on the user's part. The fast path is used automatically when available, and not otherwise. The only effect visible to the user is a change in performance. To help the user, we may want to add an icon in the GUI that warns when the fast path will not be used.
Rollout Strategy
Dave R.'s lofty goal is to gain 100 users per month by next year. To get there, as we roll out we need to identify chunks of users in new markets; let's try to bring in 20 people at a time. For example, Jon Goldman is working on a plotting application with the electrical folks, and every user of that new tool will be a new ParaView user.
Another issue is that our customer priorities have flipped on us. In the past, the Alegra code was our focus. Now we are told that Sandia's electrical work is what really distinguishes Sandia from other labs, and it is therefore more important. The Sierra suite of codes is also very important. This family of analyses is becoming an important part of our work. Scripting is also important, and it can generate a lot of new users.