Setting up a ParaView Server




ParaView is designed to work well in client/server mode. In this way, users can take full advantage of a shared remote high-performance rendering cluster without leaving their offices. This document is designed to help you get started with building and setting up your own ParaView server. It also serves as a collection point for the "tribal knowledge" acquired to make parallel rendering and other aspects of parallel and client/server processing most efficient.

== Compiling ==

Ideally, we would like to provide precompiled binaries of ParaView for all of our users to make installing it more convenient. Unfortunately, the large variety of hardware, operating systems, and MPI implementations makes this task impossible. Thus, if you wish to use ParaView on a parallel server, you will have to compile ParaView from source.

After downloading ParaView, follow the Building and Installation instructions. When following these instructions, be sure to compile in MPI support by setting the PARAVIEW_USE_MPI CMake flag to ON and setting the appropriate paths to the MPI include directory and libraries.

One problem many people face when compiling with MPI is that their MPI implementation provides multiple libraries, many of which are required when compiling ParaView. If there are only two such libraries, you can add them separately in the MPI_LIBRARY and MPI_EXTRA_LIBRARY CMake variables. If you need to link in more than two libraries, you can specify multiple libraries in the MPI_LIBRARY variable by separating them with semicolons (<tt>;</tt>). You can apply the same trick to the MPI_INCLUDE_PATH to specify several include directories.
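As a concrete illustration, a configure line for a hypothetical MPI installation under /opt/mpich (substitute the paths of your own implementation) might look like the following. The quotes keep the shell from treating the semicolons as command separators.

 cmake \
   -DPARAVIEW_USE_MPI:BOOL=ON \
   -DMPI_INCLUDE_PATH:PATH="/opt/mpich/include;/opt/mpich/include/mpi2" \
   -DMPI_LIBRARY:FILEPATH="/opt/mpich/lib/libmpich.a;/opt/mpich/lib/libpmpich.a" \
   -DMPI_EXTRA_LIBRARY:FILEPATH=/opt/mpich/lib/libmpichcxx.a \
   ../ParaView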

=== OSMesa support ===

== Running the Server ==

=== pvserver vs. pvrenderserver and pvdataserver ===

=== X Connections ===

One of the most common problems people have with setting up the ParaView server is allowing the server processes to open windows on the graphics card of each process's node. When ParaView needs to do parallel rendering, each process creates a window that it uses to render. This window is necessary because an X window is required before an OpenGL context can be created on the graphics hardware.

There is a way around this. If you are using Mesa as your OpenGL implementation, then you can also use the supplemental OSMesa library to create an OpenGL context without an X window. However, Mesa is strictly a CPU rendering library, so you should use the OSMesa solution only if your server does not have rendering hardware. If your cluster does not have graphics hardware, compile ParaView with OSMesa support and use the --use-offscreen-rendering flag when launching the server.
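For example, on such a cluster the server might be launched like this (assuming a four-process job and an OSMesa-enabled pvserver build):

 mpirun -np 4 ./pvserver --use-offscreen-rendering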

Assuming that your cluster does have graphics hardware, you will need to establish the following three things.

# Have xdm run on each cluster node at startup. Although xdm is almost always run at startup on workstation installations, it is not as common on cluster nodes. Talk to your system administrators for help in setting this up.
# Disable all security on the X server. That is, allow any process to open a window on the X server without having to log in. Again, talk to your system administrators for help.
# Make sure each process is run with the DISPLAY environment variable set to localhost:0 (or just :0).

To satisfy the last condition, it is often helpful to use the env program in the mpirun call. So you would have something like:

 mpirun -np 4 /bin/env DISPLAY=localhost:0 ./pvserver

An easy way to test your setup is to use the glxgears program. Unlike pvserver, it will quickly tell you (by failing to start) if it cannot connect to the local X server.

 mpirun -np 4 /bin/env DISPLAY=localhost:0 /usr/X11R6/bin/glxgears

== Pitfalls ==

Here we capture the most common problems people run into with setting up client/server.

=== Specifying multiple MPI include directories ===

You can add multiple directories to the MPI_INCLUDE_PATH CMake variable by separating them with semicolons (<tt>;</tt>). See the [[#Compiling]] section for more details.
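For example, a hypothetical installation with two include directories could be specified as:

 -DMPI_INCLUDE_PATH:PATH="/opt/mpich/include;/opt/mpich/include/mpi2"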

=== Specifying multiple MPI libraries ===

You can use both the MPI_LIBRARY and MPI_EXTRA_LIBRARY CMake variables for specifying MPI libraries. You can also add multiple libraries to MPI_LIBRARY by separating the files with semicolons (<tt>;</tt>). See the [[#Compiling]] section for more details.
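For example, with hypothetical library paths:

 -DMPI_LIBRARY:FILEPATH="/opt/mpich/lib/libmpich.a;/opt/mpich/lib/libpmpich.a"
 -DMPI_EXTRA_LIBRARY:FILEPATH=/opt/mpich/lib/libmpichcxx.a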

=== ParaView crashes on large data ===