Parallel computation is a well-established and essential tool for large-scale scientific computation. Adaptive computation, where effort is concentrated in "interesting" parts of the domain, has been recognized as a means of providing reliable results for demanding problems with minimal resources. Adaptivity requires explicit parallelization and frequent dynamic load balancing. Load balancing procedures must be fast, must produce high-quality partitions, and should be incremental, moving as little data as possible when rebalancing. The Zoltan library for partitioning and load balancing provides a common interface to a suite of state-of-the-art partitioning and dynamic load balancing algorithms. The talk will describe a typical parallel adaptive computation and the distributed data structures and dynamic load balancing procedures used.
Such computations are performed on computers ranging from single-node multiprocessors, networks of workstations, and small clusters to large, tightly coupled supercomputers and widely distributed computational grids. Software is often developed to run efficiently in a specific parallel computing environment. When the software is moved to other systems, efficiency may be sacrificed to save the effort needed to optimize it for the new environment. The Dynamic Resource Utilization Model (DRUM), which provides capabilities to tailor partitions to a dynamic and heterogeneous computational environment, will be discussed.