Application-Level Resource Management in Sequential and Parallel Scientific Codes
Andreas Stathopoulos
College of William and Mary
http://www.cs.wm.edu/~andreas

Many research groups nowadays rely increasingly on medium- or even small-sized clusters of workstations to perform their scientific computations. These clusters usually involve networks with much higher overheads than traditional MPPs, and they are often multiprogrammed. With the emergence of Grid computing, older clusters are often not retired but incorporated into the computational environment. Relying on a batch scheduler or the operating system to manage these resources invariably yields suboptimal solutions. We explore three ways in which scientific computing codes can manage these resources themselves: multigrain parallelism, application-level load balancing, and application-level memory management to avoid thrashing. The first two techniques apply to iterative methods (linear systems and eigenvalue problems), while our application-level memory management scheme is more general. Because the application has ultimate knowledge of its own requirements, it can vastly outperform system-based solutions.
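To make the third idea concrete, here is a minimal sketch (not the author's actual scheme) of how an application might manage its own memory to avoid thrashing: an iterative solver that keeps a basis of vectors can cap the basis size so that its working set stays within the physical memory it has measured as available. The function name, the safety fraction, and the sizing policy below are all illustrative assumptions.

```python
def max_basis_size(avail_bytes, vector_len, elem_bytes=8, safety=0.9):
    """Hypothetical policy: largest number of basis vectors whose storage
    stays below a safety fraction of the available physical memory, so the
    solver's working set never spills into swap (avoiding thrashing)."""
    vector_bytes = vector_len * elem_bytes
    return max(1, int(safety * avail_bytes) // vector_bytes)

# Example: 1 GiB of free memory and vectors of 1e6 doubles (8 MB each)
# allows a basis of 120 vectors; the solver would restart or truncate
# its basis once it reaches that limit rather than let the OS page.
limit = max_basis_size(2**30, 10**6)
```

On Linux the available-memory figure could be obtained at run time, e.g. via `os.sysconf("SC_AVPHYS_PAGES") * os.sysconf("SC_PAGE_SIZE")`; the point of the application-level approach is that the code itself, not the OS, decides how to shrink its working set.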