SCRIPPS OCEANOGRAPHY
  • University of California-San Diego
  • 9500 Gilman Drive
  • La Jolla CA 92093-0532 USA
  • Tel: 1-858-534-0000
  • Fax: 1-858-534-0000

Facility

Introduction

The original COMPAS cluster (funded by an NSF MRI grant) has been leveraged to build a larger computing facility, the COMPAS compute facility. Three additional large clusters have been purchased with ONR, NOAA, and NSF MRI funding over the compute center's history, and the current clusters are housed and managed together at SIO. The resources of the COMPAS clusters are shared among the PIs and students in the COMPAS project. Access to these resources is facilitated by the COMPAS Director (originally Detlef Stammer and later Bruce Cornuelle) and the COMPAS System Manager (Caroline Papadopoulos).

Hardware

The COMPAS compute facility has expanded and currently houses over 1,600 processors and roughly 144 TB of associated raw NFS storage:

Machine type                          | Total processors | Memory per processor | Peak Gflops | Network
COMPAS Cluster                        |                  |                      |             |
  3.06 GHz Xeon, single core          | 192              | 1 GB                 | 1,175       | 1 Gbit Ethernet + Myrinet 2000
  2.8 GHz Xeon, single core           | 208              | 1 GB                 | 1,164       | 1 Gbit Ethernet + Myrinet 2000
  Total for COMPAS                    | 400              | 400 GB (total)       | 2,339       |
ATLAS Cluster                         |                  |                      |             |
  2.66 GHz dual quad-core Intel 5430  | 1024             | 1 GB                 | 10,895      | 1 Gbit Ethernet + Myrinet 10G
  2.66 GHz dual quad-core Intel 5430  | 80               | 2 GB                 | 851         | 1 Gbit Ethernet
  2.66 GHz dual quad-core Intel 5430  | 8                | 4 GB                 | 85          | 1 Gbit Ethernet + Myrinet 10G
  2.33 GHz quad-core Intel 5520       | 80               | 1.5 GB               | 745         | 1 Gbit Ethernet + Myrinet 10G
  Total for ATLAS                     | 1192             | 1,336 GB (total)     | 12,576      |
Mermaid Test Cluster                  |                  |                      |             |
  AMD Opteron                         | 16               |                      |             | 1 Gbit Ethernet
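
The Peak Gflops column can be cross-checked as cores times clock rate times floating-point operations per cycle. The short sketch below assumes 2 flops/cycle for the older single-core Xeons and 4 flops/cycle for the 5400/5500-series parts; these per-cycle figures are assumptions about SSE width rather than numbers taken from the facility's records, and the results land within a Gflop or two of the table because of rounding.

    # Rough cross-check of the Peak Gflops column: cores x clock (GHz) x flops/cycle.
    # The flops-per-cycle values (2 for the older single-core Xeons, 4 for the
    # 5400/5500-series parts) are assumptions, not vendor-quoted figures.
    rows = [
        ("3.06 GHz Xeon (COMPAS)",       192, 3.06, 2),
        ("2.8 GHz Xeon (COMPAS)",        208, 2.80, 2),
        ("2.66 GHz Intel 5430 (ATLAS)", 1024, 2.66, 4),
        ("2.66 GHz Intel 5430 (ATLAS)",   80, 2.66, 4),
        ("2.66 GHz Intel 5430 (ATLAS)",    8, 2.66, 4),
        ("2.33 GHz Intel 5520 (ATLAS)",   80, 2.33, 4),
    ]
    for name, cores, ghz, flops_per_cycle in rows:
        print(f"{name:30s} ~{cores * ghz * flops_per_cycle:8.0f} Gflops")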

Number of NFS servers | Raw capacity per server | RAID type        | Network
ATLAS Storage         |                         |                  |
  2                   | 48 TB                   | ZFS RAID         | 1 Gbit Ethernet + Myrinet 10G
  4                   | 876 GB                  | Software RAID-5  | 1 Gbit Ethernet
  1                   | 584 GB                  | Software RAID-5  | 1 Gbit Ethernet
  2                   | 504 GB                  | ZFS RAID         | 1 Gbit Ethernet
  Total for ATLAS     | 101.1 TB (total raw)    |                  |
COMPAS Storage        |                         |                  |
  2                   | 18 TB                   | ZFS RAID         | 1 Gbit Ethernet
  4                   | 876 GB                  | Software RAID-5  | 1 Gbit Ethernet
  2                   | 1600 GB                 | Software RAID-1  | 1 Gbit Ethernet
  1                   | 730 GB                  | Hardware RAID-5  | 1 Gbit Ethernet
  Total for COMPAS    | 43.4 TB (total raw)     |                  |
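
The storage totals are simply the per-server capacities multiplied by the server counts; a quick sketch of that sum, taking 1 TB as 1,000 GB:

    # Raw NFS capacity summed from the storage table above (1 TB taken as 1000 GB).
    atlas_gb  = 2 * 48_000 + 4 * 876 + 1 * 584 + 2 * 504     # ATLAS servers
    compas_gb = 2 * 18_000 + 4 * 876 + 2 * 1600 + 1 * 730    # COMPAS servers
    print(f"ATLAS  : {atlas_gb / 1000:5.1f} TB")              # ~101.1 TB
    print(f"COMPAS : {compas_gb / 1000:5.1f} TB")             # ~43.4 TB
    print(f"Total  : {(atlas_gb + compas_gb) / 1000:5.1f} TB")  # ~144.5 TB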

General Description

Two independent switch fabrics define the two main clusters. The 200-node COMPAS cluster is 32-bit and uses two interconnected 128-port Myrinet 2000 switches for its high-performance network. The ATLAS cluster is 64-bit and uses a single Myrinet 10G switch for its high-speed network. This heterogeneous collection represents several major acquisitions over the past several years. The total facility (400 Xeon plus 1192 Intel 5400- and 5500-series processors) has a theoretical peak speed of nearly 15 TeraFlops (TF). The configuration of machines is defined by our targeted workload, allowing us to make memory vs. network vs. compute-power trade-offs more easily than more general-purpose installations can. Our 18 storage servers all use hardware or software RAID-5, RAID-1, or ZFS RAID and run the standard NFS (Network File System) protocol, giving adequate performance. The models used by COMPAS researchers have been coded to take advantage of node-local disks to dramatically improve performance. Storage performance is an acknowledged weakness for clusters, but by load-balancing the NFS servers we are able to work around this weak link. Our two largest NFS servers use our 10G network to provide extra performance and bandwidth to help with these issues.
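
As one illustration of the node-local-disk pattern mentioned above, a run can perform its heavy I/O on local scratch and copy results to the NFS-mounted home area only once, at the end. The sketch below uses placeholder paths and a stand-in model routine; it is not the facility's actual directory layout or model code.

    # Minimal sketch of the node-local scratch pattern: do heavy I/O on the node's
    # local disk, then move results to the NFS-mounted home area once at the end.
    # The paths and the model routine are placeholders, not the real setup.
    import os
    import shutil
    import tempfile

    NFS_RESULTS = os.path.expanduser("~/run_output")   # NFS-mounted destination (placeholder)
    LOCAL_SCRATCH = tempfile.mkdtemp(dir="/tmp")       # node-local working area (placeholder)

    def run_model(workdir):
        """Stand-in for a model run that writes its output files into workdir."""
        with open(os.path.join(workdir, "output.nc"), "w") as f:
            f.write("model output placeholder\n")

    run_model(LOCAL_SCRATCH)                           # heavy I/O stays on the local disk
    os.makedirs(NFS_RESULTS, exist_ok=True)
    for name in os.listdir(LOCAL_SCRATCH):             # one bulk copy back to NFS at the end
        shutil.copy2(os.path.join(LOCAL_SCRATCH, name), NFS_RESULTS)
    shutil.rmtree(LOCAL_SCRATCH)                       # free the node-local disk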

General Network

The COMPAS facility has a single 1 Gbit/s network connection to the campus backbone. In addition, a single 10 Gbit/s connection is available to the OptIPuter network (a campus- and national-scale research network funded by NSF).

Job Characteristics

Computing jobs that run on the COMPAS facility are generally characterized as long-running, mid-sized parallel applications with processor counts of 32 to 512 CPUs. Jobs using between 64 and 256 CPUs are common, as this range marks where parallel efficiency begins to drop off in this configuration. Runs are often long-lived, typically several days (3-5+). The COMPAS computing facility assigns these long-lived runs to dedicated processors, so jobs run with little intervention for weeks at a time without queue waits. Jobs are distributed with one process per processor, and each process must fit into main memory to attain acceptable performance. We have found it quite cost-effective to distribute individual user accounts across medium-sized, gigabit-connected I/O servers and larger 10-gigabit-connected servers. Multiple I/O "pipes" mitigate interference between users running different codes. We use hardware/software RAID-5 with hot-swap spares, software RAID-1, or ZFS RAID on these servers to help minimize the risk of data loss. Smaller jobs can also be assigned to the older test cluster in a development environment that sometimes uses queues.
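
Because each job runs one process per processor and each process must fit within that processor's memory, a proposed run can be sanity-checked against the hardware table before launch. The sketch below uses illustrative numbers; the per-process memory footprints are hypothetical, not measured model requirements.

    # Check a proposed decomposition against the one-process-per-processor rule
    # and the per-processor memory listed in the hardware table. The 0.8 GB and
    # 1.5 GB footprints below are illustrative values only.
    def fits(n_processes, mem_per_process_gb, cores_available, mem_per_core_gb):
        """True if the job can run one process per core without exceeding memory."""
        return n_processes <= cores_available and mem_per_process_gb <= mem_per_core_gb

    # A 256-process run on the 1 GB/core ATLAS nodes:
    print(fits(256, 0.8, cores_available=1024, mem_per_core_gb=1.0))   # True: fits
    print(fits(256, 1.5, cores_available=1024, mem_per_core_gb=1.0))   # False: would swap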

Mid-range facility

COMPAS fits within the hierarchy of computer centers as a mid-range facility. Larger supercomputing centers are important resources, but our usage patterns often conflict with their stated mission. For example, SDSC's web site states: "SDSC's machines are a national resource, allocations are assigned on the basis of scientific merit and on the inability of other, less-capable computing sites to perform the work." In this context, the COMPAS compute facility is a "less-capable computing site". Yet, in aggregate, the current facility is able to deliver nearly 14 million CPU-hours/year. If COMPAS computational science were shifted to a national resource such as SDSC, it would detract from larger jobs that require the massive resources available at national centers. The codes that run on the COMPAS compute facility are all somewhat similar in their use of computing resources. This has allowed us to choose machine configurations optimally (a specific balance of Flops/disk/memory) to achieve high performance. It has taken considerable human investment to port, tune, and develop the codes to run efficiently on the COMPAS clusters.
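
The 14 million CPU-hours/year figure above follows directly from the core counts in the hardware table and the number of hours in a year, assuming the dedicated-processor model keeps the machines essentially fully busy; a quick sketch:

    # Aggregate annual capacity: total cores x hours per year, assuming the
    # dedicated-processor model keeps the machines essentially fully utilized.
    cores = 400 + 1192 + 16            # COMPAS + ATLAS + Mermaid test cluster
    hours_per_year = 365 * 24
    print(f"{cores * hours_per_year / 1e6:.1f} million CPU-hours/year")  # ~14.1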