compute cluster

The Darwin compute cluster was created to model biological diversity and plankton population changes in the upper ocean. The computational core is a 512-core cluster capable of 6 TFLOPS, with a terabyte of memory and half a petabyte of storage. A 250-megapixel graphics display wall is used to visualize results, and the system is connected to the US National Lambda Rail via a 10 Gb/s link.
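
As a rough consistency check (the per-core figures here are assumptions, not stated specifications): the x3550 nodes described under hardware each have two dual-core CPUs, so 512 cores implies 128 nodes, and 128 nodes × 8 GB gives the terabyte of aggregate memory. Assuming 4 double-precision floating-point operations per cycle per core, typical of Woodcrest-class Xeons, peak throughput works out to 512 cores × 3.0 GHz × 4 FLOPs/cycle ≈ 6.1 TFLOPS, consistent with the 6 TFLOPS figure.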

Systems manager Al Davis manages the cluster. The front-end, login node is named beagle, after H.M.S. Beagle, the ship Charles Darwin traveled on during his explorations.

hardware

The compute nodes are IBM System x3550 servers: 1U, rack-mounted machines, each with two dual-core 3.0 GHz CPUs and 8 GB of memory. They make a lot of noise and produce a lot of heat. Some details on the IBM System x machines used for the compute cluster can be found here. The front-end, login node is an IBM System x3650, a 2U equivalent of the compute nodes with two dual-core 3.0 GHz Woodcrest CPUs and 10 GB of main memory. This is the system you access when you first log in to the cluster.
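
For example, assuming your account name is username (hypothetical) and that the short hostname beagle resolves at your site (the fully qualified hostname isn't given here; check the wiki below), logging in is a standard SSH session:

    ssh username@beagle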

high-speed network

The compute nodes are connected with a Myrinet-10G network. A file server provides 500 TB of disk space on a high-performance GPFS parallel filesystem.
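
A minimal MPI job is a common way to exercise the interconnect. The sketch below is a generic example, assuming an MPI implementation with the usual mpicc and mpirun wrappers is installed; the original doesn't say which MPI stack is layered over the Myrinet-10G fabric, so check the wiki for the exact compiler and launcher to use.

    /* hello_mpi.c -- print each rank and the node it landed on */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
        MPI_Get_processor_name(name, &len);    /* node the process landed on */
        printf("rank %d of %d running on %s\n", rank, size, name);
        MPI_Finalize();
        return 0;
    }

Compiled and launched in the usual way, for example:

    mpicc hello_mpi.c -o hello_mpi
    mpirun -np 8 ./hello_mpi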

software

The compute cluster runs an installation of the ROCKS cluster software. ROCKS is an open-source Linux cluster distribution that enables end users to easily build computational clusters, grid endpoints and visualization tiled-display walls. Learn more about ROCKS from the RocksClusters.org site.
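
Once logged in to beagle, the ROCKS tools are a quick way to get oriented. The commands below are standard ROCKS utilities, but the installed version isn't stated here, so treat these as illustrative and check the wiki for what is actually available:

    # list the hosts the front end knows about
    rocks list host

    # run a command on every compute node
    cluster-fork uptime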

More information about the compute cluster, including help and support and a list of available software packages, can be found on the Darwin Computational Facility Wiki.