Machine Description
The Olympus cluster consists of 66 execution hosts (i.e., nodes) of two distinct node types: 25 nodes are based on AMD Opteron 6300-series processors, and the remaining 41 use Intel Xeon E5/E7 v3-series processors. Both node types are managed by the same batch scheduler.
| Count | Names | Model Name | CPU Freq. (GHz) | Physical (Logical) cores/node | Memory/node (GB) |
|-------|-------|------------|-----------------|-------------------------------|------------------|
| 6 | on001-on006 | AMD Opteron 6376 | 2.3 | 64 (64) | 512 |
| 18 | on007-on023, on025 | AMD Opteron 6376 | 2.3 | 64 (64) | 256 |
| 1 | on024 | AMD Opteron 6376 | 2.3 | 32 (32) | 128 |
| 20 | or001-or020 | Intel Xeon E5-2695 v3 | 2.3 | 28 (56) | 256 |
| 20 | or021-or040 | Intel Xeon E5-2695 v3 | 2.3 | 28 (56) | 128 |
| 1 | ol001 | Intel Xeon E7-8860 v3 | 2.2 | 64 (128) | 1024 |
The Intel Xeon nodes in Olympus offer three different memory configurations (128GB, 256GB, and 1TB) and feature both Gigabit Ethernet and Intel OmniPath (100Gb) networking. Due to the high speed of the OmniPath network, these nodes are best suited for tightly-coupled applications using distributed-memory parallelism. In addition, the Olympus Intel nodes access the consus BeeGFS filesystem over OmniPath, resulting in very low I/O latency and high streaming performance. This filesystem is mounted at /mnt/consus and exposed via the $DATA environment variable.
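A quick way to confirm where your data area lives (a sketch; the exact path and output will differ per user) is to inspect the environment variable and the mount from any node:

```bash
# Check that $DATA resolves to a path under the consus filesystem
echo "$DATA"

# Confirm the BeeGFS mount point and available space
df -h /mnt/consus
```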
Each Intel Xeon E5 node has two multi-core processors, and each processor has 14 physical compute cores. These processors support Intel's hyper-threading technology, so each processor presents 28 logical compute cores, for a total of 56 available cores per node. The SLURM batch scheduler exposes all 56 logical cores for scheduling. Note that it may be advantageous for some applications to schedule double the required number of cores (or the entire node) to ensure that dedicated physical cores are available to the running job. Nodes or001-or020 offer 256GB of memory, while nodes or021-or040 offer 128GB of memory (see the table above).
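For example, a minimal SLURM batch script along the following lines reserves an entire Intel E5 node so that every MPI rank gets a dedicated physical core (the walltime, task count, and application name are illustrative only):

```bash
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --exclusive        # take the whole node so no other job shares its cores
#SBATCH --ntasks=28        # one task per physical core on an or0xx node
#SBATCH --time=01:00:00    # illustrative walltime

srun ./my_mpi_app          # placeholder for your MPI executable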
This node (ol001) has four multi-core processors, and each processor has 16 physical compute cores. These processors support Intel's hyper-threading technology, so each processor presents 32 logical compute cores, for a total of 128 available cores per node. The SLURM batch scheduler exposes all 128 logical cores for scheduling. Note that it may be advantageous for some applications to schedule double the required number of cores (or the entire node) to ensure that dedicated physical cores are available to the running job. This node features 1TB of physical memory.
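As a sketch, a large-memory job could target this node by hostname and request all of its memory (the executable name is a placeholder; --mem=0 is the SLURM convention for "all memory on the node"):

```bash
#!/bin/bash
#SBATCH --nodelist=ol001      # run on the 1TB large-memory node
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=128   # all logical cores; use 64 to stay on physical cores only
#SBATCH --mem=0               # request all memory available on the node

./my_threaded_app             # placeholder for a shared-memory (threaded) application
```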
Each AMD Opteron node has four multi-core processors, and each processor has 16 compute cores, for a total of 64 compute cores per node (on024 is the exception, with 32 cores). There are three different memory configurations (512GB, 256GB, and 128GB), giving the AMD nodes roughly 1,568 compute cores and 7.6 TB of aggregate memory overall. See the Job Scheduling page for more information on the execution queues available on Olympus.
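To see how these nodes are grouped into execution queues (SLURM partitions), along with the core and memory limits the scheduler reports for each, a standard sinfo query such as the following can be used:

```bash
# Partition name, node count, CPUs per node, and memory (MB) per node
sinfo -o "%P %D %c %m"
```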
Each node runs a 64-bit CentOS 7 operating system, and all nodes are connected to one another by Gigabit Ethernet. There are multiple shared cluster filesystems available across all nodes, and each node has locally mounted disk space.
Olympus provides a flexible, multi-purpose compute cluster with several specialized partitions to accommodate a variety of workloads, including extremely high-memory multi-threaded applications, tightly-coupled message-passing applications, and trivially-parallel (or serial) batch jobs.
- These nodes are only accessible through the SLURM system (i.e., users cannot directly log into these machines)
- TIP: Use `srun --nodelist=<FQDN of node>,... --pty bash` to get an interactive session on a specific node (replace bash with your preferred shell)
- Hostnames (see table above for more details):