Cluster

The computing cluster comprises 17 HP ProLiant servers. The compute nodes together contain 32 quad-core AMD Opteron processors with 8 GB of RAM each, while the head node is fitted with a 24-core AMD processor and 32 GB of RAM.

Compute Nodes: 32 quad-core AMD Opteron processors across SMP servers, with 256 GB cumulative memory and 2 TB storage.

Front-end Node: 12-core AMD Opteron server with 32 GB memory and 2 TB consolidated storage.

Node-to-Node Interconnect: Gigabit Ethernet network.

Cluster Provisioning and Management: Virtualization through VMware ESXi and vSphere vCenter.

Operating System: SuSE Linux Enterprise Server 10/11 (x64)

Development Environment: MPI compilers, Fluent, CFX, ICEM CFD, Tecplot, OpenFOAM, FreeCFD, FASTEST.

A cluster system with the following specifications is housed in the CTFL:

System Integrator: Hewlett Packard

Model: HP ProLiant DL385 G5

Platform: Quad-core AMD Opteron processors running SuSE Linux

The cluster system comprises 17 nodes, of which 16 are designated as compute nodes for the execution of parallel programs and jobs, while the remaining node serves as the head node for interactive access and job scheduling. Each of the 17 nodes is powered by two quad-core AMD Opteron CPUs running at a clock speed of 2.4 GHz, giving a cumulative total of 34 quad-core processors. Each compute node is equipped with 16 GB of high-speed ECC memory and two (02) SAS drives, each with a storage capacity of 72 GB. To ensure fault tolerance and availability, RAID 1 has been configured on the compute nodes. The head node is quite similar to the compute nodes, except that it has been enhanced with 32 GB of memory and increased secondary storage in the form of five (05) SATA drives, each with a capacity of 500 GB. This gives a cumulative storage of 2 TB, and we exploited the hardware redundancy by configuring RAID 5 on the head node.
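
To make the usable-capacity figures concrete: RAID 1 mirrors one drive onto the other, so each compute node exposes 72 GB of usable storage (one drive's worth), while RAID 5 consumes one drive's worth of capacity for parity, so the head node's five 500 GB drives yield (5 − 1) × 500 GB = 2 TB of usable space while tolerating a single drive failure.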

The head node serves as a central hub for scheduling parallelized jobs and subsequently collecting results from the compute nodes, using MPI as the communication mechanism. Our selection of quad-core AMD Opteron processors has been quite beneficial: their Direct Connect Architecture eliminates the bottlenecks inherent in traditional front-side-bus processor architectures, providing increased throughput and better system scaling. The architecture is especially designed for multi-threaded and multitasking environments, which is the prime focus area of our CFD application. Since our application is compute-intensive rather than communication-bound, HP ProCurve Gigabit Ethernet switches provide sufficient bandwidth for the communication links between the nodes.
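
As an illustration of this scheduling-and-collection pattern, the sketch below is a minimal C/MPI example, not the lab's actual solver code: rank 0 plays the head node's role, handing a placeholder work unit to each compute rank and gathering the results, with the "work" (squaring an index) standing in for a real CFD computation.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, w;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* Head node: hand one work unit to each compute rank... */
        for (w = 1; w < size; w++) {
            int unit = w;  /* placeholder work unit */
            MPI_Send(&unit, 1, MPI_INT, w, 0, MPI_COMM_WORLD);
        }
        /* ...then collect the results from all workers. */
        double result, total = 0.0;
        for (w = 1; w < size; w++) {
            MPI_Recv(&result, 1, MPI_DOUBLE, w, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            total += result;
        }
        printf("collected %d results, total = %f\n", size - 1, total);
    } else {
        /* Compute node: receive a unit, process it, send the result back. */
        int unit;
        double result;
        MPI_Recv(&unit, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        result = (double)unit * (double)unit;  /* stand-in for real work */
        MPI_Send(&result, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

A program like this is compiled with an MPI wrapper compiler and typically launched with mpirun across a host list, so that rank 0 runs on the head node and the remaining ranks run on the compute nodes.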

A relatively new concept that we implemented on this cluster system is virtualization. Virtualization is a proven software technology that is rapidly transforming the IT landscape and fundamentally changing the way people compute. Given the rapid advancements in hardware technology, most modern-day machines (desktops and servers alike) remain vastly underutilized. Virtualization allows us to run multiple virtual machines (VMs) on a single physical machine, sharing the resources of that one computer across multiple environments. Different virtual machines can run different operating systems and multiple applications on the same physical computer.