Fujitsu announced that it has won an order to provide a new supercomputer system to the Information Technology Center at Nagoya University.
The supercomputer will be designed as a hybrid computation server system comprising three different computer architectures: the SPARC Enterprise M9000 UNIX server, the HX600 technical computing server, and the FX1 high-end technical computing server. The combined theoretical peak performance of the three systems is approximately 60 teraflops.
The supercomputer will be a shared system used by Nagoya University as well as other research institutions and corporations, and will begin operation in two stages, the first in May 2009 and the second in October 2009.
The Information Technology Center at Nagoya University is an academic research facility open to university faculty as well as researchers from other research institutions and corporations throughout Japan. Until now, the center has used Fujitsu's PRIMEPOWER HPC2500, deployed in March 2005 with what was then Japan's largest memory capacity. In recent years, however, two distinct needs have emerged: more memory for massive computations, and a larger number of CPUs for parallel computations. To accommodate both, the center decided to deploy a new hybrid system spanning three different architectures, enabling it both to boost computation capacity and to broaden the reach of parallel computing to more users.
The new supercomputer is a hybrid system comprising three different types of computation servers. It will use Fujitsu’s Parallelnavi HPC middleware, HPC Portal, and Management Portal to integrate the servers so that users experience them as a seamless, single system.
System 1, a large-scale symmetric multiprocessing (SMP) computation server based on the SPARC Enterprise M9000, is the successor to the current PRIMEPOWER HPC2500 and uses quad-core SPARC64 VII processors developed by Fujitsu. Users of the existing system will be able to migrate their applications easily. With 3 nodes (96 CPUs, 384 cores), one terabyte of shared memory per node, and a theoretical peak performance of 3.84 teraflops, the system can handle extremely large-scale computations.
System 2, a Linux-based large-scale PC cluster built on the HX600, follows "open supercomputer" specifications using off-the-shelf x86 technology. It features 160 nodes (640 CPUs, 2,560 cores), 10 terabytes of total memory, and a theoretical peak performance of 25.6 teraflops. As a general-purpose system, it can run a wide range of application software, making it accessible to a broad base of researchers.
System 3, a large-scale distributed UNIX computation server based on the FX1, features 768 nodes (768 CPUs, 3,072 cores), 24 terabytes of total memory, and a theoretical peak performance of 30.72 teraflops. It will be used to develop applications required for future high-performance computing, such as multi-core support and high degrees of parallelism.
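The peak figures for the three systems are mutually consistent: each works out to 10 gigaflops per core, and the totals sum to roughly 60 teraflops. A short sketch in Python checks this arithmetic; the 10 GFLOPS/core rate is derived from the announced numbers themselves, while its decomposition into clock speed and FLOPs per cycle is an assumption not stated in the announcement.

```python
# Sketch: check the announced theoretical peak figures from core counts.
# The 10 GFLOPS/core rate is implied by the announced numbers; splitting it
# as 2.5 GHz x 4 FLOPs/cycle is an assumption, not from the press release.
GFLOPS_PER_CORE = 2.5 * 4  # assumed clock (GHz) x assumed FLOPs per cycle

# Core counts as stated in the announcement.
systems = {
    "System 1 (SPARC Enterprise M9000)": 384,    # 3 nodes, 96 CPUs
    "System 2 (HX600 cluster)":          2560,   # 160 nodes, 640 CPUs
    "System 3 (FX1)":                    3072,   # 768 nodes, 768 CPUs
}

total_tflops = 0.0
for name, cores in systems.items():
    tflops = cores * GFLOPS_PER_CORE / 1000.0  # GFLOPS -> TFLOPS
    total_tflops += tflops
    print(f"{name}: {tflops:.2f} TFLOPS")

print(f"Combined: {total_tflops:.2f} TFLOPS")  # ~60 TFLOPS, as announced
```

Running this yields 3.84, 25.60, and 30.72 teraflops for the three systems, totaling 60.16 teraflops, which matches the announcement's combined figure of approximately 60 teraflops.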
Additionally, the supercomputer will include an ETERNUS2000 model 200 disk array storage system with 1.15 petabytes of physical storage.