HPE has scored another supercomputing win with the inauguration of the LUMI system at the IT Center for Science (CSC) in Finland; as of this month, LUMI is ranked as Europe's most powerful supercomputer.
Picture from LUMI data center construction site at CSC’s data center in Kajaani, Finland. Image: Esa Heiskanen, CSC
LUMI – the Finnish word for snow – is the first pre-exascale system under the EuroHPC Joint Undertaking and is based on HPE's Cray EX hardware architecture. The system is installed at CSC's datacenter in Kajaani.
It is owned by the EU's EuroHPC Joint Undertaking (JU). Half of the total €202 million ($210 million) budget for the beast comes from the EU, with a quarter coming from Finland and the rest from the remaining members of the 10-country consortium involved.
The system is intended to serve as a platform for international research cooperation and for the development of artificial intelligence and quantum technology, according to the CSC. It is also expected to be used for the usual mix of scientific projects such as climate change simulations and medical research, but 20 percent of its capacity is intended to be available for industrial research and development activities, including small to medium enterprises (SMEs).
“LUMI will help solve societal challenges,” said CSC managing director Kimmo Koski, “including climate, life sciences, medical, and there are of course many others.”
He added that the system will be used for applications involving high performance computing (HPC), AI and data analytics, but also “where these meet and merge”, which has been a common thread among HPC projects for the past several years.
As of 30 May, LUMI had already taken third spot on the current Top500 list of the world's fastest supercomputers, achieving a High-Performance Linpack (HPL) score of 151.9 petaflops in benchmarks disclosed at the recent ISC22 conference in Hamburg.
However, not all of the cabinets have been filled yet: LUMI's GPU partition is not yet fully installed. Once it is, the system's performance is expected to grow to about 375 petaflops, with peak performance potentially exceeding 550 petaflops.
A second pilot phase for selected users is scheduled to start in August, with the complete system expected to be generally available for users in late September.
As well as being intended for research to help tackle climate change, LUMI is also claimed to have green credentials by being run entirely from hydroelectric power, while the waste heat generated by the system contributes to heating nearby homes in the Kajaani area.
Anita Lehikoinen, Finland's permanent secretary for education and culture, said that LUMI would be hugely beneficial for the volume of scientific research done in the country.
“It is important for Finland to be seen as an attractive destination for science and research,” she said, adding that the country planned to increase spending on research and innovation to 4 percent of GDP by 2030, calling it “a worthwhile investment.”
The HPE Cray EX architecture [PDF] from which LUMI is built is a blade-based, high-density design comprising multiple liquid-cooled datacenter cabinets. Each cabinet holds eight compute chassis, and each chassis fits eight blades, for up to 64 compute blades and up to 512 processors per cabinet.
Each cabinet can also hold up to eight switch chassis fitted with HPE Slingshot interconnect switch blades.
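The per-cabinet figures quoted above multiply out consistently; a minimal arithmetic sketch of the capacities named in the Cray EX description (illustrative only, not part of HPE's documentation):

```python
# Per-cabinet capacity figures for the HPE Cray EX architecture,
# as quoted in the article (illustrative arithmetic only).
CHASSIS_PER_CABINET = 8
BLADES_PER_CHASSIS = 8
PROCESSORS_PER_CABINET_MAX = 512

blades_per_cabinet = CHASSIS_PER_CABINET * BLADES_PER_CHASSIS
processors_per_blade = PROCESSORS_PER_CABINET_MAX // blades_per_cabinet

print(blades_per_cabinet)    # 64 compute blades per cabinet
print(processors_per_blade)  # implies up to 8 processors per blade
```

The 512-processor ceiling divided across 64 blades implies up to eight CPUs per blade, consistent with multiple dual-socket nodes sharing each blade.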
According to CSC, the CPU-only partition comprises 1,536 dual-socket CPU nodes, each featuring AMD "Milan" Epyc processors and between 256GB and 1,024GB of memory.
The GPU partition has 2,560 nodes, each featuring a single custom AMD “Trento” Epyc chip and four AMD MI250X GPUs.
LUMI also has 64 Nvidia A40 GPUs installed for visualization workloads, and sports a large-memory partition whose nodes have 32TB of memory between them.
The storage layer of LUMI is based on the Cray Clusterstor E1000 system and the Lustre file system, with 8 petabytes of flash and 80 petabytes of hard disk space. LUMI also has 30 petabytes of Ceph-based object storage.
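Adding up the storage tiers listed gives LUMI's total capacity; a quick sketch using the petabyte figures cited by CSC (simple totals, not an official breakdown):

```python
# LUMI storage tiers as listed by CSC, in petabytes (simple totals only).
flash_pb = 8     # ClusterStor E1000 flash tier
disk_pb = 80     # ClusterStor E1000 hard-disk tier
object_pb = 30   # Ceph-based object storage

lustre_total_pb = flash_pb + disk_pb
print(lustre_total_pb)              # 88 PB across the Lustre tiers
print(lustre_total_pb + object_pb)  # 118 PB including object storage
```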
According to CSC, the entire LUMI installation occupies nearly 300 square meters (about 3,229 square feet) of space, roughly the same area as two tennis courts.
HPE was recently involved in the Venado supercomputer project for the Los Alamos National Laboratory, and is working with European microprocessor designer SiPearl to jointly develop a supercomputer using SiPearl’s Arm-based Rhea processor.
Last month, HPE and Cerebras Systems unveiled a new AI supercomputer in Munich, Germany, using HPE’s Superdome Flex, while HPE itself inaugurated the Champollion supercomputer at HPE’s Center of Excellence in Grenoble, France, using AMD-based Apollo computer nodes and Nvidia GPUs. ®