Addressing the storage needs of researchers in the northeast

NESE Storage for Big Science

October 31, 2020

If NESE is to become a growing and important part of a national cyberinfrastructure, we must be able to usefully provide storage for large-scale worldwide science as well as for smaller science, engineering, and educational projects.

Posted by Saul Youssef

Our first demonstration of this kind of work for NESE is our collaboration with the U.S. ATLAS Northeast Tier 2 center (NET2). ATLAS is one of the two largest experiments at the CERN Large Hadron Collider. Computing for ATLAS is done by a tightly coordinated worldwide network of computing centers that provides computing cycles and storage for the collaboration. In the U.S., the bulk of ATLAS computing is provided by four geographically distributed Tier 2 centers and a single Tier 1 center at Brookhaven National Laboratory. NET2 is one of these four U.S. Tier 2 centers, operated by Boston University in collaboration with Harvard and located at MGHPCC.

For NESE, the challenge is whether it can provide storage for NET2 that is integrated into the global ATLAS data management system. As explained in past reports, the NESE and NET2 teams have worked together to demonstrate this using a CephFS deployment within NESE. The storage is connected at 200 Gbps to the NET2 compute cluster and at 100 Gbps to the national research networks via Harvard's equipment at the MGHPCC NOX. Four Ceph Gateway nodes within NESE run NET2-managed containers that connect the NESE storage to NET2, presenting it as a standard endpoint in the global ATLAS data system. Following the MIT-funded completion in 2015 of a fiber ring connecting MGHPCC to Boston, New York City, and Albany, NY, both NET2 and NESE are well situated to reliably transmit data around the world with easily expanded multi-100 Gbps throughput.
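For a rough sense of the scale involved, here is a back-of-envelope sketch of how long it would take to move the full NET2 dataset over the external 100 Gbps link. The dataset size and link speed are the figures quoted above; the sustained-efficiency factor is purely an assumption for illustration.

```python
# Back-of-envelope: time to move the ~9 PB of NET2 data in NESE over the
# 100 Gbps external link described above. Illustrative only; the 70%
# sustained-efficiency factor is an assumption, not a measurement.

DATASET_PETABYTES = 9      # NET2 space within NESE (figure from this post)
LINK_GBPS = 100            # external connection to the research networks
EFFICIENCY = 0.7           # assumed fraction of line rate actually sustained

dataset_bits = DATASET_PETABYTES * 1e15 * 8
effective_bits_per_second = LINK_GBPS * 1e9 * EFFICIENCY

days = dataset_bits / effective_bits_per_second / 86400
print(f"~{days:.0f} days to move {DATASET_PETABYTES} PB "
      f"at {LINK_GBPS} Gbps ({EFFICIENCY:.0%} efficiency)")
```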

Ramp-up of NET2 storage in NESE over the past year. The dashboard shows NESE Ceph monitoring of the 9 PB of NET2 space within NESE. The lower part of the figure shows the central ATLAS catalogue view of NET2 space as seen from CERN. Used NESE storage is shown in blue, rapidly filling towards full capacity as of October 2020.

In the past year, NESE storage for NET2 has worked so well that the NET2 team has decided to migrate all of NET2's main storage into NESE over time. As a result, in the past year NET2 has purchased an additional 6 PB of NESE Ceph storage rather than expanding its existing GPFS deployment. At the moment, slightly more than half of the NET2 storage space is in NESE and slightly less than half remains in GPFS. NET2 plans to continue on this path, expanding further into NESE rather than replacing GPFS equipment as it is retired.

Looking ahead, the Large Hadron Collider will undergo major luminosity upgrades that will increase the worldwide storage needed by ATLAS by a factor of five, to more than 5 EB by 2028. As a result, the NET2 team is also actively working with NESE to enable 50 PB from the new NESE Tape tier to be used both as part of the global ATLAS data catalogue and as additional local storage for NET2 computing resources.
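To put those numbers in perspective, the short arithmetic sketch below uses only the figures quoted above; the derived quantities are illustrative, not official projections.

```python
# Scale of the HL-LHC storage figures quoted above (illustrative only).
# The factor-of-five increase, the >5 EB total, and the 50 PB tape
# allocation come from the post; everything else is simple arithmetic.

HL_LHC_TOTAL_EB = 5        # projected worldwide ATLAS storage by 2028
GROWTH_FACTOR = 5          # quoted increase over today's footprint
NESE_TAPE_PB = 50          # NESE Tape capacity being discussed for NET2

implied_current_eb = HL_LHC_TOTAL_EB / GROWTH_FACTOR
tape_share = NESE_TAPE_PB / (HL_LHC_TOTAL_EB * 1000)   # 1 EB = 1000 PB

print(f"Implied current worldwide footprint: ~{implied_current_eb:.0f} EB")
print(f"50 PB of NESE Tape is ~{tape_share:.0%} of the projected 2028 total")
```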

 
