Data Intensive Computing with the Pawsey Supercomputing Centre (PSC)


October 11, 2022 / Isabelle Sitchon



The Pawsey Supercomputing Research Centre (PSC), an HPC joint venture located in Australia, currently supports around 200 research projects in more than 15 scientific fields, including radio astronomy. The center’s main low-frequency radio telescope, the Murchison Widefield Array, has driven many advances in radio astronomy research, including studies of space situational awareness, fast radio bursts, and solar imaging, among others.

Future endeavors in radio astronomy, however, may demand a higher level of technological capability. Aiming to “conduct transformational science,” the PSC is one of two sites hosting the Square Kilometre Array (SKA) project, the world’s largest radio telescope.

In a seminar to the HPC Society of Professionals, Ugo Varetto, Chief Technology Officer at the PSC, explained the workflow of generating scientific data from radio telescopes and gave an overview of how data processing pipelines are evolving to support radio astronomy at the scale of the SKA.

The modern radio telescope is an interferometer: radio signals from clusters of antennas are aggregated into raw visibilities, which are then calibrated and sampled into “dirty images” for scientific analysis. The current classical workflow, which lets researchers experiment with novel ways of reconstructing images from raw visibilities, is not automated, requiring scientists to rely on and manage third-party tools. While present data sets are small enough for the classical workflow to handle, it will not scale with the growing size of future data products.
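To make the imaging step concrete, here is a minimal sketch in Python/NumPy of how a “dirty image” can be formed from visibilities: samples are gridded onto a uv-plane and inverse-Fourier-transformed into an image. This illustrates the general technique only, not the MWA or ASKAP pipeline; all sizes and values below are hypothetical.

```python
# Minimal, illustrative sketch of interferometric "dirty image" formation.
# Not an actual observatory pipeline; all parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

N = 256        # image/grid size in pixels
n_vis = 5000   # number of simulated visibility samples

# Simulated uv-coordinates (baselines, in grid units) and complex visibilities.
u = rng.integers(-N // 2, N // 2, n_vis)
v = rng.integers(-N // 2, N // 2, n_vis)
vis = rng.normal(size=n_vis) + 1j * rng.normal(size=n_vis)

# Grid the visibilities: accumulate each sample into its nearest uv-cell.
grid = np.zeros((N, N), dtype=complex)
np.add.at(grid, (u + N // 2, v + N // 2), vis)

# The dirty image is the inverse Fourier transform of the gridded visibilities.
dirty = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(grid))).real

print(dirty.shape)  # (256, 256)
```

A real pipeline wraps this step in calibration beforehand and deconvolution (e.g., CLEAN) afterward, which is where the third-party tools of the classical workflow come in.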

Several projects are currently underway to automate the radio telescope’s processing through machine learning techniques. Varetto explained the SKA project’s new storage architecture model, which will support the classical workflow and make visibility data available to researchers around the world. Furthermore, Varetto discussed a new workflow implemented at the Australian Square Kilometre Array Pathfinder (ASKAP), a survey radio telescope at the Murchison Radio-astronomy Observatory (MRO). This pipeline has been implemented on top of dedicated ingest nodes as a regular HPC application.
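The seminar did not detail the ASKAP code itself, but the shape of a pipeline built on dedicated ingest nodes can be illustrated with a small, hypothetical sketch: producer processes stand in for ingest nodes streaming visibility chunks onto a queue that a processing stage consumes, structured like any regular HPC application. Everything below is an assumption for illustration, not ASKAP’s implementation.

```python
# Hypothetical sketch of an ingest-node pipeline: producer processes stand in
# for dedicated ingest nodes streaming visibility chunks to a processing stage.
# Illustrative only; not the actual ASKAP pipeline.
import multiprocessing as mp

import numpy as np

N_INGEST = 4         # pretend ingest nodes (assumption)
CHUNKS_PER_NODE = 8  # visibility chunks each node emits (assumption)


def ingest(node_id, queue):
    """Simulate one ingest node streaming complex visibility chunks."""
    rng = np.random.default_rng(node_id)
    for _ in range(CHUNKS_PER_NODE):
        chunk = rng.normal(size=1024) + 1j * rng.normal(size=1024)
        queue.put((node_id, chunk))
    queue.put((node_id, None))  # sentinel: this node is done


def main():
    queue = mp.Queue()
    workers = [mp.Process(target=ingest, args=(i, queue)) for i in range(N_INGEST)]
    for w in workers:
        w.start()

    done = 0
    while done < N_INGEST:
        node_id, chunk = queue.get()
        if chunk is None:
            done += 1
            continue
        # Stand-in for the calibration/averaging a real pipeline would do.
        print(f"node {node_id}: mean power {np.mean(np.abs(chunk) ** 2):.3f}")

    for w in workers:
        w.join()


if __name__ == "__main__":
    main()
```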

These solutions will support the operational pipelines and future workflows of the SKA project, using HPC systems to post-process data. The PSC, however, faces several challenges in deploying these data pipelines: with thousands of external users and new projects underway, the center must continue to support concurrent heterogeneous workloads while building and securing a “diverse” software ecosystem. Moving forward, Varetto detailed how the center’s hybrid cloud HPC solution will address these challenges with microservices.

