The National Science Foundation (NSF) and the University of Michigan
(UM) will invest about $3.5 million in a new computing center called
ConFlux. The high-performance computing (HPC) system will help the
school tackle complex physics modeling, simulation and big data
problems. The project also aims to address the scale limitations
common to these studies.
ConFlux is designed to let simulations and large datasets interface
with each other during a run. Its nodes will combine CPUs, GPUs, large
memory and fast interconnects, and are built for data-intensive
operations. The big data will be stored on a three-petabyte
(a petabyte is 10¹⁵ bytes, or 1,000 terabytes) hard drive.
The HPC system will allow U.S. researchers to pair computing
infrastructure with data-driven physics and to take on studies
previously done on supercomputers.
"Big data is typically associated with web analytics, social networks
and online advertising. ConFlux will be a unique facility specifically
designed for physical modeling using massive volumes of data," said
Barzan Mozafari, UM assistant professor of computer science and
engineering, who will oversee the project.
As previously mentioned, a common problem with many simulations is
scale. Whether you are studying the behaviour of materials at the
atomic scale or complex systems like the climate, scale is of primary
importance.
Karthik Duraisamy, assistant professor of aerospace engineering at UM,
notes that many of the world’s most powerful computers handle these
problems by using approximations. Unfortunately, those approximations
are not always accurate enough to answer some of the harder
engineering and science questions.
"Such a disparity of scales exists in many problems of interest to
scientists and engineers," said Duraisamy. "We need to leverage the
availability of past and present data to refine and improve existing
models."
To address this problem, researchers can use experiments, measurements
and simulations of limited scope. Machine learning algorithms can then
crunch the resulting data to make the needed predictions, and those
predictions improve as the simulations, experiments and measurements
grow in number and quality.
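As a rough, hypothetical illustration of that idea (not ConFlux’s
actual software), the short Python sketch below corrects a cheap
approximate model with a handful of higher-fidelity data points; every
function and number in it is a toy stand-in.

```python
# Illustrative sketch only: a cheap, approximate model is corrected
# using a small set of higher-fidelity data, and the correction can be
# refit as more data arrives. All models and values are hypothetical.
import numpy as np

def coarse_model(x):
    """Cheap approximation (e.g., a low-resolution simulation)."""
    return np.sin(x)

def high_fidelity(x):
    """Expensive 'truth' (e.g., an experiment or fine-scale run)."""
    return np.sin(x) + 0.3 * x * np.cos(2 * x)

# Limited-scope data: only a handful of expensive samples are affordable.
x_train = np.linspace(0, 3, 8)
error = high_fidelity(x_train) - coarse_model(x_train)

# Learn a simple polynomial correction for the coarse model's error.
coeffs = np.polyfit(x_train, error, deg=3)

def corrected_model(x):
    """Coarse model plus the learned, data-driven correction."""
    return coarse_model(x) + np.polyval(coeffs, x)

# As more experiments or simulations become available, x_train grows
# and the correction is simply refit, improving the predictions.
x_test = np.linspace(0, 3, 50)
rms = np.sqrt(np.mean((corrected_model(x_test) - high_fidelity(x_test)) ** 2))
print(f"RMS error of corrected model: {rms:.4f}")
```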
"It will enable a fundamentally new description of material behaviour
— guided by theory, but respectful of the cold facts of the data.
Wholly new materials that transcend metals, polymers or ceramics can
then be designed with applications ranging from tissue replacement to
space travel," said Krishna Garikipati, a professor in the departments
of mechanical engineering and mathematics at UM.
Some research projects that will be using ConFlux include:
- Combining non-invasive imaging with physical models for blood flow to treat cardiovascular disease.
- More accurate turbulence simulations to predict swirls and eddies. This could improve airplane design, weather forecasting and climate science.
- Studying the effects of climate change on clouds and precipitation.
- Simulations of galaxy formation using galaxy-mapping studies to better understand the role of dark matter.
A Center for Data-Driven Computational Physics will be established to
manage the ConFlux HPC system. The project will be funded by a $2.42
million NSF grant and $1.04 million from UM, and it aligns with
President Barack Obama’s National Strategic Computing Initiative to
harness vast data sets and increase computing power.