Jack Dongarra, who made supercomputers usable, wins 2021 ACM Turing Award

Mar 30, 2022 | Hi-network.com

"Science is driven by simulation," says Dongarra. "It's that match between the hardware capability, and the necessity of the simulations to use that hardware, where my software fits in."

A good chunk of Jack J. Dongarra's life has been spent shuttling between two worlds. In one world, a group of mathematicians sit with pen and paper and imagine things that could be figured out with computers. In another world, a colossus of integrated circuits sits with incredible power but also incredible constraints - speed, memory, energy, cost. 

Bringing those worlds together has been a five-decade career.

"It's matching the algorithmic characteristics of more mathematical things with the practicalities of the hardware, you're trying to exploit that architecture to get the highest performance possible," Dongarra explained in an interview withZDNetvia Zoom. 

Wednesday, that career was celebrated by the Association for Computing Machinery, which awarded Dongarra its highest honor, the 2021 A.M. Turing Award, given for "major contributions of lasting importance" and widely considered the computing industry's equivalent of the Nobel Prize.

Dongarra, who is University Distinguished Professor of Computer Science in the Electrical Engineering and Computer Science Department at the University of Tennessee, receives a $1 million prize along with the recognition. Financial support for the prize is provided to ACM by Google.

"I'm really honored, it's hard to believe that this has happened," Dongarra toldZDNetof winning the award. "I have tremendous respect," he said, for his Turing peers. 

"I have used their books, I have used their theorems, and they are really the stars and pillars of our community," said Dongarra. "I only hope I can be a role model for future computer scientists," said Dongarra. 

His Turing peers have probably all used Dongarra's programs and libraries at one time or another, perhaps on a daily basis. 

Those include ubiquitous tools such as LINPACK, a linear algebra package whose benchmark is widely used to measure system performance; BLAS, the workhorse for the vector and matrix operations at the heart of scientific computing; and MAGMA, a linear algebra library for GPUs.
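For a sense of what a LINPACK-style measurement involves, here is a minimal sketch in Python. It is an illustration only, not the real benchmark: the HPL code actually used for TOP500 rankings is a distributed C/MPI program. The sketch assumes NumPy is installed, times the solution of a dense linear system, and converts the standard LU operation count into a flop rate.

```python
import time
import numpy as np

def linpack_style_gflops(n: int = 2000, seed: int = 0) -> float:
    """Time a dense n-by-n solve Ax = b and report an approximate flop rate,
    in the spirit of the LINPACK benchmark (not the real HPL code)."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    start = time.perf_counter()
    x = np.linalg.solve(A, b)          # LU factorization plus triangular solves
    elapsed = time.perf_counter() - start

    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2   # standard LINPACK operation count
    residual = np.linalg.norm(A @ x - b)       # sanity check on the answer
    gflops = flops / elapsed / 1e9
    print(f"n={n}  time={elapsed:.3f}s  ~{gflops:.1f} GFLOP/s  residual={residual:.2e}")
    return gflops

if __name__ == "__main__":
    linpack_style_gflops()
```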

In Dongarra's view, his most important contributions amount to three things. 

"One was designing and building numerical software that runs on high-performance machines that gets performance and is portable to other machines and architectures," said Dongarra.

Second was his work on parallel processing mechanisms including the widely used "message-passing interface," or MPI. And third, techniques of performance evaluation to measure how fast a computer runs, which made possible things such as the TOP500 list of supercomputers.
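To illustrate the message-passing model that MPI standardized, here is a minimal sketch using the mpi4py bindings (an assumed dependency for this example; MPI itself is specified for C and Fortran). Each process, or rank, computes a partial dot product of its own slice of data, and rank 0 collects the sum with a reduction.

```python
# Run with, for example: mpiexec -n 4 python mpi_sum.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()     # this process's id
size = comm.Get_size()     # total number of processes

# Each rank computes a partial dot product of its slice of two vectors.
n = 1_000_000
local_n = n // size
rng = np.random.default_rng(rank)
x = rng.standard_normal(local_n)
y = rng.standard_normal(local_n)
local_dot = float(x @ y)

# Combine the partial results on rank 0 with a reduction.
total = comm.reduce(local_dot, op=MPI.SUM, root=0)
if rank == 0:
    print(f"dot product across {size} ranks: {total:.4f}")
```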

All of that work was "focused on advanced computer architectures, and how to exploit them very effectively."

Dongarra, far left, with colleagues hanging out on his "LINPACK" Ford Pinto, 1979. A decade later, he had a new car with the license plate of the successor program, LAPACK. 

Dongarra earned a B.S. in Mathematics from Chicago State University, an M.S. in Computer Science from the Illinois Institute of Technology, and a Ph.D. in Applied Mathematics from the University of New Mexico. 

The archives contain numerous pictures of Dongarra surrounded by colleagues, often hanging out on one of the many cars he has owned, which usually carry a license plate bearing the name of a computer program. A picture from 1979 shows a smiling Dongarra and colleagues leaning on his Ford Pinto, license plate "LINPACK."

"It wasn't just myself, it was a community of people contributing to it," he noted of the various software achievements, including the open source community.

The common thread running through the projects over the years was how to let scientists do the experiments they wanted to do within what the computer would allow.

"Science is driven by simulation," observed Dongarra. "It's that match between the hardware capability, and the necessity of the simulations to use that hardware, where my software fits in."

That is more difficult than it sounds because software innovations always trail hardware, explained Dongarra. 

"We're always in a catch up," said Dongarra. "Software is always following the hardware, is the way I look at it." Better "co-design" is needed, he suggested, where hardware and software designers would find common ground in their plans.

"In fact, the way I really look at it, the hardware architects throw something over the fence, and they expect the software people to adapt their problems to that machine."

Into that breach, Dongarra has stepped, inventing tools to give programmers greater mastery across all kinds of computing. Asked which is his "favorite child" among his many contributions, Dongarra replied, "My favorite child has to be my most recent child, if you will, and we're still doing it, a package called SLATE, a linear algebra package that is targeted at the exascale machines being put in place by the [U.S.] Department of Energy."

SLATE runs on the "computational pyramid," from laptops to desktops to clusters to supercomputers. The user of SLATE "can forget about the underlying hardware, they make the call that they've made to solve that linear algebra problem, and SLATE internally figures out what to do to spread it out across hundreds of thousands of processors or GPUs."
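SLATE itself is a C++ library, and its actual interface is not shown here. But the user-facing idea Dongarra describes, one mathematical call with the hardware decisions left to the library, can be sketched loosely in Python, with NumPy's optimized LAPACK/BLAS backend standing in for the distributed machinery SLATE provides.

```python
# A loose analogy only: SLATE is a C++/MPI library, and this is not its API.
# The pattern is the same, though: the caller states the mathematical problem,
# and the library decides how to map the work onto the underlying hardware.
import numpy as np

def solve_without_thinking_about_hardware(A: np.ndarray, b: np.ndarray) -> np.ndarray:
    # The caller asks for the solution of Ax = b...
    return np.linalg.solve(A, b)
    # ...and the library dispatches to an optimized LAPACK/BLAS backend
    # (OpenBLAS, MKL, ...) that handles blocking, threading, and so on.

rng = np.random.default_rng(1)
A = rng.standard_normal((500, 500))
b = rng.standard_normal(500)
x = solve_without_thinking_about_hardware(A, b)
print(np.allclose(A @ x, b))   # True: the answer is correct regardless of backend
```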

In no small part because of his efforts, the world of supercomputing has finally gone mainstream. Increasingly, the supercomputers that Dongarra spent his life programming are available via cloud computing, democratizing their use. 

"We're at an inflection point in terms of how we do our computing," said Dongarra. "We have done traditionally our scientific computing on these machines that take up a footprint of two tennis courts." 

The supercomputing industry is "almost at the end of the line in some sense in the traditional way that we build our supercomputers," said Dongarra. Because of the breakdown of Moore's Law, the rule of thumb that transistors double in density every 18 months, assembling a self-contained high-performance computer is not a process that will continue to scale.

"We're not going to be able to put together that large ensemble and just turn the crank; we're not going to be able to go to zettaflops." The forthcoming fastest supercomputers run exaflops, meaning, millions of trillions of floating-point operations per second; a zettaflop would be one thousand exaflops. 

Rather, cloud-scale systems will have to take up the next epoch of gigantic computing after exascale.

"The inflection point is such that we are seeing more cloud-based machines being invoked for these types of applications." 

As a consequence, there is a divide coming between the commodity parts used for supercomputers and the custom chips in the cloud, such as Amazon's "Graviton" processor.

"We see these companies, Amazon, Facebook, Google, that have a tremendous amount of resources," he said. "They are exothermic: they have tremendous resources."

"When I look at the other side of the equation, our national laboratories struggle to put in place the resources that are necessary, because of funding, to build and put in place the machines - they are endothermic, they need resources," said Dongarra. In addition to his professorship, Dongarra is Distinguished Research Staff Member at the Department of Energy's Oak Ridge National Laboratory.

At least those commercial resources in the cloud will be available to scientists to draw upon for specific needs, he said. Scientists will be able to use resources including quantum machines to "just do that specialized thing that our application needs the quantum computer for, and the cloud will be able to apply the right resources to that application."  

Not only has supercomputing spread with the advent of the cloud, but linear algebra, Dongarra's mathematical forte, the treatment of vectors, matrices, and tensors, has come to the heart of computing. Linear algebra is the bedrock of today's deep learning, which consists mostly of multiplying together tensors of various sizes and manipulating vectors of data. Today's computing hardware, moreover, such as Cerebras Systems' Wafer-Scale Engine for deep learning and scientific computing, is largely about speeding up vast numbers of parallel matrix multiplications.
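As a concrete illustration of that connection, written in plain NumPy rather than any particular deep learning framework, the forward pass of a fully connected layer is essentially one large matrix multiplication, the same GEMM operation that BLAS-style libraries are built to accelerate.

```python
import numpy as np

rng = np.random.default_rng(0)

# A batch of 64 input vectors, each with 512 features.
X = rng.standard_normal((64, 512))

# One fully connected layer: a weight matrix and a bias vector.
W = rng.standard_normal((512, 256))
b = rng.standard_normal(256)

# The forward pass is a matrix multiply (a GEMM, in BLAS terms)
# followed by an elementwise nonlinearity.
H = np.maximum(X @ W + b, 0.0)   # ReLU(XW + b)

print(H.shape)  # (64, 256): almost all the work above was the 64 x 512 x 256 multiply
```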

Dongarra working in MATLAB on a Tektronix 4081 in 1980, at Argonne National Laboratory. "I grew up writing FORTRAN, and today we have much better mechanisms" such as the Julia programming language and Jupyter Notebooks, he says. Still, more tools are needed to abstract the details of hardware operations. "Making the scientist more productive is the right way to go."

"I'm a mathematician, to me, everything is linear algebra, but the world is seeing that as well," he said. "It's a fabric on which we build other things." Most problems in machine learning and AI, he said, go back to an "eternal computational component" in linear algebra.

While hardware speeds up matrix multiplication, Dongarra is, again, mindful of the needs of the scientists and the software writer. "I grew up writing FORTRAN, and today we have much better mechanisms" such as the Julia programming language and Jupyter Notebooks.

What's needed now, he said, are more ways to "express those computations in an easy way," meaning, linear algebra computations such as matrix multiplications. Specifically, more tools are needed to abstract the details. "Making the scientist more productive is the right way to go," he said. 

Asked what software programming paradigm should perhaps take over, Dongarra suggested the Julia language is one good candidate, and MATLAB is a good example of the kind of thing that's needed. It all comes back to the tension that Dongarra has navigated for fifty years: "I want to easily express things, and get the performance of the underlying hardware."
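That tension can be seen even in a few lines of Python, used here as a stand-in for Julia or MATLAB: the same matrix product written as an explicit loop, versus a single high-level expression that hands the work to an optimized BLAS underneath.

```python
import time
import numpy as np

n = 200
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# Explicit, hardware-naive triple loop: easy to read, very slow in pure Python.
def matmul_loops(A, B):
    rows, inner = A.shape
    _, cols = B.shape
    C = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            s = 0.0
            for p in range(inner):
                s += A[i, p] * B[p, j]
            C[i, j] = s
    return C

t0 = time.perf_counter(); C1 = matmul_loops(A, B); t_loops = time.perf_counter() - t0
t0 = time.perf_counter(); C2 = A @ B;              t_blas  = time.perf_counter() - t0

print(f"loops: {t_loops:.2f}s   '@' (BLAS underneath): {t_blas * 1000:.2f}ms   "
      f"same result: {np.allclose(C1, C2)}")
```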

As far as what interests him now as a scientist, Dongarra is schooling himself in the various machine learning and AI techniques built on top of all that linear algebra code. He has a strong belief in the benefits that will come from AI for engineering and science.

"Machine learning is a tremendous tool to help solve scientific problems," said Dongarra. "We're just at the beginning of understanding how we can use AI and machine learning to help with that." 

"It's not going to solve our problems," he said, "it's going to be the thing that helps us solve our problems."
