Is comparing HPC to Formula1 a bad idea?


I missed the Formula1 Malaysian Grand Prix recently as I was travelling to the USA at the time. Apparently I missed a highly interesting race – lots of good racing, Alonso out early, an embarrassing mistake by Hamilton, and some poorly judged behaviour by Vettel. However, it got me thinking about the way we in the HPC community often compare supercomputing to Formula1.

The idea is that both represent the pinnacle of their respective industries. Each sits at the leading edge, identifying, developing and proving the next generation of technologies for the greatest performance. The wider industry of each (F1 or HPC) then benefits through a trickle-down process – i.e. the technologies tested at the leading edge eventually make their way into consumer products.

But perhaps that idea of being a leading-edge niche and an aspirational level of performance misses the real point of HPC. It implies the hardware is the dominating factor. And yet, as anyone who has heard me before will know, High Performance Computing is much more than just a High Performance Computer. This works both ways: powerful hardware alone won’t create step-change performance benefits (except for a few lucky use-cases); and real step-change performance benefits can be achieved without planet-leading hardware.

Whilst powerful hardware is one ingredient, equally important are other performance contributors including software applications, supporting software infrastructure, people (users, programmers and support), physical infrastructure, business processes, R&D, etc. Successful high performance computing requires a well-rounded team and ecosystem.

In fact, maybe F1 is a good example after all. It combines rapidly evolving hardware (the car is aggressively innovated throughout the race season) with a collaboration of many different high-end skills (e.g., driver, pit crew, aerodynamicists, tyre experts, engineers, race strategists, HPC(!), etc.) – all overseen by race planning, strong business processes, etc. Just like HPC, there is a constant fight for funding (with a strong link between the scale of funding and impact/success), but there is sufficient diversity and unpredictability to allow good innovation to punch above its (financial) weight.

Perhaps, then, we should bear this in mind as we seek to secure greater impact from HPC. This applies both to the academic/research user space and to the hope for increased industrial use of HPC. Powerful hardware (a fast car) gives a big head start in achieving return on investment (it is harder to deliver a performance step change without it). But the hardware alone is not enough for the best performance and sustained impact. Attention must be paid (and investment delivered) to software innovation, the supporting business processes (e.g. ease of use/access), infrastructure, etc. – and, critically, to the people that make all of those possible.

Andrew can also be followed regularly at @hpcnotes.



Andrew is Vice-President HPC Services and Consulting at the Numerical Algorithms Group (NAG). NAG provides HPC application performance services and impartial technology consulting to customers around the world. NAG is also a core part of the UK’s HECToR national supercomputing service, providing the Computational Science and Engineering (CSE) Support Service, including training. Andrew was originally a researcher using HPC and developing related software in government and industrial settings, later becoming involved in leadership of HPC services such as the UK’s CSAR national HPC service. Andrew has undertaken independent reviews of HPC services, was involved in the early technical management of PRACE, and was a theme co-chair in the European Exascale Software Initiative (EESI). Andrew is interested in future concerns of the HPC community, including exascale, application performance, skills development, and broadening usage.



Andrew Jones
