ISC HPC Blog

How to reach Exascale today with 50 million or a few more laptops

Recently CNN published an article about the race to Exascale computing. To explain how fast an Exascale computer is, they state: "An exascale computer could perform approximately as many operations per second as 50 million laptops."

If we adopt this simplistic view, we can reach Exascale computing today! The only thing you need to do is donate your unused computing time by connecting your computer to a volunteer computing grid (you will need some 100 million friends to accomplish this mission; it would be risky to count on just 50 million, because they might not all be available all the time). Of course, the volunteer computing grids would not be able to handle roughly 95 million additional computers without some extra central server hardware. That hardware still adds up to a sizeable sum, because we want to integrate a huge number of computers into one big supercomputer. Estimates from the "Desktop Grids for eScience – a Road map", published by the International Desktop Grid Federation, put it at about 1 euro of server hardware per 100 machines. So for 95 million machines that would be 950,000 Euros worth of hardware investments. A lot of money, but still far from the 1.2 billion Euros of investment planned in the EU.
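
For readers who like to check the sums, here is the same back-of-envelope calculation in a few lines of Python (the 1 euro per 100 machines figure is the road map's estimate; the machine counts are the rough ones used in this article):

    # Server hardware needed to absorb the extra volunteer machines,
    # using the IDGF road map estimate of 1 euro of hardware per 100 machines.
    machines_needed = 100_000_000          # rough total for an Exaflop/s, with some slack
    already_connected = 5_000_000          # machines already on volunteer desktop grids
    additional_machines = machines_needed - already_connected

    euros_per_machine = 1 / 100            # 1 euro per 100 machines (road map estimate)
    server_cost = additional_machines * euros_per_machine

    print(f"Additional machines: {additional_machines:,}")      # 95,000,000
    print(f"Server hardware cost: {server_cost:,.0f} euros")    # 950,000 euros
    # ...compared with the roughly 1.2 billion euros of Exascale investment planned in the EU.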

But why an additional 95 million computers, and where are the other 5 million needed to reach Exascale? These 5 million computers are already connected to volunteer desktop grids all around the world. Most of these volunteer computing grids use BOINC as their technology, and a site like BOINCstats provides publicly available statistics. From BOINCstats we can see that today over 6 million computers are connected to a volunteer desktop grid and that, on average, about one million machines are active, together delivering over 6 Petaflop/s – slightly below the performance of the K computer.
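
As a rough cross-check of how many machines an Exaflop/s really takes, we can extrapolate from those BOINCstats figures (the Petaflop/s and machine counts are the ones quoted above; the per-machine average is derived from them, so treat it as an order-of-magnitude estimate only):

    # Extrapolating from the BOINCstats figures quoted above.
    active_machines = 1_000_000            # machines active on average
    aggregate_flops = 6e15                 # the ~6 Petaflop/s they deliver together

    per_machine = aggregate_flops / active_machines    # ~6 Gigaflop/s per machine

    target = 1e18                          # one Exaflop/s
    machines_for_exaflop = target / per_machine

    print(f"Average machine: {per_machine / 1e9:.0f} Gigaflop/s")
    print(f"Machines for an Exaflop/s: {machines_for_exaflop:,.0f}")
    # Around 167 million machines at this average rate, which is why the text
    # speaks of "more than 100 million computers" rather than CNN's 50 million.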

Alright, we would perhaps need a little more than 950,000 Euros to carry out the plan, but 10 million Euros in hardware investment would certainly be sufficient. We would need more than 100 million computers, and we would also have to budget some operational costs to keep everything running smoothly, but definitely nowhere near what is planned for the big Exascale supercomputers in Europe.

So, has everyone gone mad, and do they want to throw away billions of Euros? Or is my maths wrong? Luckily, neither! Developing widely usable Exascale computing does cost a lot of money – really a lot – but if it is just about running an Exaflop/s computer today, then yes, that can be done.

Let me explain. Firstly, yes, there are volunteer desktop grids to which you can donate your unused computing time and help the science of your choice. You can choose to support anything from medical, mathematical, and engineering applications to astronomy. If some 100 million people follow your example, we do have an Exaflop/s machine. It works as follows: scientists have problems that need a lot of computational power. They turn these problems into a large number of computational jobs and put them on a Desktop Grid server. Your computer asks for work when it is idle, receives a job, and executes it. Depending on the scientific application you choose, your computer could discover the next radio pulsar, run a simulation of a new drug, or help prove a mathematical theorem. Real science could be done on this new Exascale computer – and you could be part of it.
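
Conceptually, the client side of such a desktop grid is a very simple loop. The sketch below is illustrative only – it is not BOINC code, and the server URL, HTTP API, and job format are invented for the example:

    import time
    import requests   # assumption: a plain HTTP API; real BOINC clients speak their own scheduler protocol

    SERVER = "https://example-desktop-grid.org/api"   # placeholder project URL

    def machine_is_idle() -> bool:
        # Stand-in for a real check of CPU load, user activity, battery state, etc.
        return True

    def run(job: dict) -> dict:
        # Stand-in for executing the downloaded scientific application on the job's input data.
        return {"job_id": job["id"], "output": "..."}

    while True:
        if machine_is_idle():
            job = requests.get(f"{SERVER}/work").json()       # ask the project server for a work unit
            result = run(job)
            requests.post(f"{SERVER}/results", json=result)   # return the result to the project
        time.sleep(60)

The splitting of the scientific problem into jobs and the collecting of results happens on the Desktop Grid server side, as described above.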

Secondly, although real science could be done today on an Exascale computer built right now from volunteer computing resources, it could handle only a small part of the scientific problems that can be solved on more general parallel computers. If your computer were part of a volunteer desktop grid, it would run its work independently of the other 100 million computers, so it would run what is known as "pleasantly parallel" programs. Most supercomputer applications are not pleasantly parallel. They can be broken into smaller parts that run on individual computers or individual computer cores, but these parts regularly need to exchange information. Hence the interconnect between the computer cores has to be very fast, which in turn makes it very expensive. There also has to be a lot of memory inside the machines – fast and expensive memory. But perhaps the most difficult part is mapping applications onto these parallel supercomputers. That requires a lot of thinking, work, and money.
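
The difference is easy to see in a toy example. The first function below is pleasantly parallel – every task needs only its own input – while the second mimics the coupled case, where each piece needs its neighbours' data at every step (an illustrative sketch, not a real scientific code):

    from multiprocessing import Pool

    def independent_task(work_unit: int) -> int:
        # Pleasantly parallel: each work unit depends only on its own input,
        # so 100 million machines could each run one without ever talking to each other.
        return work_unit * work_unit          # stand-in for a full simulation run

    def coupled_step(domain: list) -> list:
        # Tightly coupled: every cell needs its neighbours' values at each step,
        # so on a parallel machine the pieces must constantly exchange boundary data.
        return [(domain[i - 1] + domain[i] + domain[i + 1]) / 3
                for i in range(1, len(domain) - 1)]

    if __name__ == "__main__":
        # Desktop-grid style: hand out independent work units, collect results whenever they arrive.
        with Pool() as pool:
            results = pool.map(independent_task, range(1000))

        # Supercomputer style: the same data has to be stepped forward together, step after step.
        domain = [float(i) for i in range(1000)]
        for _ in range(10):
            domain[1:-1] = coupled_step(domain)

On a volunteer grid the first pattern works beautifully; the second is exactly the kind of workload that needs the fast – and expensive – interconnect and memory described above.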


References:

http://edition.cnn.com/2012/03/29/tech/super-computer-exa-flop/

http://boincstats.com

http://desktopgridfederation.org

http://primeurmagazine.com




About the author:

Ad Emmen studied physics at the University of Nijmegen, The Netherlands. From 1980 until 1995, he worked at the foundation for Academic Computing Services Amsterdam (SARA) in several positions. In 1996, he set up Genias Benelux, of which he is currently managing director.

Ad Emmen has participated in many European projects, including HOISe-NM, EROPPA, Dynamite, and BEinGRID. He is managing director of the Foundation AlmereGrid and a member of the board of Gridforum Nederland. Currently he is active in Grid and Cloud projects including EDGI, DEGISCO, and Contrail. He has published papers on supercomputing and publishing technology and was co-founder of the journal “Supercomputer”. At the moment he is also editor of Primeur Magazine.

 

