ISC HPC Blog
HPC is Getting Crowded
Until recent years, HPC was a computing ecosystem with specialized (highly parallel) architectures, an abundance of commodity components, and high-performance applications running for hours and days. A collection of tools and experts was able to glue it all together. There was a continuous evolution of HPC concepts, processors, interconnects, memory hierarchies, compilers, and software tools. That was the time when expectations were big and the International Supercomputing Conference (ISC) was small!
However, instead of HPC utilization getting easier (as we dreamed of many years ago), we are now confronted with increasing complexity and a new trend emerging almost every year: multicore and manycore, scaling up and out, big data, digital manufacturing and the missing middle, green computing, and HPC in the cloud. For many, especially the end user, this is a painful and growing mixture of technical, mental, and even political challenges which no one is able to handle individually anymore.
That’s why we need a conference like ISC, and the growing number of attendees and exhibitors underlines the fact that HPC is getting more and more complex on every layer: processors, interconnects, servers, storage, middleware, access, applications, and usage and business models. As ISC has grown into a large international gathering with a variety of topics, it is no longer possible to cover every topic in detail, to the full satisfaction of each individual participant.
Recognizing this dilemma, the ISC organizers decided to spin off a topical satellite conference on the subject of HPC and big data in the cloud: the ISC Cloud Conference.
HPC clouds are of particular interest given the growing tendency to outsource HPC and time-consuming data analysis, increase business and research flexibility, reduce management overhead, and extend existing, limited HPC infrastructures. Clouds lower the barrier for service providers to offer HPC services with minimal entry costs and infrastructure requirements. Clouds also allow providers and users to experiment with novel services while reducing the risk of wasting resources.
Rather than relying on a corporate IT department to procure, install, and wire HPC servers and services into the data center, there is the notion of self-service: users access a cloud portal, request compute and data servers with specific hardware or software characteristics, and have them provisioned automatically in a matter of minutes. When no longer needed, the underlying resources are put back into the cloud to serve the next customer. This notion of disposable computing dramatically reduces the barrier to entry for research and industry alike. Clouds will surely revolutionize how HPC is applied because of their utility-style usage model, and they will make HPC and data processing genuinely mainstream.
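The request-use-release cycle described above can be sketched as a minimal simulation. The `CloudPool` class and its methods below are hypothetical illustrations of the disposable-computing idea, not any real provider's API:

```python
# Toy model of the self-service "disposable computing" cycle:
# request nodes with specific characteristics, use them,
# then return them to the shared pool. All names are hypothetical.

class CloudPool:
    """A simplified model of a cloud provider's shared resource pool."""

    def __init__(self, total_nodes):
        self.free_nodes = total_nodes

    def request(self, nodes, cores_per_node=16):
        """Self-service request: provisioned in minutes, not weeks."""
        if nodes > self.free_nodes:
            raise RuntimeError("pool exhausted - request fewer nodes")
        self.free_nodes -= nodes
        # A real cloud would now configure OS images, interconnect,
        # and storage for the requested nodes.
        return {"nodes": nodes, "cores": nodes * cores_per_node}

    def release(self, cluster):
        """When no longer needed, resources go back for the next customer."""
        self.free_nodes += cluster["nodes"]


pool = CloudPool(total_nodes=128)
cluster = pool.request(nodes=8)   # provisioned on demand
print(cluster["cores"])           # -> 128 cores available to the user
pool.release(cluster)             # disposed of, back in the pool
print(pool.free_nodes)            # -> 128 nodes free again
```

The point of the sketch is the lifecycle: no capital procurement, no wiring, and the same physical resources serve many users in turn.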
We strongly believe that now is the time for all HPC users to dig deeper into HPC clouds for compute- and data-intensive applications.
More information on ISC Cloud’12 is available at http://www.isc-events.com/cloud12/.