Interview with Jason Stowe, CEO of Cycle Computing

In his opening keynote titled “HPC and Supercomputing – On our Way towards a New Utility” at the ISC Cloud'13 Conference, Jason Stowe, the CEO of Cycle Computing, will be talking about what happens when scientists and researchers are no longer limited by fixed-size compute and data capacity. For some, cloud computing technologies are already making that possible. In one case, a 10,000-server cluster in the cloud was used to do 40 years of science in less than half a day, at a cost of just $4,372. Stowe sees trends that he believes point to much wider use of cloud computing for scientific applications in research and industry. In light of this, we caught up with him to discuss his view of the future.

How do you think cloud computing for HPC has progressed over, say, the last five years? And what developments have driven this evolution? 

Stowe: The past five years have been a period of tremendous growth for HPC in the cloud, or utility HPC; it has clearly reached the point of global awareness and mainstream adoption. Individual users and companies as a whole have realized that on-demand access to the right compute and data resources at the right time makes better science and better business possible, faster and at lower cost than before. A number of developments have influenced this growth. Global pressure to accelerate the pace of innovation, an insatiable demand for compute and data, and an economic climate of austerity and budget constraints have created a perfect storm that fuels the need for utility HPC.

What important barriers remain to be overcome? 

Stowe: Security continues to be an important consideration. Having your own local set of encryption keys, ones that your cloud provider does not hold, can be critical for ensuring that your data is protected. Doing proper reviews and enabling proper audit trails for utility HPC systems are also important. Any infrastructure provider that has multiple clients interacting in the same operating system instance on a host is a risk, but thankfully the large cloud providers all handle this properly. Also, as organizations leverage utility HPC more and more, centrally orchestrating data movement and access from lower-cost “cold storage” as applications demand it will be a key requirement. One top-10 pharma company uses Cycle’s Data Manager to automate archival and retrieval of 75-terabyte data sets to and from Amazon Glacier, without needing to re-code or disrupt its application workflow.
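Cycle's Data Manager itself is proprietary, but the general "cold storage on demand" pattern Stowe describes can be sketched with standard tooling. The snippet below is a minimal, purely illustrative example using the AWS boto3 SDK; the vault name, file path, and polling interval are placeholder assumptions, and a real 75-terabyte workflow would add multipart uploads, job notifications, and inventory management.

```python
# Illustrative only: a minimal archive/retrieve cycle against Amazon Glacier
# using boto3. Vault name and file paths are placeholders; production-scale
# data sets (e.g. 75 TB) would need multipart uploads and job notifications.
import time
import boto3

glacier = boto3.client("glacier")
VAULT = "example-cold-storage"   # hypothetical vault name

def archive(path):
    """Upload a local file as a Glacier archive and return its archive ID."""
    with open(path, "rb") as f:
        resp = glacier.upload_archive(vaultName=VAULT,
                                      archiveDescription=path,
                                      body=f)
    return resp["archiveId"]

def retrieve(archive_id, out_path, poll_seconds=900):
    """Start an archive-retrieval job, wait for completion, write the output."""
    job = glacier.initiate_job(
        vaultName=VAULT,
        jobParameters={"Type": "archive-retrieval", "ArchiveId": archive_id},
    )
    job_id = job["jobId"]
    while not glacier.describe_job(vaultName=VAULT, jobId=job_id)["Completed"]:
        time.sleep(poll_seconds)          # Glacier retrievals can take hours
    out = glacier.get_job_output(vaultName=VAULT, jobId=job_id)
    with open(out_path, "wb") as f:
        f.write(out["body"].read())
```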

Many use cases for cloud-enabled technical computing seem to be in the life science realm. What do you attribute this to? 

Stowe: Many life science workloads, such as genome sequencing, or needle-in-a-haystack simulations like drug design, are “pleasantly parallel” or high-throughput, meaning the computations are independent of each other. In the case of drug design, a cancer target is a protein that, much like a lock, has a pocket where molecules can fit, like keys, to either enhance or inhibit its function. The problem is that, rather than the tens of keys on a normal key chain, you have tens of millions of molecules to check. Each one is computationally intensive to simulate, so a drug designer has approximately 340,000 hours of computation, or nearly 40 compute years, ahead of her. With utility HPC, a workload that would have taken a year to set up, with an infrastructure price tag of $44 million, completed in just 11 hours at a cost of $4,372. Without utility HPC, it’s safe to say this science would never happen. Even though life science was a logical proving ground for utility HPC in the beginning, other industries, including financial services and insurance, EDA, manufacturing, and even energy, are now capitalizing on these kinds of benefits.
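As a back-of-the-envelope check on those figures, using only the numbers quoted above (the implied concurrency and per-hour cost below are derived approximations, not reported values):

```python
# Rough arithmetic behind the quoted drug-design run (illustrative only).
compute_hours = 340_000          # total simulation work quoted by Stowe
wall_clock_hours = 11            # elapsed time of the cloud run
total_cost_usd = 4_372

compute_years = compute_hours / (24 * 365)            # ~38.8 "compute years"
implied_concurrency = compute_hours / wall_clock_hours  # ~31,000 busy execution slots
implied_cost_per_hour = total_cost_usd / compute_hours  # ~$0.013 per compute hour

print(f"{compute_years:.1f} compute years, "
      f"~{implied_concurrency:,.0f} concurrent slots, "
      f"~${implied_cost_per_hour:.3f} per compute hour")
```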

What other types of HPC applications or industries do you think are most suitable for the utility model of computing at this point? 

Stowe: We think that utility HPC will be the single largest accelerator of human invention in the coming decades. We have many use cases, in energy, manufacturing, financial services, and more, that prove how most modern sciences, especially Monte Carlo or data-parallel simulations, work great in the cloud. Researchers, quants, and scientists of all disciplines can now execute computational science and complex, finer-grained analysis that was previously unapproachable due to cost or overhead. Consider the impact on financial services as an example: a Fortune 100 firm uses HPC in the cloud to run its monthly risk report, a 2.5 million compute-hour Monte Carlo simulation that now completes over a weekend. A Fortune 1000 life insurance firm dynamically hedges risk across its entire portfolio, with nested stochastic-on-stochastic models and billions of inner paths for each annuity. Even at smaller scales, where scientists can start work in 10 minutes instead of waiting six weeks to get five servers, great science can now be done in a wide range of industries and applications.
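The reason Monte Carlo workloads map so cleanly onto utility HPC is that each sample path is independent, so batches of paths can be fanned out across as many machines as the budget allows. The toy sketch below (plain Python, a single machine standing in for a cluster, and a deliberately simplified random walk) illustrates only that structure, not any firm's actual risk model:

```python
# Toy illustration of a "pleasantly parallel" Monte Carlo workload:
# each batch of paths is independent, so batches can be scattered across
# processes here, or across thousands of cloud nodes in practice.
import random
from concurrent.futures import ProcessPoolExecutor

def simulate_batch(args):
    """Average terminal value of a simplified geometric random walk."""
    seed, n_paths, n_steps = args
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        value = 100.0
        for _ in range(n_steps):
            value *= 1.0 + rng.gauss(0.0005, 0.01)   # toy drift / volatility
        total += value
    return total / n_paths

if __name__ == "__main__":
    batches = [(seed, 10_000, 250) for seed in range(8)]  # 8 independent batches
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(simulate_batch, batches))
    print(f"estimated mean terminal value: {sum(results) / len(results):.2f}")
```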

The ISC Cloud'13 Conference will take place in Heidelberg, Germany on September 23 and 24, followed by the ISC Big Data'13 Conference on September 25 and 26. Register now and enjoy 25 percent off the conference fees.

The ISC Cloud'13 is sponsored by Cycle Computing, HP, Intel, Cordys, Mellanox, Advania, Ansys, Bull, Transtec, C12G Labs, OpenNebula.org, IDC and Intersect360 Research.

The ISC Big Data'13, the first of its kind, is supported by Intel, SAS, Mellanox, Quantum, SAP, SGI, TimetoAct Group, IDC and Intersect360 Research.  

 

ISC Cloud, Big Data and the Heidelberg Autumn Fair 

If you haven’t looked into your stay at the Marriott Heidelberg, please do so now. The ISC special rates will disappear on August 31 due to very high demand from tourists travelling to the Heidelberg Autumn Fair. If you’re travelling with a companion, you might want to stay on and enjoy the fair.

PR Contact:

Ms. Nages Sieslack

Phone +49 (0) 621 180686 16

Mobile +49 (0) 178 18798 58

nages.sieslack@isc-events.com | www.isc-events.com

 
