Industry News Desk
Cloud Computing vs. Grid Computing - What's the Difference?
Cloud computing really is about lots of small allocation requests
Aug. 21, 2008 05:30 AM
Thorsten von Eicken's RightScale Blog
Recently Rich Wolski (UCSB Eucalyptus project) and I were discussing grid computing vs. cloud computing. He made an observation that makes a lot of sense to me. Since he doesn’t blog [...], let me repeat here what he said. Grid computing has been used in environments where users make few but large allocation requests. For example, a lab may have a 1000-node cluster, and users request allocations of all 1000 nodes, or 500, or 200, etc. Only a few of these allocations can be serviced at a time, and the rest must be scheduled to run when resources are released. This has driven the development of sophisticated batch-job scheduling algorithms for parallel computations.
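To make the grid pattern concrete, here is a minimal sketch (hypothetical — not any real scheduler's algorithm) of batch scheduling: a handful of large requests contend for a fixed 1000-node cluster, and requests that don't fit wait in a queue for a later scheduling round.

```python
# Hypothetical sketch: FIFO batch scheduling with backfill on a fixed cluster.
from collections import deque

CLUSTER_NODES = 1000

def schedule(jobs):
    """Each job is (name, nodes_requested). Returns job names grouped by
    the scheduling round in which they start."""
    queue = deque(jobs)
    rounds = []
    while queue:
        free = CLUSTER_NODES
        batch = []
        still_waiting = deque()
        # Admit queued jobs, in order, that fit in the remaining free nodes.
        while queue:
            name, nodes = queue.popleft()
            if nodes <= free:
                free -= nodes
                batch.append(name)
            else:
                still_waiting.append((name, nodes))
        queue = still_waiting
        rounds.append(batch)
    return rounds

# Three large requests: A (500) and B (200) fit together in round one;
# C (500) must wait until resources are released.
print(schedule([("A", 500), ("B", 200), ("C", 500)]))
```

The point of the sketch is the queue: in the grid model, waiting for released resources is normal and expected, which is exactly what makes scheduling policy interesting.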
Cloud computing really is about lots of small allocation requests. Amazon EC2 accounts are limited to 20 servers each by default, and lots and lots of users allocate up to 20 servers out of Amazon's pool of many thousands. The allocations are real-time; in fact, there is no provision for queueing an allocation until someone else releases resources. This is a completely different resource allocation paradigm, a completely different usage pattern, and it results in a completely different method of using compute resources.
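The cloud pattern, by contrast, can be sketched like this (again hypothetical, loosely modeled on the EC2 defaults just described): many independent accounts make small requests against a large shared pool, each request is capped per account, and a request either succeeds immediately or is refused — there is no queue.

```python
# Hypothetical sketch: cloud-style real-time allocation with a per-account cap.
POOL_SIZE = 10_000      # assumed size of the provider's shared pool
ACCOUNT_LIMIT = 20      # default per-account server cap, per the text above

class Cloud:
    def __init__(self):
        self.free = POOL_SIZE
        self.per_account = {}

    def allocate(self, account, n):
        """Grant n servers immediately, or refuse; never queue."""
        used = self.per_account.get(account, 0)
        if used + n > ACCOUNT_LIMIT or n > self.free:
            return False                 # refused in real time
        self.per_account[account] = used + n
        self.free -= n
        return True

    def release(self, account, n):
        self.per_account[account] -= n
        self.free += n

cloud = Cloud()
print(cloud.allocate("alice", 20))   # True  — within the 20-server cap
print(cloud.allocate("alice", 1))    # False — would exceed the cap
print(cloud.allocate("bob", 5))      # True  — independent account
```

Notice that `allocate` returns instead of blocking: the refusal path replaces the grid's queue, which is the heart of the paradigm difference.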
I always come back to this distinction between cloud and grid computing when people talk about “in-house clouds.” It’s easy to say “ah, we’ll just run some cloud management software on a bunch of machines,” but it’s a completely different matter to uphold the premise of real-time resource availability. If you fail to provide resources when they are needed, the whole paradigm falls apart and users will start hoarding servers, allocating for peak usage instead of current usage, and so forth.