Google downshifts App Engine to infrastructure cloud
Google I/O Microsoft just downshifted its Azure platform cloud so it can support raw virtual machines and whatever old applications companies want to cram into them, and now Google has followed suit with Compute Engine.
Announced today at the Google I/O extravaganza in San Francisco by Urs Hölzle, senior VP of infrastructure at the Chocolate Factory, Compute Engine gives Google customers what they have been asking for: virtual machines that run on Google’s vast infrastructure. And it gives Google something it wants: another product that can generate revenue and profits from its many large data centers scattered around the world.
To illustrate the power of Compute Engine, Hölzle cited the Cancer Regulome Explorer application created by the Institute for Systems Biology, which used to run its genome-matching algorithms, used in cancer research, on an internal cluster with 1,000 cores. On this machine, the Genome Explorer app took 10 minutes to find a match on a particular segment of the chromosomes between two samples.
Within a few days, the Cancer Regulome Explorer application was ported to a mix of App Engine and Compute Engine, with 2,000 cores dedicated to the job, and simply by doubling the capacity it was able to make connections about every two seconds or so.
“Anyone with large-scale computing needs can now access the same infrastructure with Compute Engine virtual machines,” said Hölzle. “And this infrastructure comes with a scale, and a performance, and a cost that is unparalleled in the industry, because you can enjoy the efficiency of Google’s data centers and our experience running them.”
While Hölzle was talking, the live demo genome app was quietly scaling out, and eventually was running on over 770,000 cores in one of Google’s data centers, with the links popping up faster than the eye could follow.
“That’s cool, and that’s how infrastructure as a service is meant to work,” Hölzle boasted.
Without naming any names, Hölzle also bragged that Google could provide raw virtual machines and the raw compute, storage, and networking capacity underneath them at a better price than its rivals – as much as 50 per cent more compute for the money than other infrastructure cloud providers.
“So you don’t have to choose between getting the best performance and getting the right price,” explained Hölzle. “We have worked very hard over the past decade to lower the cost of computing, and we are passing those savings on to you.”
Compute Engine is in limited preview today, and you can sign up for it here. Google is suggesting that the initial uses of the infrastructure cloud are batch jobs like video transcoding or rendering, big data jobs like letting Hadoop chew on unstructured data, or traditional and massively parallel HPC workloads in scientific and research fields.
At the moment, Compute Engine fires up a pool of virtual Linux instances, specifically either Ubuntu 12.04 from Canonical or the RHEL-ish CentOS 6.2, but it is not clear what virtual machine container Google is using. Presumably, it is the same VM layer that Google uses internally for its own code. Google did not say when or if it will support other Linuxes or Microsoft’s Windows.
You store data on Google’s Cloud Storage, and you can use ephemeral storage for the VMs as they run as well as persistent storage to hold data sets. You can also make the persistent storage read-only, which means it cannot be messed with and that it can be shared by multiple Linux instances on the Google infrastructure.
Compute Engine has a service level agreement for enterprise customers, guaranteeing a certain uptime, but exactly what that level of uptime is does not appear in the developer’s guide. Google warns that it may take the service down for periodic maintenance during the limited preview, and also that it is only supporting Compute Engine in the US, not in its data centers in Europe or Asia/Pacific, during the preview. Google has an open and RESTful API stack for Compute Engine and is working with Puppet Labs, Opscode, and RightScale to integrate their cloudy management tools with Google’s infrastructure.
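To give a sense of what driving a RESTful infrastructure API like this looks like, here is a minimal sketch of building an instance-creation request. The endpoint path, API version, project ID, machine type name, and payload fields are all illustrative assumptions for this sketch, not taken from Google's documentation.

```python
import json

PROJECT = "my-project"  # hypothetical project ID
# Assumed base URL and API version, for illustration only
BASE = "https://www.googleapis.com/compute/v1beta12"

def build_insert_request(name, machine_type, image):
    """Build the URL and JSON body for a hypothetical 'create instance' call."""
    url = f"{BASE}/projects/{PROJECT}/instances"
    body = {
        "name": name,
        "machineType": f"{BASE}/projects/{PROJECT}/machineTypes/{machine_type}",
        "image": image,  # e.g. one of the two supported Linux images
    }
    return url, json.dumps(body)

url, body = build_insert_request(
    "render-node-1", "n1-standard-1", "projects/google/images/ubuntu-12-04"
)
print(url)
```

A tool like Puppet or RightScale would wrap calls of roughly this shape, authenticate them, and POST them on your behalf.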
Google Compute Engine is pretty simple when it comes to configuration and pricing. The base VM slice has one virtual core, 3.75GB of virtual memory, and 420GB of disk capacity, and is rated at 2.75 Google Compute Engine Units (GCEUs), a measure that is not well explained in terms of how it relates to a real-world server slice.
The point is, Google guarantees a certain level of performance per virtual core, no matter what the underlying iron is, and its virtualization layer ensures this. You can fire up virtual machines with 1, 2, 4, or 8 virtual cores, and the memory, disk, and performance all scale with the cores in a linear fashion. (Well, the disk scales slightly better than linearly.) The starter VM costs 14.5 cents per hour and the price also scales linearly – it works out to 5.3 cents per GCEU per hour.
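The arithmetic behind those numbers is simple enough to check. This back-of-the-envelope sketch uses only the figures quoted above: 14.5 cents per hour for the one-core slice rated at 2.75 GCEUs, with price scaling linearly across the 1/2/4/8-core sizes.

```python
# Figures from the article: base 1-core slice and its GCEU rating
BASE_PRICE_PER_HOUR = 0.145  # USD per hour, 1 virtual core
BASE_GCEUS = 2.75            # Google Compute Engine Units per core

def instance_price(cores):
    """Hourly price for an instance, assuming strictly linear scaling."""
    assert cores in (1, 2, 4, 8), "sizes offered at launch"
    return BASE_PRICE_PER_HOUR * cores

price_per_gceu = BASE_PRICE_PER_HOUR / BASE_GCEUS
print(f"per GCEU: ${price_per_gceu:.3f}/hour")        # ~5.3 cents, as quoted
print(f"8-core slice: ${instance_price(8):.2f}/hour")
```

So the quoted 5.3 cents per GCEU per hour is just 14.5 cents divided by 2.75 GCEUs, and an 8-core slice runs $1.16 per hour.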
It costs nothing to upload data into the Compute Engine service, and you can move data between cloud services within the same zone free of charge as well. If you want to move data to another zone in Google’s data center infrastructure in the same region, you pay a penny per gigabyte, and ditto if you want to move to another region within the US.
Exporting data to your own facilities runs you between 8 and 12 cents per gigabyte in the Americas and EMEA regions, on a sliding scale that lowers the price as you ramp up the terabytes. It costs from 15 to 21 cents per gigabyte to move data out of the Google cloud to you in the Asia/Pacific region. Persistent storage runs 10 cents per GB per month. ®
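For readers working out an egress bill, here is a sketch of how such a sliding scale typically works. The article gives only the endpoint rates (8 to 12 cents and 15 to 21 cents per GB); the tier boundaries and intermediate rates below are illustrative assumptions, and the billing is modeled as graduated, tax-bracket style, with each band of gigabytes charged at its own rate.

```python
# Illustrative tiers only: the article states the high and low rates,
# not where the breakpoints fall or how many bands there are.
ILLUSTRATIVE_TIERS = {
    #                (up-to GB,   USD/GB)
    "americas_emea": [(1_000, 0.12), (10_000, 0.10), (float("inf"), 0.08)],
    "apac":          [(1_000, 0.21), (10_000, 0.18), (float("inf"), 0.15)],
}

def egress_cost(gb, region="americas_emea"):
    """Graduated cost: each band of gigabytes is billed at that band's rate."""
    cost, billed = 0.0, 0.0
    for ceiling, rate in ILLUSTRATIVE_TIERS[region]:
        band = min(gb, ceiling) - billed
        if band <= 0:
            break
        cost += band * rate
        billed += band
    return cost

print(egress_cost(500))    # 500 GB, all in the most expensive band
print(egress_cost(2_000))  # spans two bands
```

Under these assumed tiers, pulling 2TB out to the Americas would cost $220: the first terabyte at 12 cents per GB, the second at 10.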