Introduction
Amazon and Google are among the top providers of cloud computing platforms. Each has invested in large data centers and reliable networks that deliver scalable compute, storage, and database services. Both offer web services, accessed through simple API endpoints, that let you rent infrastructure, store data, or create networks.
The goal of this guide is to equip those familiar with AWS with the key concepts required to get started with Google Cloud Platform. It compares Google Cloud Platform with AWS and highlights the key differences between the two. It also serves as a quick reference that maps AWS products, concepts, terminology, and scenarios to the corresponding offerings on Google Cloud Platform.
If you are a developer already familiar with AWS offerings such as Amazon Elastic Compute Cloud, Amazon Simple Storage Service, Amazon VPC, and Amazon RDS, this guide will help you apply that knowledge as you get started with Google Cloud Platform. It does not attempt to compare the syntax and semantics of the SDKs, APIs, or command line tools.
General Overview
Google Cloud Platform
When Google first made Google App Engine public in 2008, we had actually been running a very large cloud infrastructure for years—internally. For the past 15 years, Google has been building the fastest, most powerful, highest quality cloud infrastructure on the planet. We use this infrastructure to bring customers services like Gmail, Maps, YouTube, and Search. The size and scale of these services means we have a lot of experience running cloud infrastructure. Over time, we’ve learned what works and what doesn’t work. With Google Cloud Platform, we are taking the cloud we made for ourselves and exposing it for you to use.
Currently, the Google Cloud Platform consists of a variety of services. Together, these services provide a comprehensive platform for application development and hosting.
Regions and Zones
Nearly all AWS products are deployed within regions located around the world, in the United States, Europe, Asia, and Australia. These regions are collections of data centers in relatively close proximity to each other, and each region contains two or more Availability Zones. These concepts translate quite easily to Google Cloud Platform, which also has regions containing zones across the globe. The Cloud Platform regions are located in Iowa in the United States, Belgium in Europe, and Taiwan in Asia.
Some Google Cloud Platform services are not necessarily tied to specific regions; instead, they can be located in continental locations. These services include Google App Engine and Google Cloud Storage. Currently, the continental locations available are the United States, Europe, and Asia.
Each AWS region is isolated and independent of the others. This is by design, so that the availability of one region doesn't affect any of the others, and so that services within regions remain independent of each other. Google Cloud Platform regions are similarly distinct from each other for availability reasons, but they include functionality that synchronizes data across regions for various services.
Both AWS and Google Cloud Platform have points of presence (POPs) in many more locations around the world. These POPs help cache content closer to end users, but each platform uses them differently. AWS delivers a distinct CDN (Content Delivery Network) service, CloudFront, via these POP locations. Google Cloud Platform instead integrates its POPs into multiple services, such as Google App Engine and Google Cloud Storage, to improve edge caching across those services. In addition, these POPs connect to Google's data centers through Google-owned fiber, which often means that Google Cloud Platform-based applications have faster and more reliable access than AWS applications, whose traffic often travels over the public internet.
Mapping AWS terms and concepts to Google Cloud Platform:
| Concept | AWS term | Google Cloud Platform term |
|---|---|---|
| Cluster of data centers and services | Region | Region |
| Abstracted data center | Availability Zone | Zone |
| Edge caching | POP (CloudFront only) | POP (multiple services) |
Accounts, Limits, and Pricing
To use any service from AWS, you must sign up for an AWS account. Once you have completed this relatively straightforward process, you can launch any service under your account, within the default limits. Those services bill to your account, and your account only. AWS also supports billing accounts with sub-accounts that roll up to them, letting you see what each account spends. In this way, many organizations can emulate their internal billing structure.
Google Cloud Platform also requires you to sign up and create an account with a process very similar to AWS. With Cloud Platform, you use your Google Account. This could be a Gmail account, an account registered with your corporate e-mail address, or a single sign-on enabled corporate account.
Unlike AWS, Google Cloud Platform does not group the services you use by account. Instead, services are grouped by project, and you must have at least one project under your account to use any Cloud Platform services. After that, the process is similar to AWS, in that you can create any resource under that project within the default limits of the account. This structure lets you create multiple projects under the same account that are wholly separate from each other. This is advantageous when you need to create separate divisions or groups within your company. You can even create a project just for testing and then delete the project—all the resources created by that project will be deleted as well.
Both AWS and Google Cloud Platform place default soft limits on their services for new accounts. Soft limits differ from hard limits: a hard limit is a technical limit that can't be raised for an individual customer, either because it is inherent to the nature of the service or because changing it would require a feature release. Soft limits, on the other hand, can be raised. They help prevent fraudulent accounts from creating resources they won't pay for, help manage cost, and limit risk. Soft limits also prevent new users who aren't yet familiar with the platform from accidentally spending more than they intended. Both providers offer straightforward mechanisms for contacting the appropriate internal teams to raise the limits on their services.
Finally, a note on pricing. We want this documentation to serve as a long-term resource for users exploring Google Cloud Platform. When new features and services are introduced, by either Google or Amazon, we try to update our documentation to reflect those changes as soon as possible. However, pricing tends to change more often than core features or services, so we avoid pricing specifics. This keeps the document useful without letting its pricing become outdated. We will, however, discuss the pricing model behind services wherever useful. If you're interested in a cost comparison for a specific solution, we encourage you to use the pricing calculators provided by both AWS and Google Cloud Platform, so you can see which configuration provides the best value in terms of flexibility, scalability, and cost.
Command Line Interface and Console
Both AWS and Google Cloud Platform provide a command line interface (CLI) for controlling and querying all the resources of their respective platforms. Both provide a unified CLI for all services, rather than making users install separate command line libraries for each service. These CLIs are available on Windows, Linux, and Mac OS X. Beyond interactive use at the command line, you can also call them from scripts and applications to automate control and status checking of these platforms.
One feature that many developers find useful with Google Cloud Platform is that you can access the CLI commands from any system that has a web browser, using Google Cloud Shell.
Both AWS and Google Cloud Platform have robust web user interfaces that allow for easy management of all their services and the resources created by those services. These user interfaces give users the ability to create and manage resources with their mouse and visualize what their infrastructure is doing. While developers and system administrators might prefer the command line interface to the console, a web UI is generally more accessible to more people. The console for Cloud Platform is located at https://console.cloud.google.com.
Building Block Services
In general, developers and system operators use a platform based on fundamental services, with additional, higher-level services built on top of them:
Nearly all cloud providers follow this general pattern, which is why, at a high level, AWS and Google Cloud Platform share similar characteristics.
The fundamental services for AWS and Google Cloud Platform are:
- Compute
- Storage
- Networking
- Databases
Many, if not all, of the higher level services provided on a cloud platform are built on top of these services. These higher level services usually consist of:
- Application services. These are services designed to help optimize applications in the cloud. Examples include AWS SNS and Google Cloud Pub/Sub.
- Data services. These are services that help process large amounts of data, such as Amazon Kinesis and Google Cloud Dataflow.
- Management services. These include services that help you track the performance of an application, such as Amazon CloudWatch and Google Cloud Monitoring.
While nearly all customer workloads can run on just the fundamental services, most benefit from using these higher level ones.
AWS Building Block Services
The fundamental AWS products are:
- Amazon Elastic Compute Cloud for virtual compute
- Amazon Simple Storage Service and Amazon Elastic Block Store for storage
- Amazon VPC for networking
- Amazon RDS and Amazon DynamoDB for databases
Higher level building block services are built on these fundamental services. These can range from a platform as a service—such as AWS Elastic Beanstalk—to more abstract services like Amazon Kinesis.
Google Cloud Platform Building Block Services
The fundamental Google Cloud Platform services are:
- Google Compute Engine and Google App Engine for virtual compute
- Google Cloud Storage for object storage, along with Google Compute Engine persistent disks and local SSD for block storage
- Google Cloud DNS and Google Cloud Interconnect for networking basics
- Google Cloud SQL, Google Cloud Datastore, and Google Cloud Bigtable for databases
Like AWS, Google Cloud Platform layers a number of higher-level services on top of these fundamental ones.
Both AWS and Google Cloud Platform build other services on top of these fundamental ones. Some of them, such as AWS CloudFormation and Google Cloud Deployment Manager, are designed to make it easier to create large, repeatable deployments. However, there are also services that don't have a direct equivalent on the other platform.
The following table provides a side-by-side comparison of building block services, application services, data services and management services on AWS and Google Cloud Platform. We will go into detail on each of these sections below.
| Service Category | Service | AWS | Google Cloud Platform |
|---|---|---|---|
| Compute | IaaS | Amazon Elastic Compute Cloud | Google Compute Engine |
| | PaaS | AWS Elastic Beanstalk | Google App Engine |
| | Containers | Amazon Elastic Compute Cloud Container Service | Google Container Engine |
| Network | Load Balancer | Elastic Load Balancer | Google Compute Engine Load Balancer |
| | Peering | Direct Connect | Google Cloud Interconnect |
| | DNS | Amazon Route 53 | Google Cloud DNS |
| Storage | Object Storage | Amazon Simple Storage Service | Google Cloud Storage |
| | Block Storage | Amazon Elastic Block Store | Google Compute Engine Persistent Disks |
| | Cold Storage | Amazon Glacier | Google Cloud Storage Nearline |
| | File Storage | Amazon Elastic File System | ZFS / Avere |
| Database | RDBMS | Amazon Relational Database Service | Google Cloud SQL |
| | NoSQL: Key-value | Amazon DynamoDB | Google Cloud Bigtable |
| | NoSQL: Indexed | Amazon SimpleDB | Google Cloud Datastore |
| Big Data & Analytics | Batch Data Processing | Amazon Elastic MapReduce | Google Cloud Dataproc, Google Cloud Dataflow |
| | Stream Data Processing | Amazon Kinesis | Google Cloud Dataflow |
| | Stream Data Ingest | Amazon Kinesis | Google Cloud Pub/Sub |
| | Analytics | Amazon Redshift | Google BigQuery |
| Application Services | Messaging | Amazon Simple Notification Service | Google Cloud Pub/Sub |
| | Data Sync | Amazon Cognito | Google Firebase |
| | Mobile Backend | Amazon Cognito | Google Cloud Endpoints, Google Firebase |
| Management Services | Monitoring | Amazon CloudWatch | Google Cloud Monitoring |
| | Deployment | AWS CloudFormation | Google Cloud Deployment Manager |
Compute
Compute services can be considered the foundation of cloud platforms. Developers can provision computing capacity in the form of virtual machines, use a pre-configured, managed execution environment available as PaaS, or deploy containerized applications that run on a managed cluster.
Infrastructure as a Service (IaaS)
Infrastructure as a Service (IaaS) is a fundamental building block of cloud services. More than just virtual compute, which has existed for quite some time, IaaS gives users on-demand access to flexible compute power. Not surprisingly, the IaaS services offered by AWS and Google Cloud Platform are fundamental to each platform and almost every type of customer workload uses them.
The IaaS service in AWS is called Amazon Elastic Compute Cloud. On Google Cloud Platform, it is called Google Compute Engine. Below we will go into the differences between these two services.
Mapping Amazon Elastic Compute Cloud terms and concepts to Google Compute Engine:
| Feature | Amazon Elastic Compute Cloud | Google Compute Engine |
|---|---|---|
| Virtual Machines | Instances | Virtual Machines, Instances |
| VM template | Amazon Machine Image | Image |
| Temporary Virtual Machines | Spot Instances | Preemptible Virtual Machines |
| Firewall | Security Groups | Google Compute Engine Firewall Rules |
| Scale-out | Auto Scaling | Autoscaler |
| Local attached disk | Ephemeral disk | Local SSD |
| VM Import | Supported formats: Raw, OVA, VMDK, and VHD | Supported formats: AMI, Raw, and VirtualBox |
| Deployment locality | Zonal | Zonal |
Virtual Machines
Both Elastic Compute Cloud and Google Compute Engine refer to virtual machines as instances or virtual machines. On both platforms, you can create instances from stored images, launch them quickly, and terminate them on demand. Once you access an instance, you have complete control over it and can do whatever you want with it. Each platform supports a number of instance types and operating systems, and both let you tag your instances.
AWS and Google Cloud Platform do differ in how you access your instances. With AWS, you must include your own SSH key if you want terminal access to the instance. With Cloud Platform, you can create the key only when you need it—even if your instance is already running. Google Cloud Platform also offers a way to SSH directly from the web console, so there are no keys stored on the local machine. Additionally, you can always address instances by name within your private network due to the default internal DNS. (There are some differences in firewalls and networks, but we’ll cover those sections separately.)
Instance Types
Both Elastic Compute Cloud and Google Compute Engine have a standard list of instance configurations with certain amounts of virtual CPU, RAM, and network assigned to them. These configurations are referred to as Instance Types in Elastic Compute Cloud and Machine Types in Google Compute Engine.
Additionally, Google Compute Engine allows you to customize your instance type to fit the workload that you need to run.
The following table lists the instance types for both services. Note that both Elastic Compute Cloud and Google Compute Engine add new instance types regularly, so this list is not definitive. Instead, we recommend you check these sites for the latest information on instance types.
| Machine Type | Elastic Compute Cloud | Google Compute Engine |
|---|---|---|
| Shared Core (machines for tasks that don't require a lot of resources but do have to remain online for long periods of time) | t2.micro - t2.large | f1-micro, g1-small |
| Standard (machines that provide a balance of compute, network, and memory resources ideal for many applications) | m3.medium - m3.2xlarge, m4.large - m4.10xlarge | n1-standard-1 - n1-standard-32 |
| High Memory (machines for tasks that require more memory relative to virtual CPUs) | r3.large - r3.8xlarge | n1-highmem-2 - n1-highmem-32 |
| High CPU (machines for tasks that require more virtual CPUs relative to memory) | c3.large - c3.8xlarge, c4.large - c4.8xlarge | n1-highcpu-2 - n1-highcpu-32 |
| GPU (machines that come with discrete GPUs) | g2.2xlarge, g2.8xlarge | N/A |
| SSD Storage (machines that come with SSD local storage) | i2.xlarge - i2.8xlarge | n1-standard-1 - n1-standard-32, n1-highmem-2 - n1-highmem-32, n1-highcpu-2 - n1-highcpu-32* |
| Dense Storage (machines that come with increased amounts of local HDD storage) | d2.xlarge - d2.8xlarge | N/A |
* While these machine types don't exist as such on Google Compute Engine, attaching SSD local storage to different machine types can accomplish the same thing.
Google Compute Engine and AWS share many of the same families of instance types, such as standard, high memory, high CPU, and shared core. While Google does not have a category of instances that use SSD storage, all of the non-shared-core families support the addition of local SSD disks. These disks are 375 GB in size, and you can add up to four of them to an instance. The ability to add SSD disks to these instances gives Google and AWS comparable instance families for SSD storage.
Google Compute Engine does lack two specialized families: large magnetic storage and GPUs.
OS Support
Both Elastic Compute Cloud and Google Compute Engine support a variety of operating systems. On both platforms, any operating system that requires a license incurs an additional fee.
Operating systems supported by both platforms include:
- CentOS
- CoreOS
- Debian
- FreeBSD
- openSUSE
- Red Hat Enterprise Linux (Premium)
- SELinux
- SUSE(Premium)
- Ubuntu
- Windows Server 2008 R2, 2012 (Premium)
Operating systems supported only by Elastic Compute Cloud:
- Amazon Linux
- Windows Server 2003 (Premium)
- Oracle Linux (Premium)
Virtual Machine Templates
Creating a new instance is similar for both Elastic Compute Cloud and Google Compute Engine. In general, the steps are:
- Choose an instance type.
- Select an image, which determines what operating system the instance uses. On Elastic Compute Cloud, instance images are called Amazon Machine Images, or AMIs. On Google Compute Engine they’re called images.
Both Elastic Compute Cloud and Google Compute Engine are similar enough that you can employ the same workflow for image creation on either platform. For example, both Elastic Compute Cloud AMIs and Google Compute Engine images contain at least an operating system. They can also contain other software, such as web servers or databases. Both AWS and Google Compute Engine allow you to use images published by third-party vendors, such as Microsoft, or custom images created for private use.
Elastic Compute Cloud and Google Compute Engine do differ in how they store images. AMIs, for example, are stored either in Simple Storage Service or Elastic Block Store. This leads to differences in startup times (images stored on Simple Storage Service take longer to launch) and whether or not the instance requires Elastic Block Store. On Google Compute Engine, all images are stored on Persistent Disk—which is covered below but is the equivalent service to Elastic Block Store. Also, all Google Compute Engine instances use Persistent Disk for the root volume; there is no option for a local, or ephemeral, disk. It is possible to export images to Google Cloud Storage (an object store service equivalent to Simple Storage Service), but they are not stored there.
Also, you currently cannot publish images publicly, although you can export them to Google Cloud Storage and share them. This means any images you launch are either images you created or images created by an authorized software vendor, like Microsoft.
One other difference between AMIs and images concerns availability. With AWS, AMIs are available only within a specific region, while Google Compute Engine images are global.
Temporary Virtual Machines
Both Elastic Compute Cloud and Google Compute Engine have specialized types of instances that you can create. These instances are cheaper than standard instances. However, you can also lose them with little notice. In Elastic Compute Cloud these are called Spot Instances, and in Google Compute Engine they are called Preemptible Virtual Machines.
Applications use instances like these when they have tasks that can be interrupted. They are also useful when an application can benefit from increased compute power but doesn’t necessarily need it. Examples of these tasks include batch processing, rendering, testing, and many more.
While the applications of Spot Instances and Preemptible Virtual Machines are similar, their implementations are quite different. Spot Instances have two models. The first type, regular Spot Instances, are priced on a market, the Spot Market, and launch when a bid is accepted. Users can bid any amount on an instance type, from $0.01 up to many times the on-demand price of the instance. If that bid is the current highest bid, then an instance (or instances) is created in Elastic Compute Cloud. The instance runs until you terminate it, someone outbids you, or Elastic Compute Cloud requires it for fleet needs. Aside from these rules about termination and price, Spot Instances behave similarly to on-demand Elastic Compute Cloud instances: they support any AMI, any instance type, and you have total control over the instance while it's running.
The second kind of Spot Instance, the Spot Block, has a fixed price that is less than the regular on-demand rate. However, Spot Blocks can only run for a maximum of 6 hours at that fixed rate.
Preemptible Virtual Machines have a fixed price; there is no market. The price is always 70% off the on-demand rate. Unlike regular Spot Instances, Preemptible Virtual Machines run for at most 24 hours and are then terminated, though you can always terminate them sooner. One final difference is that any OS that includes a license fee charges the full cost of the license while the Preemptible Virtual Machine is running. Aside from these three key differences, Preemptible Virtual Machines behave just like normal instances.
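To make the contrast concrete, here is a small Python sketch of the two pricing and termination models described above. The rates are hypothetical placeholders, not published prices, and the logic is a deliberate simplification of both services.

```python
# Illustrative model of the termination and pricing rules described above.
# ON_DEMAND_RATE is a hypothetical placeholder, not a published price.

ON_DEMAND_RATE = 0.05          # $/hour, hypothetical
PREEMPTIBLE_DISCOUNT = 0.70    # fixed discount off the on-demand rate
MAX_PREEMPTIBLE_HOURS = 24     # preemptible VMs run for at most 24 hours

def preemptible_cost(hours_requested):
    """Cost of a preemptible VM: fixed discount, runtime capped at 24 hours."""
    hours = min(hours_requested, MAX_PREEMPTIBLE_HOURS)
    return hours * ON_DEMAND_RATE * (1 - PREEMPTIBLE_DISCOUNT)

def spot_runs(bid, current_market_price):
    """A regular Spot Instance launches, and keeps running, only while the
    bid meets or exceeds the current Spot Market price."""
    return bid >= current_market_price

print(round(preemptible_cost(10), 4))   # 0.15: 10 hours at 70% off
print(preemptible_cost(30) == preemptible_cost(24))  # True: capped at 24 h
print(spot_runs(bid=0.03, current_market_price=0.02))  # True: bid accepted
```

The key structural difference shows up in the two functions: the preemptible model is a pure function of time, while the Spot model also depends on an external market price.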
Firewall
Based on software-defined networking, cloud infrastructure services like Amazon Elastic Compute Cloud and Google Compute Engine offer configurable, programmable firewall policies. These policies protect virtual machines and networks by selectively allowing and denying traffic.
By default, both services block all incoming traffic from outside a network and require an ingress firewall rule for packets to reach an instance. This means that if you want to allow incoming network traffic, you need to set up firewall rules that permit connections to your instances. Each firewall has at least one rule that determines what traffic is permitted into the network.
Amazon Elastic Compute Cloud and Amazon Virtual Private Cloud use the concept of Security Groups and Network Access Control Lists (NACLs) to allow or deny incoming and outgoing traffic. Elastic Compute Cloud Security Groups secure instances in Elastic Compute Cloud Classic, while Amazon Virtual Private Cloud Security Groups and NACLs are used to secure both Elastic Compute Cloud instances and network subnets in an Amazon Virtual Private Cloud.
Google Compute Engine uses a flat list of firewall rules to secure Google Compute Engine virtual machines and networks. You create a rule by specifying the source IP address range, protocol, ports, and optional tags that represent source and target groups of virtual machines. However, Google Compute Engine firewall rules can’t block outbound traffic. To do that, you can use a different kind of technology, such as iptables.
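As an illustration of the default-deny, flat-rule-list model described above, the following Python sketch evaluates traffic against a list of ingress rules. The rule fields (source range, protocol, ports, target tags) mirror the concepts in the text, but the rule set and matching logic are a simplified, hypothetical model, not the actual Compute Engine implementation.

```python
# Minimal sketch of default-deny ingress filtering with a flat rule list.
# Rules and field names are a hypothetical simplification for illustration.
import ipaddress

RULES = [
    {"source": "0.0.0.0/0", "protocol": "tcp", "ports": {80, 443},
     "target_tags": {"web"}},
    {"source": "10.0.0.0/8", "protocol": "tcp", "ports": {22},
     "target_tags": {"web", "db"}},
]

def allowed(src_ip, protocol, port, instance_tags):
    """Default deny: traffic is admitted only if some rule matches."""
    for rule in RULES:
        if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["source"])
                and protocol == rule["protocol"]
                and port in rule["ports"]
                and instance_tags & rule["target_tags"]):
            return True
    return False

print(allowed("203.0.113.9", "tcp", 443, {"web"}))  # True: public HTTPS rule
print(allowed("203.0.113.9", "tcp", 22, {"web"}))   # False: SSH only from 10/8
```

Note that, as in the text, the sketch only filters ingress; modeling egress would require a separate mechanism, just as Compute Engine users reach for something like iptables.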
Scale-Out
Auto scaling brings elasticity to cloud deployments. Both Elastic Compute Cloud and Google Compute Engine support auto scaling. Auto Scaling helps maintain a specific number of virtual machines at any given point, or adjust capacity in response to certain conditions. These instances launch from a predefined template. Users create policies to define when Auto Scaling or the Autoscaler should scale up or down.
Amazon's Auto Scaling creates groups of instances called, intuitively, Groups. Scaling activity is triggered by a Scaling Plan, which uses a Launch Configuration to determine what to launch when it needs to scale out. In Google Compute Engine, these groups and configurations are combined into a single feature, the Managed Instance Group, used by the Autoscaler. Scaling activity for the Autoscaler is triggered by an Autoscaling Policy.
You can set Auto Scaling to scale in one of three ways:
- Manual, where someone dictates to Auto Scaling that it must scale immediately.
- Scheduled, when Auto Scaling scales capacity based on time.
- Dynamic, when Auto Scaling scales based on a policy. The dynamic scaling policies can be based on any CloudWatch alarm or an SQS queue.
Autoscaler only supports dynamic scaling and supports three policy types:
- Average CPU utilization
- Cloud Monitoring metric
- HTTP load balancing serving capacity, which can be based on either utilization or requests per second.
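A target-based policy such as average CPU utilization can be thought of as sizing the group so that the current load would land at the target. The Python sketch below illustrates that idea; the target value and the sizing formula are illustrative assumptions, not the Autoscaler's actual algorithm.

```python
# Sketch of target-based dynamic scaling: resize the group so the observed
# load would sit at the target utilization. Formula is illustrative only.
import math

def recommended_size(current_vms, avg_cpu_utilization, target_utilization=0.6):
    """Return the group size that would bring load to the target."""
    load = current_vms * avg_cpu_utilization     # total work, in "VM units"
    return max(1, math.ceil(load / target_utilization))

print(recommended_size(4, 0.9))   # 6: group is running hot, scale out
print(recommended_size(4, 0.3))   # 2: group is underutilized, scale in
```

The same shape of calculation applies to the other policy types: swap average CPU for a Cloud Monitoring metric or load-balancer serving capacity and the sizing logic is analogous.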
Local Attached Storage
Both Elastic Compute Cloud and Google Compute Engine give users the option of using a disk that is local to the virtual machine, as opposed to network-attached. Because they are physically closer to the machine, these local disks offer much faster transfer rates. However, they are not redundant the way Elastic Block Store and Persistent Disk are.
On Elastic Compute Cloud, local disks are called Ephemeral disks or Instance Store. They can be either HDD or SSD, depending on the instance type family. The number and size of Instance Store disks depend on the instance type and are not adjustable.
On Google Compute Engine, local disks are called Local SSD. As the name implies, they are SSD only. The number of disks you can attach is independent of the instance type, with the caveat that the shared-core instance types (currently f1-micro and g1-small) don't support Local SSD. Like Instance Store, Local SSD is significantly faster than network-attached storage. The size of each disk is fixed at 375 GB, and you can attach at most four disks to an instance. A key difference between Instance Store and Local SSD is that Local SSD does have a cost associated with it.
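Because Local SSD comes only in fixed 375 GB disks with a four-disk cap, the capacity you can attach is easy to derive; a small sketch of that constraint:

```python
# Local SSD constraints from the text: fixed 375 GB disks, at most four.
LOCAL_SSD_SIZE_GB = 375
MAX_LOCAL_SSDS = 4

def local_ssd_capacity(disk_count):
    """Total Local SSD capacity in GB for a given number of disks."""
    if not 1 <= disk_count <= MAX_LOCAL_SSDS:
        raise ValueError("Local SSD count must be between 1 and 4")
    return disk_count * LOCAL_SSD_SIZE_GB

print(local_ssd_capacity(4))  # 1500: maximum Local SSD per instance, in GB
```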
VM Import
Both AWS and Google Cloud Platform provide ways for you to import existing virtual machine images created on other platforms to their platform. This allows you to leverage your investment in your on-premises servers and avoids the overhead of having to repeat work already done.
Elastic Compute Cloud accomplishes this with a service called VM Import/Export. It supports a number of virtual machine image types (Raw, OVA, VMDK and VHD). It also supports a number of operating systems: a variety of versions of Windows, Red Hat Enterprise Linux, CentOS, Ubuntu, and Debian. The process for importing involves running the command line tool, which bundles the virtual machine image and uploads it to Simple Storage Service, where it is then created as an AMI.
You can import virtual machine images into Google Compute Engine, but the process does not have a formal name like VM Import/Export. The currently supported virtual machine image types are Raw, VirtualBox, and AMI. The actual import process is similar, though less automated than the AWS process. To import an image, you need to convert it and make sure certain packages and drivers are installed. While this is more effort than VM Import/Export, it leaves you with more flexibility when importing images to Google Compute Engine. After the image has been converted, it is uploaded to Google Cloud Storage, where it is imported as an image. As with AWS, there is no cost for this service aside from the cost of storing the image in Google Cloud Storage.
Pricing Model
Elastic Compute Cloud and Google Compute Engine offer very similar pricing models. Both services charge you for instances only for the length of time that you use them. With Amazon Elastic Compute Cloud, each instance type has a cost per hour, called the on-demand rate; run an instance for an hour and you are billed that amount. With Google Compute Engine, you are charged by the minute rather than by the hour: Google charges a minimum of 10 minutes, and then by the minute for as long as the instance runs. With both services, you can run your instance for as long as you want without ever having to talk to someone at Amazon or Google.
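The following Python sketch contrasts the two billing granularities, assuming AWS rounds partial hours up to a full hour (as the hourly model implies) and using a hypothetical hourly rate.

```python
# Contrast of per-hour vs per-minute billing. The hourly rate is hypothetical,
# and rounding partial hours up is an assumption implied by the hourly model.
import math

def ec2_style_cost(minutes, hourly_rate):
    """Per-hour billing: each partial hour is rounded up to a full hour."""
    return math.ceil(minutes / 60) * hourly_rate

def gce_style_cost(minutes, hourly_rate):
    """Per-minute billing with a 10-minute minimum charge."""
    billed_minutes = max(minutes, 10)
    return billed_minutes * (hourly_rate / 60)

rate = 0.05  # $/hour, hypothetical
print(round(ec2_style_cost(61, rate), 4))  # 0.1: 61 minutes bills as 2 hours
print(round(gce_style_cost(61, rate), 4))  # 0.0508: 61 billed minutes
```

For short-lived or oddly-sized workloads, the per-minute model tracks actual usage much more closely, which is the practical consequence of the difference described above.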
AWS and Google Cloud Platform do have significant differences when it comes to customer discounts. Elastic Compute Cloud has three pricing options. The first two, on-demand and Spot Instances, are discussed above. The third is called Reserved Instances (RIs). RIs ask you to commit to a certain number of instances for a one- or three-year term. In exchange, you receive a lower cost for those instances. You can choose to pay none of the term up front, some of it, or all of it; the more you pay up front, the greater the discount. You also get a larger discount for a three-year term than for a one-year term. With RIs, you trade resource flexibility for a lower instance price. RIs are also tied to a specific instance type and Availability Zone at purchase; you can only move Availability Zones and exchange RIs for different instance types within the same family.
Google Compute Engine discounting works quite differently. Google Compute Engine automatically applies a discount to instances that run for a certain amount of time each month, meaning you don't need a multi-year commitment to get the same or better discount. This is called a Sustained Use Discount. The longer you use an instance in a given month, the greater the discount, up to 30% off the standard on-demand rate. To get a sense of how these pricing structures differ, and to estimate what your workloads might cost, check out our TCO calculator.
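To see how a Sustained Use Discount can reach 30%, consider the banded scheme Google documented at the time: each successive quarter of the month an instance runs is billed at a deeper incremental rate (100%, 80%, 60%, and 40% of on-demand). The sketch below models that scheme with a hypothetical hourly rate; treat the band values as an assumption drawn from the then-current documentation.

```python
# Sketch of the sustained-use banding scheme: each successive quarter of the
# month that an instance runs is billed at a deeper incremental rate.
# Band multipliers are an assumption from the then-current documentation.
BANDS = [1.00, 0.80, 0.60, 0.40]  # rate multiplier per quarter of the month

def sustained_use_cost(hours_used, hourly_rate, hours_in_month=720):
    quarter = hours_in_month / 4
    cost = 0.0
    remaining = hours_used
    for multiplier in BANDS:
        in_band = min(remaining, quarter)
        cost += in_band * hourly_rate * multiplier
        remaining -= in_band
        if remaining <= 0:
            break
    return cost

full_month = sustained_use_cost(720, 0.05)   # 0.05 $/hour, hypothetical
print(round(1 - full_month / (720 * 0.05), 2))  # 0.3: full month is 30% off
```

Averaging the four band multipliers gives 0.70, which is where the "as much as 30% off" figure comes from: an instance that runs the entire month pays an effective 70% of the on-demand rate.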
Networking Services
Networking services provide connectivity across virtual machines, servers deployed on-premises, and other cloud services. Common networking services such as DNS, VPN, load balancers, firewall, and routing are available through APIs. Network administrators use the console, command line interface and API to configure these services.
In AWS, while networking is generally treated as a separate service, it is very much tied to Elastic Compute Cloud. This is because, with few exceptions, most services deploy on instances, just as Amazon Elastic Compute Cloud does. Amazon Relational Database Service, Amazon Redshift, and Amazon Elastic MapReduce, for example, all deploy on instances. That means these services need to be aware of networking, so networking is not just a part of the compute service. This is an important difference from Google Cloud Platform, as we will discuss below.
AWS has two different networking stacks. The oldest, Elastic Compute Cloud-Classic, was introduced with Elastic Compute Cloud. Elastic Compute Cloud-Classic launches all instance types into a public, shared network, where every instance has access to the internet and is assigned a public IP address.
Virtual Private Cloud, or VPC, was introduced to allow networking administration more like that of traditional data centers. It allowed for creating private, RFC 1918 address spaces, and included subnetworks, network access control lists (NACLs), inbound and outbound firewall rules, routing, and VPN. These two network stacks, Elastic Compute Cloud-Classic and Elastic Compute Cloud-VPC, have existed in parallel since VPC was launched. When Elastic Compute Cloud-VPC first launched, it was optional. In late 2013, VPC became the default network stack for new accounts. Elastic Compute Cloud-Classic remains an option only for older AWS accounts.
The differences between AWS networking and Google Cloud networking are significant. This is due to the nature of how these services were designed. Google Cloud Platform treats networking as something that spans all services, not just compute services. It is based on Google’s Andromeda software-defined networking architecture, which allows networking elements to be created at any level in software. As a result, Cloud Platform can create a network that exactly fits each service's needs: for example, secure firewalls for virtual machines in Google Compute Engine, fast connections between database nodes in Cloud Bigtable, or quick delivery of query results in BigQuery.
To create an instance in Google Compute Engine, you need a network. Google Cloud Platform creates a default network for you automatically, and you can create more as needed. Unlike AWS, there is no choice of a public network like Elastic Compute Cloud-Classic; in all cases, you create a private network, much like Elastic Compute Cloud-VPC. Unlike Elastic Compute Cloud-VPC, Google networking does not have subnetworks, but it does have firewall rules, routing, and VPN. A network is not required by every Google Cloud Platform service, however. Google BigQuery, for example, does not require one because it is a managed service.
Most of the networking entities in Google Cloud Platform, such as load balancers, firewall rules and routing tables, have global scope. More importantly, networks themselves have a global scope. This means that you can create a single, global private IP space without having to connect multiple private networks or manage those address spaces separately. Because of this single, global network, all of your instances are addressable within your network by both IP address and name.
Another major difference between Google Cloud Platform networking and Elastic Compute Cloud-VPC is the concept of Live Migration. Under normal circumstances, all hardware in any data center, including Google's, will eventually need either maintenance or replacement. There are also unforeseen circumstances that can cause hardware to fail in any number of ways. When these events happen at Google, Cloud Platform can transparently move virtual machines from the affected hardware to hardware that is working normally. This is done without any interaction from the customer.
IP Addresses
Assigning IP addresses to virtual machines is a critical task. There are a few important differences in how this works between AWS and Google Cloud Platform, and some differences in terminology that you should know.
Here's how AWS IP terms and concepts map to Google Cloud Platform:
| Feature | AWS | Google Cloud Platform |
| --- | --- | --- |
| Permanent IP | Elastic IP | Static IP |
| Temporary IP | Ephemeral IP | Ephemeral IP |
| Internal IP | Internal IP | Internal IP |
When you create an instance in Elastic Compute Cloud-Classic, your instance is given an external IP address that is only valid as long as that machine is running. This is referred to as an ephemeral IP. It is also given an internal network IP. At any point you can create an Elastic IP and assign it to the instance. This is much the same in Elastic Compute Cloud-VPC, except that assigning an external IP to a new instance is optional.
In Google Cloud Platform, IP addresses work in a way that is similar to Elastic Compute Cloud-VPC. At launch, all instances have an internal IP. You can optionally request an external IP that only exists for as long as that instance is running. Additionally, you can request a permanent IP address to attach to an instance. Like an Elastic IP, this IP address is yours until you choose to release it. One difference between AWS and Google is that you can take an ephemeral IP address and promote it to a static IP address, thereby attaching it to your account.
Load Balancing
Load balancers distribute incoming traffic across multiple virtual machines. When configured appropriately, load balancers make applications fault tolerant and increase application availability.
Here's how Elastic Load Balancer terms and concepts map to the Google Compute Engine Load Balancer:
| Feature | Elastic Load Balancer | Google Compute Engine LB |
| --- | --- | --- |
| Network Load Balancing | Yes | Yes |
| Content-Based Load Balancing | No | Yes |
| Support for static IP address | No | Yes |
| Cross Region Load Balancing | No | Yes |
| Scaling pattern | Linear | Real-time |
| Deployment Locality | Regional | Global |
The AWS Elastic Load Balancer (ELB) allows you to direct traffic to your instances in one or more availability zones, up to all of them, in a particular region. The ELB checks the health of the instances and, should any of them become unhealthy, stops sending traffic to that instance. It also integrates with AWS’s Auto Scaling service, so that when instances are created or terminated by Auto Scaling, the Elastic Load Balancer is made aware of the change automatically. When an Elastic Load Balancer is created, you are given a DNS name to point traffic to. Provided you are using Amazon’s Route 53, you can point a root domain at an Elastic Load Balancer; otherwise, you have to use a CNAME record.
The Google Compute Engine Load Balancer also directs traffic to back end instances in as many zones as you choose. However, there are a few important differences between how they work from this point on:
- Google Compute Engine lets you pick whether you need a Network (Layer 4) Load Balancer, which balances TCP traffic within a region, or an HTTP(S) (Layer 7) Load Balancer, which can balance traffic globally.
- When you provision a Google Compute Engine Load Balancer, it returns a single, globally accessible IP address. This IP address persists for the lifetime of the Load Balancer, so it can be used for DNS A records, whitelisting, or application configuration.
Scaling Pattern
Elastic Load Balancer scales up and down in response to traffic. The more traffic that goes through the Elastic Load Balancer, the more capacity it adds; as traffic decreases, it removes capacity. Elastic Load Balancer changes capacity either by changing the size of the load balancing resources (for example, adding larger load balancers to meet increased load) or by changing their number (for example, reducing the number of load balancers when traffic goes down). Elastic Load Balancer does not scale instantly; it can take anywhere from 1 to 7 minutes to respond to changes in traffic. If you expect a sudden spike in traffic, you must request that AWS pre-warm your Elastic Load Balancer to a certain traffic level.
The Google Compute Engine Load Balancer also responds to traffic by scaling up or down the amount of capacity necessary to meet the traffic being passed through it. However, it responds in real time to the traffic, without a delay or pre-warming.
Pricing Model
AWS and Google Cloud Platform price their load balancing services the same way: each charges an hourly rate for the load balancer and a separate rate for the amount of traffic that passes through it.
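Under this shared model, a monthly load balancer bill can be sketched as hours times an hourly rate, plus gigabytes processed times a per-GB rate. The rates in the example below are placeholders, not actual prices from either provider:

```python
def load_balancer_cost(hours, gb_processed, hourly_rate, per_gb_rate):
    """Estimate a monthly load balancer bill under the shared
    hourly-plus-traffic pricing model. The rates are placeholder
    inputs; consult each provider's pricing page for real figures."""
    return hours * hourly_rate + gb_processed * per_gb_rate

# e.g. a full month (730 h) at a hypothetical $0.025/hour, plus 1 TiB
# of traffic at a hypothetical $0.008/GB:
print(round(load_balancer_cost(730, 1024, 0.025, 0.008), 3))  # 26.442
```

Because the formula is the same on both platforms, comparing the two comes down to plugging in each provider's current rates for the hourly and per-GB components.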
Peering
A peering service allows customers to connect to a cloud service directly over a network. How this is done depends on the type of service.
Here's how AWS peering terms and concepts map to Google Cloud Platform:
| Feature | AWS | Google Cloud Platform |
| --- | --- | --- |
| Virtual Private Network | VPC-VPN | Cloud VPN |
| Carrier Peering | Direct Connect | Carrier Interconnect |
| Direct Peering | N/A | Direct Peering |
| CDN Peering | N/A | CDN Interconnect |
Virtual Private Network
Creating a Virtual Private Network, or VPN, from one location to another allows you to create a secured, private link between two networks over the public internet. Both AWS and Google Cloud Platform offer this as a service, and the process for creating a VPN is very similar on each. At a high level, a VPN gateway at each end creates a tunnel from its public IP address to the other gateway's public IP address, and the two establish a secure connection over it.
Carrier Peering
There are circumstances when connecting to a cloud platform over a VPN doesn’t provide the speed or security required by a particular workload. In such cases it is beneficial to have a leased network line at a guaranteed capacity level. Both Amazon and Google offer this service in conjunction with partners.
At AWS, Direct Connect allows you to create a private leased line to AWS from a partner carrier facility. These facilities allow you to connect a private line, at a certain capacity level, into a certain region. Each partner location services a specific region.
At Google, Carrier Interconnect likewise allows you to create a private leased line into Google Cloud Platform from a partner facility, at a certain capacity level. The major difference is that Carrier Interconnect connects your traffic into the global Google Cloud Platform network, not a particular region.
Direct Peering
Direct peering is similar to Carrier Peering in that you may want a private, dedicated line from your facility to the cloud. The difference is that you would be connecting directly to the cloud provider, not via a third party partner. Otherwise, the mechanism is quite similar. Amazon does not offer this service; Google does.
CDN Peering
Content Delivery Network (CDN) peering is conceptually similar to carrier peering. In this case, instead of peering between your facility and a cloud provider, you are connecting between your resources in the cloud and a CDN. This is done from the edge locations of the cloud to the CDN. Google offers this service through CDN Interconnect. Amazon only offers this service through its own CDN service, CloudFront.
Pricing
AWS and Google Cloud Platform charge for VPN services the same way, at an hourly rate.
Pricing for the carrier peering services has two components. The first component is the pricing from the partner for the leased line. This is generally outside of the control of each platform. The second component is the pricing for the cloud services. Here, AWS and Google Cloud Platform handle pricing differently. AWS charges by the amount of capacity you have provisioned with your partner. For example, there’s a charge for a 1G port speed, per hour, and more for 10G, and so on.
Conversely, Google does not charge for this service. In addition, Google does not charge for direct peering.
Like carrier peering, CDN peering has two pricing components. The first is set by the partner. AWS also charges an additional amount based on the amount of capacity you have provisioned.
Like carrier peering, Google does not charge for CDN Interconnect.
DNS
DNS translates human-readable domain names into the numeric IP addresses that servers use to connect with each other. Services such as Amazon Route 53 and Google Cloud DNS offer scalable, managed DNS in the cloud.
Here's how Amazon Route 53 features map to Google Cloud DNS:
| Feature | Amazon Route 53 | Google Cloud DNS |
| --- | --- | --- |
| Zone | Hosted Zone | Managed Zone |
| Support for most DNS record types | Yes | Yes |
| Anycast-based serving | Yes | Yes |
| Domain Registrar | Yes* | Yes** |
| Latency based routing | Yes | No |
| Geo DNS load balancing | Yes | No |
| DNSSEC | No | No |
*Route 53 is a domain reseller for Gandi.
**Google Domains is available to purchase domains from Google.
The Domain Name System, DNS, has been around for nearly as long as the internet has. It has a relatively simple feature set that allows many of the things that we do every day to work easily. With that in mind, there are not many complicated features to Route 53 or Cloud DNS, and the two are very close in terms of feature parity.
Both Route 53 and Cloud DNS support nearly all record types and anycast-based serving, and each connects to a service that allows you to register domains. Amazon is a reseller of the registrar Gandi. Google has a separate registrar service, Google Domains. Neither service supports DNSSEC, the security extensions for DNS.
Route 53 does support two kinds of routing that Cloud DNS does not—geography-based routing and latency-based routing. Geography-based routing lets you restrict your content to certain geographic regions of the world. Latency-based routing lets you direct traffic based on the latency measured by the DNS service.
Pricing
Both services price based on two similar parameters. The first is the number of zones hosted per month by the service; the second is the number of queries per month. Route 53 charges a higher rate for geography-based or latency-based routing queries.
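The shared pricing model can be sketched as a per-zone charge plus per-million-query charges, with Route 53's latency- and geography-based queries billed at a higher rate. All rates below are hypothetical placeholders, not published prices:

```python
def dns_monthly_cost(zones, standard_queries_m, premium_queries_m,
                     zone_rate, query_rate, premium_query_rate):
    """Sketch of the managed-DNS pricing model shared by Route 53 and
    Cloud DNS: a per-zone charge plus per-million-query charges.
    "Premium" stands in for Route 53's latency- or geography-based
    routing queries, which bill at a higher rate. All rates are
    placeholder inputs, not published prices."""
    return (zones * zone_rate
            + standard_queries_m * query_rate
            + premium_queries_m * premium_query_rate)

# e.g. 5 zones, 20M standard queries, and 2M latency-based queries at
# hypothetical rates of $0.50/zone, $0.40/M, and $0.60/M:
print(round(dns_monthly_cost(5, 20, 2, 0.50, 0.40, 0.60), 2))  # 11.7
```

For Cloud DNS, which does not offer latency- or geography-based routing, the premium component simply drops to zero.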