This document describes the machine families, machine series, and machine types that you can choose from to create a virtual machine (VM) instance or bare metal instance with the resources that you need. When you create a compute instance, you select a machine type from a machine family that determines the resources available to that instance.
There are several machine families you can choose from. Each machine family is further organized into machine series and predefined machine types within each series. For example, within the C3 machine series in the general-purpose machine family, you can select the c3-standard-4 machine type.
- General-purpose—best price-performance ratio for a variety of workloads.
- Memory-optimized—ideal for memory-intensive workloads, offering more memory per core than other machine families, with up to 4 TB of memory.
- Accelerator-optimized—ideal for massively parallelized Compute Unified Device Architecture (CUDA) compute workloads, such as machine learning (ML) and high performance computing (HPC). This family is the best option for workloads that require GPUs.
Compute Engine terminology
This documentation uses the following terms:
- Machine family: A curated set of processor and hardware configurations optimized for specific workloads, for example, General-purpose, Accelerator-optimized, or Memory-optimized.
- Machine series: Machine families are further classified by series, generation, and processor type. Each series focuses on a different aspect of computing power or performance. For example, the M series offers more memory, while the C series offers better performance.
- Machine type: Every machine series offers at least one machine type. Each machine type provides a set of resources for your compute instance, such as vCPUs, memory, disks, and GPUs.
Predefined machine types
Machine types are predefined and come with a non-configurable amount of memory and vCPUs. The machine types use a variety of vCPU to memory ratios:
- highcpu: from 1 to 3 GB of memory per vCPU; typically 2 GB of memory per vCPU.
- standard: from 3 to 7 GB of memory per vCPU; typically 4 GB of memory per vCPU.
- highmem: from 7 to 12 GB of memory per vCPU; typically 8 GB of memory per vCPU.
- megamem: from 12 to 15 GB of memory per vCPU; typically 14 GB of memory per vCPU.
- ultramem: from 24 to 31 GB of memory per vCPU.
For example, a c3-standard-22 machine type has 22 vCPUs and, as a standard machine type, 88 GB of memory.
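If it helps to see the arithmetic, the following sketch derives the memory of a predefined machine type from its name by using the typical ratios listed above. It is illustrative only, not a Compute Engine API, and exact values for individual machine types can differ from the typical ratio.

```python
# Illustrative only: estimate the memory of a predefined machine type from
# its name, using the typical GB-per-vCPU ratios listed above (ultramem is
# omitted because the document gives only a range for it).
TYPICAL_GB_PER_VCPU = {
    "highcpu": 2,
    "standard": 4,
    "highmem": 8,
    "megamem": 14,
}


def estimate_memory_gb(machine_type: str) -> int:
    """Estimate memory for names of the form <series>-<ratio>-<vCPU count>."""
    _series, ratio, vcpus = machine_type.split("-")
    return TYPICAL_GB_PER_VCPU[ratio] * int(vcpus)


print(estimate_memory_gb("c3-standard-22"))  # 88, matching the example above
```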
Machine family and series recommendations
The following table provides recommendations for different workloads.
C3 | M3 | A3 |
---|---|---|
Consistently high performance for a variety of workloads | Highest memory to compute ratios for memory-intensive workloads | Optimized for accelerated high performance computing workloads |
After you create a compute instance, you can use rightsizing recommendations to optimize resource utilization based on your workload. For more information, see Applying machine type recommendations for VMs.
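As a rough illustration, you can read the rightsizing recommendations for a zone with the Recommender API. The following sketch assumes the google-cloud-recommender Python client library and placeholder project and zone values; it only prints each recommendation rather than applying it.

```python
from google.cloud import recommender_v1


def list_machine_type_recommendations(project_id: str, zone: str) -> None:
    """Sketch: print machine type (rightsizing) recommendations for one zone."""
    client = recommender_v1.RecommenderClient()
    # Machine type recommendations are zonal, so the location is a zone.
    parent = (
        f"projects/{project_id}/locations/{zone}"
        "/recommenders/google.compute.instance.MachineTypeRecommender"
    )
    for recommendation in client.list_recommendations(parent=parent):
        print(recommendation.name)
        print(f"  {recommendation.description}")
```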
General-purpose machine family guide
The general-purpose machine family offers several machine series with the best price-performance ratio for a variety of workloads.
Compute Engine offers general-purpose machine types that run on x86 architecture. The C3 machine series offers up to 176 vCPUs and 2, 4, or 8 GB of memory per vCPU on the Intel Sapphire Rapids CPU platform and Titanium. C3 instances are aligned with the underlying NUMA architecture to offer optimal, reliable, and consistent performance.
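For example, you could create a C3 instance by naming a machine type from the series when you create the VM. The following sketch uses the google-cloud-compute Python client library; the function name, boot image, disk size, and default network are placeholder assumptions, not values this document prescribes.

```python
from google.cloud import compute_v1


def create_c3_instance(project_id: str, zone: str, instance_name: str) -> None:
    """Sketch: create a VM that uses the c3-standard-22 machine type."""
    boot_disk = compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            # Placeholder boot image and size; any image supported on C3 works.
            source_image="projects/debian-cloud/global/images/family/debian-12",
            disk_size_gb=10,
            # Hyperdisk Balanced is a disk type supported by the C3 series.
            disk_type=f"zones/{zone}/diskTypes/hyperdisk-balanced",
        ),
    )
    instance = compute_v1.Instance(
        name=instance_name,
        # The machine type determines the vCPUs and memory of the instance.
        machine_type=f"zones/{zone}/machineTypes/c3-standard-22",
        disks=[boot_disk],
        network_interfaces=[
            compute_v1.NetworkInterface(network="global/networks/default")
        ],
    )
    operation = compute_v1.InstancesClient().insert(
        project=project_id, zone=zone, instance_resource=instance
    )
    operation.result()  # Wait until the create operation completes.
```

Waiting on the returned operation confirms that the instance exists before you connect to it.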
Memory-optimized machine family guide
The memory-optimized machine family has machine series that are ideal for OLAP and OLTP SAP workloads, genomic modeling, electronic design automation, and memory-intensive HPC workloads. This family offers more memory per core than any other machine family, with up to 4 TB of memory.
M3 instances offer up to 128 vCPUs, with up to 30.5 GB of memory per vCPU, and are available on the Intel Ice Lake CPU platform.
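To check which M3 machine types a particular zone offers, and their memory-per-vCPU ratios, you could list machine types with the google-cloud-compute client library, as in the sketch below. Availability varies by zone, and filtering by name prefix on the client side is only one convenient approach.

```python
from google.cloud import compute_v1


def list_m3_machine_types(project_id: str, zone: str) -> None:
    """Sketch: print the memory-per-vCPU ratio of M3 machine types in a zone."""
    client = compute_v1.MachineTypesClient()
    for machine_type in client.list(project=project_id, zone=zone):
        # Filter client-side by name prefix; M3 machine type names start with "m3-".
        if machine_type.name.startswith("m3-"):
            gb_per_vcpu = machine_type.memory_mb / 1024 / machine_type.guest_cpus
            print(
                f"{machine_type.name}: {machine_type.guest_cpus} vCPUs, "
                f"{gb_per_vcpu:.1f} GB per vCPU"
            )
```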
Accelerator-optimized machine family guide
The accelerator-optimized machine family is ideal for massively parallelized Compute Unified Device Architecture (CUDA) compute workloads, such as machine learning (ML) and high performance computing (HPC). This machine family is the optimal choice for workloads that require GPUs.
A3 instances are available with the A3 Edge machine type (a3-edgegpu-8g-nolssd), which offers 208 vCPUs, 1,872 GB of memory, and 8 NVIDIA H100 GPUs, on the Intel Sapphire Rapids CPU platform and Titanium.
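Because the GPUs are part of the machine type itself, you can see them by describing the machine type. The sketch below assumes the google-cloud-compute Python client library and a zone that offers the A3 Edge machine type; the field names come from the Compute Engine API's machine type resource.

```python
from google.cloud import compute_v1


def describe_a3_edge(project_id: str, zone: str) -> None:
    """Sketch: describe the A3 Edge machine type, including its attached GPUs."""
    machine_type = compute_v1.MachineTypesClient().get(
        project=project_id, zone=zone, machine_type="a3-edgegpu-8g-nolssd"
    )
    print(
        f"{machine_type.name}: {machine_type.guest_cpus} vCPUs, "
        f"{machine_type.memory_mb} MB of memory"
    )
    for accelerator in machine_type.accelerators:
        print(
            f"  {accelerator.guest_accelerator_count} x "
            f"{accelerator.guest_accelerator_type}"
        )
```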
Machine series comparison
To learn how your selection affects the performance of disk volumes attached to your compute instances, see Hyperdisk performance limits.
The following table compares the characteristics of the C3, M3, and A3 machine series.
 | C3 | M3 | A3 Edge |
---|---|---|---|
Workload type | General-purpose | Memory optimized | Accelerator optimized |
Instance type | VM | VM | VM |
CPU type | Intel Sapphire Rapids | Intel Ice Lake | Intel Sapphire Rapids |
Architecture | x86 | x86 | x86 |
vCPUs | 4 to 176 | 32 to 128 | 208 |
vCPU definition | Thread | Thread | Thread |
Memory | 8 to 1,408 GB | 976 to 3,904 GB | 1,872 GB |
Custom machine types | — | — | — |
Extended memory | — | — | — |
Sole tenancy | — | ||
Nested virtualization | — | — | |
Confidential Computing | — | — | — |
Disk interface type | NVMe | NVMe | NVMe |
Hyperdisk Balanced | |||
Hyperdisk Balanced HA | — | — | — |
Hyperdisk Extreme | — | — | — |
Hyperdisk ML | — | — | — |
Hyperdisk Throughput | — | — | — |
Local SSD | — | — | — |
Max Local SSD | 0 | 0 | 0 |
Standard PD | — | — | — |
Balanced PD | — | — | — |
SSD PD | — | — | — |
Extreme PD | — | — | — |
Network interfaces | gVNIC and IDPF | gVNIC | gVNIC |
Network performance | 23 to 100 Gbps | up to 32 Gbps | up to 800 Gbps |
High-bandwidth network | 50 to 200 Gbps | 50 to 100 Gbps | up to 800 Gbps |
Max GPUs | 0 | 0 | 8 |
Sustained use discounts | — | — | — |
Committed use discounts | — | — | — |
Spot VM discounts | — | — | — |
GPUs and compute instances
GPUs are used to accelerate workloads and are supported for A3 instances. The GPUs are attached automatically when you create the instance. A3 instances have a fixed number of GPUs, vCPUs, and memory per machine type.
For more information, see GPUs on Compute Engine.
What's next
Learn how to create and start a VM.
Complete the Quickstart using a Linux VM.
Complete the Quickstart using a Windows VM.
Learn more about attaching block storage to your VMs.