This document describes how to reduce network latency among your Compute Engine instances by creating and applying compact placement policies to them. To learn more about placement policies, including their supported machine series, restrictions, and pricing, see Placement policies overview.
A compact placement policy specifies that your instances should be physically placed closer to each other. This can help improve performance and reduce network latency among your instances when, for example, you run high performance computing (HPC), machine learning (ML), or database server workloads.
Before you begin
- If you haven't already, then set up authentication. Authentication is the process by which your identity is verified for access to Trusted Cloud by S3NS services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine by selecting one of the following options.
Select the tab for how you plan to use the samples on this page:
gcloud
- Install the Google Cloud CLI, and then sign in to the gcloud CLI with your federated identity. After signing in, initialize the Google Cloud CLI by running the following command:
  gcloud init
- Set a default region and zone.
REST
To use the REST API samples on this page in a local development environment, you use the credentials that you provide to the gcloud CLI.
- Install the Google Cloud CLI, and then sign in to the gcloud CLI with your federated identity. After signing in, initialize the Google Cloud CLI by running the following command:
  gcloud init
For more information, see Authenticate for using REST in the Trusted Cloud authentication documentation.
Required roles
To get the permissions that you need to create and apply a compact placement policy to compute instances, ask your administrator to grant you the following IAM roles on your project:
- Compute Instance Admin (v1) (roles/compute.instanceAdmin.v1)
- To create a reservation: Compute Admin (roles/compute.admin)
For more information about granting roles, see Manage access to projects, folders, and organizations.
These predefined roles contain the permissions required to create and apply a compact placement policy to compute instances. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The following permissions are required to create and apply a compact placement policy to compute instances:
- To create placement policies: compute.resourcePolicies.create on the project
- To apply a placement policy to existing instances: compute.instances.addResourcePolicies on the project
- To create instances: compute.instances.create on the project
- To use a custom image to create the VM: compute.images.useReadOnly on the image
- To use a snapshot to create the VM: compute.snapshots.useReadOnly on the snapshot
- To use an instance template to create the VM: compute.instanceTemplates.useReadOnly on the instance template
- To assign a legacy network to the VM: compute.networks.use on the project
- To specify a static IP address for the VM: compute.addresses.use on the project
- To assign an external IP address to the VM when using a legacy network: compute.networks.useExternalIp on the project
- To specify a subnet for the VM: compute.subnetworks.use on the project or on the chosen subnet
- To assign an external IP address to the VM when using a VPC network: compute.subnetworks.useExternalIp on the project or on the chosen subnet
- To set VM instance metadata for the VM: compute.instances.setMetadata on the project
- To set tags for the VM: compute.instances.setTags on the VM
- To set labels for the VM: compute.instances.setLabels on the VM
- To set a service account for the VM to use: compute.instances.setServiceAccount on the VM
- To create a new disk for the VM: compute.disks.create on the project
- To attach an existing disk in read-only or read-write mode: compute.disks.use on the disk
- To attach an existing disk in read-only mode: compute.disks.useReadOnly on the disk
- To create a reservation: compute.reservations.create on the project
- To create an instance template: compute.instanceTemplates.create on the project
- To create a managed instance group (MIG): compute.instanceGroupManagers.create on the project
- To view the details of an instance: compute.instances.get on the project
You might also be able to get these permissions with custom roles or other predefined roles.
Create a compact placement policy
Before you create a compact placement policy, consider the following:
- If you want to apply a compact placement policy to a compute instance other than N2 or N2D, then we recommend that you specify a maximum distance value.
- You can only apply compact placement policies to A4 or A3 Ultra instances that are deployed using the features provided by Cluster Director. For more information, see Cluster Director in the AI Hypercomputer documentation.
- By default, you can't apply compact placement policies with a maximum distance value to A3 Mega, A3 High, or A3 Edge instances. To request access to this feature, contact your assigned Technical Account Manager (TAM) or the Sales team.
To create a compact placement policy, select one of the following options:
gcloud
To apply the compact placement policy to N2 or N2D instances, create the policy using the gcloud compute resource-policies create group-placement command with the --collocation=collocated flag.

gcloud compute resource-policies create group-placement POLICY_NAME \
    --collocation=collocated \
    --region=REGION

Replace the following:
- POLICY_NAME: the name of the compact placement policy.
- REGION: the region in which to create the placement policy.
To apply the compact placement policy to any other supported instances, create the policy using the gcloud beta compute resource-policies create group-placement command with the --collocation=collocated and --max-distance flags.

gcloud beta compute resource-policies create group-placement POLICY_NAME \
    --collocation=collocated \
    --max-distance=MAX_DISTANCE \
    --region=REGION

Replace the following:
- POLICY_NAME: the name of the compact placement policy.
- MAX_DISTANCE: the maximum distance configuration for your instances. The value must be between 1, which places your instances in the same rack for the lowest possible network latency, and 3, which places your instances in adjacent clusters. If you want to apply the compact placement policy to a reservation, or to an A4 or A3 Ultra instance, then you can't specify a value of 1.
- REGION: the region in which to create the placement policy.
REST
To apply the compact placement policy to N2 or N2D instances, create the policy by making a POST request to the resourcePolicies.insert method. In the request body, include the collocation field and set it to COLLOCATED.

POST https://compute.s3nsapis.fr/compute/v1/projects/PROJECT_ID/regions/REGION/resourcePolicies

{
  "name": "POLICY_NAME",
  "groupPlacementPolicy": {
    "collocation": "COLLOCATED"
  }
}

Replace the following:
- PROJECT_ID: the ID of the project where you want to create the placement policy.
- REGION: the region in which to create the placement policy.
- POLICY_NAME: the name of the compact placement policy.
To apply the compact placement policy to any other supported instances, create the policy by making a POST request to the beta.resourcePolicies.insert method. In the request body, include the following:
- The collocation field set to COLLOCATED.
- The maxDistance field.

POST https://compute.s3nsapis.fr/compute/beta/projects/PROJECT_ID/regions/REGION/resourcePolicies

{
  "name": "POLICY_NAME",
  "groupPlacementPolicy": {
    "collocation": "COLLOCATED",
    "maxDistance": MAX_DISTANCE
  }
}

Replace the following:
- PROJECT_ID: the ID of the project where you want to create the placement policy.
- REGION: the region in which to create the placement policy.
- POLICY_NAME: the name of the compact placement policy.
- MAX_DISTANCE: the maximum distance configuration for your instances. The value must be between 1, which places your instances in the same rack for the lowest possible network latency, and 3, which places your instances in adjacent clusters. If you want to apply the compact placement policy to a reservation, or to an A4 or A3 Ultra instance, then you can't specify a value of 1.
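The max-distance rules above lend themselves to a small pre-flight check before calling the API. The following sketch builds the resourcePolicies.insert request body and enforces those rules; the helper name build_compact_policy_body and its arguments are illustrative, not part of any client library.

```python
# Hypothetical helper: builds the JSON body for a resourcePolicies.insert
# request and validates the max-distance rules described in this section.

def build_compact_policy_body(name, max_distance=None, for_reservation=False):
    if max_distance is not None:
        # The value must be between 1 (same rack) and 3 (adjacent clusters).
        if not 1 <= max_distance <= 3:
            raise ValueError("max_distance must be between 1 and 3")
        # A value of 1 can't be used with reservations (or A4/A3 Ultra).
        if max_distance == 1 and for_reservation:
            raise ValueError("max_distance=1 is not allowed for reservations")
    body = {"name": name,
            "groupPlacementPolicy": {"collocation": "COLLOCATED"}}
    if max_distance is not None:
        body["groupPlacementPolicy"]["maxDistance"] = max_distance
    return body

print(build_compact_policy_body("example-policy", max_distance=2))
```

The returned dictionary matches the request body shown in the REST example; serializing it with json.dumps would produce the payload to POST.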
Apply a compact placement policy
You can apply a compact placement policy to an existing compute instance or managed instance group (MIG), or when creating instances, instance templates, MIGs, or reservations of instances.
To apply a compact placement policy to a Compute Engine resource, select one of the following methods:
- Apply the policy to an existing instance.
- Apply the policy while creating an instance.
- Apply the policy while creating a reservation.
- Apply the policy while creating an instance template.
- Apply the policy to instances in a MIG.
After you apply a compact placement policy to an instance, you can verify the physical location of the instance in relation to other instances that specify the same placement policy.
Apply the policy to an existing instance
Before applying a compact placement policy to an existing compute instance, make sure of the following:
- The instance and the compact placement policy must be located in the same region. For example, if the placement policy is located in region us-central1, then the instance must be located in a zone in us-central1. If you need to migrate an instance to another region, then see Move an instance between zones or regions.
- The instance must use a supported machine series and host maintenance policy. If you need to make changes to the instance, then do one or both of the following:
Otherwise, applying the compact placement policy to the instance fails. If the instance already specifies a placement policy and you want to replace it, then see Replace a placement policy in an instance instead.
To apply a compact placement policy to an existing instance, select one of the following options:
gcloud
To apply a compact placement policy to an existing instance, use the gcloud compute instances add-resource-policies command.

gcloud compute instances add-resource-policies INSTANCE_NAME \
    --resource-policies=POLICY_NAME \
    --zone=ZONE

Replace the following:
- INSTANCE_NAME: the name of an existing instance.
- POLICY_NAME: the name of an existing compact placement policy.
- ZONE: the zone where the instance is located.
REST
To apply a compact placement policy to an existing instance, make a POST request to the instances.addResourcePolicies method.

POST https://compute.s3nsapis.fr/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME/addResourcePolicies

{
  "resourcePolicies": [
    "projects/PROJECT_ID/regions/REGION/resourcePolicies/POLICY_NAME"
  ]
}

Replace the following:
- PROJECT_ID: the ID of the project where the compact placement policy and the instance are located.
- ZONE: the zone where the instance is located.
- INSTANCE_NAME: the name of an existing instance.
- REGION: the region where the compact placement policy is located.
- POLICY_NAME: the name of an existing compact placement policy.
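Both options above reduce to assembling a URL and a small JSON body. As a minimal sketch, assuming the endpoint layout shown in the REST example (the function name add_policy_request is hypothetical):

```python
# Illustrative sketch: assembles the instances.addResourcePolicies request
# URL and body from the parameters described above.

API_ROOT = "https://compute.s3nsapis.fr/compute/v1"

def add_policy_request(project, zone, instance, region, policy):
    url = (f"{API_ROOT}/projects/{project}/zones/{zone}"
           f"/instances/{instance}/addResourcePolicies")
    # The policy is referenced by its regional resource path.
    body = {"resourcePolicies": [
        f"projects/{project}/regions/{region}/resourcePolicies/{policy}"
    ]}
    return url, body

url, body = add_policy_request(
    "example-project", "us-central1-a", "vm-1", "us-central1", "example-policy")
print(url)
```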
Apply the policy while creating an instance
You can only create a compute instance that specifies a compact placement policy in the same region as the placement policy.
To create an instance that specifies a compact placement policy, select one of the following options:
gcloud
To create an instance that specifies a compact placement policy, use the gcloud compute instances create command with the --maintenance-policy and --resource-policies flags.
gcloud compute instances create INSTANCE_NAME \
--machine-type=MACHINE_TYPE \
--maintenance-policy=MAINTENANCE_POLICY \
--resource-policies=POLICY_NAME \
--zone=ZONE
Replace the following:
- INSTANCE_NAME: the name of the instance to create.
- MACHINE_TYPE: a supported machine type for compact placement policies.
- MAINTENANCE_POLICY: the host maintenance policy of the instance. If the compact placement policy that you specify uses a maximum distance value of 1 or 2, or your chosen machine type doesn't support live migration, then you can only specify TERMINATE. Otherwise, you can specify MIGRATE or TERMINATE.
- POLICY_NAME: the name of an existing compact placement policy.
- ZONE: the zone in which to create the instance.
REST
To create an instance that specifies a compact placement policy, make a
POST
request to the
instances.insert
method.
In the request body, include the onHostMaintenance
and resourcePolicies
fields.
POST https://compute.s3nsapis.fr/compute/v1/projects/PROJECT_ID/zones/ZONE/instances
{
"name": "INSTANCE_NAME",
"machineType": "zones/ZONE/machineTypes/MACHINE_TYPE",
"disks": [
{
"boot": true,
"initializeParams": {
"sourceImage": "projects/IMAGE_PROJECT/global/images/IMAGE"
}
}
],
"networkInterfaces": [
{
"network": "global/networks/default"
}
],
"resourcePolicies": [
"projects/PROJECT_ID/regions/REGION/resourcePolicies/POLICY_NAME"
],
"scheduling": {
"onHostMaintenance": "MAINTENANCE_POLICY"
}
}
Replace the following:
- PROJECT_ID: the ID of the project where the compact placement policy is located.
- ZONE: the zone in which to create the instance and where the machine type is located. You can only specify a zone in the region of the compact placement policy.
- INSTANCE_NAME: the name of the instance to create.
- MACHINE_TYPE: a supported machine type for compact placement policies.
- IMAGE_PROJECT: the image project that contains the image, for example debian-cloud. For more information about the supported image projects, see Public images.
- IMAGE: specify one of the following:
  - A specific version of the OS image, for example debian-12-bookworm-v20240617.
  - An image family, which must be formatted as family/IMAGE_FAMILY. This specifies the most recent, non-deprecated OS image. For example, if you specify family/debian-12, the latest version in the Debian 12 image family is used. For more information about using image families, see Image families best practices.
- REGION: the region where the compact placement policy is located.
- POLICY_NAME: the name of an existing compact placement policy.
- MAINTENANCE_POLICY: the host maintenance policy of the instance. If the compact placement policy that you specify uses a maximum distance value of 1 or 2, or your chosen machine type doesn't support live migration, then you can only specify TERMINATE. Otherwise, you can specify MIGRATE or TERMINATE.
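The MAINTENANCE_POLICY rule above is easy to encode as a guard. A minimal sketch, where the function name and the supports_live_migration flag are illustrative assumptions:

```python
# Sketch of the host maintenance rule described above: with a maximum
# distance of 1 or 2, or a machine type without live migration support,
# only TERMINATE is a valid maintenance policy.

def allowed_maintenance_policies(max_distance, supports_live_migration):
    if max_distance in (1, 2) or not supports_live_migration:
        return ["TERMINATE"]
    return ["MIGRATE", "TERMINATE"]

# A policy with max distance 3 on a live-migratable machine type
# allows either value; max distance 2 forces TERMINATE.
print(allowed_maintenance_policies(3, True))   # prints ['MIGRATE', 'TERMINATE']
print(allowed_maintenance_policies(2, True))   # prints ['TERMINATE']
```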
For more information about the configuration options to create an instance, see Create and start an instance.
Apply the policy while creating a reservation
If you want to create an on-demand, single-project reservation that specifies a compact placement policy, then you must create a specifically targeted reservation. When you create instances to consume the reservation, make sure of the following:
- The instances must specify the same compact placement policy applied to the reservation.
- The instances must specifically target the reservation to consume it. For more information, see Consume instances from a specific reservation.
To create a single-project reservation with a compact placement policy, select one of the following methods:
- Create the reservation by specifying properties directly as described in this section.
- Apply the policy while creating an instance template as described in this document, and then create a single-project reservation by specifying the newly created instance template.
To create a single-project reservation with a compact placement policy by specifying properties directly, select one of the following options:
gcloud
To create a single-project reservation with a compact placement policy by
specifying properties directly, use the
gcloud compute reservations create
command
with the --require-specific-reservation
and --resource-policies=policy
flags.
gcloud compute reservations create RESERVATION_NAME \
--machine-type=MACHINE_TYPE \
--require-specific-reservation \
--resource-policies=policy=POLICY_NAME \
--vm-count=NUMBER_OF_INSTANCES \
--zone=ZONE
Replace the following:
- RESERVATION_NAME: the name of the reservation.
- MACHINE_TYPE: a supported machine type for compact placement policies.
- POLICY_NAME: the name of an existing compact placement policy.
- NUMBER_OF_INSTANCES: the number of instances to reserve, which can't be higher than the supported maximum number of instances of the specified compact placement policy.
- ZONE: the zone in which to reserve instances. You can only reserve instances in a zone in the region of the specified compact placement policy.
REST
To create a single-project reservation with a compact placement policy by
specifying properties directly, make a POST
request to the
reservations.insert
method.
In the request body, include the resourcePolicies
field, and the
specificReservationRequired
field set to true
.
POST https://compute.s3nsapis.fr/compute/v1/projects/PROJECT_ID/zones/ZONE/reservations
{
"name": "RESERVATION_NAME",
"resourcePolicies": {
"policy" : "projects/PROJECT_ID/regions/REGION/resourcePolicies/POLICY_NAME"
},
"specificReservation": {
"count": "NUMBER_OF_INSTANCES",
"instanceProperties": {
      "machineType": "MACHINE_TYPE"
}
},
"specificReservationRequired": true
}
Replace the following:
- PROJECT_ID: the ID of the project where the compact placement policy is located.
- ZONE: the zone in which to reserve instances. You can only reserve instances in a zone in the region of the specified compact placement policy.
- RESERVATION_NAME: the name of the reservation.
- REGION: the region where the compact placement policy is located.
- POLICY_NAME: the name of an existing compact placement policy.
- NUMBER_OF_INSTANCES: the number of instances to reserve, which can't be higher than the supported maximum number of instances of the specified compact placement policy.
- MACHINE_TYPE: a supported machine type for compact placement policies.
For more information about the configuration options to create single-project reservations, see Create a reservation for a single project.
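One subtlety of the reservations.insert body shown above is that the placement policy is attached as a map under a policy key, rather than as a list the way it is on instances, and specificReservationRequired must be true. A hedged sketch that reproduces that shape (build_reservation_body is a hypothetical helper, not a client-library call):

```python
# Illustrative builder for the reservations.insert request body shown in
# this section. Note the map-shaped "resourcePolicies" field and the
# mandatory "specificReservationRequired": true.

def build_reservation_body(name, project, region, policy, machine_type, count):
    return {
        "name": name,
        "resourcePolicies": {
            "policy": f"projects/{project}/regions/{region}"
                      f"/resourcePolicies/{policy}"
        },
        "specificReservation": {
            "count": str(count),
            "instanceProperties": {"machineType": machine_type},
        },
        # Instances must specifically target this reservation to consume it.
        "specificReservationRequired": True,
    }

print(build_reservation_body(
    "example-reservation", "example-project", "us-central1",
    "example-policy", "c2-standard-60", 4))
```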
Apply the policy while creating an instance template
If you want to create a regional instance template, then you must create the template in the same region as the compact placement policy. Otherwise, creating the instance template fails.
After creating an instance template that specifies a compact placement policy, you can use the template to do the following:
To create an instance template that specifies a compact placement policy, select one of the following options:
gcloud
To create an instance template that specifies a compact placement policy,
use the
gcloud compute instance-templates create
command
with the --maintenance-policy
and --resource-policies
flags.
For example, to create a global instance template that specifies a compact placement policy, run the following command:
gcloud compute instance-templates create INSTANCE_TEMPLATE_NAME \
--machine-type=MACHINE_TYPE \
--maintenance-policy=MAINTENANCE_POLICY \
--resource-policies=POLICY_NAME
Replace the following:
- INSTANCE_TEMPLATE_NAME: the name of the instance template.
- MACHINE_TYPE: a supported machine type for compact placement policies.
- MAINTENANCE_POLICY: the host maintenance policy of the instance. If the compact placement policy that you specify uses a maximum distance value of 1 or 2, or your chosen machine type doesn't support live migration, then you can only specify TERMINATE. Otherwise, you can specify MIGRATE or TERMINATE.
- POLICY_NAME: the name of an existing compact placement policy.
REST
To create an instance template that specifies a compact placement policy,
make a POST
request to one of the following methods:
- To create a global instance template: the instanceTemplates.insert method.
- To create a regional instance template: the regionInstanceTemplates.insert method.
In the request body, include the onHostMaintenance
and resourcePolicies
fields.
For example, to create a global instance template that specifies a compact
placement policy, make a POST
request as follows:
POST https://compute.s3nsapis.fr/compute/v1/projects/PROJECT_ID/global/instanceTemplates
{
"name": "INSTANCE_TEMPLATE_NAME",
"properties": {
"disks": [
{
"boot": true,
"initializeParams": {
"sourceImage": "projects/IMAGE_PROJECT/global/images/IMAGE"
}
}
],
"machineType": "MACHINE_TYPE",
"networkInterfaces": [
{
"network": "global/networks/default"
}
],
"resourcePolicies": [
"POLICY_NAME"
],
"scheduling": {
"onHostMaintenance": "MAINTENANCE_POLICY"
}
}
}
Replace the following:
- PROJECT_ID: the ID of the project where the compact placement policy is located.
- INSTANCE_TEMPLATE_NAME: the name of the instance template.
- IMAGE_PROJECT: the image project that contains the image, for example debian-cloud. For more information about the supported image projects, see Public images.
- IMAGE: specify one of the following:
  - A specific version of the OS image, for example debian-12-bookworm-v20240617.
  - An image family, which must be formatted as family/IMAGE_FAMILY. This specifies the most recent, non-deprecated OS image. For example, if you specify family/debian-12, the latest version in the Debian 12 image family is used. For more information about using image families, see Image families best practices.
- MACHINE_TYPE: a supported machine type for compact placement policies.
- POLICY_NAME: the name of an existing compact placement policy.
- MAINTENANCE_POLICY: the host maintenance policy of the instance. If the compact placement policy that you specify uses a maximum distance value of 1 or 2, or your chosen machine type doesn't support live migration, then you can only specify TERMINATE. Otherwise, you can specify MIGRATE or TERMINATE.
For more information about the configuration options to create an instance template, see Create instance templates.
Apply the policy to instances in a MIG
After you create an instance template that specifies a compact placement policy, you can use the template to do the following:
Apply the policy while creating a MIG
You can only create compute instances that specify a compact placement policy if the instances are located in the same region as the placement policy.
To create a MIG using an instance template that specifies a compact placement policy, select one of the following options:
gcloud
To create a MIG using an instance template that specifies a compact
placement policy, use the
gcloud compute instance-groups managed create
command.
For example, to create a zonal MIG using a global instance template that specifies a compact placement policy, run the following command:
gcloud compute instance-groups managed create INSTANCE_GROUP_NAME \
--size=SIZE \
--template=INSTANCE_TEMPLATE_NAME \
--zone=ZONE
Replace the following:
- INSTANCE_GROUP_NAME: the name of the MIG to create.
- SIZE: the size of the MIG.
- INSTANCE_TEMPLATE_NAME: the name of an existing global instance template that specifies a compact placement policy.
- ZONE: the zone in which to create the MIG, which must be in the region where the compact placement policy is located.
REST
To create a MIG using an instance template that specifies a compact
placement policy, make a POST
request to one of the following methods:
- To create a zonal MIG: the instanceGroupManagers.insert method.
- To create a regional MIG: the regionInstanceGroupManagers.insert method.
For example, to create a zonal MIG using a global instance template that
specifies a compact placement policy, make a POST
request as follows:
POST https://compute.s3nsapis.fr/compute/v1/projects/PROJECT_ID/zones/ZONE/instanceGroupManagers
{
"name": "INSTANCE_GROUP_NAME",
"targetSize": SIZE,
"versions": [
{
"instanceTemplate": "global/instanceTemplates/INSTANCE_TEMPLATE_NAME"
}
]
}
Replace the following:
- PROJECT_ID: the ID of the project where the compact placement policy and the instance template that specifies the placement policy are located.
- ZONE: the zone in which to create the MIG, which must be in the region where the compact placement policy is located.
- INSTANCE_GROUP_NAME: the name of the MIG to create.
- INSTANCE_TEMPLATE_NAME: the name of an existing global instance template that specifies a compact placement policy.
- SIZE: the size of the MIG.
For more information about the configuration options to create MIGs, see Basic scenarios for creating MIGs.
Apply the policy to an existing MIG
You can only apply a compact placement policy to an existing MIG if the MIG is located in the same region as the placement policy or, for zonal MIGs, in a zone in the same region as the placement policy.
To update a MIG to use an instance template that specifies a compact placement policy, select one of the following options:
gcloud
To update a MIG to use an instance template that specifies a compact
placement policy, use the
gcloud compute instance-groups managed rolling-action start-update
command.
For example, to update a zonal MIG to use an instance template that specifies a compact placement policy, and replace the existing instances from the MIG with new instances that specify the template's properties, run the following command:
gcloud compute instance-groups managed rolling-action start-update MIG_NAME \
--version=template=INSTANCE_TEMPLATE_NAME \
--zone=ZONE
Replace the following:
- MIG_NAME: the name of an existing MIG.
- INSTANCE_TEMPLATE_NAME: the name of an existing global instance template that specifies a compact placement policy.
- ZONE: the zone where the MIG is located. You can only apply the compact placement policy to a MIG located in the same region as the placement policy.
REST
To update a MIG to use an instance template that specifies a compact
placement policy, and automatically apply the properties of the template and
the placement policy to existing instances in the MIG, make a PATCH
request to one of the following methods:
- To update a zonal MIG: the instanceGroupManagers.patch method.
- To update a regional MIG: the regionInstanceGroupManagers.patch method.
For example, to update a zonal MIG to use a global instance template that
specifies a compact placement policy, and replace the existing instances
from the MIG with new instances that specify the template's properties,
make the following PATCH
request:
PATCH https://compute.s3nsapis.fr/compute/v1/projects/PROJECT_ID/zones/ZONE/instanceGroupManagers/MIG_NAME
{
"instanceTemplate": "global/instanceTemplates/INSTANCE_TEMPLATE_NAME"
}
Replace the following:
- PROJECT_ID: the ID of the project where the MIG, the compact placement policy, and the instance template that specifies the placement policy are located.
- ZONE: the zone where the MIG is located. You can only apply the compact placement policy to a MIG located in the same region as the placement policy.
- MIG_NAME: the name of an existing MIG.
- INSTANCE_TEMPLATE_NAME: the name of an existing global instance template that specifies a compact placement policy.
For more information about the configuration options to update the instances in a MIG, see Update and apply new configurations to instances in a MIG.
Verify the physical location of an instance
After applying a compact placement policy to a compute instance, you can view the instance's physical location in relation to other instances. This comparison is limited to instances located in your project and that specify the same compact placement policy. Viewing the physical location of an instance helps you to do the following:
- Confirm that the policy was successfully applied.
- Identify which instances are closest to each other.
To view the physical location of an instance in relation to other instances that specify the same compact placement policy, select one of the following options:
gcloud
To view the physical location of an instance that specifies a compact placement
policy, use the
gcloud compute instances describe
command
with the --format
flag.
gcloud compute instances describe INSTANCE_NAME \
--format="table[box,title=VM-Position](resourcePolicies.scope():sort=1,resourceStatus.physicalHost:label=location)" \
--zone=ZONE
Replace the following:
- INSTANCE_NAME: the name of an existing instance that specifies a compact placement policy.
- ZONE: the zone where the instance is located.
The output is similar to the following:
VM-Position
RESOURCE_POLICIES: us-central1/resourcePolicies/example-policy
PHYSICAL_HOST: /CCCCCCC/BBBBBB/AAAA
The value for the PHYSICAL_HOST field is composed of three parts, which represent the cluster, rack, and host where the instance is located.
When comparing the position of two instances that use the same compact placement policy in your project, the more parts of the PHYSICAL_HOST field the instances share, the closer they are physically located to each other. For example, assume that two instances both specify one of the following sample values for the PHYSICAL_HOST field:
- /CCCCCCC/xxxxxx/xxxx: the two instances are placed in the same cluster, which corresponds to a maximum distance value of 2. Instances placed in the same cluster experience low network latency.
- /CCCCCCC/BBBBBB/xxxx: the two instances are placed in the same rack, which corresponds to a maximum distance value of 1. Instances placed in the same rack experience lower network latency than instances placed in the same cluster.
- /CCCCCCC/BBBBBB/AAAA: the two instances share the same host. Instances placed on the same host minimize network latency as much as possible.
REST
To view the physical location of an instance that specifies a compact placement
policy, make a GET
request to the
instances.get
method.
GET https://compute.s3nsapis.fr/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME
Replace the following:
- PROJECT_ID: the ID of the project where the instance is located.
- ZONE: the zone where the instance is located.
- INSTANCE_NAME: the name of an existing instance that specifies a compact placement policy.
The output is similar to the following:
{
  ...
  "resourcePolicies": [
    "https://www.s3nsapis.fr/compute/v1/projects/example-project/regions/us-central1/resourcePolicies/example-policy"
  ],
  "resourceStatus": {
    "physicalHost": "/xxxxxxxx/xxxxxx/xxxxx"
  },
  ...
}
The value for the physicalHost field is composed of three parts, which represent the cluster, rack, and host where the instance is located.
When comparing the position of two instances that use the same compact placement policy in your project, the more parts of the physicalHost field the instances share, the closer they are physically located to each other. For example, assume that two instances both specify one of the following sample values for the physicalHost field:
- /CCCCCCC/xxxxxx/xxxx: the two instances are placed in the same cluster, which corresponds to a maximum distance value of 2. Instances placed in the same cluster experience low network latency.
- /CCCCCCC/BBBBBB/xxxx: the two instances are placed in the same rack, which corresponds to a maximum distance value of 1. Instances placed in the same rack experience lower network latency than instances placed in the same cluster.
- /CCCCCCC/BBBBBB/AAAA: the two instances share the same host. Instances placed on the same host minimize network latency as much as possible.
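The prefix comparison described above can be sketched as a small function, assuming the /cluster/rack/host layout shown in the examples (the function name is illustrative):

```python
# Sketch: compares two physicalHost values to see how close two instances
# are. The deepest matching prefix segment maps to cluster, rack, or host.

def shared_location_level(host_a, host_b):
    parts_a = host_a.strip("/").split("/")
    parts_b = host_b.strip("/").split("/")
    levels = ["cluster", "rack", "host"]
    shared = None  # None means not even the cluster matches
    for level, a, b in zip(levels, parts_a, parts_b):
        if a != b:
            break
        shared = level
    return shared

print(shared_location_level("/CCCCCCC/BBBBBB/AAAA",
                            "/CCCCCCC/BBBBBB/xxxx"))  # prints rack
```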
What's next?
Learn how to view placement policies.
Learn how to replace, remove, or delete placement policies.
Learn how to do the following with a compute instance that specifies a placement policy: