A backend service defines how Cloud Load Balancing distributes traffic. The backend service configuration contains a set of values, such as the protocol used to connect to backends, various distribution and session settings, health checks, and timeouts. These settings provide fine-grained control over how your load balancer behaves. To get you started, most of the settings have default values that allow for fast configuration. A backend service is regional in scope.
Load balancers, Envoy proxies, and proxyless gRPC clients use the configuration information in the backend service resource to do the following:
- Direct traffic to the correct backends, which are instance groups or network endpoint groups (NEGs).
- Distribute traffic according to a balancing mode, which is a setting for each backend.
- Determine which health check is monitoring the health of the backends.
- Specify session affinity.
You set these values when you create a backend service or add a backend to the backend service.
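As a minimal sketch, creating a backend service and adding a backend with the gcloud CLI might look like the following. The names, region, and health check here are hypothetical, and unspecified settings take their defaults:

```
# Create a regional backend service (names and region are illustrative).
gcloud compute backend-services create my-backend-service \
    --protocol=HTTP \
    --health-checks=my-health-check \
    --health-checks-region=us-central1 \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --region=us-central1

# Add an instance group as a backend of the backend service.
gcloud compute backend-services add-backend my-backend-service \
    --instance-group=my-instance-group \
    --instance-group-zone=us-central1-a \
    --region=us-central1
```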
The following table summarizes which load balancers use backend services. The product that you are using also determines the maximum number of backend services, the scope of a backend service, the type of backends supported, and the backend service's load balancing scheme. The load balancing scheme is an identifier that Google uses to classify forwarding rules and backend services. Each load balancing product uses one load balancing scheme for its forwarding rules and backend services. Some schemes are shared among products.
| Product | Maximum number of backend services | Scope of backend service | Supported backend types | Load balancing scheme |
|---|---|---|---|---|
| Regional external Application Load Balancer | Multiple | Regional | Each backend service supports one of the following backend combinations: | `EXTERNAL_MANAGED` |
| Regional internal Application Load Balancer | Multiple | Regional | Each backend service supports one of the following backend combinations: | `INTERNAL_MANAGED` |
| Regional external proxy Network Load Balancer | 1 | Regional | The backend service supports one of the following backend combinations: | `EXTERNAL_MANAGED` |
| Regional internal proxy Network Load Balancer | 1 | Regional | The backend service supports one of the following backend combinations: | `INTERNAL_MANAGED` |
| External passthrough Network Load Balancer | 1 | Regional | The backend service supports one of the following backend combinations: | `EXTERNAL` |
| Internal passthrough Network Load Balancer | 1 | Regional | The backend service supports one of the following backend combinations: | `INTERNAL` |
Load balancer naming
For proxy Network Load Balancers and passthrough Network Load Balancers, the name of the load balancer is always the same as the name of the backend service. The behavior for each Cloud de Confiance interface is as follows:
- Cloud de Confiance console. If you create either a proxy Network Load Balancer or a passthrough Network Load Balancer by using the Cloud de Confiance console, the backend service is automatically assigned the same name that you entered for the load balancer name.
- Google Cloud CLI or API. If you create either a proxy Network Load Balancer or a passthrough Network Load Balancer by using the gcloud CLI or the API, you enter a name of your choice while creating the backend service. This backend service name is then reflected in the Cloud de Confiance console as the name of the load balancer.
To learn about how naming works for Application Load Balancers, see URL maps overview: Load balancer naming.
Backends
A backend is one or more endpoints that receive traffic from a Cloud de Confiance by S3NS load balancer or a proxyless gRPC client. There are several types of backends:
- Instance group containing virtual machine (VM) instances. An instance group can be a managed instance group (MIG), with or without autoscaling, or it can be an unmanaged instance group. More than one backend service can reference an instance group, but all backend services that reference the instance group must use compatible balancing modes. For more information, see Restrictions and guidance for instance groups in this document.
- Zonal NEG
- Internet NEG
- Hybrid connectivity NEG
- Port mapping NEG
- Service Directory service bindings
You cannot delete a backend instance group or NEG that is associated with a backend service. Before you delete an instance group or NEG, you must first remove it as a backend from all backend services that reference it.
Instance groups
This section discusses how instance groups work with the backend service.
Backend VMs and external IP addresses
Backend VMs in backend services don't need external IP addresses:
For regional external Application Load Balancers: Clients communicate with an Envoy proxy, which hosts your load balancer's external IP address. Envoy proxies communicate with backend VMs or endpoints by sending packets to an internal address created by joining an identifier for the backend's VPC network with the internal IPv4 address of the backend.

- For instance group backends, the internal IPv4 address is always the primary internal IPv4 address that corresponds to the `nic0` interface of the VM, and `nic0` must be in the same network as the load balancer.
- For `GCE_VM_IP_PORT` endpoints in a zonal NEG, you can specify the endpoint's IP address as either the primary IPv4 address associated with any network interface of a VM or any IPv4 address from an alias IP address range associated with any network interface of a VM, as long as the network interface is in the same network as the load balancer.
For external passthrough Network Load Balancers: Clients communicate directly with backends by way of Google's Maglev pass-through load balancing infrastructure. Packets are routed and delivered to backends with the original source and destination IP addresses preserved. Backends respond to clients using direct server return. The methods used to select a backend and to track connections are configurable.

- For instance group backends, packets are always delivered to the `nic0` interface of the VM.
- For `GCE_VM_IP` endpoints in a zonal NEG, packets are delivered to the VM's network interface that is in the subnetwork associated with the NEG.
Named ports
The backend service's named port attribute is only applicable to proxy-based load balancers (Application Load Balancers and Proxy Network Load Balancers) using instance group backends. The named port defines the destination port used for the TCP connection between the proxy (GFE or Envoy) and the backend instance.
Named ports are configured as follows:
On each instance group backend, you must configure one or more named ports using key-value pairs. The key represents a meaningful port name that you choose, and the value represents the port number you assign to the name. The mapping of names to numbers is done individually for each instance group backend.
On the backend service, you specify a single named port using just the port name (`--port-name`).
On a per-instance group backend basis, the backend service translates the port name to a port number. When an instance group's named port matches the backend service's `--port-name`, the backend service uses this port number for communication with the instance group's VMs.
For example, you might set the named port on an instance group with the name `my-service-name` and the port `8888`:

```
gcloud compute instance-groups unmanaged set-named-ports my-unmanaged-ig \
    --named-ports=my-service-name:8888
```
Then you refer to the named port in the backend service configuration with the `--port-name` on the backend service set to `my-service-name`:

```
gcloud compute backend-services update my-backend-service \
    --port-name=my-service-name
```
A backend service can use a different port number when communicating with VMs in different instance groups if each instance group specifies a different port number for the same port name.
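The per-group translation described above can be sketched with two instance groups mapping the same port name to different numbers. The group names, zones, and ports here are illustrative:

```
# Two instance groups map the same port name to different port numbers.
gcloud compute instance-groups unmanaged set-named-ports ig-a \
    --named-ports=my-service-name:8888 \
    --zone=us-central1-a

gcloud compute instance-groups unmanaged set-named-ports ig-b \
    --named-ports=my-service-name:8889 \
    --zone=us-central1-b

# A backend service with --port-name=my-service-name then connects to
# VMs in ig-a on port 8888 and to VMs in ig-b on port 8889.
```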
The resolved port number used by the proxy load balancer's backend service doesn't need to match the port number used by the load balancer's forwarding rules. A proxy load balancer listens for TCP connections sent to the IP address and destination port of its forwarding rules. Because the proxy opens a second TCP connection to its backends, the second TCP connection's destination port can be different.
Named ports are only applicable to instance group backends. Zonal NEGs with `GCE_VM_IP_PORT` endpoints, hybrid NEGs with `NON_GCP_PRIVATE_IP_PORT` endpoints, and internet NEGs define ports using a different mechanism, namely, on the endpoints themselves.
Internal passthrough Network Load Balancers and external passthrough Network Load Balancers don't use named ports. This is because they are pass-through load balancers that route connections directly to backends instead of creating new connections. Packets are delivered to the backends preserving the destination IP address and port of the load balancer's forwarding rule.
To learn how to create named ports, see the following instructions:
- Unmanaged instance groups: Working with named ports
- Managed instance groups: Assigning named ports to managed instance groups
Restrictions and guidance for instance groups
Keep the following in mind when you use instance group backends:
A VM instance can only belong to a single load-balanced instance group. For example, a VM can be a member of two unmanaged instance groups, or a VM can be a member of one managed instance group and one unmanaged instance group. When a VM is a member of two or more instance groups, only one of the instance groups can be referenced by one or more load balancer backend services.
The same instance group can be used by two or more backend services. Each mapping between an instance group and a backend service can use a different balancing mode except for the incompatible balancing mode combinations.
The incompatible balancing mode combinations are as follows:
- The `UTILIZATION` balancing mode is incompatible with all other balancing modes. If an instance group is a backend of multiple backend services, the instance group must use the `UTILIZATION` balancing mode on every backend service.
- The `CUSTOM_METRICS` balancing mode is incompatible with all other balancing modes. If an instance group is a backend of multiple backend services, the instance group must use the `CUSTOM_METRICS` balancing mode on every backend service.

As a consequence of the incompatible balancing mode combinations, if an instance group uses either the `UTILIZATION` or `CUSTOM_METRICS` balancing mode as a backend for at least one backend service, the same instance group can't be used as a backend for a passthrough Network Load Balancer because passthrough Network Load Balancers require the `CONNECTION` balancing mode.
There's no single command that can change the balancing mode of the same instance group on multiple backend services. To change the balancing mode for an instance group that's a backend of two or more backend services, you can use this technique:
- Remove the instance group as a backend from all backend services except for one backend service.
- Change the instance group's balancing mode for the one remaining backend service.
- Re-add the instance group as a backend to the other backend services.
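The three steps above might look like the following gcloud sequence. The backend service and instance group names are hypothetical, and the example assumes global backend services (use `--region` instead of `--global` for regional ones):

```
# 1. Remove the instance group from all but one backend service.
gcloud compute backend-services remove-backend bs-two \
    --instance-group=my-ig --instance-group-zone=us-central1-a --global

# 2. Change the balancing mode on the remaining backend service.
gcloud compute backend-services update-backend bs-one \
    --instance-group=my-ig --instance-group-zone=us-central1-a \
    --balancing-mode=RATE --max-rate-per-instance=100 --global

# 3. Re-add the instance group to the other backend services.
gcloud compute backend-services add-backend bs-two \
    --instance-group=my-ig --instance-group-zone=us-central1-a \
    --balancing-mode=RATE --max-rate-per-instance=100 --global
```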
Consider the following best practices, which provide more flexible options:
Avoid using the same instance group as a backend for two or more backend services. Instead, use multiple NEGs.
Unlike instance groups, a VM can have an endpoint in two or more load-balanced NEGs.
For example, if a VM needs to simultaneously be a backend of both a passthrough Network Load Balancer and either a proxy Network Load Balancer or an Application Load Balancer, use multiple load-balanced NEGs. Place a VM endpoint in a unique NEG compatible with each load balancer type. Then associate each NEG with the corresponding load balancer backend service.
Don't add an autoscaled managed instance group to more than one backend service when using the HTTP Load Balancing Utilization autoscaling metric. Two or more backend services referencing the same autoscaled managed instance group can conflict with one another unless the autoscaling metric is unrelated to load balancer activity.
Zonal network endpoint groups
Network endpoints represent services by their IP address or an IP address and port combination, rather than referring to a VM in an instance group. A network endpoint group (NEG) is a logical grouping of network endpoints.
Zonal NEGs are zonal resources that represent collections of either IP addresses or IP address and port combinations for Cloud de Confiance resources within a single subnet.
A backend service that uses zonal NEGs as its backends distributes traffic among applications or containers running within VMs.
There are two types of network endpoints available for zonal NEGs:
- `GCE_VM_IP` endpoints (supported only with internal passthrough Network Load Balancers and backend service-based external passthrough Network Load Balancers)
- `GCE_VM_IP_PORT` endpoints
To see which products support zonal NEG backends, see Table: Backend services and supported backend types.
For details, see Zonal NEGs overview.
Internet network endpoint groups
Internet NEGs are resources that define external backends. An external backend is a backend that is hosted within on-premises infrastructure or on infrastructure provided by third parties.
An internet NEG is a combination of a hostname or an IP address, plus an optional port. There are two types of network endpoints available for internet NEGs: `INTERNET_FQDN_PORT` and `INTERNET_IP_PORT`.
For details, see Internet network endpoint group overview.
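As a sketch, an internet NEG with an FQDN endpoint might be created as follows. The NEG name and endpoint are hypothetical, and the exact flag values should be checked against the gcloud reference:

```
# Create a global internet NEG whose endpoints are FQDN:port pairs.
gcloud compute network-endpoint-groups create my-internet-neg \
    --global \
    --network-endpoint-type=internet-fqdn-port

# Add an external backend endpoint to the NEG.
gcloud compute network-endpoint-groups update my-internet-neg \
    --global \
    --add-endpoint="fqdn=backend.example.com,port=443"
```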
Mixed backends
The following usage considerations apply when you add different types of backends to a single backend service:
- A single backend service cannot simultaneously use both instance groups and zonal NEGs.
- You can use a combination of different types of instance groups on the same backend service. For example, a single backend service can reference a combination of both managed and unmanaged instance groups. For complete information about which backends are compatible with which backend services, see the table in the previous section.
- With certain proxy load balancers, you can use a combination of zonal NEGs (with `GCE_VM_IP_PORT` endpoints) and hybrid connectivity NEGs (with `NON_GCP_PRIVATE_IP_PORT` endpoints) to configure hybrid load balancing. To see which load balancers have this capability, see Table: Backend services and supported backend types.
Protocol to the backends
When you create a backend service, you must specify the protocol used to communicate with the backends. You can specify only one protocol per backend service; you cannot specify a secondary protocol to use as a fallback.
Which protocols are valid depends on the type of load balancer.
| Product | Backend service protocol options |
|---|---|
| Application Load Balancer | HTTP, HTTPS, HTTP/2 |
| Proxy Network Load Balancer | TCP or SSL. Regional proxy Network Load Balancers support only TCP. |
| Passthrough Network Load Balancer | TCP, UDP, or UNSPECIFIED |
Changing a backend service's protocol makes the backends inaccessible through load balancers for a few minutes.
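Specifying the protocol might look like the following. The names and region are illustrative:

```
# The --protocol flag sets the single protocol used toward the backends.
gcloud compute backend-services create my-https-backend-service \
    --protocol=HTTPS \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --region=us-central1

# Changing the protocol later briefly interrupts backend access,
# as noted above.
gcloud compute backend-services update my-https-backend-service \
    --protocol=HTTP2 \
    --region=us-central1
```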
Encryption between the load balancer and backends
For information about encryption between the load balancer and backends, see Encryption to the backends.
Balancing mode, target capacity, and capacity scaler
For Application Load Balancers, Cloud Service Mesh, and proxy Network Load Balancers, the balancing mode, target capacity, and capacity scaler are parameters you provide when you add a supported backend to a backend service. The load balancers use these parameters to manage the distribution of new requests or new connections to zones that contain supported backends:
- The balancing mode defines how the load balancer measures capacity.
Cloud de Confiance has the following balancing modes:
  - `CONNECTION`: defines capacity based on the number of new TCP connections.
  - `RATE`: defines capacity based on the rate of new HTTP requests.
  - `IN_FLIGHT` (Preview): defines capacity based on the number of in-flight HTTP requests instead of the rate of HTTP requests. Use this balancing mode instead of `RATE` if requests take more than a second to complete.
  - `UTILIZATION`: defines capacity based on the approximated CPU utilization of VMs in a zone of an instance group.
  - `CUSTOM_METRICS`: defines capacity based on user-defined custom metrics.
- The target capacity defines a target maximum number of connections, a target maximum rate, or a target maximum utilization, depending on the balancing mode.
- The target capacity isn't a circuit breaker.
- When capacity usage reaches the target capacity, the load balancer directs new requests or new connections to a different zone if backends are configured in two or more zones.
- Global external Application Load Balancers, global external proxy Network Load Balancers, cross-region internal Application Load Balancers, and cross-region internal proxy Network Load Balancers also use capacity to direct requests to zones in different regions, if you've configured backends in more than one region.
- When all zones have reached target capacity, new requests or new connections are distributed by overfilling proportionally.
- The capacity scaler provides a way to scale the target capacity manually.
The values for the capacity scaler are as follows:
  - `0`: indicates that the backend is completely drained. You can't use a value of `0` if a backend service has only one backend.
  - `0.1` (10%) to `1.0` (100%): indicates the percentage of backend capacity that is in use.
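Setting the capacity scaler on an existing backend might look like the following. The names are illustrative, and the example assumes a global backend service:

```
# Scale this backend's target capacity down to 50%.
gcloud compute backend-services update-backend my-backend-service \
    --instance-group=my-instance-group \
    --instance-group-zone=us-central1-a \
    --capacity-scaler=0.5 \
    --global

# Setting --capacity-scaler=0 drains the backend entirely (not allowed
# when the backend service has only one backend).
```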
Passthrough Network Load Balancers symbolically use the `CONNECTION` balancing mode, but they don't support a target capacity or capacity scaler. For more information about how passthrough Network Load Balancers distribute new connections, see the following:
- Traffic distribution for internal passthrough Network Load Balancers
- Traffic distribution for external passthrough Network Load Balancers
Supported backends
For Application Load Balancers, Cloud Service Mesh, and proxy Network Load Balancers, the following types of backends support the balancing mode, target capacity, and capacity scaler parameters:
Internet NEGs, serverless NEGs, and Private Service Connect NEGs don't support the balancing mode, target capacity, and capacity scaler parameters.
Balancing modes for Application Load Balancers and Cloud Service Mesh
Available balancing modes for Application Load Balancer and Cloud Service Mesh backends depend on the type of supported backend and a traffic duration setting (Preview).
Traffic duration setting
For Application Load Balancer and Cloud Service Mesh backends, you can optionally specify a traffic duration setting. This setting is unique to the mapping between a supported backend and a backend service. The traffic duration setting has two valid values:
- `SHORT`: recommended for HTTP requests answered with responses from backends in less than one second. If you don't explicitly specify a traffic duration, the load balancer operates as if you'd specified `SHORT`.
- `LONG`: recommended for HTTP requests for which the backend needs more than one second to generate responses.
To explicitly set the traffic duration when you add a backend to a backend service, do one of the following:
- Run the `gcloud compute backend-services add-backend` command with the `--traffic-duration` flag.
- Create a backend service or update a backend service with the `trafficDuration` attribute.
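Setting the traffic duration explicitly might look like the following sketch. The resource names are hypothetical, and because the flag is in Preview per the text above, the exact syntax should be checked against the gcloud reference:

```
# Mark this backend as serving long-running requests.
gcloud compute backend-services add-backend my-backend-service \
    --instance-group=my-instance-group \
    --instance-group-zone=us-central1-a \
    --traffic-duration=LONG \
    --global
```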
Balancing modes for short traffic duration
When the traffic duration setting isn't specified or is set to `SHORT` (Preview), the available balancing modes for Application Load Balancer and Cloud Service Mesh backends depend on the type of supported backend.
| Supported backend | `CONNECTION` | `RATE` | `IN_FLIGHT` | `UTILIZATION` | `CUSTOM_METRICS` |
|---|---|---|---|---|---|
| Instance groups |  |  |  |  |  |
| Zonal NEGs with `GCE_VM_IP_PORT` endpoints |  |  |  |  |  |
| Zonal hybrid connectivity NEGs |  |  |  |  |  |
Balancing modes for long traffic duration
When the traffic duration setting is `LONG`, the available balancing modes for Application Load Balancer and Cloud Service Mesh backends depend on the type of supported backend.
| Supported backend | `CONNECTION` | `RATE` | `IN_FLIGHT` | `UTILIZATION` | `CUSTOM_METRICS` |
|---|---|---|---|---|---|
| Instance groups |  |  |  |  |  |
| Zonal NEGs with `GCE_VM_IP_PORT` endpoints |  |  |  |  |  |
| Zonal hybrid connectivity NEGs |  |  |  |  |  |
Balancing modes for Proxy Network Load Balancers
Available balancing modes for proxy Network Load Balancer backends depend on the type of supported backend.
| Supported backend | `CONNECTION` | `RATE` | `IN_FLIGHT` | `UTILIZATION` | `CUSTOM_METRICS` |
|---|---|---|---|---|---|
| Instance groups |  |  |  |  |  |
| Zonal NEGs with `GCE_VM_IP_PORT` endpoints |  |  |  |  |  |
| Zonal hybrid connectivity NEGs |  |  |  |  |  |
Target capacity specifications
Target capacity specifications are relevant to Application Load Balancer, Cloud Service Mesh, and proxy Network Load Balancer backends that support balancing mode, target capacity, and capacity scaler settings.
Target capacity specifications aren't relevant to passthrough Network Load Balancers.
Connection balancing mode
Proxy Network Load Balancer backends can use the `CONNECTION` balancing mode with one of the following required target capacity parameters:
| Target capacity parameter | Zonal (managed or unmanaged) instance groups | Regional managed instance groups | Zonal NEGs with `GCE_VM_IP_PORT` endpoints | Zonal hybrid connectivity NEGs |
|---|---|---|---|---|
| `max-connections`: target TCP connections per backend zone |  |  |  |  |
| `max-connections-per-instance`: target TCP connections per VM instance. Cloud Load Balancing uses this parameter to calculate target TCP connections per backend zone. |  |  |  |  |
| `max-connections-per-endpoint`: target TCP connections per NEG endpoint. Cloud Load Balancing uses this parameter to calculate target TCP connections per backend zone. |  |  |  |  |
Using the max-connections parameter
When you specify the max-connections parameter, the value you provide defines
the capacity for an entire zone.
For a zonal instance group with `N` total instances and `h` healthy instances (where `h` ≤ `N`), the calculations are as follows:

- If you set `max-connections` to `X`, the zonal target capacity is `X`.
- The average connections per instance is `X / h`.

Regional managed instance groups don't support the `max-connections` parameter because they consist of multiple zones. Instead, use the `max-connections-per-instance` parameter.

For a zonal NEG with `N` total endpoints and `h` healthy endpoints (where `h` ≤ `N`), the calculations are as follows:

- If you set `max-connections` to `X`, the zonal target capacity is `X`.
- The average connections per endpoint is `X / h`.
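The zone-level arithmetic above can be sketched directly; the values of `N`, `h`, and `X` below are illustrative:

```shell
# Hypothetical zonal instance group: N=4 total instances, h=2 healthy,
# with max-connections set to X=80 for the zone.
N=4; h=2; X=80

zonal_target=$X                  # the zonal target capacity is X, regardless of h
avg_per_instance=$(( X / h ))    # healthy instances share the zonal target

echo "zonal target: $zonal_target"                      # -> zonal target: 80
echo "average per healthy instance: $avg_per_instance"  # -> average per healthy instance: 40
```

Note that the target stays fixed at `X` as instances fail health checks; only the per-instance average rises.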
Using the max-connections-per-instance or max-connections-per-endpoint parameter
When you specify either the max-connections-per-instance or
max-connections-per-endpoint parameter, the load balancer uses the value you
provide to calculate a per-zone capacity:
For a zonal instance group with `N` total instances and `h` healthy instances (where `h` ≤ `N`), the calculations are as follows:

- If you set `max-connections-per-instance` to `X`, the zonal target capacity is `N * X`. This is equivalent to setting `max-connections` to `N * X`.
- The average connections per instance is `(N * X) / h`.

For a regional managed instance group, if you set `max-connections-per-instance` to `X`, Cloud de Confiance calculates a per-zone target capacity for each zone of the instance group. In each zone, if there are `K` total instances and `h` healthy instances (where `h` ≤ `K`), the calculations are as follows:

- The zone's target capacity is `K * X`.
- The average connections per instance in the zone is `(K * X) / h`.

For a zonal NEG with `N` total endpoints and `h` healthy endpoints (where `h` ≤ `N`), the calculations are as follows:

- If you set `max-connections-per-endpoint` to `X`, the zonal target capacity is `N * X`. This is equivalent to setting `max-connections` to `N * X`.
- The average connections per endpoint is `(N * X) / h`.
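The per-instance scaling above can be sketched for one zone of a regional managed instance group; the values of `K`, `h`, and `X` below are illustrative:

```shell
# Hypothetical regional MIG zone: K=3 instances in this zone, h=2 healthy,
# with max-connections-per-instance set to X=50.
K=3; h=2; X=50

zone_target=$(( K * X ))              # per-zone target capacity: K * X
avg_per_instance=$(( (K * X) / h ))   # spread across healthy instances only

echo "zone target: $zone_target"                        # -> zone target: 150
echo "average per healthy instance: $avg_per_instance"  # -> average per healthy instance: 75
```

The same `N * X` and `(N * X) / h` formulas apply to zonal instance groups and to NEG endpoints with `max-connections-per-endpoint`.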
Rate balancing mode
Application Load Balancer and Cloud Service Mesh backends with an unspecified or
short traffic duration setting (Preview)
can use the RATE balancing mode with one of the following required target
capacity parameters:
| Target capacity parameter | Zonal (managed or unmanaged) instance groups | Regional managed instance groups | Zonal NEGs with `GCE_VM_IP_PORT` endpoints | Zonal hybrid connectivity NEGs |
|---|---|---|---|---|
| `max-rate`: target HTTP request rate per backend zone |  |  |  |  |
| `max-rate-per-instance`: target HTTP request rate per VM instance. Cloud Load Balancing uses this parameter to calculate target HTTP request rate per backend zone. |  |  |  |  |
| `max-rate-per-endpoint`: target HTTP request rate per NEG endpoint. Cloud Load Balancing uses this parameter to calculate target HTTP request rate per backend zone. |  |  |  |  |
Using the max-rate parameter
When you specify the max-rate parameter, the value you provide defines the
capacity for an entire zone.
For a zonal instance group with `N` total instances and `h` healthy instances (where `h` ≤ `N`), the calculations are as follows:

- If you set `max-rate` to `X`, the zonal target capacity is `X` requests per second.
- The average requests per second per instance is `X / h`.

Regional managed instance groups don't support the `max-rate` parameter because they consist of multiple zones. Instead, use the `max-rate-per-instance` parameter.

For a zonal NEG with `N` total endpoints and `h` healthy endpoints (where `h` ≤ `N`), the calculations are as follows:

- If you set `max-rate` to `X`, the zonal target capacity is `X` requests per second.
- The average requests per second per endpoint is `X / h`.
Using the max-rate-per-instance or max-rate-per-endpoint parameter
When you specify either the max-rate-per-instance or max-rate-per-endpoint
parameter, the load balancer uses the value you provide to calculate a per-zone
capacity:
For a zonal instance group with `N` total instances and `h` healthy instances (where `h` ≤ `N`), the calculations are as follows:

- If you set `max-rate-per-instance` to `X`, the zonal target capacity is `N * X` requests per second. This is equivalent to setting `max-rate` to `N * X`.
- The average requests per second per instance is `(N * X) / h`.

For a regional managed instance group, if you set `max-rate-per-instance` to `X`, Cloud de Confiance calculates a per-zone target capacity for each zone of the instance group. In each zone, if there are `K` total instances and `h` healthy instances (where `h` ≤ `K`), the calculations are as follows:

- The zone's target capacity is `K * X` requests per second.
- The average requests per second per instance in the zone is `(K * X) / h`.

For a zonal NEG with `N` total endpoints and `h` healthy endpoints (where `h` ≤ `N`), the calculations are as follows:

- If you set `max-rate-per-endpoint` to `X`, the zonal target capacity is `N * X` requests per second. This is equivalent to setting `max-rate` to `N * X`.
- The average requests per second per endpoint is `(N * X) / h`.
In-flight balancing mode
Application Load Balancer and Cloud Service Mesh backends with a long traffic
duration setting can use the IN_FLIGHT balancing
mode with one of the following required target capacity parameters:
| Target capacity parameter | Zonal (managed or unmanaged) instance groups | Regional managed instance groups | Zonal NEGs with `GCE_VM_IP_PORT` endpoints | Zonal hybrid connectivity NEGs |
|---|---|---|---|---|
| `max-in-flight-requests`: target number of in-progress HTTP requests per backend zone |  |  |  |  |
| `max-in-flight-requests-per-instance`: target number of in-progress HTTP requests per VM instance. Cloud Load Balancing uses this parameter to calculate the target number of in-progress HTTP requests per backend zone. |  |  |  |  |
| `max-in-flight-requests-per-endpoint`: target number of in-progress HTTP requests per NEG endpoint. Cloud Load Balancing uses this parameter to calculate the target number of in-progress HTTP requests per backend zone. |  |  |  |  |
Using the max-in-flight-requests parameter
When you specify the max-in-flight-requests parameter, the value you provide
defines the capacity for an entire zone.
For a zonal instance group with `N` total instances and `h` healthy instances (where `h` ≤ `N`), the calculations are as follows:

- If you set `max-in-flight-requests` to `X`, the zonal target capacity is `X` in-progress HTTP requests.
- The average number of in-progress HTTP requests per instance is `X / h`.

Regional managed instance groups don't support the `max-in-flight-requests` parameter because they consist of multiple zones. Instead, use the `max-in-flight-requests-per-instance` parameter.

For a zonal NEG with `N` total endpoints and `h` healthy endpoints (where `h` ≤ `N`), the calculations are as follows:

- If you set `max-in-flight-requests` to `X`, the zonal target capacity is `X` in-progress HTTP requests.
- The average number of in-progress HTTP requests per endpoint is `X / h`.
Using the max-in-flight-requests-per-instance or max-in-flight-requests-per-endpoint parameters
When you specify either the max-in-flight-requests-per-instance or
max-in-flight-requests-per-endpoint parameter, the load balancer uses the
value you provide to calculate a per-zone capacity:
For a zonal instance group with `N` total instances and `h` healthy instances (where `h` ≤ `N`), the calculations are as follows:

- If you set `max-in-flight-requests-per-instance` to `X`, the zonal target capacity is `N * X` in-progress HTTP requests. This is equivalent to setting `max-in-flight-requests` to `N * X`.
- The average in-progress HTTP requests per instance is `(N * X) / h`.

For a regional managed instance group, if you set `max-in-flight-requests-per-instance` to `X`, Cloud de Confiance calculates a per-zone target capacity for each zone of the instance group. In each zone, if there are `K` total instances and `h` healthy instances (where `h` ≤ `K`), the calculations are as follows:

- The zone's target capacity is `K * X` in-progress HTTP requests.
- The average in-progress HTTP requests per instance in the zone is `(K * X) / h`.

For a zonal NEG with `N` total endpoints and `h` healthy endpoints (where `h` ≤ `N`), the calculations are as follows:

- If you set `max-in-flight-requests-per-endpoint` to `X`, the zonal target capacity is `N * X` in-progress HTTP requests. This is equivalent to setting `max-in-flight-requests` to `N * X`.
- The average in-progress HTTP requests per endpoint is `(N * X) / h`.
Utilization balancing mode
Application Load Balancer, Cloud Service Mesh, and proxy Network Load Balancer instance group
backends can use the UTILIZATION balancing mode. NEG backends don't support
this balancing mode.
The UTILIZATION balancing mode depends on VM CPU utilization along with other
factors. When these factors fluctuate, the load balancer might calculate
utilization in a way that leads to some VMs receiving more requests or
connections than others. Therefore, keep the following in mind:
- Only use the `UTILIZATION` balancing mode with session affinity set to `NONE`. If your backend service uses a session affinity that's different from `NONE`, then use the `RATE`, `IN_FLIGHT`, or `CONNECTION` balancing modes instead.
- If the average utilization of VMs in all instance groups is less than 10%, some load balancers prefer to distribute new requests or connections to specific zones. This zonal preference becomes less prevalent when the request rate or connection count increases.
The UTILIZATION balancing mode has no mandatory target capacity setting, but
you can optionally define a target capacity by using one of the target capacity
parameters or combinations of target capacity parameters described in the
following sections.
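An optional combined utilization target might be set like the following sketch; the names are illustrative, and whichever target is reached first in a zone applies:

```
# Use utilization-based balancing with an optional rate ceiling.
gcloud compute backend-services update-backend my-backend-service \
    --instance-group=my-instance-group \
    --instance-group-zone=us-central1-a \
    --balancing-mode=UTILIZATION \
    --max-utilization=0.8 \
    --max-rate-per-instance=500 \
    --global
```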
Utilization target capacity parameters for Application Load Balancer and Cloud Service Mesh backends with an unspecified or short traffic duration setting
Application Load Balancer and Cloud Service Mesh backends with an unspecified or
short traffic duration setting (Preview) can use the
UTILIZATION balancing mode with one of the following target capacity
parameters or combinations of parameters:
| Target capacity parameter or parameter combination | Zonal (managed or unmanaged) instance groups | Regional managed instance groups | Zonal NEGs with `GCE_VM_IP_PORT` endpoints | Zonal hybrid connectivity NEGs |
|---|---|---|---|---|
| `max-utilization`: Target utilization per backend zone | | | | |
| `max-rate`: Target HTTP request rate per backend zone | | | | |
| `max-rate` and `max-utilization`: Target is the first to be reached in the backend zone | | | | |
| `max-rate-per-instance`: Target HTTP request rate per VM instance. Cloud Load Balancing uses this parameter to calculate the target HTTP request rate per backend zone. | | | | |
| `max-rate-per-instance` and `max-utilization`: Target is the first to be reached in the backend zone | | | | |
For more information about the `max-rate` and `max-rate-per-instance` target
capacity parameters, see Rate balancing mode in this document.
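When a rate target is combined with a utilization target, a backend zone counts as full as soon as either target is reached. The following sketch shows that "first to be reached" rule; all numbers are illustrative.

```python
def at_capacity(current_rate: float, max_rate: float,
                current_utilization: float, max_utilization: float) -> bool:
    """The zone counts as full as soon as either target is reached."""
    return current_rate >= max_rate or current_utilization >= max_utilization


# Utilization (0.85 >= 0.80) is reached before the rate target (900 < 1000).
print(at_capacity(900, 1000, 0.85, 0.80))  # True
# Neither target has been reached yet.
print(at_capacity(500, 1000, 0.50, 0.80))  # False
```

The same "whichever target is hit first" logic applies to the other parameter combinations in this section, with connections or in-flight requests in place of the request rate.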
Utilization target capacity parameters for Application Load Balancer and Cloud Service Mesh backends with a long traffic duration setting
Application Load Balancer and Cloud Service Mesh backends with a long traffic
duration setting (Preview) can use the UTILIZATION
balancing mode with one of the following target capacity parameters or
combinations of parameters:
| Target capacity parameter or parameter combination | Zonal (managed or unmanaged) instance groups | Regional managed instance groups | Zonal NEGs with `GCE_VM_IP_PORT` endpoints | Zonal hybrid connectivity NEGs |
|---|---|---|---|---|
| `max-utilization`: Target utilization per backend zone | | | | |
| `max-in-flight-requests`: Target number of in-progress HTTP requests per backend zone | | | | |
| `max-in-flight-requests` and `max-utilization`: Target is the first to be reached in the backend zone | | | | |
| `max-in-flight-requests-per-instance`: Target number of in-progress HTTP requests per VM instance. Cloud Load Balancing uses this parameter to calculate the target number of in-progress HTTP requests per backend zone. | | | | |
| `max-in-flight-requests-per-instance` and `max-utilization`: Target is the first to be reached in the backend zone | | | | |
For more information about the `max-in-flight-requests` and
`max-in-flight-requests-per-instance` target capacity parameters, see
In-flight balancing mode in this document.
Utilization target capacity parameters for proxy Network Load Balancers
Instance group backends of proxy Network Load Balancers can use the UTILIZATION balancing
mode with one of the following target capacity parameters or combinations of
parameters.
| Target capacity parameter or parameter combination | Zonal (managed or unmanaged) instance groups | Regional managed instance groups | Zonal NEGs with `GCE_VM_IP_PORT` endpoints | Zonal hybrid connectivity NEGs |
|---|---|---|---|---|
| `max-utilization`: Target utilization per backend zone | | | | |
| `max-connections`: Target TCP connections per backend zone | | | | |
| `max-connections` and `max-utilization`: Target is the first to be reached in the backend zone | | | | |
| `max-connections-per-instance`: Target TCP connections per VM instance. Cloud Load Balancing uses this parameter to calculate the target TCP connections per backend zone. | | | | |
| `max-connections-per-instance` and `max-utilization`: Target is the first to be reached in the backend zone | | | | |
For more information about the `max-connections` and
`max-connections-per-instance` target capacity parameters, see
Connection balancing mode in this document.
Custom metrics balancing mode
Application Load Balancer and proxy Network Load Balancer backends can
use the CUSTOM_METRICS balancing mode. Custom metrics let you define target
capacity based on application or infrastructure data that's most important to
you. For more information, see Custom metrics for
Application Load Balancers.
The CUSTOM_METRICS balancing mode has no mandatory target capacity setting,
but you can optionally define a target capacity by using one of the target
capacity parameters or combinations of target capacity parameters described in
the following sections.
Custom metrics target capacity parameters for Application Load Balancer backends with an unspecified or short traffic duration setting
Application Load Balancer backends with an unspecified or
short traffic duration setting (Preview)
can use the CUSTOM_METRICS balancing mode with one of the following target
capacity parameters or combinations of parameters:
| Target capacity parameter or parameter combination | Zonal (managed or unmanaged) instance groups | Regional managed instance groups | Zonal NEGs with `GCE_VM_IP_PORT` endpoints | Zonal hybrid connectivity NEGs |
|---|---|---|---|---|
| `backends[].customMetrics[].maxUtilization`: Target custom metric utilization per backend zone | | | | |
| `max-rate`: Target HTTP request rate per backend zone | | | | |
| `max-rate` and `backends[].customMetrics[].maxUtilization`: Target is the first to be reached in the backend zone | | | | |
| `max-rate-per-instance`: Target HTTP request rate per VM instance. Cloud Load Balancing uses this parameter to calculate the target HTTP request rate per backend zone. | | | | |
| `max-rate-per-instance` and `backends[].customMetrics[].maxUtilization`: Target is the first to be reached in the backend zone | | | | |
| `max-rate-per-endpoint`: Target HTTP request rate per NEG endpoint. Cloud Load Balancing uses this parameter to calculate the target HTTP request rate per backend zone. | | | | |
| `max-rate-per-endpoint` and `backends[].customMetrics[].maxUtilization`: Target is the first to be reached in the backend zone | | | | |
For more information about the `max-rate`, `max-rate-per-instance`, and
`max-rate-per-endpoint` target capacity parameters, see
Rate balancing mode in this document.
Custom metrics target capacity parameters for Application Load Balancer backends with a long traffic duration setting
Application Load Balancer backends with a long traffic
duration setting can use the CUSTOM_METRICS
balancing mode with one of the following target capacity parameters or
combinations of parameters:
| Target capacity parameter or parameter combination | Zonal (managed or unmanaged) instance groups | Regional managed instance groups | Zonal NEGs with `GCE_VM_IP_PORT` endpoints | Zonal hybrid connectivity NEGs |
|---|---|---|---|---|
| `backends[].customMetrics[].maxUtilization`: Target custom metric utilization per backend zone | | | | |
| `max-in-flight-requests`: Target number of in-progress HTTP requests per backend zone | | | | |
| `max-in-flight-requests` and `backends[].customMetrics[].maxUtilization`: Target is the first to be reached in the backend zone | | | | |
| `max-in-flight-requests-per-instance`: Target number of in-progress HTTP requests per VM instance. Cloud Load Balancing uses this parameter to calculate the target number of in-progress HTTP requests per backend zone. | | | | |
| `max-in-flight-requests-per-instance` and `backends[].customMetrics[].maxUtilization`: Target is the first to be reached in the backend zone | | | | |
| `max-in-flight-requests-per-endpoint`: Target number of in-progress HTTP requests per NEG endpoint. Cloud Load Balancing uses this parameter to calculate the target number of in-progress HTTP requests per backend zone. | | | | |
| `max-in-flight-requests-per-endpoint` and `backends[].customMetrics[].maxUtilization`: Target is the first to be reached in the backend zone | | | | |
For more information about the `max-in-flight-requests`,
`max-in-flight-requests-per-instance`, and
`max-in-flight-requests-per-endpoint` target capacity parameters, see
In-flight balancing mode in this document.
Service load balancing policy
A service load balancing policy (serviceLbPolicy) is a resource associated
with the load balancer's backend
service. It lets you customize the
parameters that influence how traffic is distributed within the backends
associated with a backend service:
- Customize the load balancing algorithm used to determine how traffic is distributed among regions or zones.
- Enable auto-capacity draining so that the load balancer can quickly drain traffic from unhealthy backends.
Additionally, you can designate specific backends as preferred backends. These backends must be used to capacity (that is, the target capacity specified by the backend's balancing mode) before requests are sent to the remaining backends.
To learn more, see Advanced load balancing optimizations.
Load balancing locality policy
For a backend service, traffic distribution is based on a balancing mode and a
load balancing locality policy. The balancing mode determines the fraction of
traffic that should be sent to each backend (instance group or NEG). The load
balancing locality policy (`localityLbPolicy`) then determines how traffic is
distributed across instances or endpoints within each zone. For regional managed
instance groups, the locality policy applies to each constituent zone.
The load balancing locality policy is configured per-backend service. The following settings are available:
- `ROUND_ROBIN` (default): The load balancer selects a healthy backend in round robin order.
- `WEIGHTED_ROUND_ROBIN`: The load balancer uses user-defined custom metrics to select the optimal instance or endpoint within the backend to serve the request.
- `LEAST_REQUEST`: An `O(1)` algorithm in which the load balancer selects two random healthy hosts and picks the host that has fewer active requests.
- `RING_HASH`: This algorithm implements consistent hashing to backends. The algorithm has the property that the addition or removal of a host from a set of N hosts only affects 1/N of the requests.
- `RANDOM`: The load balancer selects a random healthy host.
- `ORIGINAL_DESTINATION`: The load balancer selects a backend based on the client connection metadata. Connections are opened to the original destination IP address specified in the incoming client request, before the request was redirected to the load balancer. `ORIGINAL_DESTINATION` is not supported for global and regional external Application Load Balancers.
- `MAGLEV`: Implements consistent hashing to backends and can be used as a replacement for the `RING_HASH` policy. Maglev is not as stable as `RING_HASH`, but it has faster table lookup build times and host selection times. For more information about Maglev, see the Maglev whitepaper.
- `WEIGHTED_MAGLEV`: Implements per-instance weighted load balancing by using weights reported by health checks. If this policy is used, the backend service must configure a non-legacy HTTP-based health check, and health check replies are expected to contain the non-standard HTTP response header field `X-Load-Balancing-Endpoint-Weight` to specify the per-instance weights. Load balancing decisions are made based on the per-instance weights reported in the last processed health check replies, as long as every instance reports a valid weight or reports `UNAVAILABLE_WEIGHT`. Otherwise, load balancing remains equal-weight. `WEIGHTED_MAGLEV` is supported only for external passthrough Network Load Balancers. For an example, see Set up weighted load balancing for external passthrough Network Load Balancers.
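For illustration, the LEAST_REQUEST selection described above can be sketched as a power-of-two-choices pick. The host names and request counts below are made up, and real load balancers track active requests differently.

```python
import random


def least_request_pick(active_requests: dict) -> str:
    """Sample two random healthy hosts; return the one with fewer active requests."""
    a, b = random.sample(list(active_requests), 2)
    return a if active_requests[a] <= active_requests[b] else b


# With exactly two hosts, the less-loaded one is always chosen.
print(least_request_pick({"backend-a": 0, "backend-b": 100}))  # backend-a
```

Sampling only two hosts keeps selection O(1) per request regardless of backend count, while still steering traffic away from heavily loaded hosts on average.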
Configuring a load balancing locality policy is supported only on backend services used with the following load balancers:
- Global external Application Load Balancer
- Regional external Application Load Balancer
- Cross-region internal Application Load Balancer
- Regional internal Application Load Balancer
- Global external proxy Network Load Balancer
- Regional external proxy Network Load Balancer
- Cross-region internal proxy Network Load Balancer
- Regional internal proxy Network Load Balancer
- External passthrough Network Load Balancer
Note that the effective default value of the load balancing locality policy
(localityLbPolicy) changes according to your session affinity
settings. If session affinity is not configured—that is, if session
affinity remains at the default value of NONE—then the
default value for localityLbPolicy is ROUND_ROBIN. If
session affinity is set to a value other than NONE, then the
default value for localityLbPolicy is MAGLEV.
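The effective-default rule is simple enough to state as code. This is a sketch only; `CLIENT_IP` stands in for any session affinity value other than `NONE`.

```python
def default_locality_lb_policy(session_affinity: str) -> str:
    # Session affinity NONE -> ROUND_ROBIN; any other affinity -> MAGLEV.
    return "ROUND_ROBIN" if session_affinity == "NONE" else "MAGLEV"


print(default_locality_lb_policy("NONE"))       # ROUND_ROBIN
print(default_locality_lb_policy("CLIENT_IP"))  # MAGLEV
```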
To configure a load balancing locality policy, you can use the
Cloud de Confiance console, gcloud (`--locality-lb-policy`), or the API
(`localityLbPolicy`).
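As a hedged example, a command along these lines sets the policy on an existing backend service; the service name and region are placeholders, and the available policy values are listed in the previous section.

```shell
# Placeholder names: my-backend-service, us-central1.
gcloud compute backend-services update my-backend-service \
    --region=us-central1 \
    --locality-lb-policy=LEAST_REQUEST
```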
Backend subsetting
Backend subsetting is an optional feature that improves performance and scalability by assigning a subset of backends to each of the proxy instances.
Backend subsetting is supported for the following:
- Regional internal Application Load Balancer
- Internal passthrough Network Load Balancer
Backend subsetting for regional internal Application Load Balancers
For regional internal Application Load Balancers, backend subsetting automatically assigns only a subset of the backends within the regional backend service to each proxy instance. By default, each proxy instance opens connections to all the backends within a backend service. When the number of proxy instances and the backends are both large, opening connections to all the backends can lead to performance issues.
By enabling subsetting, each proxy only opens connections to a subset of the backends, reducing the number of connections which are kept open to each backend. Reducing the number of simultaneously open connections to each backend can improve performance for both the backends and the proxies.
The following diagram shows a load balancer with two proxies. Without backend subsetting, traffic from both proxies is distributed to all the backends in the backend service 1. With backend subsetting enabled, traffic from each proxy is distributed to a subset of the backends. Traffic from proxy 1 is distributed to backends 1 and 2, and traffic from proxy 2 is distributed to backends 3 and 4.
You can further refine how traffic is distributed to the backends by setting the
`localityLbPolicy` policy.
For more information, see Traffic policies.
To read about setting up backend subsetting for internal Application Load Balancers, see Configure backend subsetting.
Caveats related to backend subsetting for internal Application Load Balancers
- Although backend subsetting is designed to ensure that all backend instances remain well utilized, it can introduce some bias in the amount of traffic that each backend receives. Setting the `localityLbPolicy` to `LEAST_REQUEST` is recommended for backend services that are sensitive to the balance of backend load.
- Enabling or disabling subsetting breaks existing connections.
- Backend subsetting requires that the session affinity is `NONE` (a 5-tuple hash). Other session affinity options can be used only if backend subsetting is disabled. The default values of the `--subsetting-policy` and `--session-affinity` flags are both `NONE`, and only one of them at a time can be set to a different value.
Backend subsetting for internal passthrough Network Load Balancers
Backend subsetting for internal passthrough Network Load Balancers lets you scale your internal passthrough Network Load Balancer to support a larger number of backend VM instances per internal backend service.
For information about how subsetting affects this limit, see Backend services in "Quotas and limits".
By default, subsetting is disabled, which limits the backend service to distributing to up to 250 backend instances or endpoints. If your backend service needs to support more than 250 backends, you can enable subsetting. When subsetting is enabled, a subset of backend instances is selected for each client connection.
The following diagram shows a scaled-down model of the difference between these two modes of operation.
Without subsetting, the complete set of healthy backends is better utilized, and new client connections are distributed among all healthy backends according to traffic distribution. Subsetting imposes load balancing restrictions but allows the load balancer to support more than 250 backends.
For configuration instructions, see Subsetting.
Caveats related to backend subsetting for internal passthrough Network Load Balancers
- When subsetting is enabled, not all backends will receive traffic from a given sender even when the number of backends is small.
- For the maximum number of backend instances when subsetting is enabled, see the quotas page.
- Only 5-tuple session affinity is supported with subsetting.
- Packet Mirroring is not supported with subsetting.
- Enabling or disabling subsetting breaks existing connections.
- If on-premises clients need to access an internal passthrough Network Load Balancer, subsetting can substantially reduce the number of backends that receive connections from your on-premises clients. This is because the region of the Cloud VPN tunnel or Cloud Interconnect VLAN attachment determines the subset of the load balancer's backends. All Cloud VPN and Cloud Interconnect endpoints in a specific region use the same subset. Different subsets are used in different regions.
Backend subsetting pricing
There is no charge for using backend subsetting. For more information, see All networking pricing.
Session affinity
Session affinity lets you control how the load balancer selects backends for new connections in a predictable way as long as the number of healthy backends remains constant. This is useful for applications that need multiple requests from a given user to be directed to the same backend or endpoint. Such applications usually include stateful servers used by ads serving, games, or services with heavy internal caching.
Cloud de Confiance load balancers provide session affinity on a best-effort basis. Factors such as changing backend health check states, adding or removing backends, changes in backend weights (including enabling or disabling weighted balancing), or changes to backend fullness, as measured by the balancing mode, can break session affinity.
Load balancing with session affinity works well when there is a reasonably large distribution of unique connections. Reasonably large means at least several times the number of backends. Testing a load balancer with a small number of connections won't result in an accurate representation of the distribution of client connections among backends.
By default, all Cloud de Confiance load balancers select backends by using a
five-tuple hash (--session-affinity=NONE), as follows:
- Packet's source IP address
- Packet's source port (if present in the packet's header)
- Packet's destination IP address
- Packet's destination port (if present in the packet's header)
- Packet's protocol
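As an illustration of how a five-tuple hash yields a stable backend choice while the backend set is unchanged, here is a sketch; the backend names, addresses, and hash function are made up, not how Cloud de Confiance implements it.

```python
import hashlib


def pick_backend(src_ip, src_port, dst_ip, dst_port, protocol, backends):
    """Hash the five-tuple and map it onto the list of healthy backends."""
    key = f"{src_ip}|{src_port}|{dst_ip}|{dst_port}|{protocol}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return backends[digest % len(backends)]


backends = ["backend-a", "backend-b", "backend-c"]
flow = ("203.0.113.5", 51515, "198.51.100.1", 443, "TCP")
# The same five-tuple maps to the same backend on every call.
print(pick_backend(*flow, backends) == pick_backend(*flow, backends))  # True
```

The sketch also shows why affinity is best-effort: if `backends` gains or loses an entry, the modulo mapping changes and existing flows can land on a different backend.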
To learn more about session affinity for passthrough Network Load Balancers, see the following documents:
- Traffic distribution for external passthrough Network Load Balancers
- Traffic distribution for internal passthrough Network Load Balancers
To learn more about session affinity for Application Load Balancers, see the following documents:
- Session affinity for external Application Load Balancers
- Session affinity for internal Application Load Balancers
To learn more about session affinity for proxy Network Load Balancers, see the following documents:
- Session affinity for external proxy Network Load Balancers
- Session affinity for internal proxy Network Load Balancers
Backend service timeout
Most Cloud de Confiance load balancers have a backend service timeout. The default value is 30 seconds. The full range of allowed timeout values is 1 to 2,147,483,647 seconds.
For external Application Load Balancers and internal Application Load Balancers using the HTTP, HTTPS, or HTTP/2 protocol, the backend service timeout is a request and response timeout for HTTP(S) traffic.
For more details about the backend service timeout for each load balancer, see the following:
- For regional external Application Load Balancers, see Timeouts and retries.
- For internal Application Load Balancers, see Timeouts and retries.
For external proxy Network Load Balancers and internal proxy Network Load Balancers, the configured backend service timeout is the length of time the load balancer keeps the TCP connection open in the absence of any data transmitted from either the client or the backend. After this time has passed without any data transmitted, the proxy closes the connection.
- Default value: 30 seconds
- Configurable range: 1 to 2,147,483,647 seconds
For internal passthrough Network Load Balancers and external passthrough Network Load Balancers, you can set the value of the backend service timeout using
`gcloud` or the API, but the value is ignored. Backend service timeout has no meaning for these pass-through load balancers.
Health checks
Each backend service whose backends are instance groups or zonal NEGs must have an associated health check.
When you create a load balancer using the Cloud de Confiance console, you can create the health check (if one is required) as part of creating the load balancer, or you can reference an existing health check.
When you create a backend service with instance group or zonal NEG backends by using the Google Cloud CLI or the API, you must reference an existing health check. For details about the type and scope of health check required, refer to the load balancer guide in the Health Checks Overview.
For more information, read the following documents:
IAP
IAP lets you establish a central authorization layer for applications accessed by HTTPS, so you can use an application-level access control model instead of relying on network-level firewalls. IAP is supported by certain Application Load Balancers.
IAP is incompatible with Cloud CDN. They can't be enabled on the same backend service.
Advanced traffic management features
To learn about advanced traffic management features that are configured on the backend services and URL maps associated with load balancers, see the following:
- Traffic management overview for internal Application Load Balancers
- Traffic management overview for global external Application Load Balancers
- Traffic management overview for regional external Application Load Balancers
API and gcloud reference
For more information about the properties of the backend service resource, see the following references:
- Regional backend service API resource
- `gcloud compute backend-services` page, for regional backend services
What's next
For related documentation and information about how backend services are used in load balancing, review the following: