This document helps you determine which Trusted Cloud by S3NS load balancer best meets your needs. To see an overview of all the Cloud Load Balancing products available, see Cloud Load Balancing overview.
To determine which Cloud Load Balancing product to use, you must first determine the traffic type that your load balancers must handle.
- Choose an Application Load Balancer when you need a flexible feature set for your applications with HTTP(S) traffic.
- Choose a proxy Network Load Balancer to implement TCP proxy load balancing to backends in one or more regions.
- Choose a passthrough Network Load Balancer to preserve client source IP addresses, avoid the overhead of proxies, and support additional protocols such as UDP, ESP, and ICMP.
You can further narrow down your choices depending on whether your application is external (internet-facing) or internal.
The following diagram summarizes all the available deployment modes for Cloud Load Balancing.
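If it helps to see these criteria in one place, the following Python sketch captures the same selection logic. It is illustrative only; the function name and the way requirements are expressed are assumptions for this example, not part of any Trusted Cloud API.

```python
def choose_load_balancer(traffic: str, internet_facing: bool,
                         preserve_client_ip: bool = False) -> str:
    """Illustrative only: maps the criteria above to a load balancer family.

    traffic: "HTTP", "HTTPS", "TCP", "UDP", "ESP", "ICMP", ...
    internet_facing: True for external (internet-facing) clients, False for
        clients inside or connected to your VPC network.
    preserve_client_ip: True if backends must see the original client
        source IP address (no proxying).
    """
    scope = "external" if internet_facing else "internal"

    if traffic in ("HTTP", "HTTPS"):
        return f"Application Load Balancer ({scope})"
    if traffic == "TCP" and not preserve_client_ip:
        return f"Proxy Network Load Balancer ({scope})"
    # UDP, ESP, ICMP, and other IP protocols, or TCP where the client
    # source IP must be preserved, call for a passthrough load balancer.
    return f"Passthrough Network Load Balancer ({scope})"


print(choose_load_balancer("HTTPS", internet_facing=True))
# Application Load Balancer (external)
print(choose_load_balancer("UDP", internet_facing=False))
# Passthrough Network Load Balancer (internal)
```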
Load balancing aspects
To decide which load balancer best suits your implementation of Trusted Cloud, consider the following aspects of Cloud Load Balancing:
Traffic type
The type of traffic that you need your load balancer to handle is a key factor in determining which load balancer to use.
Load balancer type | Traffic type |
---|---|
Application Load Balancers | HTTP or HTTPS |
Passthrough Network Load Balancers | TCP or UDP. These load balancers also support other IP protocol traffic such as ESP, GRE, ICMP, and ICMPv6. |
Proxy Network Load Balancers | TCP |
External versus internal load balancing
Trusted Cloud load balancers can be deployed as external or internal load balancers:
- External load balancers distribute traffic that comes from the internet to your Trusted Cloud Virtual Private Cloud (VPC) network.
- Internal load balancers distribute traffic that comes from clients in the same VPC network as the load balancer, or from clients connected to your VPC network by using VPC Network Peering, Cloud VPN, or Cloud Interconnect.
To determine which load balancer works for your application, use the summary table.
Proxy versus passthrough load balancing
Depending on the type of traffic you need the load balancer to handle, and whether your clients are internal or external, you might have the option to choose between either a proxy load balancer or a passthrough load balancer.
Proxy load balancers terminate incoming client connections at the load balancer and then open new connections from the load balancer to the backends. All the Application Load Balancers and the proxy Network Load Balancers work this way. They terminate client connections by using either Google Front Ends (GFEs) or Envoy proxies.
Passthrough load balancers don't terminate client connections. Instead, load-balanced packets are received by backend VMs with the packet's source, destination, and, if applicable, port information unchanged. Connections are then terminated by the backend VMs. Responses from the backend VMs go directly to the clients, not back through the load balancer. The term for this is direct server return. Use a passthrough load balancer when you need to preserve the client packet information. As the name suggests, the passthrough Network Load Balancers come under this category.
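To make the difference concrete, here is a small, purely illustrative Python model of what a backend observes in each mode; it is not Trusted Cloud code, and the addresses and function names are assumptions for this example. A proxy load balancer replaces the packet's source with its own address, while a passthrough load balancer delivers the packet unchanged.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Packet:
    src_ip: str      # source IP address as seen by the receiver
    dst_ip: str      # destination IP address
    payload: bytes

def proxy_forward(packet: Packet, proxy_ip: str, backend_ip: str) -> Packet:
    """Proxy mode: the client connection terminates at the load balancer,
    which opens a new connection to the backend. The backend sees the
    proxy's address as the source, not the original client."""
    return replace(packet, src_ip=proxy_ip, dst_ip=backend_ip)

def passthrough_forward(packet: Packet, backend_ip: str) -> Packet:
    """Passthrough mode: source, destination, and port information are
    preserved; the backend terminates the connection and replies directly
    to the client (direct server return)."""
    return packet  # delivered unchanged to the backend at backend_ip

client_packet = Packet(src_ip="203.0.113.7", dst_ip="198.51.100.10", payload=b"hello")
print(proxy_forward(client_packet, proxy_ip="10.0.0.2", backend_ip="10.0.0.20").src_ip)
# 10.0.0.2  -> the backend sees the proxy, not the client
print(passthrough_forward(client_packet, backend_ip="10.0.0.20").src_ip)
# 203.0.113.7  -> the backend sees the original client address
```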
To determine which load balancer works for your application, use the summary table.
Summary of Trusted Cloud load balancers
The following table summarizes, for each load balancer, its deployment mode, supported traffic types, the network service tier on which it operates, and its load-balancing scheme.
Load balancer | Deployment mode | Traffic type | Network service tier | Load-balancing scheme * |
---|---|---|---|---|
Application Load Balancers | Regional external | HTTP or HTTPS | Premium or Standard Tier | EXTERNAL_MANAGED |
Application Load Balancers | Regional internal | HTTP or HTTPS | Premium Tier | INTERNAL_MANAGED |
Proxy Network Load Balancers | Regional external | TCP | Premium or Standard Tier | EXTERNAL_MANAGED |
Proxy Network Load Balancers | Regional internal | TCP without SSL offload | Premium Tier | INTERNAL_MANAGED |
Passthrough Network Load Balancers | External (always regional) | TCP, UDP, ESP, GRE, ICMP, and ICMPv6 | Premium or Standard Tier | EXTERNAL |
Passthrough Network Load Balancers | Internal (always regional) | TCP, UDP, ICMP, ICMPv6, SCTP, ESP, AH, and GRE | Premium Tier | INTERNAL |
* The load-balancing scheme is an attribute on the forwarding rule and the backend service of a load balancer and indicates whether the load balancer can be used for internal or external traffic.
The term managed in `EXTERNAL_MANAGED` or `INTERNAL_MANAGED` indicates that the load balancer is implemented as a managed service either on a Google Front End (GFE) or on the open source Envoy proxy. In a load-balancing scheme that is managed, requests are routed either to the GFE or to the Envoy proxy.
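The same relationship can be restated in code. The following is a plain Python lookup that mirrors the summary table above, not a Trusted Cloud API; it only illustrates how the load-balancing scheme follows from the load balancer family and its deployment mode.

```python
# Restatement of the summary table:
# (load balancer family, deployment mode) -> load-balancing scheme.
LOAD_BALANCING_SCHEME = {
    ("application", "regional external"): "EXTERNAL_MANAGED",
    ("application", "regional internal"): "INTERNAL_MANAGED",
    ("proxy_network", "regional external"): "EXTERNAL_MANAGED",
    ("proxy_network", "regional internal"): "INTERNAL_MANAGED",
    ("passthrough_network", "regional external"): "EXTERNAL",
    ("passthrough_network", "regional internal"): "INTERNAL",
}

# Example: the scheme you would expect on the forwarding rule and backend
# service of a regional internal Application Load Balancer.
print(LOAD_BALANCING_SCHEME[("application", "regional internal")])  # INTERNAL_MANAGED
```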
What's next
- To see a comparative overview of the load balancing features offered by Cloud Load Balancing, see Load balancer feature comparison.