This document shows you how to configure and test an internal passthrough Network Load Balancer that supports internal IPv6-only traffic. In this example configuration, you install an Apache web server on the internal IPv6-only backend virtual machine (VM) instances to serve content in response to traffic received through the load balancer's virtual IP (VIP).
As shown in the following architecture diagram, the backend VMs of the load balancer are configured with internal IPv6-only addresses. For the purpose of this example configuration, these backend VMs need to download Apache to install a web server, which requires access to the public internet. However, because these backend VMs lack external IP addresses, they cannot reach the internet directly.
To enable internet access, this example uses a separate VM instance, configured
using an external IPv6 address, that serves as a NAT gateway. This VM performs
address translation at the Linux kernel level. Specifically, the POSTROUTING
chain in the NAT table is used to masquerade the source address of outgoing
packets—replacing each backend VM's internal IPv6 address with the NAT
gateway VM's external IPv6 address on the specified network interface.
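The translation described above comes down to a single `ip6tables` rule on the gateway. The following sketch assembles that rule as a string so its shape is visible; the interface name `ens4` is a placeholder, and on the real gateway the startup script detects the interface at boot and runs the rule with root privileges:

```shell
# Build the masquerade rule the NAT gateway installs (echoed here for
# illustration; on the gateway it runs as root with the detected NIC).
IFACE="ens4"   # placeholder; detected dynamically on the real VM
RULE="ip6tables -t nat -A POSTROUTING -o $IFACE -j MASQUERADE"
echo "$RULE"
```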
The information that follows walks you through the configuration of the different components that are used to set up an internal passthrough Network Load Balancer with internal IPv6-only backends.
Permissions
To follow this guide, you need to create instances and modify a network in a project. You need to be either a project owner or editor, or you must have all of the following Compute Engine IAM roles:
| Task | Required role |
|---|---|
| Create networks, subnets, and load balancer components | Compute Network Admin (`roles/compute.networkAdmin`) |
| Add and remove firewall rules | Compute Security Admin (`roles/compute.securityAdmin`) |
| Create instances | Compute Instance Admin (`roles/compute.instanceAdmin`) |
Configure a network and an IPv6-only subnet with internal IPv6 addresses
The example internal passthrough Network Load Balancer described on this page is created in a custom mode VPC network named `lb-network-ipv6-only`.

To configure subnets with internal IPv6 ranges, enable a VPC network ULA internal IPv6 range. Internal IPv6 subnet ranges are allocated from this range.
Console
1. In the Trusted Cloud console, go to the VPC networks page.
2. Click Create VPC network.
3. For Name, enter `lb-network-ipv6-only`.
4. If you want to configure internal IPv6 address ranges on subnets in this network, complete these steps:
   - For Private IPv6 address settings, select Configure a ULA internal IPv6 range for this VPC Network.
   - For Allocate internal IPv6 range, select Automatically or Manually. If you select Manually, enter a `/48` range from within the `fd20::/20` range. If the range is already in use, you are prompted to provide a different range.
5. For Subnet creation mode, select Custom.
6. In the New subnet section, specify the following configuration parameters for the subnet:
   - Name: `lb-subnet-ipv6-only-internal`
   - Region: `us-west1`
   - IP stack type: IPv6 (single-stack)
   - IPv6 access type: Internal
7. Click Done.
8. Click Create.
gcloud
1. To create a new custom mode VPC network, run the `gcloud compute networks create` command. To configure internal IPv6 ranges on any subnets in this network, use the `--enable-ula-internal-ipv6` flag.

   ```
   gcloud compute networks create lb-network-ipv6-only \
       --subnet-mode=custom \
       --enable-ula-internal-ipv6 \
       --bgp-routing-mode=regional
   ```

2. Configure a subnet with the `ipv6-access-type` set to `INTERNAL`. This indicates that the VMs in this subnet can only have internal IPv6 addresses. In this example, the subnet is named `lb-subnet-ipv6-only-internal`.

   To create the subnet, run the `gcloud compute networks subnets create` command.

   ```
   gcloud compute networks subnets create lb-subnet-ipv6-only-internal \
       --network=lb-network-ipv6-only \
       --region=us-west1 \
       --stack-type=IPV6_ONLY \
       --ipv6-access-type=INTERNAL
   ```
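If you allocate the internal IPv6 range manually, the `/48` must come from `fd20::/20`, which fixes the first hextet to `fd20` and limits the second hextet to the range `0000` through `0fff`. The following rough check is illustrative only (the `in_ula_range` helper is not part of any Google tooling, and it assumes the prefix is written with explicit hextets):

```shell
# Rough check that a /48 prefix lies inside fd20::/20: the first hextet
# must be exactly fd20 and the second hextet at most 0fff.
in_ula_range() {
  local h1=${1%%:*}                  # first hextet, e.g. fd20
  local rest=${1#*:}
  local h2=${rest%%:*}               # second hextet, e.g. 0abc
  [ "$h1" = "fd20" ] || return 1
  [ $((16#${h2:-0})) -le $((16#0fff)) ]
}

in_ula_range "fd20:abc::/48" && echo "in range"
in_ula_range "fd20:1234::/48" || echo "out of range"
```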
Configure an IPv6-only subnet with external IPv6 addresses
An IPv6-only subnet with external IPv6 addresses is used to create a VM instance that serves as a NAT gateway.
Console
1. In the Trusted Cloud console, go to the VPC networks page.
2. To view the VPC network details page, click the name of the VPC network that you created.
3. On the Subnets tab, click Add subnet. In the panel that appears, specify the following configuration parameters for the subnet:
   - Name: `lb-subnet-ipv6-only-external`
   - Region: `us-west1`
   - IP stack type: IPv6 (single-stack)
   - IPv6 access type: External
4. Click Add.
gcloud
Configure a subnet with the `ipv6-access-type` set to `EXTERNAL`. This indicates that the VMs in this subnet can have external IPv6 addresses. In this example, the subnet is named `lb-subnet-ipv6-only-external`.

To create the subnet, run the `gcloud compute networks subnets create` command.

```
gcloud compute networks subnets create lb-subnet-ipv6-only-external \
    --network=lb-network-ipv6-only \
    --region=us-west1 \
    --stack-type=IPV6_ONLY \
    --ipv6-access-type=EXTERNAL
```
Configure firewall rules in the VPC network
This example uses the following firewall rules:
- `fw-allow-lb-access-ipv6-only`: an ingress rule, applicable to all targets in the VPC network, that allows traffic from all IPv6 sources.
- `fw-allow-ssh`: an ingress rule that allows incoming SSH connectivity on TCP port `22` from any address. You can choose a more restrictive source IP range for this rule; for example, you can specify just the IP ranges of the system from which you initiate SSH sessions. This example uses the target tag `allow-ssh` to identify the VMs to which it must apply.
- `fw-allow-health-check-ipv6-only`: an ingress rule, applicable to the instances being load balanced, that allows traffic from the Trusted Cloud health checking systems (`2600:2d00:1:b029::/64`). This example uses the target tag `allow-health-check-ipv6` to identify the instances to which it must apply.
Without these firewall rules, the default deny ingress rule blocks incoming traffic to the backend instances.
Console
1. In the Trusted Cloud console, go to the Firewall policies page.
2. To allow IPv6 subnet traffic, click Create firewall rule and enter the following information:
   - Name: `fw-allow-lb-access-ipv6-only`
   - Network: `lb-network-ipv6-only`
   - Priority: `1000`
   - Direction of traffic: Ingress
   - Action on match: Allow
   - Targets: All instances in the network
   - Source filter: IPv6 ranges
   - Source IPv6 ranges: `::/0`
   - Protocols and ports: Allow all
3. Click Create.
4. To allow incoming SSH connections, click Create firewall rule again and enter the following information:
   - Name: `fw-allow-ssh`
   - Network: `lb-network-ipv6-only`
   - Priority: `1000`
   - Direction of traffic: Ingress
   - Action on match: Allow
   - Targets: Specified target tags
   - Target tags: `allow-ssh`
   - Source filter: IPv6 ranges
   - Source IPv6 ranges: `::/0`
   - Protocols and ports: Select Specified protocols and ports, select the TCP checkbox, and then enter `22` in Ports.
5. Click Create.
6. To allow Trusted Cloud IPv6 health checks, click Create firewall rule again and enter the following information:
   - Name: `fw-allow-health-check-ipv6-only`
   - Network: `lb-network-ipv6-only`
   - Priority: `1000`
   - Direction of traffic: Ingress
   - Action on match: Allow
   - Targets: Specified target tags
   - Target tags: `allow-health-check-ipv6`
   - Source filter: IPv6 ranges
   - Source IPv6 ranges: `2600:2d00:1:b029::/64`
   - Protocols and ports: Allow all
7. Click Create.
gcloud
1. Create the `fw-allow-lb-access-ipv6-only` firewall rule to allow all inbound IPv6 traffic to all VM instances in the VPC network:

   ```
   gcloud compute firewall-rules create fw-allow-lb-access-ipv6-only \
       --network=lb-network-ipv6-only \
       --action=allow \
       --direction=ingress \
       --source-ranges=::/0 \
       --rules=all
   ```

2. Create the `fw-allow-ssh` firewall rule to allow SSH connectivity to VMs with the network tag `allow-ssh`. When you omit `source-ranges`, Trusted Cloud interprets the rule to mean any source.

   ```
   gcloud compute firewall-rules create fw-allow-ssh \
       --network=lb-network-ipv6-only \
       --action=allow \
       --direction=ingress \
       --target-tags=allow-ssh \
       --source-ranges=::/0 \
       --rules=tcp:22
   ```

3. Create the `fw-allow-health-check-ipv6-only` rule to allow Trusted Cloud IPv6 health checks:

   ```
   gcloud compute firewall-rules create fw-allow-health-check-ipv6-only \
       --network=lb-network-ipv6-only \
       --action=allow \
       --direction=ingress \
       --target-tags=allow-health-check-ipv6 \
       --source-ranges=2600:2d00:1:b029::/64 \
       --rules=tcp,udp
   ```
Create a VM instance that serves as a NAT gateway
In this example, a Bash startup script configures the NAT gateway VM to modify IPv6 packets at the Linux kernel level. The script modifies the `POSTROUTING` chain of the `ip6tables` NAT table to masquerade the source address of all outgoing IPv6 packets, replacing it with the external IPv6 address of the VM's network interface.

You also need to enable IP forwarding for this instance.
To create a VM instance that serves as a NAT gateway, do the following:
Console
1. In the Trusted Cloud console, go to the VM instances page.
2. Click Create instance.
3. For Name, enter `nat-gateway-instance`.
4. For Region, select `us-west1`, and for Zone, select `us-west1-a`.
5. In the Boot disk section, ensure that Debian GNU/Linux 12 (bookworm) is selected for the boot disk options. If necessary, click Change to change the image.
6. Click Networking and configure the following fields:
   - For Network tags, enter `allow-ssh`.
   - For IP forwarding, select the Enable checkbox.
   - For Network interfaces, select the following:
     - Network: `lb-network-ipv6-only`
     - Subnet: `lb-subnet-ipv6-only-external`
     - IP stack type: IPv6 (single-stack)
     - External IPv6 address: Auto-Allocate
7. Click Advanced and in the Startup script field, enter the following script:

   ```bash
   #!/bin/bash
   set -e
   echo "Starting GCE startup script..."

   # --- IPv6 NAT table configuration ---
   echo "Modifying the source IPv6 address using the NAT table"

   # Enable IPv6 forwarding
   sysctl -w net.ipv6.conf.all.forwarding=1

   # Determine the primary network interface (assuming it's the last one listed)
   IFACE=$(ip -brief link | tail -1 | awk '{print $1}')
   echo "Using interface: $IFACE for IPv6 NAT"

   # Flush existing IPv6 NAT rules
   ip6tables -F -t nat
   ip6tables -X -t nat

   # Masquerade all outgoing IPv6 traffic on the determined interface
   ip6tables -t nat -A POSTROUTING -o "$IFACE" -j MASQUERADE

   echo "IPv6 masquerading configured successfully."
   echo "GCE startup script finished."
   ```

8. Click Create.
gcloud
1. Create a startup script:

   ```
   nano startup.sh
   ```

2. Add the following script and save the file:

   ```bash
   #!/bin/bash
   set -e
   echo "Starting GCE startup script..."

   # --- IPv6 NAT table configuration ---
   echo "Modifying the source IPv6 address using the NAT table"

   # Enable IPv6 forwarding
   sysctl -w net.ipv6.conf.all.forwarding=1

   # Determine the primary network interface (assuming it's the last one listed)
   IFACE=$(ip -brief link | tail -1 | awk '{print $1}')
   echo "Using interface: $IFACE for IPv6 NAT"

   # Flush existing IPv6 NAT rules
   ip6tables -F -t nat
   ip6tables -X -t nat

   # Masquerade all outgoing IPv6 traffic on the determined interface
   ip6tables -t nat -A POSTROUTING -o "$IFACE" -j MASQUERADE

   echo "IPv6 masquerading configured successfully."
   echo "GCE startup script finished."
   ```

3. Create a VM instance and add the metadata file to the VM instance. To use this VM as a next hop for a route, use the `--can-ip-forward` flag to enable IP forwarding for this instance.

   ```
   gcloud compute instances create nat-gateway-instance \
       --zone=us-west1-a \
       --tags=allow-ssh \
       --image-family=debian-12 \
       --image-project=debian-cloud \
       --subnet=lb-subnet-ipv6-only-external \
       --stack-type=IPV6_ONLY \
       --can-ip-forward \
       --metadata-from-file=startup-script=startup.sh
   ```
Create a new static route in the VPC network
In this example, a custom route is created that directs all IPv6 internet-bound traffic (`::/0`) from VMs tagged `nat-gw-tag` to the `nat-gateway-instance` VM, which serves as the NAT gateway.
To create a route, do the following:
Console
1. In the Trusted Cloud console, go to the Routes page.
2. Click the Route management tab.
3. Click Create route.
4. Specify a name and a description for the route.
5. In the Network list, select the VPC network `lb-network-ipv6-only`.
6. In the IP version list, select IPv6.
7. Specify a destination IPv6 range. The broadest possible destination is `::/0` for IPv6.
8. To make the route applicable only to select instances with matching network tags, specify those in the Instance tags field. Leave the field blank to make the route applicable to all instances in the network. For this example, enter `nat-gw-tag`.
9. For the next hop, select Specify an instance.
10. Select the name of the instance that you created to serve as the NAT gateway. For this example, select `nat-gateway-instance`.
11. Click Create.
gcloud
Use the `gcloud compute routes create` command to create a new route. The packet is forwarded to the `nat-gateway-instance` VM instance as specified by the `--next-hop-instance` flag of the route.

```
gcloud compute routes create route-1 \
    --network=lb-network-ipv6-only \
    --priority=1000 \
    --tags=nat-gw-tag \
    --destination-range=::/0 \
    --next-hop-instance=nat-gateway-instance \
    --next-hop-instance-zone=us-west1-a
```
Create backend VMs and instance groups
This example uses two unmanaged instance groups, each having two backend
VMs. To demonstrate the regional nature of internal passthrough Network Load Balancers,
the two instance groups are placed in separate zones, us-west1-a
and us-west1-c
.
- Instance group `ig-a` contains these two VMs: `vm-a1` and `vm-a2`
- Instance group `ig-c` contains these two VMs: `vm-c1` and `vm-c2`
Traffic to all four of the backend VMs is load balanced.
In this example, the static route that was created in an earlier step is scoped to specific VM instances by using the network tag `nat-gw-tag`.
Console
Create backend VMs
1. In the Trusted Cloud console, go to the VM instances page.
2. Repeat these steps for each VM, using the following name and zone combinations:
   - Name: `vm-a1`, zone: `us-west1-a`
   - Name: `vm-a2`, zone: `us-west1-a`
   - Name: `vm-c1`, zone: `us-west1-c`
   - Name: `vm-c2`, zone: `us-west1-c`
3. Click Create instance.
4. Set the Name as indicated in step 2.
5. For Region, select `us-west1`, and choose a Zone as indicated in step 2.
6. In the Boot disk section, ensure that Debian GNU/Linux 12 (bookworm) is selected for the boot disk options. If necessary, click Change to change the image.
7. Click Advanced options.
8. Click Networking and configure the following fields:
   - For Network tags, enter `allow-ssh`, `allow-health-check-ipv6`, and `nat-gw-tag`.
   - For Network interfaces, select the following:
     - Network: `lb-network-ipv6-only`
     - Subnet: `lb-subnet-ipv6-only-internal`
     - IP stack type: IPv6 (single-stack)
     - Primary internal IPv6 address: Ephemeral (Automatic)
9. Click Advanced, and then in the Startup script field, enter the following script. The script contents are identical for all four VMs.

   ```bash
   #! /bin/bash
   apt-get update
   apt-get install apache2 -y
   a2ensite default-ssl
   a2enmod ssl
   vm_hostname="$(curl -H "Metadata-Flavor:Google" \
     http://metadata.google.internal/computeMetadata/v1/instance/name)"
   echo "Page served from: $vm_hostname" | \
     tee /var/www/html/index.html
   systemctl restart apache2
   ```

10. Click Create.
Create instance groups
1. In the Trusted Cloud console, go to the Instance groups page.
2. Repeat the following steps to create two unmanaged instance groups, each with two VMs in them, using these combinations:
   - Instance group name: `ig-a`, zone: `us-west1-a`, VMs: `vm-a1` and `vm-a2`
   - Instance group name: `ig-c`, zone: `us-west1-c`, VMs: `vm-c1` and `vm-c2`
3. Click Create instance group.
4. Click New unmanaged instance group.
5. Set Name as indicated in step 2.
6. In the Location section, select `us-west1` for the Region, and then choose a Zone as indicated in step 2.
7. For Network, select `lb-network-ipv6-only`.
8. For Subnetwork, select `lb-subnet-ipv6-only-internal`.
9. In the VM instances section, add the VMs as indicated in step 2.
10. Click Create.
gcloud
1. To create the four VMs, run the `gcloud compute instances create` command four times, using the following four combinations for `VM-NAME` and `ZONE`:
   - `VM-NAME`: `vm-a1`, `ZONE`: `us-west1-a`
   - `VM-NAME`: `vm-a2`, `ZONE`: `us-west1-a`
   - `VM-NAME`: `vm-c1`, `ZONE`: `us-west1-c`
   - `VM-NAME`: `vm-c2`, `ZONE`: `us-west1-c`

   ```
   gcloud compute instances create VM-NAME \
       --zone=ZONE \
       --image-family=debian-12 \
       --image-project=debian-cloud \
       --tags=allow-ssh,allow-health-check-ipv6,nat-gw-tag \
       --subnet=lb-subnet-ipv6-only-internal \
       --stack-type=IPV6_ONLY \
       --metadata=startup-script='#! /bin/bash
   apt-get update
   apt-get install apache2 -y
   a2ensite default-ssl
   a2enmod ssl
   vm_hostname="$(curl -H "Metadata-Flavor:Google" \
   http://metadata.google.internal/computeMetadata/v1/instance/name)"
   echo "Page served from: $vm_hostname" | \
   tee /var/www/html/index.html
   systemctl restart apache2'
   ```

2. Create the two unmanaged instance groups in each zone:

   ```
   gcloud compute instance-groups unmanaged create ig-a \
       --zone=us-west1-a
   gcloud compute instance-groups unmanaged create ig-c \
       --zone=us-west1-c
   ```

3. Add the VMs to the appropriate instance groups:

   ```
   gcloud compute instance-groups unmanaged add-instances ig-a \
       --zone=us-west1-a \
       --instances=vm-a1,vm-a2
   gcloud compute instance-groups unmanaged add-instances ig-c \
       --zone=us-west1-c \
       --instances=vm-c1,vm-c2
   ```
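Each backend's startup script writes a one-line page that names the VM. The following local simulation replaces the metadata-server lookup with a hard-coded name so you can see what gets written; the `/tmp` path is used only for illustration:

```shell
# Simulated locally: on a real VM, vm_hostname comes from the metadata server
# and the page is written to /var/www/html/index.html.
vm_hostname="vm-a1"
echo "Page served from: $vm_hostname" | tee /tmp/index.html
cat /tmp/index.html
```

This one-line page is what the connectivity test at the end of this document expects to receive from each backend.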
Configure load balancer components
The steps that follow configure the different components of an internal passthrough Network Load Balancer, starting with the health check and backend service, followed by the frontend.
Console
Start your configuration
In the Trusted Cloud console, go to the Load balancing page.
- Click Create load balancer.
- For Type of load balancer, select Network Load Balancer (TCP/UDP/SSL) and click Next.
- For Proxy or passthrough, select Passthrough load balancer and click Next.
- Click Configure.
Basic configuration
On the Create internal passthrough Network Load Balancer page, enter the following information:

- Load balancer name: `ilb-ipv6-only`
- Region: `us-west1`
- Network: `lb-network-ipv6-only`
Backend configuration
- Click Backend configuration.
- In the New Backend section of Backends, select the IP stack type as IPv6 (single-stack).
- In Instance group, select the `ig-a` instance group and click Done.
- Click Add a backend and repeat the step to add `ig-c`.
- From the Health check list, select Create a health check, enter the following information, and click Save:
  - Name: `hc-http-80`
  - Scope: Regional
  - Protocol: HTTP
  - Port: `80`
  - Proxy protocol: `NONE`
  - Request path: `/`
- Verify that a blue check mark appears next to Backend configuration.
Frontend configuration
- Click Frontend configuration. In the New Frontend IP and port section, do the following:
  - For Name, enter `fr-ilb-ipv6-only`.
  - To handle IPv6 traffic, do the following:
    - For IP version, select IPv6. The web servers on the backend VMs respond to traffic received through the VIP of the forwarding rule.
    - For Subnetwork, select `lb-subnet-ipv6-only-internal`. The IPv6 address range in the forwarding rule is always ephemeral.
    - For Ports, select Multiple, and then in the Port number field, enter `80`.
    - Click Done.
- Verify that there is a blue check mark next to Frontend configuration before continuing.
Review the configuration
- Click Review and finalize. Check all your settings.
- If the settings are correct, click Create. It takes a few minutes for the internal passthrough Network Load Balancer to be created.
gcloud
1. Create a new regional HTTP health check to test HTTP connectivity to the VMs on port 80:

   ```
   gcloud compute health-checks create http hc-http-80 \
       --region=us-west1 \
       --port=80
   ```

2. Create the backend service:

   ```
   gcloud compute backend-services create ilb-ipv6-only \
       --load-balancing-scheme=INTERNAL \
       --protocol=tcp \
       --region=us-west1 \
       --health-checks=hc-http-80 \
       --health-checks-region=us-west1
   ```

3. Add the two instance groups to the backend service:

   ```
   gcloud compute backend-services add-backend ilb-ipv6-only \
       --region=us-west1 \
       --instance-group=ig-a \
       --instance-group-zone=us-west1-a

   gcloud compute backend-services add-backend ilb-ipv6-only \
       --region=us-west1 \
       --instance-group=ig-c \
       --instance-group-zone=us-west1-c
   ```

4. Create the IPv6 forwarding rule with an ephemeral IPv6 address:

   ```
   gcloud compute forwarding-rules create fr-ilb-ipv6-only \
       --region=us-west1 \
       --load-balancing-scheme=INTERNAL \
       --subnet=lb-subnet-ipv6-only-internal \
       --ip-protocol=TCP \
       --ports=80 \
       --backend-service=ilb-ipv6-only \
       --backend-service-region=us-west1 \
       --ip-version=IPV6
   ```
Test your load balancer
To test the load balancer, create a client VM in the same region as the load balancer, and then send traffic from the client to the load balancer.
Create a client VM
This example creates a client VM (vm-client
) in the same region as the backend
(server) VMs.
Console
1. In the Trusted Cloud console, go to the VM instances page.
2. Click Create instance.
3. For Name, enter `vm-client`.
4. For Region, select `us-west1`.
5. For Zone, select `us-west1-a`.
6. Click Advanced options.
7. Click Networking and configure the following fields:
   - For Network tags, enter `allow-ssh`.
   - For Network interfaces, select the following:
     - Network: `lb-network-ipv6-only`
     - Subnet: `lb-subnet-ipv6-only-internal`
     - IP stack type: IPv6 (single-stack)
   - Click Done.
8. Click Create.
gcloud
The client VM can be in any zone in the same region as the load balancer. In this example, the client is in the `us-west1-a` zone, and it uses the same subnet as the backend VMs.

```
gcloud compute instances create vm-client \
    --zone=us-west1-a \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --stack-type=IPV6_ONLY \
    --tags=allow-ssh \
    --subnet=lb-subnet-ipv6-only-internal
```
Test the connection
This test contacts the load balancer from a separate client VM; that is, not from a backend VM of the load balancer. The expected behavior is for traffic to be distributed among the four backend VMs.
1. Connect to the client VM instance by using SSH:

   ```
   gcloud compute ssh vm-client --zone=us-west1-a
   ```

2. Describe the IPv6 forwarding rule `fr-ilb-ipv6-only`. Note the `IPV6_ADDRESS` in the description.

   ```
   gcloud compute forwarding-rules describe fr-ilb-ipv6-only \
       --region=us-west1
   ```

3. From clients with IPv6 connectivity, run the following command, enclosing the IPv6 address in brackets:

   ```
   curl http://[IPV6_ADDRESS]:80
   ```

   For example, if the assigned IPv6 address is `fd20:307:120c:2000:0:1:0:0/96`, omit the prefix length and run:

   ```
   curl http://[fd20:307:120c:2000:0:1:0:0]:80
   ```

   The response can be as follows:

   ```
   Page served from: vm-a2
   ```
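Two details are easy to trip over when scripting this test: the address in the `describe` output carries a prefix length that must be stripped before bracketing, and repeated requests can be tallied to observe the distribution across backends. The following sketch shows both, using a hard-coded example address and simulated responses in place of live `curl` output:

```shell
# Build a curl-ready URL from an address reported with a prefix length.
ADDR="fd20:307:120c:2000:0:1:0:0/96"     # example value from `describe`
URL="http://[${ADDR%/*}]:80"
echo "$URL"

# Tally responses; on the client VM you would pipe real output instead:
#   for i in $(seq 20); do curl -s "$URL"; done | sort | uniq -c
printf 'Page served from: vm-a1\nPage served from: vm-a2\nPage served from: vm-a1\n' \
  | sort | uniq -c
```

Because an internal passthrough Network Load Balancer hashes on connection parameters, a small number of requests from one client may not hit all four backends evenly.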
What's next
- Review the setup to configure an internal passthrough Network Load Balancer with an internal IPv6-only subnet and backend using an IPv6 TCP server installed on the backend VMs.