This document provides instructions for deploying a developer environment and using the Agent Sandbox Python client on a Google Kubernetes Engine (GKE) cluster.
For a conceptual overview of how the Agent Sandbox feature isolates untrusted AI-generated code, see About GKE Agent Sandbox.
Costs
Following the steps in this document incurs charges on your Cloud de Confiance by S3NS account. Costs begin when you create a GKE cluster. These costs include per-cluster charges for GKE, as outlined on the Pricing page, and charges for running Compute Engine VMs.
To avoid unnecessary charges, disable GKE or delete the project after you complete the steps in this document.
Before you begin
- In the Cloud de Confiance console, on the project selector page, select or create a Cloud de Confiance project.

  Roles required to select or create a project:

  - Select a project: Selecting a project doesn't require a specific IAM role—you can select any project that you've been granted a role on.
  - Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.
- Verify that billing is enabled for your Cloud de Confiance project.
- Enable the Artifact Registry and Google Kubernetes Engine APIs.

  Roles required to enable APIs:

  To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.
- In the Cloud de Confiance console, activate Cloud Shell.
- You must have a GKE cluster with the Agent Sandbox feature enabled. If you don't have one, follow the instructions in Enable Agent Sandbox on GKE to create a new cluster or update an existing one.
Define environment variables
To simplify the commands that you run in this document, define the following environment variables in Cloud Shell:
export PROJECT_ID=$(gcloud config get project)
export CLUSTER_NAME="agent-sandbox-cluster"
export REGION="us-central1"
export NODE_POOL_NAME="agent-sandbox-node-pool"
export MACHINE_TYPE="e2-standard-2"
Here's an explanation of these environment variables:
- PROJECT_ID: the ID of your current Cloud de Confiance by S3NS project. Defining this variable helps ensure that all resources, like your GKE cluster, are created in the correct project.
- CLUSTER_NAME: the name of your GKE cluster—for example, agent-sandbox-cluster.
- REGION: the Cloud de Confiance by S3NS region where your GKE cluster and Artifact Registry repository will be created—for example, us-central1. We recommend colocating them because this reduces image pull latency.
- NODE_POOL_NAME: the name of the node pool that will run sandboxed workloads—for example, agent-sandbox-node-pool.
- MACHINE_TYPE: the machine type of the nodes in your node pool—for example, e2-standard-2. For details about different machine series and choosing between options, see the Machine families resource and comparison guide.
Deploy a sandboxed environment
This section shows you how to create the sandbox blueprint
(SandboxTemplate), deploy the necessary networking router, and install the
Python client you will use to interact with the sandbox.
The recommended way to create and interact with your sandbox is by using the Agentic Sandbox Python client. This client provides an interface that simplifies the entire lifecycle of a sandbox, from creation to cleanup. It's a Python library you can use to programmatically create, use, and delete sandboxes.
The client uses a Sandbox Router as a central entry point for all traffic. In
the example described in this document, the client creates a tunnel to this
router using the command kubectl port-forward, so that you don't need to
expose any public IP addresses. Be aware that kubectl port-forward isn't a
secure solution; limit its use to development environments.
Create a SandboxTemplate and SandboxWarmPool
You now define the configuration for your sandbox by creating a
SandboxTemplate and a SandboxWarmPool resource. The SandboxTemplate acts
as a reusable blueprint that the Agent Sandbox controller uses to create
consistent, pre-configured sandbox environments. The SandboxWarmPool resource
ensures that a specified number of pre-warmed Pods are always
running and ready to be claimed. A pre-warmed sandbox is a running Pod that's
already initialized. This pre-initialization enables new sandboxes to be created
in under a second, and avoids the startup latency of launching a regular
sandbox:
1. In Cloud Shell, create a file named sandbox-template-and-pool.yaml with the following content:

   apiVersion: extensions.agents.x-k8s.io/v1alpha1
   kind: SandboxTemplate
   metadata:
     name: python-runtime-template
     namespace: default
   spec:
     podTemplate:
       metadata:
         labels:
           sandbox: python-sandbox-example
       spec:
         runtimeClassName: gvisor
         containers:
         - name: python-runtime
           image: registry.k8s.io/agent-sandbox/python-runtime-sandbox:v0.1.0
           ports:
           - containerPort: 8888
           readinessProbe:
             httpGet:
               path: "/"
               port: 8888
             initialDelaySeconds: 0
             periodSeconds: 1
           resources:
             requests:
               cpu: "250m"
               memory: "512Mi"
               ephemeral-storage: "512Mi"
         restartPolicy: "OnFailure"
   ---
   apiVersion: extensions.agents.x-k8s.io/v1alpha1
   kind: SandboxWarmPool
   metadata:
     name: python-sandbox-warmpool
     namespace: default
   spec:
     replicas: 2
     sandboxTemplateRef:
       name: python-runtime-template

2. Apply the SandboxTemplate and SandboxWarmPool manifest:

   kubectl apply -f sandbox-template-and-pool.yaml
Deploy the Sandbox Router
The Python client that you will use to create and interact with sandboxed environments uses a component called the Sandbox Router to communicate with the sandboxes.
For this example, you use the client's developer mode for testing. This mode
is intended for local development, and uses the command kubectl port-forward
to establish a direct tunnel from your local machine to the Sandbox Router
service running in the cluster. This tunneling approach avoids the need for a
public IP address or complex ingress setup, and simplifies interacting with
sandboxes from your local environment.
Follow these steps to deploy the Sandbox Router:
1. In Cloud Shell, create a file named sandbox-router.yaml with the following content:

   # A ClusterIP Service to provide a stable endpoint for the router pods.
   apiVersion: v1
   kind: Service
   metadata:
     name: sandbox-router-svc
     namespace: default
   spec:
     type: ClusterIP
     selector:
       app: sandbox-router
     ports:
     - name: http
       protocol: TCP
       port: 8080        # The port the service will listen on
       targetPort: 8080  # The port the router container listens on (from the sandbox_router/Dockerfile)
   ---
   # The Deployment to manage and run the router pods.
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: sandbox-router-deployment
     namespace: default
   spec:
     replicas: 2  # Run at least two replicas for high availability
     selector:
       matchLabels:
         app: sandbox-router
     template:
       metadata:
         labels:
           app: sandbox-router
       spec:
         # Ensure pods are spread across different zones for HA
         topologySpreadConstraints:
         - maxSkew: 1
           topologyKey: topology.kubernetes.io/zone
           whenUnsatisfiable: ScheduleAnyway
           labelSelector:
             matchLabels:
               app: sandbox-router
         containers:
         - name: router
           image: us-central1-docker.pkg.dev/k8s-staging-images/agent-sandbox/sandbox-router:v20251124-v0.1.0-10-ge26ddb2
           ports:
           - containerPort: 8080
           readinessProbe:
             httpGet:
               path: /healthz
               port: 8080
             initialDelaySeconds: 5
             periodSeconds: 5
           livenessProbe:
             httpGet:
               path: /healthz
               port: 8080
             initialDelaySeconds: 10
             periodSeconds: 10
           resources:
             requests:
               cpu: "250m"
               memory: "512Mi"
             limits:
               cpu: "1000m"
               memory: "1Gi"
           securityContext:
             runAsUser: 1000
             runAsGroup: 1000

2. Apply the manifest to deploy the router to your cluster:

   kubectl apply -f sandbox-router.yaml
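The client's developer mode establishes the port-forward tunnel for you, but it can help to know the equivalent kubectl invocation. A minimal sketch, assuming the Service name and port from the manifest above; port_forward_args is a hypothetical helper for illustration, not part of the client:

```python
import subprocess


def port_forward_args(service: str, local_port: int, remote_port: int,
                      namespace: str = "default") -> list[str]:
    # Assemble the kubectl command that tunnels a local port to the
    # named Service's port inside the cluster.
    return [
        "kubectl", "port-forward", f"svc/{service}",
        f"{local_port}:{remote_port}", "-n", namespace,
    ]


# For the router deployed above, the tunnel would be opened with:
#   subprocess.Popen(port_forward_args("sandbox-router-svc", 8080, 8080))
print(" ".join(port_forward_args("sandbox-router-svc", 8080, 8080)))
```

Running the command manually is only needed for debugging; in normal use the SandboxClient manages the tunnel's lifetime itself.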
Install the Python client
Now that the in-cluster components like the Sandbox Router are deployed, the final preparatory step is to install the Agentic Sandbox Python client on your local machine. Recall that this client is a Python library that lets you programmatically create, use, and delete sandboxes. You use it in the next section to test the environment:
1. Create and activate a Python virtual environment:

   python3 -m venv .venv
   source .venv/bin/activate

2. Install the client package:

   pip install k8s_agent_sandbox
Test the sandbox
With all the setup components in place, you can now create and interact with a sandbox using the Agentic Sandbox Python client.
1. In your agent-sandbox directory, create a Python script named test_sandbox.py with the following content:

   from agentic_sandbox import SandboxClient

   # Automatically tunnels to svc/sandbox-router-svc
   with SandboxClient(
       template_name="python-runtime-template",
       namespace="default"
   ) as sandbox:
       print(sandbox.run("echo 'Hello from the sandboxed environment!'").stdout)

2. From your terminal (with the virtual environment still active), run the test script:

   python3 test_sandbox.py
You should see the message "Hello from the sandboxed environment!", which is output from the sandbox.
Congratulations! You have successfully run a shell command inside a secure
sandbox. Using the sandbox.run() method, you can execute any shell command,
and the Agent Sandbox runs the command within a secure barrier that protects
your cluster's nodes and other workloads from untrusted code. This provides a
safe and reliable way for an AI agent or any automated workflow to execute
tasks.
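Because sandbox.run() takes an ordinary shell command string, any untrusted input that you embed in such a string should be quoted before it reaches the sandbox. A minimal sketch using Python's standard shlex module; quoted_command is a hypothetical helper, and the commented usage assumes the sandbox object from the test script above:

```python
import shlex


def quoted_command(program: str, *args: str) -> str:
    # Build a shell command string with every argument safely quoted,
    # so untrusted input stays a literal argument instead of becoming
    # extra shell syntax.
    return " ".join(shlex.quote(part) for part in (program, *args))


# Hypothetical usage inside the `with SandboxClient(...)` block:
#   result = sandbox.run(quoted_command("echo", untrusted_text))
#   print(result.stdout)
print(quoted_command("echo", "hello; rm -rf /"))
```

Even though the sandbox contains the blast radius of an injected command, quoting keeps the command's behavior predictable.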
When you run the script, the SandboxClient handles all the steps for you. It
creates the SandboxClaim resource to start the sandbox, waits for the sandbox
to be ready, and then uses the sandbox.run() method to execute bash shell
commands inside the secure container. The client then captures and prints the
stdout from that command. The sandbox is automatically deleted after the
program runs.
When a SandboxClaim resource is created, an available sandbox is assigned from
the warm pool to the Sandbox object and the claim is marked ready. The
SandboxWarmPool then automatically replenishes itself to maintain the
configured number of replicas.
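The replenishment rule reduces to a simple comparison between the number of idle pre-warmed Pods and the configured replica count. A minimal sketch of that logic; pods_to_create is an illustrative function, not part of the controller:

```python
def pods_to_create(idle_pods: int, desired_replicas: int) -> int:
    # After a claim removes a Pod from the warm pool, the controller
    # creates enough new pre-warmed Pods to restore the configured count.
    return max(0, desired_replicas - idle_pods)


# With `replicas: 2` from the SandboxWarmPool manifest, claiming one Pod
# leaves one idle, so one replacement Pod is created.
print(pods_to_create(1, 2))
```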
To verify whether a specific sandbox is claimed or available, check the
ownerReferences field in the sandbox Pod's metadata. If the value of the
kind field is Sandbox, the Pod is in use. If the value is SandboxWarmPool,
the Pod is idle and waiting to be claimed.
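This check can be scripted. A minimal sketch: sandbox_pod_state is a hypothetical helper, and the commented usage assumes the official kubernetes Python client and the Pod label from the SandboxTemplate above:

```python
from typing import Sequence


def sandbox_pod_state(owner_kinds: Sequence[str]) -> str:
    # A Pod owned by a Sandbox object has been claimed and is in use;
    # a Pod still owned by the SandboxWarmPool is idle and claimable.
    if "Sandbox" in owner_kinds:
        return "claimed"
    if "SandboxWarmPool" in owner_kinds:
        return "idle"
    return "unknown"


# Hypothetical usage with the `kubernetes` Python client:
#   from kubernetes import client, config
#   config.load_kube_config()
#   pods = client.CoreV1Api().list_namespaced_pod(
#       "default", label_selector="sandbox=python-sandbox-example")
#   for pod in pods.items:
#       kinds = [ref.kind for ref in (pod.metadata.owner_references or [])]
#       print(pod.metadata.name, sandbox_pod_state(kinds))
```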
Clean up resources
To avoid incurring charges to your Cloud de Confiance by S3NS account, you should delete the GKE cluster that you created:
gcloud container clusters delete $CLUSTER_NAME --location=$REGION --quiet
What's next
- Learn how to Save and restore Agent Sandbox environments with Pod snapshots.
- Learn more about the Agent Sandbox open-source project on GitHub.
- To understand the underlying technology that provides security isolation for your workloads, see GKE Sandbox.
- For more information about enhancing security for your clusters and workloads, see GKE security overview.