public sealed class WorkerPool : IMessage<WorkerPool>, IEquatable<WorkerPool>, IDeepCloneable<WorkerPool>, IBufferMessage, IMessage
Reference documentation and code samples for the Dataflow v1beta3 API class WorkerPool.
Describes one particular pool of Cloud Dataflow workers to be
instantiated by the Cloud Dataflow service in order to perform the
computations required by a job. Note that a workflow job may use
multiple pools, in order to match the different computational
requirements of the various stages of the job.
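
As a quick orientation, the sketch below constructs a WorkerPool message and sets a few of the properties documented on this page. The machine type, zone, and sizing values are illustrative placeholders, not recommendations; fields left at their zero/empty values let the service choose defaults, as noted in the individual property descriptions.

    using Google.Cloud.Dataflow.V1Beta3;

    // Minimal sketch: build a WorkerPool and set a few common fields.
    // All values are placeholders; unset/zero fields let the service
    // choose reasonable defaults.
    var workerPool = new WorkerPool
    {
        Kind = "harness",              // only "harness" and "shuffle" are supported
        MachineType = "n1-standard-4", // empty = service default
        DiskSizeGb = 0,                // 0 = service default
        Zone = "us-central1-f",        // empty = service default
    };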
### DefaultPackageSet

public DefaultPackageSet DefaultPackageSet { get; set; }

The default package set to install. This allows the service to
select a default set of packages which are useful to worker
harnesses written in a particular language.
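
For example, a pool whose workers run a Java-based harness might opt into the Java package set. Continuing the workerPool sketch above, and assuming the generated DefaultPackageSet enum exposes a Java member corresponding to the proto value DEFAULT_PACKAGE_SET_JAVA:

    // Sketch (assumption: DEFAULT_PACKAGE_SET_JAVA surfaces as
    // DefaultPackageSet.Java; a None member suppresses default packages).
    workerPool.DefaultPackageSet = DefaultPackageSet.Java;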
### NumThreadsPerWorker

public int NumThreadsPerWorker { get; set; }

The number of threads per worker harness. If empty or unspecified, the
service will choose a number of threads (according to the number of cores
on the selected machine type for batch, or 1 by convention for streaming).
### NumWorkers

public int NumWorkers { get; set; }

Number of Google Compute Engine workers in this pool needed to
execute the job. If zero or unspecified, the service will
attempt to choose a reasonable default.
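
A short sketch of explicit sizing; leaving both values at zero keeps the service-chosen defaults described above.

    // Sketch: pin the pool to 10 workers with 4 harness threads each.
    // Leaving these at 0 lets the Dataflow service pick defaults.
    workerPool.NumWorkers = 10;
    workerPool.NumThreadsPerWorker = 4;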
### SdkHarnessContainerImages

public RepeatedField<SdkHarnessContainerImage> SdkHarnessContainerImages { get; }

Set of SDK harness containers needed to execute this pipeline. This will
only be set in the Fn API path. For non-cross-language pipelines this
should have only one entry. Cross-language pipelines will have two or more
entries.
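
As an illustration, a single-language (non-cross-language) pipeline would carry exactly one entry. The sketch below assumes the SdkHarnessContainerImage message exposes ContainerImage and EnvironmentId fields; the image URI and environment id are placeholders.

    // Sketch: one SDK harness container for a single-language pipeline.
    // Field names are assumptions about SdkHarnessContainerImage;
    // the image URI and environment id are placeholders.
    workerPool.SdkHarnessContainerImages.Add(new SdkHarnessContainerImage
    {
        ContainerImage = "gcr.io/example-project/beam-python-harness:latest",
        EnvironmentId = "python-environment-0",
    });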
### TeardownPolicy

public TeardownPolicy TeardownPolicy { get; set; }

Sets the policy for determining when to turn down the worker pool.
Allowed values are: `TEARDOWN_ALWAYS`, `TEARDOWN_ON_SUCCESS`, and
`TEARDOWN_NEVER`.
`TEARDOWN_ALWAYS` means workers are always torn down regardless of whether
the job succeeds. `TEARDOWN_ON_SUCCESS` means workers are torn down
only if the job succeeds. `TEARDOWN_NEVER` means the workers are never torn
down.

If the workers are not torn down by the service, they will
continue to run and use Google Compute Engine VM resources in the
user's project until they are explicitly terminated by the user.
Because of this, Google recommends using the `TEARDOWN_ALWAYS`
policy except for small, manually supervised test jobs.

If unknown or unspecified, the service will attempt to choose a reasonable
default.
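
For most jobs you would leave this unset or reclaim workers unconditionally, as in this sketch; it assumes the generated enum surfaces `TEARDOWN_ALWAYS` as TeardownPolicy.TeardownAlways.

    // Sketch: always reclaim worker VMs when the job finishes.
    // Assumption: the C# member for TEARDOWN_ALWAYS is TeardownPolicy.TeardownAlways.
    workerPool.TeardownPolicy = TeardownPolicy.TeardownAlways;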
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-07 UTC."],[[["\u003cp\u003eThe \u003ccode\u003eWorkerPool\u003c/code\u003e class in the Dataflow v1beta3 API defines a pool of Cloud Dataflow workers for job computations, and a job may use multiple pools for different computational needs.\u003c/p\u003e\n"],["\u003cp\u003eIt allows configuring settings such as the machine type, disk size, and network for the worker VMs, while also providing options for specifying packages and container images to be used.\u003c/p\u003e\n"],["\u003cp\u003eThe \u003ccode\u003eWorkerPool\u003c/code\u003e class offers autoscaling settings to adjust the number of workers dynamically and supports different teardown policies to control when worker resources are released.\u003c/p\u003e\n"],["\u003cp\u003eUsers can set metadata for Google Compute Engine VMs, choose a default package set for worker harnesses, and manage the number of threads per worker.\u003c/p\u003e\n"],["\u003cp\u003eIt also supports the specification of SDK harness container images for cross-language pipelines, ensuring that the correct environments are set up for diverse computational tasks.\u003c/p\u003e\n"]]],[],null,["# Dataflow v1beta3 API - Class WorkerPool (2.0.0-beta07)\n\nVersion latestkeyboard_arrow_down\n\n- [2.0.0-beta07 (latest)](/dotnet/docs/reference/Google.Cloud.Dataflow.V1Beta3/latest/Google.Cloud.Dataflow.V1Beta3.WorkerPool)\n- [2.0.0-beta06](/dotnet/docs/reference/Google.Cloud.Dataflow.V1Beta3/2.0.0-beta06/Google.Cloud.Dataflow.V1Beta3.WorkerPool)\n- [1.0.0-beta03](/dotnet/docs/reference/Google.Cloud.Dataflow.V1Beta3/1.0.0-beta03/Google.Cloud.Dataflow.V1Beta3.WorkerPool) \n\n public sealed class WorkerPool : IMessage\u003cWorkerPool\u003e, IEquatable\u003cWorkerPool\u003e, IDeepCloneable\u003cWorkerPool\u003e, IBufferMessage, IMessage\n\nReference documentation and code samples for the Dataflow v1beta3 API class WorkerPool.\n\nDescribes one particular pool of Cloud Dataflow workers to be\ninstantiated by the Cloud Dataflow service in order to perform the\ncomputations required by a job. Note that a workflow job may use\nmultiple pools, in order to match the various computational\nrequirements of the various stages of the job. 
### AutoscalingSettings

public AutoscalingSettings AutoscalingSettings { get; set; }

Settings for autoscaling of this WorkerPool.

### DataDisks

public RepeatedField<Disk> DataDisks { get; }

Data disks that are used by a VM in this workflow.

### DiskSizeGb

public int DiskSizeGb { get; set; }

Size of root disk for VMs, in GB. If zero or unspecified, the service will
attempt to choose a reasonable default.

### DiskSourceImage

public string DiskSourceImage { get; set; }

Fully qualified source image for disks.

### DiskType

public string DiskType { get; set; }

Type of root disk for VMs. If empty or unspecified, the service will
attempt to choose a reasonable default.

### IpConfiguration

public WorkerIPAddressConfiguration IpConfiguration { get; set; }

Configuration for VM IPs.

### Kind

public string Kind { get; set; }

The kind of the worker pool; currently only `harness` and `shuffle`
are supported.

### MachineType

public string MachineType { get; set; }

Machine type (e.g. "n1-standard-1"). If empty or unspecified, the
service will attempt to choose a reasonable default.

### Metadata

public MapField<string, string> Metadata { get; }

Metadata to set on the Google Compute Engine VMs.

### Network

public string Network { get; set; }

Network to which VMs will be assigned. If empty or unspecified,
the service will use the network "default".
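
To tie a few of these networking and metadata fields together, here is a small hedged sketch continuing the workerPool example; the network name and metadata values are placeholders.

    // Sketch: place workers on a specific VPC network and tag the VMs with
    // Compute Engine metadata. The network name and metadata values are placeholders.
    workerPool.Network = "my-custom-network";       // empty = the "default" network
    workerPool.MachineType = "n1-standard-4";       // empty = service default
    workerPool.Metadata["team"] = "data-platform";  // arbitrary VM metadata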
### OnHostMaintenance

public string OnHostMaintenance { get; set; }

The action to take on host maintenance, as defined by the Google
Compute Engine API.

### Packages

public RepeatedField<Package> Packages { get; }

Packages to be installed on workers.

### PoolArgs

public Any PoolArgs { get; set; }

Extra arguments for this worker pool.

### Subnetwork

public string Subnetwork { get; set; }

Subnetwork to which VMs will be assigned, if desired. Expected to be of
the form "regions/REGION/subnetworks/SUBNETWORK".

### TaskrunnerSettings

public TaskRunnerSettings TaskrunnerSettings { get; set; }

Settings passed through to Google Compute Engine workers when
using the standard Dataflow task runner. Users should ignore
this field.

### WorkerHarnessContainerImage

public string WorkerHarnessContainerImage { get; set; }

Required. Docker container image that executes the Cloud Dataflow worker
harness, residing in Google Container Registry.

Deprecated for the Fn API path. Use sdk_harness_container_images instead.

### Zone

public string Zone { get; set; }

Zone to run the worker pools in. If empty or unspecified, the service
will attempt to choose a reasonable default.

Constructors
------------

### WorkerPool()

public WorkerPool()

### WorkerPool(WorkerPool)

public WorkerPool(WorkerPool other)

Inheritance
-----------

object > WorkerPool

Implements
----------

IMessage<WorkerPool>, IEquatable<WorkerPool>, IDeepCloneable<WorkerPool>, IBufferMessage, IMessage

Inherited Members
-----------------

object.GetHashCode()
object.GetType()
object.ToString()

Namespace
---------

Google.Cloud.Dataflow.V1Beta3

Assembly
--------

Google.Cloud.Dataflow.V1Beta3.dll
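
Finally, because WorkerPool implements IDeepCloneable&lt;WorkerPool&gt;, a configured pool can be deep-cloned and reused. The sketch below also attaches the pools to a job environment, on the assumption that the Environment message in this package exposes a WorkerPools repeated field.

    // Sketch: clone a configured pool (Clone() comes from the generated
    // protobuf message) and register both pools on a job environment.
    // Assumption: Environment exposes a RepeatedField<WorkerPool> WorkerPools.
    WorkerPool shufflePool = workerPool.Clone();
    shufflePool.Kind = "shuffle";

    var environment = new Google.Cloud.Dataflow.V1Beta3.Environment();
    environment.WorkerPools.Add(workerPool);
    environment.WorkerPools.Add(shufflePool);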