# Google Cloud Asset Inventory v1 API - Class BigQueryDestination (3.13.0)

public sealed class BigQueryDestination : IMessage&lt;BigQueryDestination&gt;, IEquatable&lt;BigQueryDestination&gt;, IDeepCloneable&lt;BigQueryDestination&gt;, IBufferMessage, IMessage

Reference documentation and code samples for the Google Cloud Asset Inventory v1 API class BigQueryDestination.

A BigQuery destination for exporting assets to.
### Dataset

public string Dataset { get; set; }

Required. The BigQuery dataset in the format
"projects/projectId/datasets/datasetId" to which the snapshot result
should be exported. If this dataset does not exist, the export call returns
an INVALID_ARGUMENT error. Setting the `contentType` for `exportAssets`
determines the
[schema](https://cloud.google.com/asset-inventory/docs/exporting-to-bigquery#bigquery-schema)
of the BigQuery table. Setting `separateTablesPerAssetType` to `TRUE` also
influences the schema.
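The service validates the dataset path itself and returns INVALID_ARGUMENT when it is malformed or missing; as an illustration of the documented format, a hypothetical client-side sanity check could look like this (the helper name and pattern are not part of the API):

```python
import re

# Matches the documented format "projects/projectId/datasets/datasetId".
DATASET_PATTERN = re.compile(r"^projects/[^/]+/datasets/[^/]+$")

def looks_like_dataset_path(dataset: str) -> bool:
    """Hypothetical sanity check for the Dataset property's expected format."""
    return DATASET_PATTERN.match(dataset) is not None
```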
### Force

public bool Force { get; set; }

If the destination table already exists and this flag is `TRUE`, the
table will be overwritten by the contents of the assets snapshot. If the flag
is `FALSE` or unset and the destination table already exists, the export
call returns an INVALID_ARGUMENT error.
### PartitionSpec

public PartitionSpec PartitionSpec { get; set; }

[partition_spec] determines whether to export to partitioned table(s) and
how to partition the data.

If [partition_spec] is unset, or [partition_spec.partition_key] is unset or
`PARTITION_KEY_UNSPECIFIED`, the snapshot results will be exported to
non-partitioned table(s); [force] decides whether to overwrite any existing
table(s).

If [partition_spec] is specified: first, the snapshot results will be
written to partitioned table(s) with two additional timestamp columns,
readTime and requestTime, one of which will be the partition key. Second,
if any destination table already exists, the export will first try to
update the existing table's schema as necessary by appending additional
columns. Then, if [force] is `TRUE`, the corresponding partition will be
overwritten by the snapshot results (data in other partitions will
remain intact); if [force] is unset or `FALSE`, the data will be appended. An
error is returned if the schema update or the data append fails.
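In the REST form of the export request, these two fields sit together under `outputConfig.bigqueryDestination`. A sketch of such a request body, assuming a hypothetical project and dataset (field names follow the camelCase REST mapping of the proto fields):

```json
{
  "outputConfig": {
    "bigqueryDestination": {
      "dataset": "projects/my-project/datasets/my_dataset",
      "table": "mytable",
      "force": true,
      "partitionSpec": { "partitionKey": "REQUEST_TIME" }
    }
  }
}
```

With `partitionKey` set to `REQUEST_TIME` (or `READ_TIME`), the results land in partitioned tables, and `force` controls whether the matching partition is overwritten or appended to, as described above.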
### SeparateTablesPerAssetType

public bool SeparateTablesPerAssetType { get; set; }

If this flag is `TRUE`, the snapshot results will be written to one or
multiple tables, each of which contains results of one asset type. The
[force] and [partition_spec] fields apply to each of them.

Field [table] will be concatenated with "_" and the asset type names (see
https://cloud.google.com/asset-inventory/docs/supported-asset-types for
supported asset types) to construct per-asset-type table names, in which
all non-alphanumeric characters such as "." and "/" will be substituted by
"_". Example: if field [table] is "mytable" and the snapshot results
contain "storage.googleapis.com/Bucket" assets, the corresponding table
name will be "mytable_storage_googleapis_com_Bucket". If any of these
tables does not exist, a new table with the concatenated name will be
created.
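The naming rule above can be sketched as a small function (illustrative only; the service performs this substitution itself):

```python
import re

def per_asset_type_table(table: str, asset_type: str) -> str:
    """Derive the per-asset-type table name: append "_" plus the asset type,
    with every non-alphanumeric character replaced by "_"."""
    return table + "_" + re.sub(r"[^a-zA-Z0-9]", "_", asset_type)
```

For example, `per_asset_type_table("mytable", "storage.googleapis.com/Bucket")` yields the documented `"mytable_storage_googleapis_com_Bucket"`.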
When [content_type] in the ExportAssetsRequest is `RESOURCE`, the schema of
each table will include RECORD-type columns mapped to the nested fields in
the Asset.resource.data field of that asset type, up to the 15 nested
levels BigQuery supports
(https://cloud.google.com/bigquery/docs/nested-repeated#limitations). Fields
nested more than 15 levels deep will be stored as a JSON-format string in a
child column of their parent RECORD column.
If an error occurs while exporting to any table, the whole export call will
return an error, but the export results that already succeeded will persist.
Example: if exporting to table_type_A succeeds while exporting to
table_type_B fails during one export call, the results in table_type_A will
persist, and there will not be partial results persisting in a table.
### Table

public string Table { get; set; }

Required. The BigQuery table to which the snapshot result should be
written. If this table does not exist, a new table with the given name
will be created.
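Putting the properties together, an export could be configured as in the following sketch. It uses the Python client (`google-cloud-asset` package) for brevity; the C# client exposes the same fields as the properties documented above, and the project, dataset, and table names here are hypothetical:

```python
# Sketch only: assumes the google-cloud-asset package is installed and
# application-default credentials are available.
from google.cloud import asset_v1

client = asset_v1.AssetServiceClient()
request = asset_v1.ExportAssetsRequest(
    parent="projects/my-project",  # hypothetical project
    output_config=asset_v1.OutputConfig(
        bigquery_destination=asset_v1.BigQueryDestination(
            dataset="projects/my-project/datasets/my_dataset",
            table="mytable",  # created if it does not already exist
            force=True,       # overwrite the existing table, if any
        )
    ),
)
operation = client.export_assets(request=request)
operation.result()  # block until the long-running export completes
```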
Namespace: Google.Cloud.Asset.V1
Assembly: Google.Cloud.Asset.V1.dll
Constructors: `BigQueryDestination()` and `BigQueryDestination(BigQueryDestination other)`

Last updated 2025-08-07 UTC.