[[["わかりやすい","easyToUnderstand","thumb-up"],["問題の解決に役立った","solvedMyProblem","thumb-up"],["その他","otherUp","thumb-up"]],[["必要な情報がない","missingTheInformationINeed","thumb-down"],["複雑すぎる / 手順が多すぎる","tooComplicatedTooManySteps","thumb-down"],["最新ではない","outOfDate","thumb-down"],["翻訳に関する問題","translationIssue","thumb-down"],["サンプル / コードに問題がある","samplesCodeIssue","thumb-down"],["その他","otherDown","thumb-down"]],["最終更新日 2025-08-08 UTC。"],[],[],null,["# Known issues\n\n\u003cbr /\u003e\n\n[MySQL](/sql/docs/mysql/known-issues \"View this page for the MySQL database engine\") \\| PostgreSQL \\| [SQL Server](/sql/docs/sqlserver/known-issues \"View this page for the SQL Server database engine\")\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\nThis page lists known issues with Cloud SQL for PostgreSQL, along with\nways you can avoid or recover from these issues.\nIf you are experiencing issues with your instance, make sure you also review the information in [Diagnosing Issues](/sql/docs/postgres/diagnose-issues).\n\n### Instance connection issues\n\n- Expired SSL/TLS certificates\n\n\n If your instance is configured to use SSL, go to the\n [Cloud SQL Instances page](https://console.cloud.google.com/sql/instances)\n in the Google Cloud console and open the instance. Open its **Connections** page, select the\n **Security** tab and make sure that your server certificate is valid. If it has expired, you must\n add a new certificate and rotate to it.\n\n \u003cbr /\u003e\n\n- Cloud SQL Auth Proxy version\n\n If you are connecting using the Cloud SQL Auth Proxy, make sure you are using the\n most recent version. For more information, see\n [Keeping the Cloud SQL Auth Proxy up to date](/sql/docs/postgres/sql-proxy#keep-current).\n- Not authorized to connect\n\n If you try to connect to an instance that does not exist in that project,\n the error message only says that you are not authorized to access that\n instance.\n- Can't create a Cloud SQL instance\n\n If you see the `Failed to create subnetwork. Router status is temporarily\n unavailable. Please try again later. Help Token: [token-ID]` error\n message, try to create the Cloud SQL instance again.\n\n\u003c!-- --\u003e\n\n- The following only works with the default user ('postgres'):\n `gcloud sql connect --user`\n\n If you try to connect using this command with any other user, the error\n message says \u003cvar translate=\"no\"\u003eFATAL: database 'user' does not exist\u003c/var\u003e. The\n workaround is to connect using the default user ('postgres'), then use\n the `\"\\c\"` psql command to reconnect as the different user.\n\n\u003c!-- --\u003e\n\n- PostgreSQL connections hang when IAM db proxy authentication is enabled.\n\n When the [Cloud SQL Auth Proxy is started using TCP sockets](/sql/docs/postgres/connect-auth-proxy#start-proxy) and with the `-enable_iam_login` flag,\n then a PostgreSQL client hangs during TCP connection. One workaround\n is to use `sslmode=disable` in the PostgreSQL connection\n string. For example: \n\n ```bash\n psql \"host=127.0.0.1 dbname=postgres user=me@google.com sslmode=disable\"\n ```\n\n Another workaround is to [start the Cloud SQL Auth Proxy using Unix sockets](/sql/docs/postgres/connect-auth-proxy#start-proxy).\n This turns off PostgreSQL SSL encryption and lets the Cloud SQL Auth Proxy do the SSL\n encryption instead.\n\n### Administrative issues\n\n- Only one long-running Cloud SQL import or export operation can run at a time on an instance. 
### Issues with importing and exporting data

- If your Cloud SQL instance uses PostgreSQL 17, but your databases use PostgreSQL 16 or earlier, then you can't use Cloud SQL to import these databases into your instance. To do this, use [Database Migration Service](/database-migration/docs).

- If you use Database Migration Service to import a PostgreSQL 17 database into Cloud SQL, then it's imported as a PostgreSQL 16 database.

- For PostgreSQL versions 15 and later, if the target database is created from `template0`, then importing data might fail with a `permission denied for schema public` error message. To resolve this issue, grant public schema privileges to the `cloudsqlsuperuser` user by running the `GRANT ALL ON SCHEMA public TO cloudsqlsuperuser` SQL command.

- Exporting many large objects can cause the instance to become unresponsive.

  If your database contains many large objects (blobs), exporting the database can consume so much memory that the instance becomes unresponsive. This can happen even if the blobs are empty.

- Cloud SQL doesn't support customized tablespaces, but it does support data migration from customized tablespaces to the default tablespace, `pg_default`, in the destination instance. For example, if you have a tablespace named `dbspace` located at `/home/data`, then after migration all of the data inside `dbspace` is migrated to `pg_default`. However, Cloud SQL doesn't create a tablespace named `dbspace` on its disk.

- If you're trying to import and export data from a large database (for example, a database that has 500 GB of data or greater), then the import and export operations might take a long time to complete. In addition, other operations (for example, the backup operation) aren't available for you to perform while the import or export is occurring. A potential option to improve the performance of the import and export process is to [restore a previous backup](/sql/docs/postgres/backup-recovery/restoring#projectid) using `gcloud` or the API.

- Cloud Storage supports a [maximum single-object size of up to five terabytes](/storage-transfer/docs/known-limitations-transfer#object-limit). If you have databases larger than 5 TB, the export operation to Cloud Storage fails. In this case, you need to break your export files down into smaller segments, as in the sketch that follows this list.
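One way to segment an export, assuming the data is spread across multiple databases, is to export each database to its own Cloud Storage object so that no single object approaches the 5 TB limit. In this sketch, `my-instance`, `my-bucket`, and the database names are placeholders:

```bash
# Export each database to its own compressed Cloud Storage object.
# A ".gz" file extension tells Cloud SQL to compress the export.
gcloud sql export sql my-instance gs://my-bucket/sales.sql.gz \
    --database=sales
gcloud sql export sql my-instance gs://my-bucket/inventory.sql.gz \
    --database=inventory
```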
### Transaction logs and disk growth

Logs are purged once daily, not continuously. When the number of days of log retention is configured to be the same as the number of backups, a day of logging might be lost, depending on when the backup occurs. For example, setting log retention to seven days and backup retention to seven backups means that between six and seven days of logs are retained.

We recommend setting the number of backups to at least one more than the days of log retention to guarantee the specified minimum days of log retention.

**Note:** Replica instances see a storage increase when replication is suspended and then resumed later. The increase occurs because the primary instance sends the replica the transaction logs for the period of time when replication was suspended. The transaction logs bring the replica up to date with the current state of the primary instance.

### Issues related to Cloud Monitoring or Cloud Logging

Instances with the following region names are displayed incorrectly in certain contexts:

- `us-central1` is displayed as `us-central`
- `europe-west1` is displayed as `europe`
- `asia-east1` is displayed as `asia`

This issue occurs in the following contexts:

- Alerting in Cloud Monitoring
- Metrics Explorer
- Cloud Logging

You can mitigate the issue for Alerting in Cloud Monitoring and for Metrics Explorer by using [resource metadata labels](https://cloud.google.com/monitoring/api/v3/metric-model#meta-labels). Use the system metadata label `region` instead of the [cloudsql_database](https://cloud.google.com/monitoring/api/resources#tag_cloudsql_database) monitored resource label `region`.

### Issue related to deleting a PostgreSQL database

When you delete a database that was created in the Google Cloud console by using your `psql` client, you might encounter the following error:

    ERROR: must be owner of database [DATABASE_NAME]

This is a permission error, because the owner of a database created using a `psql` client doesn't have Cloud SQL `superuser` attributes. Databases created using the Google Cloud console are owned by `cloudsqlsuperuser`, and databases created using a `psql` client are owned by the users connected to that database. Because Cloud SQL is a managed service, customers can't create or have access to users with `superuser` attributes. For more information, see [Superuser restrictions and privileges](/sql/docs/postgres/users#superuser-restrictions-and-privileges).

Due to this limitation, databases created using the Google Cloud console can only be deleted using the Google Cloud console, and databases created using a `psql` client can only be deleted by connecting as the owner of the database.

To find the owner of a database, use the following command:

    SELECT d.datname as Name,
      pg_catalog.pg_get_userbyid(d.datdba) as Owner
    FROM pg_catalog.pg_database d
    WHERE d.datname = 'DATABASE_NAME';

Replace the following:

- `DATABASE_NAME`: the name of the database that you want to find owner information for.

If the owner of your database is `cloudsqlsuperuser`, then use the Google Cloud console to delete your database. If the owner of the database is a `psql` client database user, then connect as the database owner and run the `DROP DATABASE` command.
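For example, a minimal sketch of the second case, assuming a placeholder owner `myuser` and a placeholder database `mydb`:

```bash
# Connect as the database owner and drop the database. Because a database
# with active connections can't be dropped, connect to "postgres" rather
# than to the database being deleted.
psql "host=127.0.0.1 user=myuser dbname=postgres" -c 'DROP DATABASE mydb;'
```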