Setting up a GCP Integration
CloudQuery Platform authenticates with GCP through Service Accounts using JSON key files.
The GCP integration supports additional authentication methods — including Workload Identity and Workload Identity Federation — when used with the CloudQuery CLI. CloudQuery Platform does not yet support these methods through the UI. See the GCP integration documentation for the full list of authentication options.
Prerequisites
- A CloudQuery Platform account with admin access
- A GCP project with permissions to create service accounts and manage IAM
- Access to the GCP Console
Step 1: Set up a service account
CloudQuery uses a service account to read resources from your GCP environment. Follow these steps to set up a new service account with read-only access:
- Open the GCP Service Accounts page
- Select the project to create the service account in (you can assign access to other projects later)
- Click Create Service Account
- Enter the details:
  - Service account display name, e.g. CloudQuery Readonly
  - Service account ID, e.g. cloudquery-readonly
  - A description, e.g. Service account for CloudQuery to fetch resources in GCP
- Click Create and Continue

- Under Basic, select the Viewer role for the service account.

- Click Continue and Done.
- In the service accounts list, click on the new service account, then go to the Keys tab. Click Add Key → Create New Key.
- Select JSON and click Create. This downloads a JSON key file to your computer. You need this file when configuring the integration in CloudQuery Platform.

Store the JSON key file securely. It contains credentials that grant access to your GCP resources. You will upload the contents to CloudQuery Platform in Step 2, after which you can delete the local file.
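If you prefer the command line, the console steps above can be sketched with the gcloud CLI. This assumes gcloud is authenticated with sufficient IAM permissions; the project ID is a placeholder, and the account name and display name mirror the examples above:

```shell
# Placeholder: replace with your GCP project ID.
PROJECT_ID="my-project"

# Create the read-only service account (mirrors the console steps above).
gcloud iam service-accounts create cloudquery-readonly \
  --project="$PROJECT_ID" \
  --display-name="CloudQuery Readonly" \
  --description="Service account for CloudQuery to fetch resources in GCP"

# Grant the Viewer role on the project.
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:cloudquery-readonly@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role="roles/viewer"

# Create and download a JSON key for the service account.
gcloud iam service-accounts keys create key.json \
  --iam-account="cloudquery-readonly@${PROJECT_ID}.iam.gserviceaccount.com"
```

The resulting key.json is the same file you would download from the Keys tab in the console.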
Optional: Assign organization or folder-wide access
To sync resources across all your GCP projects, grant the Viewer role to the service account at the organization or folder level:
- In the GCP Console project selection screen, select your top-level organization (or folder)
- Go to IAM and Admin → IAM, and click Grant Access
- Paste the email address of the service account you created above in the New Principals text box. Assign the Viewer role.
- Click Save
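The same organization- or folder-level grant can also be made from the CLI. ORG_ID, FOLDER_ID, and the service account email below are placeholders:

```shell
# Grant roles/viewer at the organization level (ORG_ID is a placeholder).
gcloud organizations add-iam-policy-binding ORG_ID \
  --member="serviceAccount:cloudquery-readonly@my-project.iam.gserviceaccount.com" \
  --role="roles/viewer"

# Or grant it on a folder instead (FOLDER_ID is a placeholder).
gcloud resource-manager folders add-iam-policy-binding FOLDER_ID \
  --member="serviceAccount:cloudquery-readonly@my-project.iam.gserviceaccount.com" \
  --role="roles/viewer"
```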

Optional: Assign access to individual projects
You can also grant access to specific projects if you don’t need organization-wide access:
- In the GCP Console project selection screen, select the relevant project
- Go to IAM and Admin → IAM, and click Grant Access
- Paste the email address of the service account in the New Principals text box. Assign the Viewer role.
- Click Save
Step 2: Create the integration
- In CloudQuery Platform, go to Data Pipelines → Integrations. Click Create Integration and type GCP to find the GCP integration.

- Choose a name for your integration (e.g. GCP) and update the YAML configuration. Here is a complete example:
```yaml
kind: source
spec:
  name: gcp
  path: cloudquery/gcp
  registry: cloudquery
  version: "v22.0.0"
  tables:
    - gcp_compute_instances
    - gcp_storage_buckets
  spec:
    service_account_key_json: |
      ${SERVICE_ACCOUNT_KEY_JSON}
```

The pipe symbol (|) followed by a newline is required for the service_account_key_json field. Without it, YAML parsing will not work as expected because the JSON key contains special characters.
The tables list above is an example. Customize it to include the tables you need. See the full GCP table list for all available tables. Use ["*"] to sync all tables.
- Add a new secret with Key SERVICE_ACCOUNT_KEY_JSON.
- In a text editor, open the JSON key file you downloaded from GCP in Step 1, and copy-paste the entire contents into the Value field.

- Click Test Connection to verify the setup.
After a successful test connection, you can safely delete the JSON key file from your local disk. The credentials are stored securely in CloudQuery Platform.
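If the test connection fails with an invalid-key error, a quick local sanity check of the downloaded file can help before re-pasting it. This is an illustrative stdlib-only sketch; the helper name and the field list are ours, and real key files contain additional fields:

```python
import json

# A subset of the fields every GCP service account JSON key file contains.
REQUIRED_FIELDS = {"type", "project_id", "private_key", "client_email"}

def validate_key_file(path: str) -> dict:
    """Load a service account key file and check that it looks complete.

    Raises ValueError if the file is missing expected fields, which is
    a common cause of "Invalid JSON key" errors (e.g. a partial paste).
    """
    with open(path) as f:
        key = json.load(f)  # fails loudly if the JSON itself is malformed
    missing = REQUIRED_FIELDS - key.keys()
    if missing:
        raise ValueError(f"key file is missing fields: {sorted(missing)}")
    if key.get("type") != "service_account":
        raise ValueError(f"unexpected key type: {key.get('type')!r}")
    return key
```

Running this on the file you downloaded in Step 1 confirms that you are pasting a complete, well-formed key into the secret Value field.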
What gets synced
The GCP integration can sync hundreds of tables across GCP services. Some of the most commonly used tables include:
| Category | Tables | Description |
|---|---|---|
| Compute | gcp_compute_instances, gcp_compute_disks | VM instances, persistent disks |
| Storage | gcp_storage_buckets | Cloud Storage buckets |
| Networking | gcp_compute_networks, gcp_compute_firewalls, gcp_compute_subnetworks | VPCs, firewall rules, subnets |
| Containers | gcp_container_clusters | GKE clusters |
| Identity | gcp_iam_service_accounts | IAM service accounts |
See the full GCP table list for all available tables.
Verify the integration
After your first sync completes, open the SQL Console and run these queries to confirm data arrived:
```sql
-- Count synced compute instances
SELECT count(*) FROM gcp_compute_instances;

-- List synced GCP projects
SELECT DISTINCT project_id FROM gcp_compute_instances;

-- View storage buckets
SELECT project_id, name, location FROM gcp_storage_buckets LIMIT 10;
```

You can also browse your GCP resources in the Asset Inventory under the Compute, Storage, Networking, and other categories.
Troubleshooting
| Issue | Cause | Fix |
|---|---|---|
| Invalid JSON key | Malformed or incomplete JSON | Verify you copied the entire contents of the JSON key file, including the opening { and closing }. Re-download the key from GCP if needed. |
| Permission denied | Service account lacks Viewer role | Verify the service account has the Viewer role on the project, folder, or organization you want to sync. |
| Project not found | Service account not granted access | The service account can only access projects where it has been granted IAM permissions. Grant the Viewer role on each project, or at the organization/folder level. |
| API not enabled | Required GCP API is disabled | Some resources require specific APIs to be enabled in the project (e.g., Compute Engine API, Cloud Storage API). Enable them in the GCP API Library. |
| No data for some resources | Tables not specified | Check that the relevant tables are listed in the tables field of your YAML configuration. |
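For the "API not enabled" case, the required APIs can also be enabled from the CLI. This sketch assumes gcloud is authenticated; the project ID is a placeholder, and the service names correspond to the example tables above:

```shell
# Enable the APIs backing the commonly synced tables (project ID is a placeholder).
gcloud services enable \
  compute.googleapis.com \
  storage.googleapis.com \
  container.googleapis.com \
  --project="my-project"
```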
Next steps
- Set up a sync to schedule when your GCP data is fetched
- Browse synced resources in the Asset Inventory
- Run advanced queries in the SQL Console
- See the GCP integration documentation for full configuration options and table reference