
Announcing the CloudQuery Snowflake Destination Integration

Yevgeny Pats

Introduction #

Cloud infrastructure data exploded over the last decade. Some teams using CloudQuery now collect data on more than 50 million (!) of their organization's cloud resources on a daily basis. This explosion has created the need to store this data in data warehouses and data lakes for better scalability, analysis and reporting. Today I'm excited to announce the release of the new CloudQuery Snowflake destination integration, which enables you to achieve this by syncing all supported CloudQuery source integrations directly to a Snowflake database.

Use Cases #

Cloud Infrastructure data lake and data warehouse #

Snowflake can already serve as a "security data lake". CloudQuery brings all of your cloud infrastructure configuration data into the same place, consolidating security and infrastructure data and enabling new insights.

Historical data #

Maintaining historical data is a common use case for data warehouses. With Snowflake as a destination, you can store all of your cloud infrastructure data in a data warehouse for long-term storage, analysis, and investigation.

Syncing data #

Syncing data to Snowflake works the same as with any other destination plugin, so check out the CloudQuery Quickstart Guide and the Snowflake plugin documentation.
There are two ways to sync data to Snowflake:
  1. Direct (easy, but not recommended for production or large data sets): This is the default mode of operation, where the CloudQuery plugin streams results directly to the Snowflake database. No additional setup is needed beyond authenticating to Snowflake.
  2. Loading CSV/JSON files from remote storage: This is the standard way of loading data into Snowflake and is recommended for production and large data sets. This mode requires remote storage (e.g. S3, GCS, Azure Blob Storage) and a Snowflake stage. The CloudQuery plugin streams results to the remote storage, and you can then load those files via a cron job or via Snowpipe. A guide for this method is still in the works and will be published soon.
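For the direct mode, a minimal destination spec might look like the following sketch; the version shown is a placeholder, and the exact field names and connection string format should be verified against the Snowflake plugin documentation:

```yaml
kind: destination
spec:
  name: snowflake
  path: cloudquery/snowflake
  version: "v1.0.0" # placeholder: use the latest version from the plugin docs
  spec:
    # Typical format: user:password@account/database/schema?warehouse=warehouse_name
    connection_string: "${SNOWFLAKE_CONNECTION_STRING}"
```

With a matching source spec, the sync runs via the CLI, e.g. `cloudquery sync source.yml destination.yml`.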
Once data is synced, you can start querying with the native Snowflake interface, from code, or with any BI tool that supports Snowflake. Here is the "Hello World" of cloud infrastructure queries:
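A minimal sketch, assuming the AWS source integration has been synced (actual table names depend on which source integrations you use):

```sql
-- List all EC2 instances synced from the AWS source integration
SELECT * FROM aws_ec2_instances;
```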

Summary #

This is the first release of the Snowflake destination integration. More improvements are coming, and I'd love to hear your use cases and ideas on how to leverage it. Feel free to open an issue on GitHub or join the CloudQuery Community to share your ideas.

Ready to dive deeper? Contact CloudQuery here or join the CloudQuery Community to connect with other users and experts. You can also try out CloudQuery locally with our quick start guide or explore the CloudQuery Platform (currently in beta) for a more scalable solution.
Written by Yevgeny Pats

Yevgeny Pats is the Co-Founder & CEO at CloudQuery. Prior to establishing CloudQuery, he successfully founded and exited other startups. He has a background in software engineering and cybersecurity.


© 2024 CloudQuery, Inc. All rights reserved.
