
Introducing the new Splunk Destination Plugin

Joe Karlsson

Our new Splunk Destination Plugin is the easiest way to sync data from any data source directly into your Splunk instance. Designed for cloud architects, security teams, and DevOps professionals, this plugin makes it easy to harness the full power of Splunk’s analytics by integrating real-time data from cloud environments, including AWS, Azure, GCP, and many others. Whether you’re monitoring cloud assets or tracking security threats, this integration enables you to centralize and visualize data like never before.
In this guide, we’ll walk through setting up the AWS Source Plugin with the Splunk Destination Plugin to create an AWS Cloud Asset Inventory Dashboard in Splunk. You’ll learn how to sync data from AWS to Splunk and build dynamic dashboards to track and analyze your cloud infrastructure efficiently.

Why Splunk + CloudQuery? #

Integrating Splunk with CloudQuery significantly enhances your data analytics and security capabilities by providing deeper insights and real-time monitoring. CloudQuery brings in detailed cloud asset data that enriches Splunk’s analysis, helping contextualize incidents and reduce false positives. With CloudQuery’s connectors, you can build tailored security solutions like Cloud Asset Inventories, CSPM, CIEM, CNAPP, and FinOps tools, all customized to your needs. This integration allows you to detect new threats more effectively and unify data from multiple sources into a single view, enabling advanced threat detection, response, and compliance management.
CloudQuery’s flexible data collection framework and Splunk’s powerful querying and dashboard capabilities provide a centralized and real-time monitoring solution for your security and cloud infrastructure. By bringing your data to Splunk, you can proactively handle your security management through customizable dashboards and reports, enhancing visibility and control. Additionally, it enables the implementation of custom security policies, comprehensive policy enforcement, and cost optimization strategies, helping you manage your cloud environment more efficiently and securely.
  • Enriched Incident Context: Detailed cloud data enhances incident analysis.
  • Custom Security Solutions: Tailor asset inventories, CSPM, CIEM, and more.
  • Unified View: Consolidate multi-cloud data for better monitoring.
  • Real-Time Insights: Continuous monitoring for immediate threat detection.
  • Custom Dashboards: Create insights tailored to your compliance and security needs.
  • Cost Optimization: Integrated FinOps tools to optimize cloud spending.
This integration makes Splunk not just a data analysis tool but a comprehensive platform for security, compliance, and cloud management.

How to create an AWS Cloud Asset Inventory Dashboard in Splunk #

Getting Started with the Splunk Destination Plugin #

  1. Download CloudQuery: Head over to our download page to get started.
  2. Set Up Your Configuration: Use the init command to generate the necessary configuration files quickly. Run the following command to set up the AWS source with the Splunk destination:
    cloudquery init --source=aws --destination=splunk
  3. Authenticate to AWS: Authenticate to your AWS instance. Follow the authentication guide for detailed instructions.
  4. Authenticate to Splunk: To authenticate Splunk with CloudQuery, you will use the HTTP Event Collector (HEC). You can find comprehensive documentation for authenticating your Splunk instance in the Splunk documentation.
    Be sure to copy the token value to hec_token in the Splunk destination plugin specification and set the index parameter to the name of the index chosen for this HEC.
  5. Sync your AWS data to Splunk: Run the sync command to start syncing data from the AWS API to your Splunk instance! 🚀
    cloudquery sync aws_to_splunk.yaml
    See the CloudQuery documentation portal for more deployment guides, options, and further tips.
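The generated aws_to_splunk.yaml will look roughly like this sketch (not the exact generated file; the version, host, and hec_token values are placeholders you must replace with your own, and the index here matches the awsinventory index used in the searches below):

```yaml
kind: source
spec:
  name: aws
  path: cloudquery/aws
  registry: cloudquery
  version: "LATEST" # placeholder: pin to a real plugin version
  tables: ["aws_s3_buckets"]
  destinations: ["splunk"]
---
kind: destination
spec:
  name: splunk
  path: cloudquery/splunk
  registry: cloudquery
  version: "LATEST" # placeholder: pin to a real plugin version
  spec:
    host: "https://localhost:8088"   # placeholder: your Splunk HEC endpoint
    hec_token: "${SPLUNK_HEC_TOKEN}" # placeholder: token from your HEC setup
    index: "awsinventory"            # must match the index chosen for this HEC
```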

How to show S3 Buckets Per Region in Splunk #

Now that you have loaded your AWS data into your Splunk instance with CloudQuery, you can create dashboards to display your AWS data.
  1. Create a new Dashboard in Splunk: For the most up-to-date information, you can refer to the Create a Dashboard documentation in Splunk.
  2. Create a new Pie Chart: Refer to the Splunk documentation for how to create a pie chart in Splunk.
  3. Create your query: You can use this query to group all your AWS S3 Buckets per region:
    ((index="awsinventory") (sourcetype="_json")) | fields "_time", "cq_table_name", "region" | search "cq_table_name" = "*aws_s3_buckets*" | stats count by "region"
  4. Done! Now you can see the distribution of S3 buckets per region.

How to Compare Resources Across Different Cloud Providers with Splunk #

For this example, you will compare storage-related resources across AWS and GCP, using CloudQuery to fetch the data and Splunk for querying and visualization.
  1. Fetch AWS S3 and related resources: Update the AWS spec in your aws_to_splunk.yaml and set the required tables to tables: ["aws_s3*"].
  2. Run the sync:
    cloudquery sync aws_to_splunk.yaml
  3. Set Up Your Configuration: Use the init command to generate the necessary configuration files quickly. Run the following command to set up the GCP source with the Splunk destination:
    cloudquery init --source=gcp --destination=splunk
  4. Authenticate to GCP: Authenticate to your GCP instance. Follow the authentication guide for detailed instructions.
  5. Fetch the GCP Storage resources: You will need to define your GCP source tables: ["gcp_storage"].
  6. Run the GCP to Splunk sync:
    cloudquery sync gcp_to_splunk.yaml
  7. Create a new Dashboard in Splunk: For the most up-to-date information, you can refer to the Create a Dashboard documentation in Splunk.
  8. Create a new Bar Chart: Refer to the Splunk documentation for how to create a bar chart in Splunk.
  9. Set up a search for GCP resources: You can use this query to group all your GCP Storage Buckets per location.
    index="storage" | search "cq_table_name"="gcp_storage_buckets" | stats count by location | sort count
    Which results in this view:
  10. Set up a search for AWS resources: You can use this query to group all your AWS S3 Buckets per region:
    index="storage" | search "cq_table_name"="aws_s3_buckets"
    | stats count by region
    | sort count
Which results in a new bar chart next to your GCP chart.
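For reference, the GCP source spec in gcp_to_splunk.yaml might look like this sketch (the version and project_ids values are placeholders; the wildcard "gcp_storage*" is an assumption to capture the storage tables, and the Splunk destination block mirrors the one configured earlier):

```yaml
kind: source
spec:
  name: gcp
  path: cloudquery/gcp
  registry: cloudquery
  version: "LATEST" # placeholder: pin to a real plugin version
  tables: ["gcp_storage*"]
  destinations: ["splunk"]
  spec:
    project_ids: ["my-gcp-project"] # placeholder: your GCP project ID(s)
```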

How to Find Resources That Might Need Your Attention #

  1. Sort GCP buckets from oldest:
    index="storage" | search "cq_table_name"="gcp_storage_buckets"
    | fields + name created project_id
    | fields - _raw _bkt _indextime _si _sourcetype _time _cd _serial
    | sort created
  2. Do the Same with AWS:
    index="storage" | search "cq_table_name"="aws_s3_buckets"
    | fields + name creation_date arn
    | fields - _raw _bkt _cd _si _serial _sourcetype _indextime _time
    | sort creation_date
Which results in these tables:
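To surface only the oldest entries, you can cap either search with head (a sketch building on the GCP query above; 10 is an arbitrary cutoff):

```spl
index="storage" | search "cq_table_name"="gcp_storage_buckets"
| fields + name created project_id
| fields - _raw _bkt _indextime _si _sourcetype _time _cd _serial
| sort created
| head 10
```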

How to Find Non-Essential Cloud Resources #

The CloudQuery Splunk Destination Plugin also allows you to get a personalized view of your data tailored to your needs. Let’s assume you’re after a more generic view of the situation. For example, you might want to see the total resources per cloud provider so you can try to find non-essential resources that you could potentially cut back on.
Taking into account:
  AWS                 GCP
  S3 Buckets          Storage Buckets
  EC2 Instances       Compute Instances
  RDS Databases       SQL Databases
  Lambda Functions    Cloud Functions
  IAM Users           IAM Service Accounts
If you followed the example above, you already have the bucket storage data for both AWS and GCP, so those tables can be omitted from the next sync. To pull in the additional resources, update the tables value in your CloudQuery configuration files with these additional tables:
AWS:
tables: ['aws_ec2_instances', 'aws_rds_instances', 'aws_lambda_functions', 'aws_iam_users']
GCP:
tables:
  [
    'gcp_compute_instances',
    'gcp_sql_instances',
    'gcp_functionsv2_functions',
    'gcp_iam_service_accounts',
  ]
After syncing, you will need to add these searches to your Splunk dashboard:
GCP:
index="storage" | search cq_source_name=gcp AND cq_table_name IN (gcp_storage_buckets, gcp_compute_instances, gcp_sql_instances, gcp_functionsv2_functions, gcp_iam_service_accounts)
| stats count by cq_table_name
| sort count
AWS:
index="storage" | search cq_source_name=aws AND cq_table_name IN (aws_s3_buckets, aws_ec2_instances, aws_rds_instances, aws_lambda_functions, aws_iam_users)
| stats count by cq_table_name
| sort count
This results in the following view:
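If both syncs write to the same storage index, you can also combine the two searches into one panel and split the counts by provider (a sketch; it assumes the cq_source_name values aws and gcp used in the searches above):

```spl
index="storage" | search cq_table_name IN (gcp_storage_buckets, gcp_compute_instances, gcp_sql_instances, gcp_functionsv2_functions, gcp_iam_service_accounts, aws_s3_buckets, aws_ec2_instances, aws_rds_instances, aws_lambda_functions, aws_iam_users)
| stats count by cq_source_name, cq_table_name
| sort cq_source_name, -count
```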

Summary #

The Splunk Destination Plugin simplifies data integration from any source directly into your Splunk instance. Integrating detailed asset data from AWS and other cloud providers into Splunk allows you to monitor, visualize, and analyze your infrastructure more effectively.
This tutorial showed you how to sync your AWS cloud data into Splunk using CloudQuery, set up a Cloud Asset Inventory dashboard, and create custom visualizations like pie charts and bar charts to track your AWS S3 buckets and compare resources across different cloud providers. With this setup, you can now leverage Splunk’s querying and dashboard capabilities to maintain visibility and control over your cloud infrastructure, ensuring optimized performance and security.
You can try CloudQuery locally with our quick start guide or explore CloudQuery Cloud for a more scalable solution.
Want help getting started? Join the CloudQuery Discord community to connect with other users and experts, or drop us a message here!

FAQs #

Q: What is the Splunk Destination Plugin for CloudQuery?
The Splunk Destination Plugin allows you to sync data from any CloudQuery source plugin, such as AWS, directly into your Splunk instance for real-time analysis and monitoring.
Q: Do I need to authenticate my AWS instance before using the plugin?
Yes, you need to authenticate to your AWS instance. Follow our authentication guide to ensure proper setup.
Q: Can I use the Splunk Destination Plugin with data sources other than AWS?
Absolutely. While this guide uses the AWS plugin, CloudQuery supports various data sources. You can easily configure other sources like GCP, Azure, and more with the Splunk plugin.
Q: What are the key benefits of integrating CloudQuery data into Splunk?
Integrating CloudQuery with Splunk provides enriched incident context, reduces false positives, and allows for centralized data aggregation from multiple cloud sources, enabling real-time monitoring and customized security solutions.
Written by Joe Karlsson

Joe Karlsson (He/They) is an Engineer turned Developer Advocate (and massive nerd). Joe empowers developers to think creatively when building applications, through demos, blogs, videos, or whatever else developers need.
