For data teams, the pattern is almost always the same. You move to Snowflake for performance and scale. But then the first bill lands, and suddenly your Snowflake warehouse costs are far higher than forecast. What went wrong?
The first step to regaining control is understanding how Snowflake costs are calculated. This guide breaks down the cost structure and gives you five practical steps to optimize spend, so you only pay for the resources you actually need and can design a sustainable Snowflake FinOps practice before your next renewal.
But first, what is Snowflake used for?
Snowflake is a cloud data platform that enables organizations to store, process, and analyze data at scale. It operates on the three leading cloud providers (Amazon Web Services, Google Cloud Platform, and Microsoft Azure), giving businesses flexibility in how they deploy and expand their environments, whether as a greenfield implementation or as part of a larger Snowflake data platform rollout.
As a fully managed service, Snowflake removes the burden of infrastructure management. Users do not need to handle hardware, software updates, or tuning. Instead, they can focus entirely on working with their data while the platform manages performance, security, and scalability in the background - often with a lean internal team supported by a specialized Snowflake platform team.
One of Snowflake's defining features is the separation of storage and compute, which allows each to scale independently. This design supports efficient resource usage, quick provisioning of additional capacity when needed, and automatic suspension of idle compute clusters known as virtual warehouses. These capabilities reduce costs while maintaining high performance, provided they're configured with a cost-optimization strategy.
Why do your Snowflake costs keep growing?
Before we discuss optimization, let's decode what you're actually paying for. Because if you're like most data teams, you're probably overpaying for things you didn't know you were buying.
Snowflake’s pay-as-you-go model is built on two primary components: compute and storage, along with a smaller component, cloud services.
1. Compute costs
This is typically the largest portion of your Snowflake bill. Compute is measured in Snowflake credits, an abstract unit that's consumed when a virtual warehouse is active.
Here's how the math works:
- Virtual Warehouses: These are the compute clusters (EC2 instances on AWS, for example) that run your queries, data loads, and other operations.
- "T-Shirt" Sizing: Warehouses come in sizes like X-Small, Small, Medium, Large, etc. Each size up doubles the number of servers in the cluster and, therefore, doubles the credit consumption per hour.
- Per-Second Billing: Credits are billed per second, with a 60-second minimum each time a warehouse starts or resumes.
The formula for calculating the cost:
Credits Consumed = (Credits per hour for the warehouse) × (Total runtime in seconds) ÷ 3600
Real example: Running a Large warehouse (8 credits/hour) for 30 minutes (1800 seconds) consumes (8 × 1800) ÷ 3600 = 4 credits. If you're paying $3 per credit, that half-hour just cost you $12. Scale that across dozens of queries per day, and you can see how costs spiral.
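To sanity-check this math against your own account, you can pull per-warehouse credit burn straight from Snowflake's metering view. A minimal sketch, assuming an illustrative $3-per-credit price (your actual rate depends on edition, region, and contract):

```sql
-- Estimated spend per warehouse over the last 30 days,
-- using an assumed price of $3 per credit
SELECT
    warehouse_name,
    SUM(credits_used)               AS credits_30d,
    ROUND(SUM(credits_used) * 3, 2) AS est_cost_usd
FROM snowflake.account_usage.warehouse_metering_history
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY warehouse_name
ORDER BY credits_30d DESC;
```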
2. Storage costs
At first, storage looks inexpensive compared to compute, but as data grows, costs can rise quickly. Snowflake calculates storage charges based on the average monthly volume of data you store, in terabytes. Because your data is automatically compressed, you're billed on the compressed size, a detail many teams overlook.
You're paying for three different types of storage:
- Active Storage: The live data in your databases and tables.
- Time-Travel: Data kept to allow you to query or restore historical data from a specific point in the past. The default retention period is 1 day, but it can be configured up to 90 days on Enterprise Edition and above.
- Fail-safe: A 7-day period of historical data storage after the Time-Travel window closes, used for disaster recovery by Snowflake support. This is not user-configurable.
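Each of these buckets is visible per table, so you can spot where retention settings are costing you. A minimal sketch against the `TABLE_STORAGE_METRICS` view (which, like other `ACCOUNT_USAGE` views, lags real time by up to a few hours):

```sql
-- Top 20 tables by total footprint, split into active, Time-Travel, and Fail-safe bytes
SELECT
    table_catalog,
    table_schema,
    table_name,
    ROUND(active_bytes      / POWER(1024, 3), 2) AS active_gb,
    ROUND(time_travel_bytes / POWER(1024, 3), 2) AS time_travel_gb,
    ROUND(failsafe_bytes    / POWER(1024, 3), 2) AS failsafe_gb
FROM snowflake.account_usage.table_storage_metrics
ORDER BY active_bytes + time_travel_bytes + failsafe_bytes DESC
LIMIT 20;
```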
3. Cloud services costs
The cloud services layer provides essential functions like authentication, query parsing, access control, and metadata management. For the most part, this layer is free. You only begin to incur costs if your usage of the cloud services layer exceeds 10% of your daily compute credit consumption. This is rare but can happen with an extremely high volume of very simple, fast queries.
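If you suspect you're crossing that threshold, the daily metering view breaks the numbers out for you. A quick sketch:

```sql
-- Compute vs. cloud services credits per day; the adjustment column
-- offsets cloud services usage covered by the 10% daily allowance
SELECT
    usage_date,
    SUM(credits_used_compute)              AS compute_credits,
    SUM(credits_used_cloud_services)       AS cloud_services_credits,
    SUM(credits_adjustment_cloud_services) AS free_allowance_adjustment,
    SUM(credits_billed)                    AS credits_billed
FROM snowflake.account_usage.metering_daily_history
GROUP BY usage_date
ORDER BY usage_date DESC
LIMIT 14;
```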
5 steps to optimize your Snowflake warehouse costs
Now that you know what you're paying for, here are five steps to significantly reduce your spend.
Step 1: right-size your virtual warehouses
Running an oversized warehouse is like using a sledgehammer to crack a nut - it's expensive and unnecessary.
- Start small: Don't default to a `Large` warehouse. Begin with an `X-Small` or `Small` and only scale up if performance is inadequate. It's often more efficient to run a query for slightly longer on a smaller warehouse than for a few seconds on a larger one. Look for slow queries in the Query History that generate a lot of "Bytes spilled to local storage". For large joins or window functions, moving from a `Small` to a `Large` warehouse might be 4 times more expensive per hour but 10x faster, resulting in a 60% cost reduction.
- Set aggressive auto-suspend policies: A running warehouse consumes credits even when it's sitting idle. Configure your warehouses to auto-suspend quickly when not in use. A setting of 1 to 5 minutes is a good starting point for most workloads. This single change can have a massive impact on your bill.
- Separate your workloads: Don't use one giant warehouse for everything. Create separate warehouses for different teams and tasks (e.g., `ELT_WH` for data loading, `BI_WH` for analytics dashboards, `DATASCIENCE_WH` for ad-hoc exploration). This prevents a resource-intensive data science query from slowing down critical business reports and allows you to tailor the size and settings for each specific workload.
- Use multi-cluster warehouses for high concurrency: If you have many users running queries simultaneously (like a popular BI dashboard), instead of using a larger warehouse (scaling up), configure a multi-cluster warehouse (scaling out). This automatically spins up additional clusters of the same size to handle the concurrent load and spins them down as demand decreases. The sketch below shows both settings in SQL.
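Here is what those settings look like in practice: a minimal sketch with illustrative warehouse names (note that `AUTO_SUSPEND` is specified in seconds, and multi-cluster warehouses require Enterprise Edition or above):

```sql
-- Small ELT warehouse that suspends after one idle minute
CREATE WAREHOUSE IF NOT EXISTS elt_wh
    WAREHOUSE_SIZE = 'XSMALL'
    AUTO_SUSPEND   = 60
    AUTO_RESUME    = TRUE;

-- Multi-cluster BI warehouse: scales out to 3 clusters under
-- concurrent load and back down to 1 as demand drops
CREATE WAREHOUSE IF NOT EXISTS bi_wh
    WAREHOUSE_SIZE    = 'SMALL'
    MIN_CLUSTER_COUNT = 1
    MAX_CLUSTER_COUNT = 3
    SCALING_POLICY    = 'STANDARD'
    AUTO_SUSPEND      = 300
    AUTO_RESUME       = TRUE;
```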
Step 2: optimize your queries and workloads
Inefficient queries are a primary driver of wasted compute credits. A poorly written query can run for minutes on a large warehouse when a well-written one could finish in seconds on a smaller one.
- Use the Query Profile: This is your best friend for optimization. Before trying to fix a slow query, run it and then analyze its performance in the Query Profile. This tool provides a detailed graphical breakdown of each step of query execution, showing you exactly where the bottlenecks are (e.g., a table scan that should be a prune, an exploding join).
- Avoid `SELECT *`: Only select the columns you actually need. Pulling unnecessary columns increases I/O and can prevent Snowflake from performing "column pruning," a key optimization technique.
- Be careful with `JOIN`s: Ensure you are joining on keys that are well-distributed. Accidental Cartesian products (cross-joins) are a notorious cause of runaway queries that can burn through credits.
- Materialize complex views: If you have a complex view that is queried frequently, consider materializing it into a table. While this uses more storage, the compute savings from not having to re-calculate the view on every query can be substantial. Use Materialized Views for this, as Snowflake will automatically keep them up-to-date (see the sketch after this list).
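As a sketch of that last point, here's a materialized view over a hypothetical `raw.orders` table. Keep in mind that materialized views require Enterprise Edition and Snowflake charges background compute to keep them fresh, so they pay off for views that are read far more often than the base table changes:

```sql
-- Pre-aggregated daily revenue; Snowflake maintains it automatically
-- as raw.orders changes (table and column names are illustrative)
CREATE OR REPLACE MATERIALIZED VIEW analytics.daily_revenue AS
SELECT
    order_date,
    SUM(amount) AS total_revenue
FROM raw.orders
GROUP BY order_date;
```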
Step 3: manage your data storage lifecycle
While cheaper than compute, storage costs can creep up. Proactive data management is key.
- Configure Time-Travel Sensibly: Do you really need 90 days of Time-Travel for every table? For staging tables or transient data, a 1-day retention period is often sufficient. Align the Time-Travel window with your actual business requirements for data recovery.
- Use Transient and Temporary Tables: For data that doesn't need to be recovered (like staging data from an ELT process), use transient tables. These tables have no Fail-safe period and a Time-Travel period of only 0 or 1 day. This can significantly reduce your storage footprint for intermediate data (see the sketch after this list).
- Periodically Review and Purge Data: Implement a data retention policy and periodically archive or delete data that is no longer needed for analysis.
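A minimal sketch of the first two ideas, using hypothetical table names (retention beyond 1 day assumes Enterprise Edition):

```sql
-- Transient staging table: no Fail-safe, Time-Travel switched off entirely
CREATE TRANSIENT TABLE IF NOT EXISTS staging.orders_raw (
    raw_payload VARIANT,
    loaded_at   TIMESTAMP_NTZ DEFAULT CURRENT_TIMESTAMP()
)
DATA_RETENTION_TIME_IN_DAYS = 0;

-- Dial a permanent table's Time-Travel window down to what the business needs
ALTER TABLE analytics.orders SET DATA_RETENTION_TIME_IN_DAYS = 7;
```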
Step 4: maximize caching to get free compute
Snowflake has multiple layers of caching that can dramatically reduce credit consumption if leveraged correctly. When a query result is served from a cache, it consumes zero compute credits.
- The Result Cache: Snowflake automatically caches the results of every query you run. If another user submits the exact same query within 24 hours (and the underlying data has not changed), Snowflake returns the cached result almost instantly without starting a warehouse. This is perfect for popular dashboards where many users view the same report.
- Local Disk Cache (Warehouse Cache): When a warehouse executes a query, it caches the data it retrieved from storage on its local SSD. If a new query requires some of the same data, it can be read from this much faster local cache instead of remote storage, speeding up the query and reducing compute time. This cache is cleared when the warehouse is suspended.
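To gauge how much work your caches are already saving you, `ACCOUNT_USAGE.QUERY_HISTORY` records the share of each query's scan served from the warehouse cache, and you can switch the result cache off per session when you need honest benchmarks. A quick sketch:

```sql
-- Average fraction of scanned bytes served from each warehouse's
-- local disk cache over the last 7 days
SELECT
    warehouse_name,
    AVG(percentage_scanned_from_cache) AS avg_scan_from_cache
FROM snowflake.account_usage.query_history
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
  AND warehouse_name IS NOT NULL
GROUP BY warehouse_name;

-- Disable the result cache for this session while benchmarking query changes
ALTER SESSION SET USE_CACHED_RESULT = FALSE;
```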
Step 5: implement robust governance and monitoring
You can't optimize what you can't measure. Use Snowflake's built-in tools to monitor usage and enforce budgets.
- Set up Resource Monitors: This is your primary safety net. A Resource Monitor can be assigned to one or more warehouses to track their credit consumption. You can configure it to send alerts at certain thresholds (e.g., 75% of budget) and, most importantly, to suspend the warehouse when it hits its limit, preventing runaway spending.
- Analyze your usage data: Snowflake provides a wealth of metadata in the `SNOWFLAKE` database, specifically within the `ACCOUNT_USAGE` schema. Views like `WAREHOUSE_METERING_HISTORY`, `QUERY_HISTORY`, and `STORAGE_USAGE` are invaluable. Query this data to find your most expensive queries, identify your busiest warehouses, and track your storage costs over time.
- Tag everything for cost allocation: Use Snowflake's tagging feature to assign metadata tags to warehouses, databases, and other objects. You can tag objects by department (`finance`, `marketing`), project, or user. This allows you to query the usage views and accurately allocate costs back to the teams responsible, creating accountability. The sketch below shows a monitor and a tag in action.
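A sketch of both guardrails, with illustrative names and an assumed 500-credit monthly quota (creating resource monitors requires the ACCOUNTADMIN role):

```sql
-- Budget guardrail: notify at 75% of quota, suspend assigned warehouses at 100%
CREATE OR REPLACE RESOURCE MONITOR monthly_budget
    WITH CREDIT_QUOTA = 500
    FREQUENCY = MONTHLY
    START_TIMESTAMP = IMMEDIATELY
    TRIGGERS
        ON 75 PERCENT DO NOTIFY
        ON 100 PERCENT DO SUSPEND;

ALTER WAREHOUSE bi_wh SET RESOURCE_MONITOR = monthly_budget;

-- Tag the warehouse so usage queries can be grouped by cost center
-- (assumes a governance.tags schema already exists)
CREATE TAG IF NOT EXISTS governance.tags.cost_center;
ALTER WAREHOUSE bi_wh SET TAG governance.tags.cost_center = 'marketing';
```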
Bringing it all together
So what’s your next step? These five practices will help you reduce costs and build smarter habits, but turning them into measurable savings at scale takes more than a checklist. It requires the right expertise and execution.
For example, a leading financial services company was spending more than $800K per month on cloud costs with no clear view of where the money was going. Within 90 days of working with our experts, they gained full visibility, reduced ingestion latency by 80%, and built a governed, AI-ready platform while bringing costs back under control.
👉 Read the full case study here
At Snowstack, we bring certified Snowflake expertise and proven delivery methods to help enterprises cut spend, improve performance, and prepare their platforms for AI and advanced analytics.
Ready to make your Snowflake environment cost-efficient and future-proof?

