Blog

Databricks vs Snowflake: Which one is better in 2025?

A few years ago, choosing a data platform was about storage limits and running reports. In 2025, the game has changed. Data speed is now business speed, and the platform running your analytics and AI determines how fast you can innovate, control costs, and outpace competitors. Databricks and Snowflake are the two biggest names in this space, each offering a different path to turning data into a competitive edge. The real challenge is deciding which one fits your strategy better.

Picking between Databricks and Snowflake is less about comparing features and more about deciding how your business will compete. This guide shows you which platform can give you the advantage.

What is Databricks?

Created by the team behind Apache Spark, Databricks unifies data engineering, data science, and machine learning in a single “lakehouse” platform. It handles structured and unstructured data at scale, excelling in complex pipelines, streaming analytics, and AI/ML workloads. By 2025, new features like Agent Bricks for domain-specific AI agents, Lakebase for AI-native applications, and expanded Unity Catalog governance have turned it into a full data intelligence platform for both technical and business users.

What is Snowflake?

Snowflake redefined cloud data warehousing with its separate compute and storage architecture, making it easy to scale and manage. Originally built for SQL analytics, it has evolved into an AI Data Cloud supporting BI and advanced AI applications. In 2025, enhancements like Cortex AISQL, the Arctic LLM, document AI, and improved Python integration extend its reach to data scientists, while keeping its simplicity, automation, and strong governance.

Databricks vs Snowflake: similarities

Both platforms have matured significantly by 2025, converging on several key capabilities that make them viable options for modern data architectures. Both offer:

  • Cloud-native architecture with automatic scaling and multi-cloud deployment options
  • Enterprise-grade security including encryption, compliance certifications, and granular access controls
  • Data sharing capabilities for secure collaboration across teams and organizations
  • Support for both structured and unstructured data with varying degrees of optimization
  • Integration ecosystems connecting to popular BI tools, data orchestration platforms, and cloud services
  • Pay-as-you-consume pricing models with cost optimization features
  • Streaming data ingestion for real-time analytics and decision-making
  • Machine learning capabilities, though with different approaches and levels of sophistication

Databricks vs Snowflake: differences

While these platforms share similarities, their design and intended uses provide each with advantages in specific scenarios.

Performance

Snowflake is built for fast, predictable SQL at high concurrency. Multi-cluster warehouses and automatic optimization keep dashboards responsive. In June 2025, Snowflake introduced Adaptive Compute and Gen2 warehouses to further boost price-performance for interactive analytics. Databricks is strongest on heavy transformations, ML, and streaming; Photon closes much of the SQL gap but still benefits from tuning.

Winner: Snowflake for interactive SQL/BI and concurrent users; Databricks for heavy data processing, ML, and low-latency streaming.

Scalability

Snowflake scales with virtual warehouses and multi-cluster warehouses that add or remove clusters automatically, suspend when idle, and resume on demand, which makes high-concurrency BI straightforward with little operational overhead. Databricks scales massive distributed jobs and offers autoscaling and serverless options across jobs, SQL, and pipelines. What users report:

“Snowflake had great performance consistency and easier scaling… Databricks gave us the best bang for buck on large-scale transformations and streaming.”

Winner: Snowflake for easy, high-concurrency analytics; Databricks for large-scale data processing and ML.

Ease of Use

Snowflake is SQL-first with a clean web UI, so analysts can start fast and most tuning is automatic. Databricks is notebook- and code-centric, great for engineers and data scientists, but it asks more from the team. Across the data community the pattern is consistent:

“Snowflake seems so much easier to manage … the fastest way to deliver stakeholder value,” while Databricks earns favour with teams that have deep technical know-how.

Winner: Snowflake for business users and quick deployment; Databricks for technical teams requiring flexibility.

Security

Snowflake ships enterprise controls out of the box, including RBAC, dynamic masking, row access policies, encryption, and detailed usage history. 2025 updates added Trust Center email alerts for policy violations, while Access History and built-in lineage views support auditing. Databricks centralizes security and lineage in Unity Catalog with fine-grained policies and customer-managed keys, and now includes attribute-based access control (ABAC) policies.

Winner: Snowflake for turnkey, compliance-ready governance; Databricks for flexible, policy-rich control across data and AI when you have the engineering depth.

Integration

Snowflake connects cleanly to the BI stack and hosts data products and native apps inside the platform. Its Marketplace and Native App Framework let vendors ship apps that run inside Snowflake, and 2025 updates expanded in-market apps and data products. Databricks, on the other hand, leans on open formats and APIs, integrating broadly with Spark tools, ML frameworks, and engines that read Delta or Iceberg (and even Snowflake for reads).

Winner: Snowflake for BI and in-platform apps; Databricks for ML/AI ecosystem depth and open, cross-engine interoperability.

AI

Snowflake integrates AI directly into analytics workflows, allowing teams to call large language models (LLMs) directly from SQL through Cortex AISQL. It also offers its own Arctic LLM family and, starting in 2025, supports running Snowflake ML models within Native Apps. Meanwhile, Databricks focuses on end-to-end AI application development. Its Mosaic AI Agent Framework enables retrieval-augmented generation (RAG) and agent workflows, and it recently launched DBRX, an open LLM designed for enterprise customization.
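To make the contrast concrete, this is roughly what "AI from SQL" looks like on the Snowflake side. A minimal sketch using Cortex functions; the support_tickets table and its columns are hypothetical, and available models vary by account and region.

-- Score sentiment and summarize free-text tickets directly in SQL.
SELECT
    ticket_id,
    SNOWFLAKE.CORTEX.SENTIMENT(ticket_text) AS sentiment_score,
    SNOWFLAKE.CORTEX.COMPLETE(
        'mistral-large',
        CONCAT('Summarize this support ticket in one sentence: ', ticket_text)
    ) AS summary
FROM support_tickets
LIMIT 100;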

Winner: Snowflake for AI in analytics with governance and low MLOps overhead. Databricks for custom AI apps, agents, and RAG at scale.

Cost

Snowflake charges per-second compute with auto-suspend and clear usage views, which makes BI spend predictable when set up well. Cost visibility is built in through Snowsight dashboards, usage views, resource monitors, and new cost-anomaly detection, and Cortex AI features are metered by tokens with documented credit rates and guardrails like the 10% cloud-services threshold. Databricks uses DBUs that vary by workload and tier; it can be cheaper for large, long-running pipelines if you actively tune and monitor. The company is phasing out the Standard tier on AWS and GCP with Premium becoming the base on October 1, 2025, which makes governance features standard but still requires active monitoring and optimization for steady costs. As one user said:

“DBU pricing is confusing; you need active monitoring to understand what work maps to which cost.”

Winner: Snowflake for clearer, more predictable analytics spend and native cost controls; Databricks for cost efficiency on large, long-running data engineering and ML when tuned well.

So, which one is better in 2025?

The decision between Databricks vs Snowflake ultimately depends on your organization's primary use cases, team composition, and strategic priorities.

Choose Snowflake if:

  • Your primary focus is business intelligence, reporting, and governed analytics
  • You have mixed technical teams, including business analysts who need self-service capabilities
  • You prioritize ease of use, quick deployment, and minimal maintenance overhead
  • Data governance, compliance, and security are top priorities with limited dedicated resources
  • You need predictable, transparent pricing for analytical workloads
  • Your AI initiatives involve augmenting existing analytics rather than building custom models

Consider a hybrid approach if:

  • You have both heavy ML/data science workloads AND extensive BI requirements
  • Different teams have varying technical capabilities and use case requirements
  • You're transitioning between platforms and need time to migrate workloads
  • Specific regulatory or data residency requirements dictate platform choice by region

Need expert guidance for your data platform decision?

Your data platform is not an IT purchase. It is a strategy decision. At Snowstack, we help data leaders design, build, and run modern platforms with a core focus on Snowflake and the surrounding stack. We handle migrations, performance tuning, governance, and AI readiness so your team ships faster, spends smarter, and stays compliant.

What you get: clear architecture choices, a cost model you can trust, and a roadmap that fits your team and timelines.

Let’s align your platform to your strategy and deliver measurable results.

FAQs

Q: Can you use both Databricks and Snowflake together?

Absolutely. A common architecture uses Databricks for ETL and AI workloads, then loads the results into Snowflake for SQL analytics and business-level insights.

Q: Does Snowflake have a competitive advantage?

Yes. Snowflake holds a competitive edge where governed, high-concurrency analytics and easy operations matter most.

Q: Is Snowflake better than Databricks for AI?

For AI workloads in 2025, the answer depends on your specific implementation approach. Snowflake is better for adding AI into analytics. With Cortex AISQL, the Arctic LLM, document AI, and stronger Python support, it lets teams use AI for insights, governed deployments, and SQL-based applications without deep ML expertise.

Q: Does Snowflake support unstructured data?

Yes. Snowflake now supports semi-structured and unstructured data, along with AI via Cortex and PostgreSQL capabilities via Crunchy Data, but Databricks remains stronger for unstructured-heavy workloads like streaming and ML.

Learn more about Snowflake from top experts

Join data leaders who get Snowflake insights and updates delivered straight to their inbox.


Blog
5 min read

From zero to production: a comprehensive guide to managing Snowflake with Terraform

Manual clicks don’t scale. As Snowflake environments grow, managing them through the UI or ad-hoc scripts quickly leads to drift, blind spots, and compliance risks. What starts as a quick fix often becomes a challenge that slows delivery and exposes the business to security gaps.


Infrastructure as Code with Terraform solves these challenges by bringing software engineering discipline to Snowflake management. Using Terraform’s declarative language, engineers define the desired state of their Snowflake environment, track changes with version control, and apply them consistently across environments. Terraform communicates with Snowflake’s APIs through the official snowflakedb/snowflake provider, translating configuration into the SQL statements and API calls that keep your platform aligned and secure.

This guide provides a complete walkthrough of how to manage Snowflake with Terraform. From provisioning core objects like databases, warehouses, and schemas to building scalable role hierarchies and implementing advanced governance policies such as dynamic data masking.

Section 1: bootstrapping Terraform for secure Snowflake automation

The initial setup of the connection between Terraform and Snowflake is the most critical phase of the entire process. A secure and correctly configured foundation is paramount for reliable and safe automation. This section focuses on establishing this connection using production-oriented best practices, specifically tailored for non-interactive, automated workflows typical of CI/CD pipelines.

1.1 The principle of least privilege: the terraform service role

Terraform should not operate using a personal user account. Instead, a dedicated service user must be created specifically for Terraform automation. Before any Terraform code can be executed, a one-time manual bootstrapping process must be performed within the Snowflake UI or via SnowSQL. This involves using the ACCOUNTADMIN role to create the dedicated service user and a high-level role for Terraform's initial operations.

The following SQL statements will create a TERRAFORM_SVC user and grant it the necessary system-defined roles:

-- Use the highest-level role to create users and grant system roles
USE ROLE ACCOUNTADMIN;

-- Create a dedicated service user for Terraform
-- The RSA_PUBLIC_KEY placeholder is replaced with the public key generated in section 1.2
CREATE USER TERRAFORM_SVC
    COMMENT = 'Service user for managing Snowflake infrastructure via Terraform.'
    RSA_PUBLIC_KEY = '<YOUR_PUBLIC_KEY_CONTENT_HERE>';

-- Grant the necessary system roles to the Terraform service user
GRANT ROLE SYSADMIN TO USER TERRAFORM_SVC;
GRANT ROLE SECURITYADMIN TO USER TERRAFORM_SVC;

Granting SYSADMIN and SECURITYADMIN to the service user is a necessary starting point for infrastructure management. The SYSADMIN role holds the privileges required to create and manage account-level objects like databases and warehouses. The SECURITYADMIN role is required for managing security principals, including users, roles, and grants.

1.2 Authentication: the key to automation

The choice of authentication method is important. The Snowflake provider supports several authentication mechanisms, including basic password, OAuth, and key-pair authentication. For any automated workflow, especially within a CI/CD context, key-pair authentication is the industry-standard and recommended approach.

A CI/CD pipeline, such as one running in GitHub Actions, is a non-interactive environment. Basic password authentication is a significant security risk and not recommended. This leaves key-pair authentication as the only method that is both highly secure, as it avoids transmitting passwords, and fully automatable.

The following table provides a comparative overview of the primary authentication methods available in the Snowflake provider, reinforcing the recommendation for key-pair authentication in production automation scenarios.

Table 1: Snowflake provider authentication methods

  • Password (basic): simplest to configure, but a long-lived secret must be stored and transmitted; a security risk and unsuitable for automation.
  • Key-pair (SNOWFLAKE_JWT): uses an RSA key pair, so no password is transmitted, and it is fully non-interactive; the recommended method for CI/CD and other automated workflows.
  • OAuth: delegates authentication to an identity provider; well suited to interactive or application access, but adds token-management overhead for unattended pipelines.

To implement key-pair authentication, an RSA key pair must be generated. The following openssl commands will create a 2048-bit private key in the required PKCS#8 format and its corresponding public key:


# Navigate to a secure directory, such as ~/.ssh
cd ~/.ssh

# Generate an unencrypted 2048-bit RSA private key in PKCS#8 format
openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -out snowflake_terraform_key.p8 -nocrypt

# Extract the public key from the private key
openssl rsa -in snowflake_terraform_key.p8 -pubout -out snowflake_terraform_key.pub

After generating the keys, the content of the public key file (snowflake_terraform_key.pub), excluding the -----BEGIN PUBLIC KEY----- and -----END PUBLIC KEY----- delimiter lines, must be copied into the RSA_PUBLIC_KEY property of the TERRAFORM_SVC user, either in the CREATE USER statement from the previous step or via an ALTER USER statement. For enhanced security, the private key itself can be encrypted with a passphrase. The Snowflake provider supports this by using the private_key_passphrase argument in the provider configuration.
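If the service user already exists, the public key can be attached afterwards with ALTER USER. A minimal sketch; the key value is a placeholder for your own generated key.

USE ROLE ACCOUNTADMIN;

-- Attach the public key to the existing service user.
-- Paste the key as a single string, without the BEGIN/END delimiter lines.
ALTER USER TERRAFORM_SVC SET RSA_PUBLIC_KEY = 'MIIBIjANBgkq...';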

1.3 Provider configuration: connecting Terraform to Snowflake

With the service user created and the key-pair generated, the final step is to configure the Snowflake provider in the Terraform project. This is typically done in a providers.tf file.

The foundational configuration requires defining the snowflakedb/snowflake provider and setting the connection parameters.

terraform {
  required_providers {
    snowflake = {
      source  = "snowflakedb/snowflake"
      version = "~> 1.0" // Best practice: pin to a major version to avoid breaking changes
    }
  }
}

provider "snowflake" {
  organization_name = var.snowflake_org_name
  account_name      = var.snowflake_account_name
  user              = var.snowflake_user         // e.g., "TERRAFORM_SVC"
  role              = "SYSADMIN"                 // Default role for the provider's operations
  authenticator     = "SNOWFLAKE_JWT"
  private_key       = var.snowflake_private_key
}

It is critical that sensitive values, especially the private_key, are never hardcoded in configuration files. The recommended approach is to define them as input variables marked as sensitive = true and supply their values through secure mechanisms like environment variables (e.g., TF_VAR_snowflake_private_key) or integration with a secrets management tool like GitHub Secrets or AWS Secrets Manager.

A common source of initial connection failures is the incorrect identification of the organization_name and account_name. These values can be retrieved with certainty by executing the following SQL queries in the Snowflake UI: SELECT CURRENT_ORGANIZATION_NAME(); and SELECT CURRENT_ACCOUNT_NAME();. Running these two commands up front can prevent significant frustration.

For more mature IaC implementations that strictly adhere to the principle of least privilege, Terraform supports the use of aliased providers. This powerful pattern allows for the definition of multiple provider configurations within the same project, each assuming a different role. This mirrors Snowflake's own best practices, where object creation (SYSADMIN) is separated from security management (SECURITYADMIN).

The following example demonstrates how to configure aliased providers:

# Default provider uses SYSADMIN for object creation (e.g., databases, warehouses)
provider "snowflake" {
  alias             = "sysadmin"
  organization_name = var.snowflake_org_name
  account_name      = var.snowflake_account_name
  user              = var.snowflake_user
  private_key       = var.snowflake_private_key
  authenticator     = "SNOWFLAKE_JWT"
  role              = "SYSADMIN"
}

# Aliased provider for security-related objects (e.g., roles, users, grants)
provider "snowflake" {
  alias             = "securityadmin"
  organization_name = var.snowflake_org_name
  account_name      = var.snowflake_account_name
  user              = var.snowflake_user
  private_key       = var.snowflake_private_key
  authenticator     = "SNOWFLAKE_JWT"
  role              = "SECURITYADMIN"
}

When using aliased providers, individual resource blocks must explicitly specify which provider to use via the provider meta-argument (e.g., provider = snowflake.securityadmin). This ensures that each resource is created with the minimum necessary privileges, enforcing a robust security posture directly within the code.

Section 2: provisioning core Snowflake infrastructure

Once the secure connection is bootstrapped, Terraform can be used to define and manage the fundamental building blocks of the Snowflake environment. This section provides code examples for creating databases, virtual warehouses, and schemas - the foundational components for any data workload.

2.1 Laying the foundation: databases

The database is the top-level container for schemas and tables in Snowflake. The snowflake_database resource is used to provision and manage these containers.

The following HCL example creates a primary database for analytics workloads, demonstrating the use of the aliased sysadmin provider and an optional parameter for data retention.

resource "snowflake_database" "analytics_db" {
  provider = snowflake.sysadmin // Explicitly use the sysadmin provider for object creation

  name    = "ANALYTICS"
  comment = "Primary database for analytics workloads managed by Terraform."

  // Optional: Configure Time Travel data retention period.
  // This setting can have cost implications.
  data_retention_time_in_days = 30
}

A core strength of Terraform is its ability to manage dependencies implicitly through resource references. In this example, once the analytics_db resource is defined, other resources, such as schemas, can reference its attributes (e.g., snowflake_database.analytics_db.name).

2.2 Compute power: warehouses

Virtual warehouses are the compute engines in Snowflake, responsible for executing queries and data loading operations. The snowflake_warehouse resource provides comprehensive control over their configuration, enabling a balance between performance and cost.

This example defines a standard virtual warehouse for analytics and business intelligence tools, showcasing parameters for cost optimization and scalability.

resource "snowflake_warehouse" "analytics_wh" {
  provider = snowflake.sysadmin

  name    = "ANALYTICS_WH"
  comment = "Warehouse for the analytics team and BI tools."

  // Define the compute capacity of the warehouse.
  warehouse_size = "X-SMALL"

  // Cost-saving measures: suspend the warehouse when idle.
  auto_suspend = 60 // Suspend after 60 seconds of inactivity.
  auto_resume  = true

  // Optional: Configure for multi-cluster for higher concurrency.
  min_cluster_count = 1
  max_cluster_count = 4
  scaling_policy    = "ECONOMY" // Prioritize conserving credits over starting clusters quickly.
}

The parameters in this resource directly impact both performance and billing. warehouse_size determines the raw compute power and credit consumption per second. auto_suspend is a critical cost-control feature, ensuring that credits are not consumed when the warehouse is idle. For workloads with high concurrency needs, the min_cluster_count, max_cluster_count, and scaling_policy parameters allow the warehouse to dynamically scale out to handle query queues, and then scale back in to conserve resources. Managing these settings via Terraform ensures that cost and performance policies are consistently applied and version-controlled.

2.3 Organizing your data: schemas

Schemas are logical groupings of database objects like tables and views within a database. The snowflake_schema resource is used to create and manage these organizational units.

The following HCL creates a RAW schema within the ANALYTICS database defined earlier.

resource "snowflake_schema" "raw_data" {
  provider = snowflake.sysadmin

  // Create an explicit dependency on the database resource.
  database = snowflake_database.analytics_db.name

  name    = "RAW"
  comment = "Schema for raw, unprocessed data ingested from source systems."
}

It is important to note that when a new database is created in Snowflake, it automatically includes a default schema named PUBLIC. While this schema is created outside of Terraform's management, administrators should be aware of its existence. For environments that require strict access control, it is a common practice to immediately revoke all default privileges from the PUBLIC schema to ensure it is not used inadvertently. Terraform can be used to manage this revocation if desired, but the schema itself will not be in the Terraform state unless explicitly imported.

Section 3: mastering access control with role hierarchies

Effective access control is a cornerstone of data governance and security. Snowflake's Role-Based Access Control (RBAC) model is exceptionally powerful, particularly its support for role hierarchies. Managing this model via Terraform provides an auditable, version-controlled, and scalable approach to permissions management. This section details how to construct a robust RBAC framework using a best-practice model of functional and access roles.

3.1 The building blocks: creating account roles

The foundation of the RBAC model is the creation of roles. A recommended pattern is to create two distinct types of roles:

  • Functional roles: These roles represent a job function or a persona, such as DATA_ANALYST or DATA_ENGINEER. Users are granted these roles.
  • Access roles: These roles represent a specific set of privileges on a specific set of objects, such as SALES_DB_READ_ONLY or RAW_SCHEMA_WRITE. These roles are granted to functional roles, not directly to users.

This separation decouples users from direct permissions, making the system vastly more scalable and easier to manage. The snowflake_account_role resource is used to create both types of roles.

// Define a functional role representing a user persona.
resource "snowflake_account_role" "data_analyst" {
  provider = snowflake.securityadmin // Use the securityadmin provider for role management

  name    = "DATA_ANALYST"
  comment = "Functional role for users performing data analysis and reporting."
}

// Define an access role representing a specific set of privileges.
resource "snowflake_account_role" "analytics_db_read_only" {
  provider = snowflake.securityadmin

  name    = "ANALYTICS_DB_READ_ONLY"
  comment = "Grants read-only access to all objects in the ANALYTICS database."
}

3.2 Constructing the hierarchy: granting roles to roles

The true power of Snowflake's RBAC model is realized by creating hierarchies of roles. By granting access roles to functional roles, a logical and maintainable privilege structure is formed. If a data analyst needs access to a new data source, the corresponding access role is granted to the DATA_ANALYST functional role once, rather than granting privileges to every individual analyst. This pattern is essential for managing permissions at scale.

The snowflake_grant_account_role resource is used to create these parent-child relationships between roles. It is important to use this resource, as the older snowflake_role_grants resource is deprecated.

The following example demonstrates how to grant the ANALYTICS_DB_READ_ONLY access role to the DATA_ANALYST functional role, and then nest the functional role under the system SYSADMIN role to complete the hierarchy.

// Grant the access role to the functional role.
// This gives all members of DATA_ANALYST the privileges of ANALYTICS_DB_READ_ONLY.
resource "snowflake_grant_account_role" "grant_read_access_to_analyst" {
  provider = snowflake.securityadmin

  role_name        = snowflake_account_role.analytics_db_read_only.name
  parent_role_name = snowflake_account_role.data_analyst.name
}

// Grant the functional role to SYSADMIN to create a clear role hierarchy.
// This allows system administrators to manage and assume the functional role.
resource "snowflake_grant_account_role" "grant_analyst_to_sysadmin" {
  provider = snowflake.securityadmin

  role_name        = snowflake_account_role.data_analyst.name
  parent_role_name = "SYSADMIN"
}

3.3 Assigning privileges to access roles

With the role structure in place, the final step is to grant specific object privileges to the access roles. The snowflake_grant_privileges_to_account_role resource is a consolidated and powerful tool for this purpose. This resource has evolved significantly in the Snowflake provider; older versions required separate grant resources for each object type (e.g., snowflake_database_grant), which resulted in verbose and repetitive code. The modern resource uses a more complex but flexible block structure (on_account_object, on_schema, etc.) to assign privileges. Users migrating from older provider versions may find this a significant but worthwhile refactoring effort.

This example grants the necessary USAGE and SELECT privileges to the ANALYTICS_DB_READ_ONLY access role.

// Grant USAGE privilege on the database to the access role.
resource "snowflake_grant_privileges_to_account_role" "grant_db_usage" {
  provider          = snowflake.securityadmin
  account_role_name = snowflake_account_role.analytics_db_read_only.name
  privileges        = ["USAGE"]

  on_account_object {
    object_type = "DATABASE"
    object_name = snowflake_database.analytics_db.name
  }
}

// Grant USAGE privilege on the schema to the access role.
resource "snowflake_grant_privileges_to_account_role" "grant_schema_usage" {
  provider          = snowflake.securityadmin
  account_role_name = snowflake_account_role.analytics_db_read_only.name
  privileges        = ["USAGE"]

  on_schema {
    // Use the fully_qualified_name for schema-level objects.
    schema_name = snowflake_schema.raw_data.fully_qualified_name
  }
}

// Grant SELECT on all existing tables in the schema.
resource "snowflake_grant_privileges_to_account_role" "grant_all_tables_select" {
  provider          = snowflake.securityadmin
  account_role_name = snowflake_account_role.analytics_db_read_only.name
  privileges        = ["SELECT"]

  on_schema_object {
    all {
      object_type_plural = "TABLES"
      in_schema          = snowflake_schema.raw_data.fully_qualified_name
    }
  }
}

// Grant SELECT on all FUTURE tables created in the schema.
resource "snowflake_grant_privileges_to_account_role" "grant_future_tables_select" {
  provider          = snowflake.securityadmin
  account_role_name = snowflake_account_role.analytics_db_read_only.name
  privileges        = ["SELECT"]

  on_schema_object {
    future {
      object_type_plural = "TABLES"
      in_schema          = snowflake_schema.raw_data.fully_qualified_name
    }
  }
}

A particularly powerful feature demonstrated here is the use of the future block. Granting privileges on future objects ensures that the access role will automatically have the specified permissions on any new tables created within that schema. This dramatically reduces operational overhead, as permissions do not need to be manually updated every time a new table is deployed. However, it is important to understand Snowflake's grant precedence: future grants defined at the schema level will always take precedence over those defined at the database level. This can lead to "insufficient privilege" errors if not managed carefully across different roles and grant levels.

3.4 An optional "Audit" role for bypassing data masks

In certain scenarios, such as internal security audits or compliance reviews, it may be necessary for specific, highly-trusted users to view data that is normally protected by masking policies. Creating a dedicated "audit" role for this purpose provides a controlled and auditable mechanism to bypass data masking when required.

This role should be considered a highly privileged functional role and granted to users with extreme care.

// Define a special functional role for auditing PII data.
resource "snowflake_account_role" "pii_auditor" {
  provider = snowflake.securityadmin

  name    = "PII_AUDITOR"
  comment = "Functional role for users who need to view unmasked PII for audit purposes."
}

Crucially, creating this role is not enough. For it to be effective, every relevant masking policy must be explicitly updated to include logic that unmasks data for members of the PII_AUDITOR role. This ensures that the ability to view sensitive data is granted on a policy-by-policy basis. An example of how to modify a masking policy to incorporate this audit role is shown in the following section.

Section 4: advanced data governance with dynamic data masking

Moving beyond infrastructure provisioning, Terraform can also codify and enforce sophisticated data governance policies. Snowflake's Dynamic Data Masking is a powerful feature for protecting sensitive data at query time. By managing these policies with Terraform, organizations can ensure that data protection rules are version-controlled, auditable, and consistently applied across all environments.

4.1 Defining the masking logic

A masking policy is a schema-level object containing SQL logic that determines whether a user sees the original data in a column or a masked version. The decision is made dynamically at query time based on the user's context, most commonly their active role.

The snowflake_masking_policy resource is used to define this logic. The policy's body contains a CASE statement that evaluates the user's session context and returns the appropriate value.

The following example creates a policy to mask email addresses for any user who is not in the DATA_ANALYST or PII_AUDITOR role.

resource "snowflake_masking_policy" "email_mask" {
  provider = snowflake.sysadmin // Policy creation often requires SYSADMIN or a dedicated governance role

  name     = "EMAIL_MASK"
  database = snowflake_database.analytics_db.name
  schema   = snowflake_schema.raw_data.name

  // Defines the signature of the column the policy can be applied to.
  // The first argument is always the column value to be masked.
  argument {
    name = "email_val"
    type = "VARCHAR"
  }

  // The return data type must match the input data type.
  return_type = "VARCHAR"

  // The core masking logic is a SQL expression.
  body = <<-EOF
    CASE
      WHEN IS_ROLE_IN_SESSION('DATA_ANALYST') OR IS_ROLE_IN_SESSION('PII_AUDITOR') THEN email_val
      ELSE '*********'
    END
  EOF

  comment = "Masks email addresses for all roles except DATA_ANALYST and PII_AUDITOR."
}

The SQL expression within the body argument offers immense flexibility. It can use various context functions (like CURRENT_ROLE() or IS_ROLE_IN_SESSION()) and even call User-Defined Functions (UDFs) to implement complex logic. However, this flexibility means the logic itself is not validated by Terraform's syntax checker; it is sent directly to Snowflake for validation during the terraform apply step. It is also a strict requirement that the data type defined in the argument block and the return_type must match the data type of the column to which the policy will eventually be applied.

4.2 Applying the policy to a column

Creating a masking policy is only the first step; it does not protect any data on its own. The policy must be explicitly applied to one or more table columns. This crucial second step is often a point of confusion for new users, who may create a policy and wonder why data is still unmasked. The snowflake_table_column_masking_policy_application resource creates this essential link between the policy and the column.

The following example demonstrates how to apply the EMAIL_MASK policy to the EMAIL column of a CUSTOMERS table.

// For this example, we assume a 'CUSTOMERS' table with an 'EMAIL' column
// already exists in the 'RAW' schema. In a real-world scenario, this table
// might also be managed by Terraform or by a separate data loading process.
// We use a data source to reference this existing table.
data "snowflake_table" "customers" {
  database = snowflake_database.analytics_db.name
  schema   = snowflake_schema.raw_data.name
  name     = "CUSTOMERS"
}

// Apply the masking policy to the specific column.
resource "snowflake_table_column_masking_policy_application" "apply_email_mask" {
  provider = snowflake.sysadmin

  table_name  = "\"${data.snowflake_table.customers.database}\".\"${data.snowflake_table.customers.schema}\".\"${data.snowflake_table.customers.name}\""
  column_name = "EMAIL" // The name of the column to be masked

  masking_policy_name = snowflake_masking_policy.email_mask.fully_qualified_name

  // An explicit depends_on block ensures that Terraform creates the policy
  // before attempting to apply it, preventing race conditions.
  depends_on = [
    snowflake_masking_policy.email_mask
  ]
}

This two-step process - defining the policy logic and then applying it - provides a clear and modular approach to data governance. The same policy can be defined once and applied to many different columns across multiple tables, ensuring that the masking logic is consistent and centrally managed.

Conclusion: the path to mature Snowflake IaC

This guide has charted a course from the initial, manual bootstrapping of a secure connection to the automated provisioning and governance of a production-grade Snowflake environment. To ensure the long-term success and scalability of managing Snowflake with Terraform, several key practices should be adopted as standard procedure:

  • Version control: All Terraform configuration files must be stored in a version control system like Git. This provides a complete, auditable history of all infrastructure changes and enables collaborative workflows such as pull requests for peer review before any changes are applied to production.
  • Remote state management: The default behaviour of Terraform is to store its state file locally. In any team or automated environment, this is untenable. A remote backend, such as an Amazon S3 bucket with a DynamoDB table for state locking, must be configured. This secures the state file, prevents concurrent modifications from corrupting the state, and allows CI/CD pipelines and team members to work from a consistent view of the infrastructure.
  • Modularity: As the number of managed resources grows, monolithic Terraform configurations become difficult to maintain. Code should be refactored into reusable modules. For instance, a module could be created to provision a new database along with a standard set of access roles and default schemas. This promotes code reuse, reduces duplication, and allows for more organized and scalable management of the environment.
  • Provider versioning: The Snowflake Terraform provider is actively evolving. To prevent unexpected breaking changes from new releases, it is crucial to pin the provider to a specific major version in the terraform block (e.g., version = "~> 1.0"). This allows for intentional, planned upgrades. When upgrading between major versions, it is essential to carefully review the official migration guides, as significant changes, particularly to grant resources, may require a concerted migration effort.

With this robust foundation in place, the path is clear for expanding automation to encompass even more of Snowflake's capabilities. The next logical steps include using Terraform to manage snowflake_network_policy for network security, snowflake_row_access_policy for fine-grained data filtering, and snowflake_task for orchestrating SQL workloads. Ultimately, the entire workflow should be integrated into a CI/CD pipeline, enabling a true GitOps model where every change to the Snowflake environment is proposed, reviewed, and deployed through a fully automated and audited process. By embracing this comprehensive approach, organizations can unlock the full potential of their data platform, confident in its security, scalability, and operational excellence.

Why Snowstack for Terraform and Snowflake

Automation without expertise can still fail. Terraform gives you the tools, but it takes experience and the right design patterns to turn Snowflake into a secure, cost-efficient, and scalable platform.

Putting that into practice at enterprise scale is where Snowstack comes in. As a Snowflake-first consulting partner, we help organizations move beyond trial-and-error scripts to fully automated, production-grade environments. Our engineers design secure architectures, embed Terraform best practices, and ensure governance and cost controls are built in from day one.

👉 Book a strategy call with Snowstack and see how we can take your Snowflake platform from manual operations to enterprise-ready automation.

Blog
5 min read

How Snowflake cost is calculated: 5 steps to optimize your data warehouse costs before your next renewal

For data teams, the pattern is almost always the same. You move to Snowflake for performance and scale. But then the first bill lands, and suddenly your Snowflake warehouse costs are far higher than forecast. What went wrong?


The first step to regaining control is understanding how Snowflake costs are calculated. This guide breaks down the cost structure and gives you five practical steps to optimize spend, so you only pay for the resources you actually need.

But first, what is Snowflake used for?

Snowflake is a cloud data platform that enables organizations to store, process, and analyze data at scale. It operates on the three leading cloud providers (Amazon Web Services, Google Cloud Platform, and Microsoft Azure) giving businesses flexibility in how they deploy and expand their environments.

As a fully managed service, Snowflake removes the burden of infrastructure management. Users do not need to handle hardware, software updates, or tuning. Instead, they can focus entirely on working with their data while the platform manages performance, security, and scalability in the background.

One of Snowflake’s defining features is the separation of storage and compute, which allows each to scale independently. This design supports efficient resource usage, quick provisioning of additional capacity when needed, and automatic suspension of idle compute clusters known as virtual warehouses. These capabilities reduce costs while maintaining high performance.

Why does your Snowflake cost keep growing?

Before we discuss optimization, let's decode what you're actually paying for. Because if you're like most data teams, you're probably overpaying for things you didn't know you were buying.

Snowflake’s pay-as-you-go model is built on two primary components: compute and storage, along with a smaller component, cloud services.

1. Compute costs

This is typically the largest portion of your Snowflake bill. Compute is measured in Snowflake credits, an abstract unit that's consumed when a virtual warehouse is active.

Here's how the math works:

  • Virtual Warehouses: These are the compute clusters (EC2 instances on AWS, for example) that run your queries, data loads, and other operations.
  • "T-Shirt" Sizing: Warehouses come in sizes like X-Small, Small, Medium, Large, etc. Each size up doubles the number of servers in the cluster and, therefore, doubles the credit consumption per hour.
  • Per-Second Billing: You're billed for credits on a per-second basis after the first 60 seconds of a warehouse running.

The formula for calculating the cost:

Credits Consumed = (Credits per hour for the warehouse) × (Total runtime in seconds) ÷ 3600

Real example: Running a Large warehouse (8 credits/hour) for 30 minutes (1800 seconds) would consume (8 * 1800)÷3600 = 4 credits. If you're paying $3 per credit, that half-hour just cost you $12. Scale that across dozens of queries per day, and you can see how costs spiral.
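To see how these per-second charges add up across your account, you can query Snowflake's own metering view. A minimal sketch; the $3-per-credit figure is an assumed on-demand rate, so substitute your contracted price, and note that ACCOUNT_USAGE views can lag real time by a few hours.

-- Approximate spend per warehouse over the last 30 days.
SELECT
    warehouse_name,
    SUM(credits_used)        AS credits_last_30_days,
    SUM(credits_used) * 3.00 AS approx_cost_usd   -- assumed $3 per credit
FROM snowflake.account_usage.warehouse_metering_history
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY warehouse_name
ORDER BY credits_last_30_days DESC;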

2. Storage Costs

At first, storage looks inexpensive compared to compute, but as data grows costs can rise quickly. Snowflake calculates storage charges based on the average monthly volume of data you store in terabytes. Because your data is automatically compressed, you’re billed on the compressed size, which most teams overlook.

You're paying for three different types of storage (a query to see the breakdown follows this list):

  1. Active Storage: The live data in your databases and tables.
  2. Time-Travel: Data kept to allow you to query or restore historical data from a specific point in the past. The default retention period is 1 day, but it can be configured up to 90 days for Enterprise editions.
  3. Fail-safe: A 7-day period of historical data storage after the Time-Travel window closes, used for disaster recovery by Snowflake support. This is not user-configurable.
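A quick way to see that breakdown per table is the TABLE_STORAGE_METRICS view. A minimal sketch; it assumes you have access to the SNOWFLAKE.ACCOUNT_USAGE schema.

-- Active, Time-Travel, and Fail-safe storage per table, largest first.
SELECT
    table_catalog,
    table_schema,
    table_name,
    active_bytes      / POWER(1024, 3) AS active_gb,
    time_travel_bytes / POWER(1024, 3) AS time_travel_gb,
    failsafe_bytes    / POWER(1024, 3) AS failsafe_gb
FROM snowflake.account_usage.table_storage_metrics
WHERE deleted = FALSE
ORDER BY active_bytes DESC
LIMIT 25;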

3. Cloud services costs

The cloud services layer provides essential functions like authentication, query parsing, access control, and metadata management. For the most part, this layer is free. You only begin to incur costs if your usage of the cloud services layer exceeds 10% of your daily compute credit consumption. This is rare but can happen with an extremely high volume of very simple, fast queries.

5 steps to optimize your Snowflake warehouse costs

Now that you know what you're paying for, here are five steps to significantly reduce your spend.

Step 1: right-size your virtual warehouses

Running an oversized warehouse is like using a sledgehammer to crack a nut - it's expensive and unnecessary.

  • Start small: Don't default to a Large warehouse. Begin with an X-Small or Small and only scale up if performance is inadequate. It's often more efficient to run a query for slightly longer on a smaller warehouse than for a few seconds on a larger one. Look for slow queries in the Query History that generate a lot of "Bytes spilled to local storage". For large joins or window functions, moving from a Small to a Large warehouse might be 4 times more expensive per hour, but if the query finishes 10x faster the net result is roughly a 60% cost reduction.
  • Set aggressive auto-suspend policies: A running warehouse consumes credits even when no queries are executing. Configure your warehouses to auto-suspend quickly when not in use; a setting of 1 to 5 minutes is a good starting point for most workloads, and this single change can have a massive impact on your bill (see the sketch after this list).
  • Separate your workloads: Don't use one giant warehouse for everything. Create separate warehouses for different teams and tasks (e.g., ELT_WH for data loading, BI_WH for analytics dashboards, DATASCIENCE_WH for ad-hoc exploration). This prevents a resource-intensive data science query from slowing down critical business reports and allows you to tailor the size and settings for each specific workload.
  • Use multi-cluster warehouses for high concurrency: If you have many users running queries simultaneously (like a popular BI dashboard), instead of using a larger warehouse (scaling up), configure a multi-cluster warehouse (scaling out). This will automatically spin up additional clusters of the same size to handle the concurrent load and spin them down as demand decreases.
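The settings above map directly onto warehouse DDL. A minimal sketch; warehouse names and sizes are placeholders, and multi-cluster warehouses require Enterprise edition or higher.

-- A separate, right-sized BI warehouse with aggressive auto-suspend,
-- scaling out for concurrency rather than up in size.
CREATE WAREHOUSE IF NOT EXISTS BI_WH
    WAREHOUSE_SIZE      = 'XSMALL'
    AUTO_SUSPEND        = 60      -- seconds of inactivity before suspending
    AUTO_RESUME         = TRUE
    MIN_CLUSTER_COUNT   = 1
    MAX_CLUSTER_COUNT   = 3
    SCALING_POLICY      = 'STANDARD'
    INITIALLY_SUSPENDED = TRUE;

-- Tighten auto-suspend on an existing, oversized warehouse.
ALTER WAREHOUSE ELT_WH SET AUTO_SUSPEND = 120;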

Step 2: optimize your queries and workloads

Inefficient queries are a primary driver of wasted compute credits. A poorly written query can run for minutes on a large warehouse when a well-written one could finish in seconds on a smaller one.

  • Use the Query Profile: This is your best friend for optimization. Before trying to fix a slow query, run it and then analyse its performance in the Query Profile. This tool provides a detailed graphical breakdown of each step of query execution, showing you exactly where the bottlenecks are (e.g., a table scan that should be a prune, an exploding join).
  • Avoid SELECT *: Only select the columns you actually need. Pulling unnecessary columns increases I/O and can prevent Snowflake from performing "column pruning," a key optimization technique.
  • Be careful with JOINs: Ensure you are joining on keys that are well-distributed. Accidental Cartesian products (cross-joins) are a notorious cause of runaway queries that can burn through credits.
  • Materialize complex views: If you have a complex view that is queried frequently, consider materializing it into a table. While this uses more storage, the compute savings from not having to re-calculate the view on every query can be substantial. Use Materialized Views for this, as Snowflake will automatically keep them up-to-date.

Step 3: manage your data storage lifecycle

While cheaper than compute, storage costs can creep up. Proactive data management is key.

  • Configure Time-Travel Sensibly: Do you really need 90 days of Time-Travel for every table? For staging tables or transient data, a 1-day retention period is often sufficient. Align the Time-Travel window with your actual business requirements for data recovery.
  • Use Transient and Temporary Tables: For data that doesn't need to be recovered (like staging data from an ELT process), use transient tables. These tables have no Fail-safe period and only a Time-Travel period of 0 or 1 day, which can significantly reduce your storage footprint for intermediate data (see the sketch after this list).
  • Periodically Review and Purge Data: Implement a data retention policy and periodically archive or delete data that is no longer needed for analysis.
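A minimal sketch of the retention settings discussed above; the schema and table names are illustrative.

-- Transient staging table: no Fail-safe and minimal Time-Travel.
CREATE TRANSIENT TABLE staging.orders_raw (
    order_id  NUMBER,
    payload   VARIANT,
    loaded_at TIMESTAMP_NTZ
)
DATA_RETENTION_TIME_IN_DAYS = 0;

-- Reduce Time-Travel on an existing table that doesn't need a long recovery window.
ALTER TABLE analytics.web_events SET DATA_RETENTION_TIME_IN_DAYS = 1;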

Step 4: maximize caching to get free compute

Snowflake has multiple layers of caching that can dramatically reduce credit consumption if leveraged correctly. When a query result is served from a cache, it consumes zero compute credits.

  • The Result Cache: Snowflake automatically caches the results of every query you run. If another user submits the exact same query within 24 hours (and the underlying data has not changed), Snowflake returns the cached result almost instantly without starting a warehouse. This is perfect for popular dashboards where many users view the same report.
  • Local Disk Cache (Warehouse Cache): When a warehouse executes a query, it caches the data it retrieved from storage on its local SSD. If a new query requires some of the same data, it can be read from this much faster local cache instead of remote storage, speeding up the query and reducing compute time. This cache is cleared when the warehouse is suspended.

Step 5: implement robust governance and monitoring

You can't optimize what you can't measure. Use Snowflake's built-in tools to monitor usage and enforce budgets.

  • Set up Resource Monitors: This is your primary safety net. A Resource Monitor can be assigned to one or more warehouses to track their credit consumption. You can configure it to send alerts at certain thresholds (e.g., 75% of budget) and, most importantly, to suspend the warehouse when it hits its limit, preventing runaway spending (see the sketch after this list).
  • Analyse your usage data: Snowflake provides a wealth of metadata in the SNOWFLAKE database, specifically within the ACCOUNT_USAGE schema. Views like WAREHOUSE_METERING_HISTORY, QUERY_HISTORY, and STORAGE_USAGE are invaluable. Query this data to find your most expensive queries, identify your busiest warehouses, and track your storage costs over time.
  • Tag everything for cost allocation: Use Snowflake's tagging feature to assign metadata tags to warehouses, databases, and other objects. You can tag objects by department (finance, marketing), project, or user. This allows you to query the usage views and accurately allocate costs back to the teams responsible, creating accountability.
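The monitoring and tagging items above translate into a few statements. A minimal sketch; the quota, thresholds, and object names are placeholders, and resource monitors are typically created with ACCOUNTADMIN.

USE ROLE ACCOUNTADMIN;

-- Safety net: notify at 75% and suspend the warehouse at 100% of a monthly credit quota.
CREATE RESOURCE MONITOR BI_WH_MONITOR
    WITH CREDIT_QUOTA = 500
    FREQUENCY = MONTHLY
    START_TIMESTAMP = IMMEDIATELY
    TRIGGERS
        ON 75  PERCENT DO NOTIFY
        ON 100 PERCENT DO SUSPEND;

ALTER WAREHOUSE BI_WH SET RESOURCE_MONITOR = BI_WH_MONITOR;

-- Cost allocation: tag the warehouse so usage can be attributed to a team.
-- Tags are schema-level objects; a dedicated governance schema is assumed here.
CREATE TAG governance.tags.cost_center;
ALTER WAREHOUSE BI_WH SET TAG governance.tags.cost_center = 'marketing';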

Bringing it all together

So what’s your next step? These five practices will help you reduce costs and build smarter habits, but turning them into measurable savings at scale takes more than a checklist. It requires the right expertise and execution.

For example, a leading financial services company was spending more than $800K per month on cloud costs with no clear view of where the money was going. Within 90 days of working with our experts, they gained full visibility, reduced ingestion latency by 80%, and built a governed, AI-ready platform while bringing costs back under control.

👉 Read the full case study here

At Snowstack, we bring certified Snowflake expertise and proven delivery methods to help enterprises cut spend, improve performance, and prepare their platforms for AI and advanced analytics.

Ready to make your Snowflake environment cost-efficient and future-proof?

Blog
5 min read

Best practices for protecting your data: Snowflake role hierarchy


One stolen password can bring down an entire enterprise. The 2024 Snowflake breaches revealed how fragile weak access controls are, with 165 organizations and millions of users affected. The breaches were not the result of advanced attacks. They happened because stolen passwords went unchecked, and multi-factor authentication was missing. As businesses move more of their data to the cloud and centralize it on platforms like Snowflake, a critical question emerges: who should have access, and how do you manage it at scale without slowing the business or weakening security?

In this article, we’ll break down the Snowflake Role Hierarchy, explain why it matters, and share best practices for structuring roles that support security, compliance, and day-to-day operations.

What is Snowflake’s role hierarchy?

Snowflake’s role hierarchy is a structured framework that defines how permissions and access controls are organized within the platform. In Snowflake, access to data and operations is governed entirely by roles. Using the Role-Based Access Control (RBAC) model, you grant privileges to roles, and then assign users to those roles, simplifying administration, ensuring consistency, and making audit access easier. RBAC is generally recommended for production environments and enterprise-level governance.

The hierarchy operates on a parent-child relationship model where higher-level roles inherit privileges from subordinate roles, creating a tree-like structure. This structure provides granularity, clarity, and reusability, but it requires thoughtful planning to avoid sprawl or over-permissioned users.

Core components of Snowflake RBAC

  • Roles: The fundamental building blocks that encapsulate specific privileges
  • Privileges: Defined levels of access to securable objects (databases, schemas, tables)
  • Users: Identities that can be assigned roles to access resources
  • Securable Objects: Entities like databases, tables, views, and warehouses that require access control
  • Role Inheritance: The mechanism allowing roles to inherit privileges from other roles

Understanding Snowflake's system-defined roles

Understanding the default role structure is crucial for building secure hierarchies:

ACCOUNTADMIN

  • Top-level role that encapsulates SYSADMIN and SECURITYADMIN
  • Reserved for account-level administration and should be limited to a small number of users protected with MFA

SYSADMIN

  • Owns and manages account objects such as databases, warehouses, and schemas
  • Recommended parent for all custom roles

SECURITYADMIN

  • Manages user and role grants
  • Controls role assignment and privilege distribution
  • Essential for maintaining RBAC governance

Custom roles

  • Created for specific teams or functions within an organization (e.g., ANALYST_READ_ONLY, ETL_WRITER).

Best practices for designing a secure Snowflake role hierarchy

A well-structured role hierarchy minimizes risk, supports compliance, and makes onboarding/offboarding easier. Here's how to do it right:

1. Follow the Principle of Least Privilege

Grant only the minimum required permissions for each role to perform its function. Avoid blanket grants like GRANT ALL ON DATABASE.

Do this:

  • Specific, targeted grants
  • Avoid cascading access down the role tree unless absolutely needed
  • Regularly audit roles to ensure they align with actual usage
GRANT SELECT ON TABLE SALES_DB.REPORTING.MONTHLY_REVENUE TO ROLE ANALYST_READ;
GRANT USAGE ON SCHEMA SALES_DB.REPORTING TO ROLE ANALYST_READ;
GRANT USAGE ON DATABASE SALES_DB TO ROLE ANALYST_READ;

Not this:

  • Overly broad permissions
GRANT ALL ON DATABASE SALES_DB TO ROLE ANALYST_READ;

Why does it matter?

Least privilege prevents accidental (or malicious) misuse of sensitive data. It also supports data governance and compliance with various regulations like GDPR or HIPAA.

2. Use a layered role design

Design your roles using a layered and modular approach, often structured like this:

  • Functional Roles (what the user does):
CREATE ROLE ANALYST_READ;
CREATE ROLE ETL_WRITE;
CREATE ROLE DATA_SCIENTIST_ALL;
  • Environment Roles (where the user operates)
CREATE ROLE DEV_READ_WRITE;
CREATE ROLE PROD_READ_ONLY;

  • Composite or Team Roles (group users by department or team, assigning multiple functional/environment roles under one umbrella):
CREATE ROLE MARKETING_TEAM_ROLE;
GRANT ROLE PROD_READ_ONLY TO ROLE MARKETING_TEAM_ROLE;
GRANT ROLE ANALYST_READ TO ROLE MARKETING_TEAM_ROLE;

3. Avoid granting privileges directly to users

Always assign privileges to roles and not users. Then, assign users to those roles.

Why does it matter?

This keeps access transparent and auditable. If a user leaves or changes teams, simply revoke or change the role. There’s no need to hunt down granular permissions.
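A quick illustration with hypothetical names:

-- Privileges live on the role; users simply receive or lose the role.
GRANT ROLE ANALYST_READ TO USER jane_doe;

-- Offboarding or a team change is then a single, auditable statement.
REVOKE ROLE ANALYST_READ FROM USER jane_doe;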

4. Establish consistent naming conventions

Enforce naming conventions as consistent role and object naming makes automation and governance far easier to scale.

Recommended Naming Pattern:

  • Access Roles: {ENV}_{DATABASE}_{ACCESS_LEVEL} (e.g., PROD_SALES_READ)
  • Functional Roles: {FUNCTION}_{TEAM} (e.g., DATA_ANALYST, ETL_ENGINEER)
  • Service Roles: {SERVICE}_{PURPOSE}_ROLE (e.g., FIVETRAN_LOADER_ROLE)

5. Use separate roles for Administration vs. Operations

Split roles that manage infrastructure (e.g., warehouses, roles, users) from roles that access data.

  • Admins: SYSADMIN, SECURITYADMIN
  • Data teams: DATA_ENGINEER_ROLE, ANALYST_ROLE, etc.

Why does it matter? This separation of duties limits the potential impact of security incidents and supports audit compliance. Administrators should not have access to sensitive data unless it's absolutely necessary for their role.

6. Secure the top-level roles

Roles like ACCOUNTADMIN and SECURITYADMIN should be assigned to the fewest people possible, protected with MFA, and monitored for any usage.

Implementation Checklist:

  • Limit ACCOUNTADMIN to 2-3 emergency users maximum
  • Enable MFA for all administrative accounts
  • Set up monitoring and alerting for admin role usage (see the query after this list)
  • Regular access reviews and privilege audits
  • Document and justify all administrative access
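For the monitoring item above, a simple starting point is to review recent activity performed under the top-level roles. A minimal sketch; ACCOUNT_USAGE views lag slightly behind real time.

-- Queries executed with ACCOUNTADMIN or SECURITYADMIN in the last 7 days.
SELECT
    user_name,
    role_name,
    query_text,
    start_time
FROM snowflake.account_usage.query_history
WHERE role_name IN ('ACCOUNTADMIN', 'SECURITYADMIN')
  AND start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
ORDER BY start_time DESC;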

Monitoring, auditing & compliance: keeping your Snowflake hierarchy healthy

Even the best-designed role trees can get messy over time. Here’s how to maintain security:

1. Regular access reviews

Implement quarterly access reviews to maintain security hygiene:

  • Role Effectiveness Analysis: Identify unused or over-privileged roles
  • User Access Validation: Verify users have appropriate role assignments
  • Privilege Scope Review: Ensure roles maintain least privilege principles
  • Compliance Mapping: Document role mappings to business functions

2. Logging and monitoring

Use Access History and Login History in Snowflake to track which objects are being accessed, by whom, and when, and to spot unusual login activity.
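For example, a periodic check of failed logins is one of the simplest signals to watch. A minimal sketch against the LOGIN_HISTORY view; retention and latency limits apply.

-- Failed login attempts over the last 7 days, grouped by user.
SELECT
    user_name,
    COUNT(*) AS failed_attempts
FROM snowflake.account_usage.login_history
WHERE is_success = 'NO'
  AND event_timestamp >= DATEADD('day', -7, CURRENT_TIMESTAMP())
GROUP BY user_name
ORDER BY failed_attempts DESC;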

3. Onboarding/offboarding automation

Implement automation tools or scripts to efficiently manage role assignments during employee transitions.

4. Object Tagging for enhanced security

Use object tagging to classify sensitive data and control access accordingly.
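A minimal sketch of tag-based classification; the tag, schema, and table names are illustrative.

-- Create a classification tag and apply it to a sensitive column.
CREATE TAG governance.tags.pii_type
    ALLOWED_VALUES 'EMAIL', 'PHONE', 'NONE';

ALTER TABLE sales_db.reporting.customers
    MODIFY COLUMN email SET TAG governance.tags.pii_type = 'EMAIL';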

Measuring RBAC Success: Key Performance Indicators

1. Security Metrics

  • Access Review Coverage: % of roles reviewed quarterly
  • Privilege Violations: Number of excessive privilege grants identified
  • Failed Authentication Attempts: Monitor for unauthorized access patterns
  • Role Utilization Rate: % of active roles vs. total created roles

2. Operational Metrics

  • User Onboarding Time: Average time to provision new user access
  • Role Management Efficiency: Time to modify/update role permissions
  • Audit Response Time: Speed of access review and remediation
  • Automation Coverage: % of role operations automated vs. manual

3. Compliance Metrics

  • SOC 2 Readiness: Role hierarchy documentation completeness
  • GDPR/Data Privacy: Data access control effectiveness
  • Industry Compliance: Sector-specific requirement adherence
  • Change Management: Role modification approval and documentation

Future-Proofing Your RBAC Strategy

The way you manage access today will define how secure and scalable your Snowflake environment is tomorrow. The strength of Snowflake’s RBAC model lies in its flexibility, but that power comes with responsibility. As AI features mature, as multi-cloud deployments become the norm, and as regulators tighten expectations around data privacy, static role hierarchies quickly fall behind. A poorly structured role hierarchy can lead to data leaks, audit failures, higher operational costs, and stalled innovation.

At Snowstack, we specialize in building RBAC strategies that are not only secure today but ready for what’s next. Our team of Snowflake-first engineers has designed role models that scale across continents, safeguard sensitive data for regulated industries, and enable AI without exposing critical assets. We continuously monitor Snowflake’s roadmap and fold new security capabilities into your environment before they become business risks.

Don’t wait for the next breach to expose the cracks in your access controls. Let’s design an RBAC strategy that keeps you secure, compliant, and future-ready.

👉 Book a free RBAC assessment

FAQ: Snowflake role hierarchy best practices

Q: What's the difference between RBAC and UBAC in Snowflake?

RBAC provides scalability and centralized control by granting privileges to roles, which are then assigned to users. UBAC allows privileges to be assigned directly to individual users and is intended for collaborative scenarios like building Streamlit applications.

Q: How many roles should I create in Snowflake?

Follow the "Role of Three" principle: create Access Roles (data-centric), Functional Roles (business-centric), and Service Roles (system-centric). This approach avoids role explosion while maintaining necessary granularity.

Q: Should I grant privileges directly to users?

Always assign privileges to roles and not users. Then, assign users to those roles. This keeps access transparent and auditable.

Q: What's the best way to manage role inheritance?

Design with hierarchy in mind - role ownership and grant structure should align with your intended control model. Map business functions to role layers and ensure clear inheritance paths.

Q: How do I handle emergency access situations?

Create emergency “break-glass” roles with elevated privileges that are heavily monitored and logged, require additional approval workflows, automatically expire after a set period of time, and immediately notify the security team when activated.

Q: How often should I review role permissions?

Conduct comprehensive access reviews every quarter, perform monthly spot checks on high-privilege administrative roles, service accounts, recently modified permissions, and roles tied to employees who have left the company.

Q: Can I automate Snowflake role management?

Yes, automation is critical for scaling. Create stored procedures for role provisioning, use CI/CD pipelines for role deployment, and integrate with identity providers for user lifecycle management.

Q: How do I ensure compliance with data privacy regulations?

Classify and tag sensitive data, enforce row-level security and column masking, maintain detailed audit logs with supporting access documentation, run regular compliance assessments and gap analyses, and document the business justification for every access role granted.


Transform your data with Snowflake

You don't need to hire a data army or wait months to see results. Our Snowflake specialists will get you up and running fast, so you can make better decisions, cut costs, and beat competitors who are still stuck with spreadsheets and legacy systems.
