Snowflake

This article describes how to configure your Snowflake account to allow Birdie to export data to a dedicated database and schema.

Prerequisite: Before configuring the export integration, you should already have completed the setup described in Data Ingestion with Snowflake. The steps below assume a Birdie user already exists and has access to a warehouse.


Overview

Birdie can load processed data (e.g., enriched feedback, areas, opportunities) into your Snowflake account. We recommend creating:

  • A dedicated write role (BIRDIE_WRITE)

  • A separate database and schema for Birdie exports

This isolates write access from your existing data.

Step 1 – Create a write role for Birdie

Create a dedicated role for write operations:

CREATE ROLE BIRDIE_WRITE;

Step 2 – Grant the write role to the Birdie user

Assuming you have already created a Birdie user (e.g., BIRDIE_USER) as part of the import setup, grant the write role to that user:

GRANT ROLE BIRDIE_WRITE TO USER BIRDIE_USER;

Step 3 – Create a dedicated database and schema for exports

Create a database and schema for Birdie to write to:
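Using the example names referenced later in this guide (adjust to your own naming conventions):

```sql
-- Example names; substitute your own if preferred.
CREATE DATABASE BIRDIE_EXPORT_DB;
CREATE SCHEMA BIRDIE_EXPORT_DB.BIRDIE_EXPORT_SCHEMA;
```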

Step 4 – Grant required privileges to the write role

Grant the necessary privileges for Birdie to create and manage tables, internal stages, and file formats:
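A minimal set of grants, assuming the example object names used elsewhere in this guide, could look like:

```sql
-- Assumes the example names BIRDIE_EXPORT_DB, BIRDIE_EXPORT_SCHEMA,
-- BIRDIE_WRITE, and BIRDIE_WH from this guide.
GRANT USAGE ON DATABASE BIRDIE_EXPORT_DB TO ROLE BIRDIE_WRITE;
GRANT USAGE ON SCHEMA BIRDIE_EXPORT_DB.BIRDIE_EXPORT_SCHEMA TO ROLE BIRDIE_WRITE;

-- Allow Birdie to create and manage tables, internal stages, and file formats.
GRANT CREATE TABLE, CREATE STAGE, CREATE FILE FORMAT
  ON SCHEMA BIRDIE_EXPORT_DB.BIRDIE_EXPORT_SCHEMA TO ROLE BIRDIE_WRITE;

-- The role also needs a warehouse to run loads.
GRANT USAGE ON WAREHOUSE BIRDIE_WH TO ROLE BIRDIE_WRITE;
```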

Step 5 – Share export configuration with Birdie

Share the following details securely with Birdie:

  • Export database name (e.g., BIRDIE_EXPORT_DB)

  • Export schema name (e.g., BIRDIE_EXPORT_SCHEMA)

  • Write role name (e.g., BIRDIE_WRITE)

If not already shared during the data ingestion setup, also include:

  • Account identifier (e.g., <ACCOUNT_IDENTIFIER>)

  • User name (e.g., BIRDIE_USER)

  • Warehouse name (e.g., BIRDIE_WH)

  • Password or OAuth client credentials (client ID/secret, token endpoint)

Birdie will configure the export connector and confirm once the integration is active.

If you are unsure how to generate or obtain any of these items, refer to the Snowflake data ingestion setup.

How the Export Works

Birdie uses Snowflake's bulk loading pipeline:

  1. Stage creation: A temporary stage holds the CSV file.

  2. File upload: The CSV is uploaded via PUT.

  3. Schema inference: Table schema is inferred from CSV headers using INFER_SCHEMA.

  4. Data loading: Data is loaded into a temporary table via COPY INTO with column name matching.

  5. Merge: For incremental loads, rows with matching primary keys are deleted and re-inserted.

  6. Commit: The process runs in a transaction for atomicity.
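Conceptually, the steps above correspond to Snowflake statements like the following. This is an illustrative sketch only (Birdie runs equivalent statements automatically), and the object names — TMP_STAGE, CSV_FMT, feedback.csv, FEEDBACK_TMP — are hypothetical:

```sql
USE SCHEMA BIRDIE_EXPORT_DB.BIRDIE_EXPORT_SCHEMA;

-- 1. Stage creation
CREATE TEMPORARY STAGE TMP_STAGE;
CREATE FILE FORMAT CSV_FMT TYPE = CSV PARSE_HEADER = TRUE;

-- 2. File upload (PUT runs client-side, via the driver)
PUT file:///tmp/feedback.csv @TMP_STAGE;

-- 3. Schema inference from the CSV headers
CREATE TEMPORARY TABLE FEEDBACK_TMP USING TEMPLATE (
  SELECT ARRAY_AGG(OBJECT_CONSTRUCT(*))
  FROM TABLE(INFER_SCHEMA(
    LOCATION => '@TMP_STAGE',
    FILE_FORMAT => 'CSV_FMT'
  ))
);

-- 4. Data loading with column name matching
COPY INTO FEEDBACK_TMP
  FROM @TMP_STAGE
  FILE_FORMAT = (FORMAT_NAME = 'CSV_FMT')
  MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;

-- 5–6. For incremental loads, rows in the target table with matching
-- primary keys are then deleted and re-inserted inside a transaction.
```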

Exported Data Format

Birdie creates one table per CSV file. Table and column naming follows our standard CSV export conventions.

Data is de-duplicated on each load, so it is ready to use as-is. For details on joining tables, refer to our data exports documentation.

Some data (e.g. custom fields) may require additional handling due to their object-like nature.
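For example, if a custom-fields column arrives as a JSON-encoded string (an assumption — check the actual export), it could be unpacked with PARSE_JSON and FLATTEN. The table and column names here (FEEDBACK, CUSTOM_FIELDS) are hypothetical:

```sql
-- Hypothetical: assumes a FEEDBACK table with a JSON-encoded CUSTOM_FIELDS column.
SELECT f.ID,
       cf.KEY   AS FIELD_NAME,
       cf.VALUE AS FIELD_VALUE
FROM FEEDBACK f,
     LATERAL FLATTEN(INPUT => PARSE_JSON(f.CUSTOM_FIELDS)) cf;
```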
