How to export with Data Forwarding

Overview

While our Standard Data Export is perfect for manual, on-demand analysis, Data Forwarding is designed for teams that need to integrate Birdie's insights directly into their internal data infrastructure.

With Data Forwarding, Birdie automatically delivers a full copy of your data to your preferred cloud storage or database on a daily schedule.

Supported Destinations

This section explains how to send data from Birdie to external platforms, so you can further enhance your experience and take full ownership of the insights generated with Birdie.

Birdie can forward data to a variety of systems in your infrastructure. You can configure your export to land in destinations such as cloud object storage (e.g., Amazon S3), SFTP servers, or data warehouses (e.g., Snowflake).

Permissions Requirement: To enable this feature, you must provide Birdie with credentials that have Write permissions for your selected destination. Birdie does not require delete or administrative permissions.

Export Schedule & Behavior

To ensure your internal dashboards are always up-to-date, the forwarding process is fully automated:

  • Frequency: Exports trigger once every 24 hours.

  • Customization: The specific time and timezone can be configured to align with your organization's data ingestion windows (e.g., 2:00 AM UTC).

  • Incremental or full export support: Birdie supports full exports for lighter workloads and incremental exports for clients with heavier data needs.

File Structure & Schema

All exported data follows the exact same schema as our manual CSV exports. For a detailed breakdown of fields and entities, please refer to the Data Export Schema Documentation. You can also follow this documentation to export a sample manually.

When delivering to file stores, files are organized by date to ensure data versioning and easy historical access.

The structure follows this pattern: each day's export lands in a {yyyy-mm-dd} folder containing one CSV file per entity.

Included Files:

  • feedbacks.csv

  • areas.csv

  • opportunities.csv

  • area_opportunities.csv

  • collections.csv

  • sentences.csv

  • messages.csv

  • segments.csv

When delivering to data warehouses or database systems, Birdie consolidates each file into a deduplicated, ready-to-use table. Table names match the file names (e.g., feedbacks.csv becomes a feedbacks table), and column names preserve the schema used by the manual export.

Data Retention & Pipelines

Birdie prioritizes data integrity and persistence. Our system follows a Write-Only philosophy:

  1. No Overwrites: Each day's export is stored in a new date-stamped folder. Birdie will never delete or modify files from previous days.

  2. Historical Record: This ensures you have a reliable historical archive of your Birdie data over time.

Loading Data Into Your Warehouse / Database

If your integration uses file-based delivery, you will need to load the files into tables in your warehouse or database yourself.

How you do this depends on whether you're using incremental exports or full exports.

Full exports

For files that follow a full export, we recommend the following "Truncate and Load" strategy for your data pipelines:

  1. Target: Create a dedicated table for each entity type (e.g., an areas table).

  2. Refresh: Before loading the new day's CSV, truncate the existing table.

  3. Ingest: Load the latest CSV from the {yyyy-mm-dd} folder.

  4. Verify: Use the date in the file path to verify that the sync was successful.
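The "Truncate and Load" steps above can be sketched as follows. This is a minimal illustration using SQLite and Python's csv module; the truncate_and_load helper and the areas table are hypothetical, and most warehouses offer a faster bulk-load command (e.g., COPY) that you should prefer in production.

```python
import csv
import sqlite3


def truncate_and_load(conn: sqlite3.Connection, table: str, csv_path: str) -> None:
    """Replace the full contents of `table` with the rows in `csv_path`."""
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)  # column names come from the CSV header row
        cols = ", ".join(f'"{c}"' for c in header)
        marks = ", ".join("?" for _ in header)
        cur = conn.cursor()
        # Refresh: truncate yesterday's snapshot before loading today's.
        cur.execute(f'DELETE FROM "{table}"')
        # Ingest: load the latest full export.
        cur.executemany(f'INSERT INTO "{table}" ({cols}) VALUES ({marks})', reader)
    conn.commit()
```

Because the table is truncated first, re-running the load for the same day is safe: the table always ends up holding exactly one copy of the latest export.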

As of today, for newly configured exports, this flow is required for the following files:

  • areas.csv

  • collections.csv

  • area_opportunities.csv

Incremental exports

The following files support incremental based exports:

  • feedbacks.csv

  • opportunities.csv

  • sentences.csv

  • messages.csv

  • segments.csv

These files are large, so they benefit greatly from the incremental export behaviour: each daily export contains only the rows for feedbacks that were updated since the last export.

Clients using the legacy export may receive these files as full exports.

For native connectors that export into databases (e.g., Snowflake), Birdie handles the consolidation logic. However, if you choose to receive the files via a file integration (e.g., S3 or SFTP), you will have to consolidate these files into a warehouse of your choice.

To maintain a consolidated dataset, once you've selected a database solution, your processing logic must:

  1. Read the new export file (e.g., feedbacks.csv)

  2. Delete existing rows in your consolidated table where the delete key matches any row in the new file

  3. Insert all rows from the new file
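The three-step merge above can be sketched in Python against SQLite. The merge_incremental helper and the table below are illustrative, not part of Birdie's tooling; in a real warehouse you would typically stage the file and run DELETE/INSERT or a MERGE statement instead.

```python
import csv
import sqlite3


def merge_incremental(
    conn: sqlite3.Connection, table: str, csv_path: str, delete_key: str
) -> None:
    """Apply an incremental export file to a consolidated table."""
    # Step 1: read the new export file.
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        rows = list(reader)
    key_idx = header.index(delete_key)
    keys = {(row[key_idx],) for row in rows}
    cur = conn.cursor()
    # Step 2: delete every existing row whose delete key appears in the file.
    cur.executemany(f'DELETE FROM "{table}" WHERE "{delete_key}" = ?', keys)
    # Step 3: insert all rows from the file.
    cols = ", ".join(f'"{c}"' for c in header)
    marks = ", ".join("?" for _ in header)
    cur.executemany(f'INSERT INTO "{table}" ({cols}) VALUES ({marks})', rows)
    conn.commit()
```

Deleting before inserting makes the merge idempotent and correctly handles feedbacks whose child entities shrank between exports.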

Use the following column for the delete operation on each file:

File                    Delete Key Column

feedbacks.csv           ID
opportunities.csv       Feedback ID
sentences.csv           Feedback ID
messages.csv            Feedback ID
segments.csv            Feedback ID

The delete must be performed by Feedback ID because Birdie re-exports all of the opportunities, messages, sentences, and segments for any change made to a feedback. Since one feedback may have N of these entities, it's important to completely remove the existing rows for those entities before inserting, in order to properly handle deletes. Otherwise, an insert performed without a delete may leave duplicate data, or relationships that exist in your copy but were deleted within Birdie.
