Security Measures in Birdie’s GenAI Solution
Mariana Carrero Rodrigues
Last Update 한 달 전
Birdie employs a combination of Generative AI models to ensure optimal performance and security in its solution. The currently integrated models include:
OpenAI GPT (via API)
Anthropic Claude (via API)
Google Gemini (via API)
Meta LLaMA (self-hosted)
Whisper (self-hosted)
In addition to public models, Birdie also develops and uses proprietary models based on SLMs (Self-Hosted Language Models), such as:
DeBERTa (documentation)
RoBERTa (documentation)
These models are utilized for specific natural language processing tasks and to enhance the Birdie solution.
Generative AI technology is integrated into Birdie in various ways, including:
Data processing via data pipelines
Integration with RAG (Retrieval-Augmented Generation) in the application interface
Birdie’s Generative AI is mainly used for:
Data enrichment: Automatic classification and extraction of relevant information.
In-app assistant (RAG): Generating contextualized responses based on proprietary data.
Birdie ensures that data used in tests (Proof of Concept - PoC) will not be used for refining or training AI models. The training process follows best practices, including:
Base model training only occurs with a trigger and human supervision.
Use of synthetic data for training.
Implementation of cross-validation techniques (10-fold cross-validation, train-validation-test).
Continuous LLM adaptation via RAG, allowing customization based on client feedback without affecting the global model.
Models are evaluated using client data, but neither Birdie nor third-party providers train on client data.
To ensure the safe use of Generative AI, Birdie implements several guardrails in its solution, including:
- Using prompt engineering techniques to mitigate Prompt Injection.
Implementing data cleaning and filtering to reduce Indirect Prompt Injection.
- Limited data access by design: The LLM has no autonomy to apply arbitrary filters to data.
Sensitive data removal during ingestion: The LLM does not have access to confidential customer information.
Guiding responses based on real data to minimize hallucinations and ensure accurate information.
Birdie adopts a robust set of measures to ensure that Generative AI is used securely, effectively, and in alignment with industry best practices. For more information on our security and privacy policies, please contact our support team.
