Birdie’s GenAI Security
GenAI Models Used
Birdie combines several Generative AI models to balance performance and security in its solution. The currently integrated models are:
OpenAI GPT (via API)
Anthropic Claude (via API)
Google Gemini (via API)
Meta LLaMA (self-hosted)
Whisper (self-hosted)
Proprietary Models and Foundation Models
In addition to public models, Birdie develops and uses proprietary models built on self-hosted SLMs (Small Language Models), such as:
DeBERTa
RoBERTa
These models are utilized for specific natural language processing tasks and to enhance the Birdie solution.
GenAI Integration in the Solution
Generative AI technology is integrated into Birdie in various ways, including:
Data processing via data pipelines
Integration with RAG (Retrieval-Augmented Generation) in the application interface
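The RAG integration above can be sketched as a two-step flow: retrieve the most relevant proprietary snippet, then assemble a grounded prompt for the LLM. This is a minimal illustration in plain Python; the function names and keyword-overlap scoring are assumptions for the sketch, not Birdie's actual implementation.

```python
# Minimal RAG sketch: retrieve the most relevant snippet, then build a
# grounded prompt. Names and scoring are illustrative only.

def retrieve(query: str, documents: list[str]) -> str:
    """Return the document with the largest keyword overlap with the query."""
    query_terms = set(query.lower().split())
    return max(documents, key=lambda d: len(query_terms & set(d.lower().split())))

def build_prompt(query: str, context: str) -> str:
    """Embed the retrieved context so the model answers from proprietary data."""
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Churn in Q3 was driven by onboarding friction.",
    "The billing page redesign shipped in August.",
]
prompt = build_prompt("What drove churn in Q3?", retrieve("What drove churn in Q3?", docs))
```

A production pipeline would replace the keyword overlap with vector-embedding similarity search, but the shape of the flow is the same.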
GenAI Use Cases in Birdie
Birdie’s Generative AI is mainly used for:
Data enrichment: Automatic classification and extraction of relevant information.
In-app assistant (RAG): Generating contextualized responses based on proprietary data.
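As an illustration of the data-enrichment use case, a classification-and-extraction step can look like the sketch below. The labels, cue words, and the `TICKET-` identifier pattern are hypothetical examples, not Birdie's real taxonomy.

```python
import re

# Illustrative data enrichment: classify a piece of text and extract a
# structured field from it. Labels and patterns are hypothetical.

LABELS = {
    "bug": ["crash", "error", "broken"],
    "feature_request": ["wish", "would like", "please add"],
}

def enrich(text: str) -> dict:
    """Attach a category label and an extracted ticket ID to raw text."""
    lowered = text.lower()
    label = next(
        (name for name, cues in LABELS.items() if any(c in lowered for c in cues)),
        "other",
    )
    match = re.search(r"TICKET-\d+", text)
    return {"label": label, "ticket": match.group(0) if match else None}
```

In practice the rule-based classifier would be replaced by one of the self-hosted models mentioned above, with the same enriched-record output shape.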
Data Usage Policy and Model Training
Birdie ensures that data used during Proof of Concept (PoC) tests is not used to fine-tune or train AI models. The training process follows best practices, including:
Base model training occurs only when explicitly triggered and under human supervision.
Use of synthetic data for training.
Implementation of cross-validation techniques (10-fold cross-validation and train/validation/test splits).
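The 10-fold split mentioned above can be sketched as follows. This is plain Python for illustration; production code would typically use a library such as scikit-learn's `KFold`.

```python
# Sketch of a k-fold cross-validation split: each fold serves once as the
# validation set while the remaining folds form the training set.

def k_fold_indices(n_samples: int, k: int = 10) -> list[tuple[list[int], list[int]]]:
    """Return (train_indices, validation_indices) pairs for k folds."""
    indices = list(range(n_samples))
    fold_size = n_samples // k
    folds = []
    for i in range(k):
        start = i * fold_size
        stop = (i + 1) * fold_size if i < k - 1 else n_samples
        val = indices[start:stop]
        train = indices[:start] + indices[stop:]
        folds.append((train, val))
    return folds
```

Each sample appears in exactly one validation fold, so every data point is used for both training and validation across the 10 runs.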
Continuous LLM adaptation via RAG, allowing customization based on client feedback without affecting the global model.
Models are evaluated using client data, but neither Birdie nor third-party providers train on client data.
Security Measures and Guardrails
To ensure the safe use of Generative AI, Birdie implements several guardrails in its solution, including:
Prevention of Prompt Injection Attacks
Using prompt engineering techniques to mitigate Prompt Injection.
Implementing data cleaning and filtering to reduce Indirect Prompt Injection.
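A minimal sketch of the cleaning-and-filtering step might look like the following. The specific patterns are examples only and are not Birdie's actual rule set; real deployments layer several defenses rather than relying on pattern matching alone.

```python
import re

# Illustrative input-sanitization guardrail against prompt injection:
# strip phrases commonly used to override the model's instructions.
# The patterns below are examples, not a complete defense.

SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"you are now", re.IGNORECASE),
    re.compile(r"system prompt", re.IGNORECASE),
]

def sanitize(user_input: str) -> str:
    """Replace suspicious instruction-override phrases before prompting."""
    cleaned = user_input
    for pattern in SUSPICIOUS_PATTERNS:
        cleaned = pattern.sub("[removed]", cleaned)
    return cleaned
```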
Access Control and Privacy
Limited data access by design: The LLM has no autonomy to apply arbitrary filters to data.
Sensitive data removal during ingestion: The LLM does not have access to confidential customer information.
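Ingestion-time redaction can be sketched as below, so confidential values never reach the LLM. The two patterns shown are illustrative; real pipelines use dedicated PII-detection tooling covering many more data types.

```python
import re

# Sketch of sensitive-data removal at ingestion time. Patterns are
# illustrative examples, not a production PII detector.

REDACTIONS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected sensitive values with typed placeholders."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Because redaction happens before indexing, downstream retrieval and generation only ever see the placeholders.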
Hallucination Reduction
Grounding responses in real data to minimize hallucinations and ensure accurate information.
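One way to enforce grounding is a post-generation check that flags answers whose content overlaps too little with the retrieved source data. The token-overlap heuristic and the 0.5 threshold below are assumptions for the sketch, not Birdie's actual method.

```python
# Illustrative grounding check: flag answers that share too few terms
# with the source context. Heuristic and threshold are assumptions.

def is_grounded(answer: str, context: str, threshold: float = 0.5) -> bool:
    """Return True if enough of the answer's terms appear in the context."""
    answer_terms = set(answer.lower().split())
    context_terms = set(context.lower().split())
    if not answer_terms:
        return False
    return len(answer_terms & context_terms) / len(answer_terms) >= threshold
```

Answers failing the check could be regenerated or returned with a caveat instead of being shown as-is.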
Conclusion
Birdie adopts a robust set of measures to ensure that Generative AI is used securely, effectively, and in alignment with industry best practices. For more information on our security and privacy policies, please contact our support team.