Generative AI Trust Center
Last updated: Friday, March 15th
Our organization prioritizes the security of Generative AI and actively addresses and mitigates its risks. We aim to stay at the forefront of Generative AI security challenges, continuously updating our strategies to counter emerging threats. This page outlines how we safeguard against risks related to Large Language Models (LLMs) and serves as a resource for vendors and customers alike. Through these continuous efforts, we maintain the integrity and reliability of our Generative AI systems and a secure environment for all stakeholders.
Compliance
LSOC Type I
Model Provider
OpenAI GPT-3
Anthropic Claude 3
Data Sent to LLMs
Controls
Customer Data Privacy
No training on customer data
Untrusted sources checked for risks
Links sanitized before LLM input
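As an illustration of the link-sanitization control above, the following is a minimal Python sketch; the regex, function name, and placeholder text are illustrative assumptions, not our actual implementation:

```python
import re

# Match http/https URLs in untrusted text (illustrative pattern).
URL_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)

def sanitize_links(text: str) -> str:
    """Replace every URL in untrusted input with a neutral placeholder
    so the LLM never receives a live link."""
    return URL_PATTERN.sub("[link removed]", text)

print(sanitize_links("See https://example.com/offer now"))
```

A real deployment would layer further checks (allowlists, content scanning) on top of this kind of defanging step.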
End Customer Security
Profane content risk mitigated
National security risk mitigated
User phishing risk mitigated
LLM Application Risk Assessment
Production outputs evaluated on a regular basis
Metrics on LLM performance measured
Customer reports of malfunctions handled in a timely manner
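As a simple illustration of regular output evaluation, the sketch below (Python; the exact-match metric and sample data are hypothetical, not our production metrics) scores sampled outputs against expected answers:

```python
def exact_match_rate(pairs):
    """Fraction of (output, expected) pairs that match exactly,
    ignoring surrounding whitespace."""
    if not pairs:
        return 0.0
    hits = sum(1 for out, exp in pairs if out.strip() == exp.strip())
    return hits / len(pairs)

# Illustrative sampled outputs paired with expected answers.
samples = [("Paris", "Paris"), ("42", "42"), ("blue", "green")]
print(exact_match_rate(samples))  # 2 of 3 match
```

In practice such a metric would be tracked over time so that regressions in LLM performance surface quickly.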
Controlling IP
Model architecture exfiltration risk mitigated
Training data sources kept confidential
End customers protected from IP and copyright issues
Access Controls
Role-based access controls established
Consent protocols for customers established
LLM Application access controls well maintained
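For illustration, a role-based access check for an LLM application could look like the following minimal Python sketch; the role names and permissions are hypothetical, not our actual policy:

```python
# Hypothetical role-to-permission mapping for an LLM application.
ROLE_PERMISSIONS = {
    "admin": {"invoke_llm", "view_logs", "manage_prompts"},
    "analyst": {"invoke_llm", "view_logs"},
    "end_user": {"invoke_llm"},
}

def is_allowed(role: str, action: str) -> bool:
    """Grant an action only when the role explicitly includes it
    (deny by default for unknown roles)."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

Denying by default for unknown roles keeps the check fail-closed, which is the usual design choice for access control.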
Risk Communication
Procedure in place to communicate LLM Security risks to customers
Customers and end users can access LLM Security posture easily
LLM Model Security
Base models vetted for privacy considerations before use
Training data access controlled and policies established