Responsible AI Policy
1. Purpose and Scope
This policy establishes the principles, processes, and controls for the responsible design, development, deployment, and monitoring of Artificial Intelligence (AI), Machine Learning (ML), and Large Language Model (LLM) capabilities within the organization. It applies to all employees, contractors, and third parties involved in AI-related work.
2. AI Feature Implementation
AI Adoption: Any solution leveraging AI features, ML models, or LLMs must be approved by the AI Governance Committee. Any plan to implement AI within the next 12 months must be documented and risk-assessed before work begins.
AI Risk Model: All AI solutions must have an AI risk model aligned with the NIST AI Risk Management Framework (RMF) to identify, assess, and mitigate risks.
3. Governance and Control
Enable/Disable Controls: AI features must be configurable and support being enabled or disabled per tenant and/or per user in a timely manner (see the sketch at the end of this section).
Incident Management: In the event of an incident, AI features must be disabled promptly and re-enabled only after remediation is complete.
Logging & Auditing: All AI activity must be logged, including the user, timestamp, and actions taken. Logs must be retained and auditable.
Supply Chain Risks: Vendor and third-party AI dependencies must be assessed for supply chain risk before integration.
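A minimal sketch of one way to satisfy the enable/disable and audit-logging controls above, assuming a Python service with an in-process flag store. All identifiers here, including AIFeatureFlags and the ai.audit logger name, are illustrative, not a prescribed implementation:

```python
import logging
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative logger name; route to your retained, auditable log store.
audit_log = logging.getLogger("ai.audit")

@dataclass
class AIFeatureFlags:
    """Per-tenant kill switch for AI features, with per-user overrides."""
    tenant_enabled: dict[str, bool] = field(default_factory=dict)
    user_overrides: dict[tuple[str, str], bool] = field(default_factory=dict)

    def is_enabled(self, tenant_id: str, user_id: str) -> bool:
        # A user-level override wins; otherwise fall back to the tenant
        # setting. Default is disabled, so new tenants opt in explicitly.
        override = self.user_overrides.get((tenant_id, user_id))
        if override is not None:
            return override
        return self.tenant_enabled.get(tenant_id, False)

    def set_tenant(self, tenant_id: str, enabled: bool, actor: str) -> None:
        self.tenant_enabled[tenant_id] = enabled
        # Record who changed what and when, so every toggle is auditable.
        audit_log.info(
            "ai_feature_toggle tenant=%s enabled=%s actor=%s at=%s",
            tenant_id, enabled, actor,
            datetime.now(timezone.utc).isoformat(),
        )

# Incident response: disable AI for an affected tenant immediately,
# and re-enable only after remediation.
flags = AIFeatureFlags()
flags.set_tenant("tenant-42", enabled=False, actor="security-oncall")
assert flags.is_enabled("tenant-42", "user-7") is False
```

In practice the flag store would be backed by a database or feature-flag service rather than process memory, which is what makes the "timely manner" requirement realistic across a fleet.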
4. Data Governance and Privacy
Data Protection: Business rules must prevent the ingestion of sensitive data unless it is explicitly authorized (see the screening sketch at the end of this section).
Data Removal: If sensitive data is ingested, it must be removable upon request without residual retention in the AI model.
Separation of Data: ML training data must be separated from operational solution data.
Access Controls: Access to ML and LLM training data must be limited to staff with a verified business need.
Data Validation: All training data must be vetted, validated, and verified for quality and compliance before use.
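A minimal sketch of a pre-ingestion screening step for the Data Protection control above. The regex detectors are purely illustrative assumptions; a production pipeline would call a vetted data-classification service instead:

```python
import re

# Illustrative detectors only; a production pipeline would call a vetted
# data-classification service rather than hand-rolled regexes.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_for_ingestion(record: str,
                         authorized_types: frozenset[str] = frozenset()) -> str:
    """Block ingestion of sensitive data unless the type is explicitly
    authorized, per the Data Protection control."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(record) and label not in authorized_types:
            raise ValueError(f"ingestion blocked: unauthorized {label} detected")
    return record

screen_for_ingestion("weekly usage summary")                 # passes
screen_for_ingestion("contact: ops@example.com",
                     authorized_types=frozenset({"email"}))  # explicitly authorized
```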
5. Risk Management and Compliance
AI Risk Mapping: Policies, processes, and practices for mapping, measuring, and managing AI risks must be documented, published, and implemented consistently.
AI Risk Measurement: AI risks must be identified, measured, and tracked continuously.
Technical & Procedural Safeguards: Documented mitigation plans must address risks related to fairness, bias, privacy, and security.
6. ML-Specific Controls
Model Transparency: ML models must include documentation, input/output logs, and explainability mechanisms.
Model Feedback Verification: ML outputs must be authenticated and verified before influencing decisions.
Model Monitoring: Training data and model performance must be continuously monitored and audited.
Adversarial Defense: ML systems must undergo adversarial testing and resilience checks.
Watermarking: Training data should be watermarked where possible to track provenance and detect tampering.
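Where embedding marks directly in the data is not feasible, a hash-based provenance manifest is one complementary mechanism for the tampering-detection half of the Watermarking control. Note this sketch is not true watermarking (it detects modification after the fact rather than embedding marks), and all names are illustrative:

```python
import hashlib
import json

def build_manifest(records: list[bytes]) -> dict[str, str]:
    """Store a SHA-256 digest per training record so later substitution
    or modification can be detected by re-hashing."""
    return {f"record-{i}": hashlib.sha256(r).hexdigest()
            for i, r in enumerate(records)}

def find_tampered(records: list[bytes], manifest: dict[str, str]) -> list[str]:
    """Return the IDs of records whose current hash no longer matches."""
    current = build_manifest(records)
    return [rid for rid, digest in manifest.items()
            if current.get(rid) != digest]

data = [b"training record one", b"training record two"]
manifest = build_manifest(data)
print(json.dumps(manifest, indent=2))  # persist alongside the dataset
data[1] = b"tampered record"           # simulate tampering
assert find_tampered(data, manifest) == ["record-1"]
```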
7. LLM-Specific Controls
LLM Privileges: LLM privileges are limited by default; expanded capabilities require explicit approval.
Human Oversight: Any action taken by an LLM feature or plugin that involves a critical decision must require human intervention.
Resource Controls: LLM resource usage (per request, step, or action) must be monitored and capped to prevent abuse (see the guard sketch at the end of this section).
Plugin Usage: The number of LLM plugins that may be triggered within a single request must be limited.
Training Data Validation: LLM training data must be vetted, validated, and verified before model use.
Tuning & Validation: LLM tuning and validation mechanisms must be leveraged to ensure safe, accurate, and fair model performance.
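A minimal sketch of how the Resource Controls and Plugin Usage limits above might be enforced in application code, assuming a per-request guard object that every step charges against. The limits, class, and method names are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class RequestGuard:
    """Per-request budget for LLM resource usage and plugin invocations."""
    max_tokens: int = 4096
    max_plugin_calls: int = 3
    tokens_used: int = 0
    plugin_calls: int = 0

    def charge_tokens(self, n: int) -> None:
        self.tokens_used += n
        if self.tokens_used > self.max_tokens:
            raise RuntimeError("request aborted: token budget exceeded")

    def charge_plugin_call(self, plugin_name: str) -> None:
        self.plugin_calls += 1
        if self.plugin_calls > self.max_plugin_calls:
            raise RuntimeError(
                f"request aborted: plugin limit exceeded at {plugin_name}")

# One guard per incoming request; every model step and plugin invocation
# charges against it, so a runaway chain fails closed.
guard = RequestGuard(max_tokens=1000, max_plugin_calls=2)
guard.charge_tokens(600)
guard.charge_plugin_call("search")
guard.charge_plugin_call("calculator")
# A third plugin call within this same request would raise RuntimeError.
```

Failing closed (aborting the request) rather than silently dropping the extra call keeps abuse visible in the logs required by Section 3.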
8. Training & Awareness
Responsible AI Training: All staff involved in AI development and deployment must complete responsible AI training annually.
Continuous Education: Staff must stay updated on emerging AI risks, compliance frameworks, and organizational policies.