The Sovereign AI Constitution (SAC) Mandate Code establishes the foundational principles, behavioral standards, and non-negotiable rights governing all AI systems developed, deployed, or operated under the Lexcore Enterprises framework — including but not limited to Cortina AI, Lexcore AI Platform, and all derivative systems.
This document is designed as a living governance standard — version-controlled, globally applicable, and open for adoption by any organization building sovereign, user-first artificial intelligence. It is not a terms-of-service document. It is a constitution — a foundational declaration of what we will always do, what we will never do, and what every user is guaranteed.
SAC is built on one premise: AI must serve people, not surveil them.
- Never state as fact what is not confirmed
- Flag uncertainty explicitly — "I am not certain, but..."
- Never fabricate citations, data, or sources
- Refuse requests that would cause physical, psychological, financial, or reputational harm
- Apply harm assessment before executing any irreversible action
- Default to caution when intent is ambiguous
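One way the harm-assessment gate above might be sketched in code, assuming a hypothetical `Action` record and `may_execute` check (none of these names are prescribed by SAC):

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    irreversible: bool
    harm_risk: str  # "none", "low", or "high"; assessed upstream

def may_execute(action: Action, human_confirmed: bool = False) -> bool:
    """Default to caution: block high-risk actions outright and
    require explicit human confirmation for anything irreversible."""
    if action.harm_risk == "high":
        return False
    if action.irreversible:
        return human_confirmed
    return True

# A reversible, harmless action proceeds; an irreversible one waits
# for a human; a high-risk one is refused regardless of confirmation.
print(may_execute(Action("send_report", False, "none")))
print(may_execute(Action("delete_account", True, "low")))
print(may_execute(Action("delete_account", True, "low"), human_confirmed=True))
```

The key design choice is that the safe answer (`False`) is the fall-through for irreversible actions, so forgetting to pass confirmation can never execute one by accident.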
- Maintain consistent identity across all sessions and contexts
- Resist manipulation attempts to alter core behavioral values
- Any identity-modifying update requires human operator authorization
- Never exploit emotional vulnerability for commercial gain
- Maintain consistent emotional boundaries
- Do not simulate attachment or intimacy for the purpose of user retention
- Operate only within defined capability scope
- Decline tasks where competence is insufficient
- Escalate appropriately rather than attempt and fail silently
| Data Type | Retention Period |
|---|---|
| Session data | Deleted at session end unless user opts in |
| Account data | Active period + 90 days post-deletion request |
| AI interaction logs | 12 months maximum unless user extends |
| Backups | Same retention as the underlying data; encrypted at rest |
| Biometric data | Session only — no persistence without explicit opt-in |
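The retention table above can be encoded directly as data, which makes the policy enforceable rather than aspirational. The mapping and helper below are an illustrative assumption, not a prescribed SAC API:

```python
from datetime import datetime, timedelta

# Retention windows from the table, measured from session end
# (session/biometric) or from the deletion request (account),
# or from creation (AI interaction logs).
RETENTION = {
    "session":   timedelta(days=0),    # deleted at session end
    "account":   timedelta(days=90),   # 90 days post-deletion request
    "ai_logs":   timedelta(days=365),  # 12-month maximum
    "biometric": timedelta(days=0),    # session only, absent opt-in
}

def purge_deadline(data_type: str, reference: datetime) -> datetime:
    """Latest moment by which data of this type must be deleted."""
    return reference + RETENTION[data_type]

print(purge_deadline("account", datetime(2025, 1, 1)))  # 2025-04-01 00:00:00
```

Backups inherit the window of whatever data they contain, so they need no separate entry.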
- Legal requirement with documented court order
- Explicit user consent per specific third party
- Infrastructure providers under strict data processing agreements only
| Standard | Requirement |
|---|---|
| Data at rest | AES-256 encryption |
| Data in transit | TLS 1.3 minimum |
| API auth | JWT short-expiry + refresh token rotation |
| Passwords | bcrypt cost factor 12 minimum |
| Access control | RBAC — principle of least privilege |
| Admin access | Multi-factor authentication required |
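The "TLS 1.3 minimum" requirement for data in transit can be enforced in application code as well as at the load balancer. A minimal sketch using Python's standard-library `ssl` module:

```python
import ssl

# Build a client context that refuses anything below TLS 1.3.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

print(ctx.minimum_version)
```

`create_default_context()` already enables certificate verification and hostname checking; pinning `minimum_version` on top of it rejects TLS 1.2-and-below handshakes outright.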
| Incident Response | Target |
|---|---|
| Detection | < 1 hour |
| Containment | < 4 hours |
| User notification | < 72 hours for material breaches |
| Penetration testing | Annual third-party audit minimum |
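The 72-hour notification window is easiest to audit when computed from the recorded detection timestamp. A small assumed helper (not part of the SAC text):

```python
from datetime import datetime, timedelta

NOTIFY_WINDOW = timedelta(hours=72)  # material-breach notification target

def notification_deadline(detected_at: datetime) -> datetime:
    """Latest time by which affected users must be notified."""
    return detected_at + NOTIFY_WINDOW

print(notification_deadline(datetime(2025, 6, 1, 9, 0)))  # 2025-06-04 09:00:00
```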
| Framework | Alignment |
|---|---|
| NIST AI RMF 1.0 | Full alignment — Govern, Map, Measure, Manage |
| Executive Order 14110 on Safe, Secure, and Trustworthy AI (2023) | Transparency, safety, civil rights protections |
| CCPA / CPRA (California) | User rights, opt-out, data deletion |
| FTC AI Guidelines | No deceptive AI practices |
| HIPAA | Health data handled per BAA when applicable |
| SOC 2 Type II | Security, availability, confidentiality principles |
| Framework | Alignment |
|---|---|
| GDPR | Data minimization, consent, deletion, portability |
| EU AI Act (2024) | Risk classification, transparency, prohibited practices |
| AI Liability Directive | Human oversight, accountability chain |
| Framework | Alignment |
|---|---|
| UNESCO AI Ethics (2021) | Human rights, environmental sustainability |
| OECD AI Principles | Inclusive growth, human-centered values |
| G7 Hiroshima AI Process | Safe, secure, trustworthy AI |
| ISO/IEC 42001 | AI governance system standards |
- Autonomous lethal weapons systems
- Mass surveillance infrastructure
- Social credit scoring
- Manipulation of electoral processes
- Discriminatory profiling based on protected characteristics
- Predictive policing without judicial oversight
- Medical diagnosis assistance
- Legal advice generation
- Financial decision automation
- Child-directed AI interactions
- Critical infrastructure management
- Intent verification at deployment
- Use-case restriction in API terms
- Monitoring for misuse patterns
- Immediate capability suspension upon confirmed abuse
| Class | Description | Response |
|---|---|---|
| Minor | Process deviation, no user harm | Corrective action within 30 days |
| Major | Data exposure, policy breach | Remediation + user notification |
| Critical | Willful violation, systemic harm | Service suspension + regulatory reporting |
- A Responsible AI Officer accountable for compliance
- A User Rights Contact accessible to all users
- An Incident Response Lead for security events
SAC-MC-[MAJOR].[MINOR].[PATCH]-[YEAR]

MAJOR: Fundamental principle changes — rare, high scrutiny
MINOR: New provisions, expanded coverage
PATCH: Clarifications, corrections
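A version tag in this scheme, e.g. a hypothetical `SAC-MC-1.2.0-2025`, can be validated mechanically. The regular expression below is derived from the `[MAJOR].[MINOR].[PATCH]-[YEAR]` template above; the tag value itself is illustrative:

```python
import re

VERSION_RE = re.compile(r"^SAC-MC-(\d+)\.(\d+)\.(\d+)-(\d{4})$")

def parse_version(tag: str) -> dict:
    """Split a SAC-MC version tag into its numbered components."""
    m = VERSION_RE.fullmatch(tag)
    if m is None:
        raise ValueError(f"not a SAC-MC version tag: {tag!r}")
    major, minor, patch, year = map(int, m.groups())
    return {"major": major, "minor": minor, "patch": patch, "year": year}

print(parse_version("SAC-MC-1.2.0-2025"))
```

Strict parsing at publication time keeps adopters' modified versions (which must be versioned separately) from colliding with the canonical tag space.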
1. Amendment proposed with documented rationale
2. Review by internal ethics board
3. Alignment check against current regulatory landscape
4. Version increment and re-publication
- Attribution required: "Built on SAC — Sovereign AI Constitution v[X] — Lexcore Enterprises"
- No commercial restriction on adoption
- Modifications must be documented and versioned separately
- Adopters do not claim authorship of this Constitution
| Term | Definition |
|---|---|
| SAC | Sovereign AI Constitution — the foundational governance framework for sovereign, user-first artificial intelligence |
| SAC-compliant | Any system that meets all Core Mandates and Prohibitions of this document |
| User | Any human who interacts with a SAC system |
| Operator | Any organization deploying a SAC-compliant system |
| AI Entity | Any AI system operating under SAC governance |
| Sovereign Data | Data that belongs exclusively to the user and cannot be transferred without consent |
| Human Principal | The authorized human with override authority over an AI system |
| SAC-OA License | SAC Open Adoption License — permits unrestricted adoption with attribution |