Company AI Standard: Google Gemini
All employees use Gemini via Google Workspace. Licensed Claude users are encouraged to use Claude and Claude Code. This policy defines how to use AI tools safely, especially when connecting them to real systems via MCP or deploying autonomous agents.
1. Purpose & Scope
1.1 Purpose
This policy establishes OpenLM's framework for the responsible and effective use of Artificial Intelligence (AI) tools. It exists to:
- Enable employees to benefit from AI productivity gains safely and confidently
- Protect customer data, proprietary information, and OpenLM's intellectual property
- Ensure compliance with applicable data protection regulations (GDPR and others)
- Define clear accountability for AI-assisted work
- Prevent security incidents caused by misconfigured AI integrations
1.2 Scope
This policy applies to all OpenLM employees, contractors, and third-party personnel. It covers all use of AI tools for work-related tasks (on company devices, personal devices, or cloud services) and all AI integrations, agents, plugins, and automated workflows that interact with OpenLM systems or data.
1.3 Philosophy
OpenLM embraces AI as a force multiplier. We expect employees to use AI tools to improve the quality and speed of their work. This policy is not a restriction on innovation; it is a framework that lets us innovate without creating risk. When in doubt: ask before you connect, ask before you automate, ask before you share.
2. Approved AI Tools
2.1 Company Standard: Google Gemini
Gemini is the approved company-standard AI assistant for all OpenLM employees. As an organization built on Google Workspace, Gemini is deeply integrated with our existing tools: Gmail, Docs, Sheets, Drive, Meet, and Calendar. Gemini in Google Workspace operates under Google's enterprise data protection agreements and does not use company data to train public models.
| Use Case | Tool |
|---|---|
| Drafting emails, proposals, documents | Gemini in Gmail / Docs |
| Summarizing meetings and notes | Gemini in Meet / Docs |
| Data analysis and formula assistance | Gemini in Sheets |
| General Q&A and research | Gemini (web / app) |
| Searching across Drive and email | Gemini with Drive context |
| Presentation drafting | Gemini in Slides |
2.2 Claude & Claude Code (Licensed Users)
Employees holding a company-issued Anthropic Claude license are approved and encouraged to use Claude and Claude Code as a primary AI tool. See Section 5 for full guidance.
2.3 Approval Matrix
| Status | Tools |
|---|---|
| ✅ Approved | Gemini (Google Workspace), Claude (licensed), Claude Code (licensed) |
| ⚠️ Case-by-case | GitHub Copilot (requires IT approval + scope review) |
| ❌ Not approved | ChatGPT (free/personal), Perplexity, Character.AI, unlicensed consumer tools |
Unlicensed employees using free-tier consumer AI tools must treat those as public internet tools: no customer data, no internal code, no confidential information.
3. General Usage Principles
You Own AI Output
Any AI-assisted work that leaves your hands (a commit, an email, a document) is your work. You are responsible for its accuracy, appropriateness, and quality.
Disclose When It Matters
When AI materially contributed to a significant deliverable, note it internally. Do not misrepresent AI-generated content as independently researched human work.
Verify Factual Claims
AI tools hallucinate. For any factual claim that affects decisions, products, customers, or legal matters, verify it independently.
Human Oversight
AI may suggest actions; humans make decisions. Do not implement AI recommendations automatically in production, financial, or customer-facing operations without human review.
4. Data Classification & AI
| Level | Examples | AI Restrictions |
|---|---|---|
| Public | Marketing materials, public docs | Any approved tool |
| Internal | Processes, roadmap, org structure | Gemini or Claude (licensed) |
| Confidential | Customer data, financials, source code | Gemini or Claude (private workspace) |
| Restricted | PII, credentials, privileged legal | No AI without IT/Legal approval |
❌ Never Input Into Any AI Tool
- Customer passwords, API keys, license keys, or auth tokens
- PII beyond what the system is explicitly approved to handle
- Unpublished product security vulnerabilities
- Confidential NDA terms or privileged legal communications
- System credentials (database URLs with passwords, SSH keys)
Gemini Workspace Data Protections
Gemini in Google Workspace (enterprise tier) is covered by Google's DPA. OpenLM data is not used to train public models. Admin controls are set at the domain level. Using Gemini outside Workspace (gemini.google.com with a personal account) uses consumer terms, so treat it as public.
5. Claude & Claude Code (Licensed Users)
5.1 Eligibility
This section applies to employees with a company-issued Anthropic Claude license. Licensed users are encouraged to use Claude and Claude Code as primary tools.
5.2 Claude β Best Uses
- Complex technical analysis & debugging
- Architecture reviews & design documents
- Research synthesis across long documents
- Contract/legal summarization (internal drafts)
- Writing and editing where nuance matters
- Tasks that benefit from stronger reasoning and longer context than alternatives
5.3 Claude Code β Engineering Use
Claude Code is a terminal-integrated AI coding agent. Licensed engineering employees should use it actively as a development accelerator.
✅ Approved For
- Writing, refactoring, and reviewing code
- Generating tests and documentation
- Debugging with full repository context
- Exploring unfamiliar codebases
- Automating repetitive development tasks
⚠️ Safe Practices
1. Always review generated code before committing. Claude Code is fast, but it can introduce subtle bugs, use deprecated APIs, or misunderstand business logic.
2. No write access to production systems. Dev environments only unless explicit Engineering Lead sign-off.
3. Be selective about context. Do not point it at files containing secrets or customer data.
4. Treat `.env` files as out of scope. Add credential files to your Claude Code ignore configuration.
5. Commit what you understand. If Claude Code produces a solution you don't fully understand, understand it first.
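The ignore rule in step 4 can be expressed through Claude Code's permission settings. A minimal sketch, assuming a repository-level `.claude/settings.json` (check Anthropic's current settings documentation for the exact rule syntax; the paths below are illustrative):

```json
{
  "permissions": {
    "deny": [
      "Read(./.env)",
      "Read(./.env.*)",
      "Read(./secrets/**)"
    ]
  }
}
```

Checking this file into the repository applies the rules for every licensed engineer working in that repo, rather than relying on individual configuration.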
Agentic Features in Claude Code
- Auto-approve mode is permitted only in isolated dev environments
- Never enable auto-approve against production databases, APIs, or systems where actions can't be easily reversed
- When Claude Code asks to run a shell command in a sensitive environment, pause, read it, and approve or reject consciously
6. Model Context Protocol (MCP): Connecting AI to Systems
6.1 What Is MCP?
The Model Context Protocol (MCP) is an open standard that allows AI assistants to connect directly to external tools, databases, APIs, and services. An MCP server acts as a bridge: the AI can call it to read data, take actions, or interface with third-party systems, all from within the AI interface.
Examples: Claude reading your live database · AI creating GitHub issues or running CI/CD · AI querying CRM records and sending emails · AI coding assistant with Jira + Confluence + internal API docs simultaneously
6.2 MCP Risk Model
MCP introduces a new attack surface and failure mode:
| Risk | Description |
|---|---|
| Privilege escalation | Broad MCP access lets AI act beyond intended scope |
| Prompt injection | Malicious content in retrieved data redirects AI behavior |
| Unintended actions | AI misinterprets intent, performs destructive operations |
| Data exfiltration | MCP reads sensitive data, AI includes it in responses/logs |
| Audit trail gaps | AI actions via MCP may not appear in standard system logs |
6.3 Approval Process
No MCP server may be connected to OpenLM systems without IT Security approval.
1. Submit a request: what system, what actions (read/write), what data, which users, what software
2. IT Security reviews within 5 business days
3. Approval is environment-specific: dev approval ≠ production access
6.4 Privilege Minimization: The Core Principle
Grant MCP servers the minimum privilege required for the intended task. No more.
| Instead of... | Do this... |
|---|---|
| DB MCP with full admin creds | Read-only user scoped to needed tables |
| API MCP with admin token | Service account with only required scopes |
| Filesystem access to entire server | Scoped to specific directory |
| MCP running as root | Least-privilege service account |
| Shared creds across all users | Per-user creds with individual audit |
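As an illustration of the first row above, the following sketch generates Postgres-style statements for a dedicated read-only MCP role instead of handing the server admin credentials. The role, schema, and table names are hypothetical.

```python
def scoped_readonly_grants(role, schema, tables):
    """Return Postgres-style DDL granting a dedicated MCP service role
    read-only access to a specific set of tables, and nothing else."""
    stmts = [
        # Dedicated login role; the password is supplied out-of-band, never inline.
        f"CREATE ROLE {role} LOGIN;",
        f"GRANT USAGE ON SCHEMA {schema} TO {role};",
    ]
    # SELECT only, and only on the tables the MCP server actually needs.
    stmts += [f"GRANT SELECT ON {schema}.{t} TO {role};" for t in tables]
    return stmts

# Example: a database MCP server that only needs two tables.
ddl = scoped_readonly_grants("mcp_reader", "app", ["licenses", "usage_events"])
```

The same shape applies to API tokens and filesystem scopes: enumerate what the task needs, grant exactly that, and nothing more.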
6.5 Environment Tiers
Development
- MCP connections permitted with IT approval
- Read/write access to dev databases and APIs allowed
- Dev-only credentials (not shared with staging/prod)
- Auto-approval permitted if engineer understands scope
- No production data in dev environments
Staging
- Requires IT Security + Engineering Lead sign-off
- Read-only by default; write requires documented justification
- Must use anonymized or synthetic data
- All AI actions via MCP must be logged
- Human confirmation required for any write operation
Production
- Highly restricted
- Requires IT Security + VP Engineering sign-off + documented use case
- Read-only access only; exceptional time-limited write access with justification
- Write access requires two-person authorization (AI suggests → human approves each action)
- All actions logged to central audit with AI attribution
- Quarterly review; revoke when no longer needed
- No agentic auto-approval in production. Ever.
6.6 Prompt Injection Defense
When MCP retrieves external data, that data may contain instructions designed to redirect the AI's behavior (prompt injection).
- Do not use MCP to process untrusted external content without sandboxing
- Treat AI actions triggered by retrieved data with extra scrutiny
- If AI suddenly proposes unusual actions after reading external data, stop and review manually
- Use read-only MCP wherever possible to limit injection impact
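One way to operationalize the last two bullets is a simple gate that escalates any write-style action proposed after untrusted content was read. The source labels and action names below are illustrative, not an OpenLM API; a real deployment would tag provenance per MCP source.

```python
# Illustrative provenance labels and action names.
UNTRUSTED_SOURCES = {"web", "email", "external_api"}
WRITE_ACTIONS = {"send_email", "delete_record", "run_command", "update_row"}

def needs_manual_review(proposed_action, sources_read):
    """Escalate write-style actions proposed after the AI has ingested
    untrusted external content (the prompt-injection window)."""
    touched_untrusted = bool(UNTRUSTED_SOURCES & set(sources_read))
    return proposed_action in WRITE_ACTIONS and touched_untrusted
```

Read-only actions pass through; a `send_email` proposed right after a web fetch gets a human in the loop.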
6.7 Audit Requirements
Any MCP server connecting to production or staging must:
- Log every tool call: timestamp, user, tool, parameters (sanitized), result
- Retain logs ≥90 days
- Report to central IT Security logging
- Alert on error rates, unusual patterns, off-hours access
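A sketch of what one such log entry could look like, including parameter sanitization; the field and redaction key names are illustrative.

```python
import json
import time

# Parameter keys whose values must never reach the log.
REDACT_KEYS = {"password", "token", "api_key", "secret", "authorization"}

def audit_record(user, tool, params, result_status):
    """One MCP audit entry: timestamp, user, tool, sanitized parameters,
    and result, matching the fields required above."""
    sanitized = {k: "[REDACTED]" if k.lower() in REDACT_KEYS else v
                 for k, v in params.items()}
    return {"ts": time.time(), "user": user, "tool": tool,
            "params": sanitized, "result": result_status}

entry = audit_record("jdoe", "db.query",
                     {"sql": "SELECT count(*) FROM licenses", "password": "hunter2"},
                     "ok")
line = json.dumps(entry)  # shipped to central IT Security logging
```

Note that sanitization happens before serialization, so secrets never touch the log pipeline at all.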
6.8 Revoking Access
MCP access must be revoked when: employee leaves/changes roles, project completed, use case no longer active, security incident suspected, or quarterly review finds no justification. IT Security maintains a registry and conducts quarterly audits.
7. Agentic AI: Autonomous & Semi-Autonomous Systems
7.1 What Is Agentic AI?
Agentic AI refers to AI systems that take sequences of actions autonomously: planning, executing, and adapting without step-by-step human direction. Unlike a chatbot that answers a question, an agent pursues a goal across multiple steps: using tools, browsing the web, running code, calling APIs, and making intermediate decisions.
Examples: AI agent that handles support tickets end-to-end · Coding agent that reads issues, writes fixes, and opens PRs · Data pipeline agent that monitors and enriches data on schedule · AI that drafts and sends proposals from a brief
7.2 The Agentic Risk Spectrum
| Level | Description | Risk | Approval |
|---|---|---|---|
| Level 0: Assisted | AI suggests, human executes | Low | Standard |
| Level 1: Supervised | AI executes single actions, human approves each | Low-Med | Standard |
| Level 2: Semi-auto | AI executes sequences, human reviews at checkpoints | Medium | Standard |
| Level 3: Autonomous | AI pursues goals independently, reports on completion | High | Eng Lead + IT Sec |
| Level 4: Fully auto | AI acts without check-ins, self-schedules, self-modifies | Very High | Not approved |
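Teams that embed this spectrum in tooling could encode it as a lookup, so deployment scripts can refuse disallowed levels mechanically. A sketch, not an official OpenLM module:

```python
from enum import IntEnum

class Autonomy(IntEnum):
    ASSISTED = 0     # AI suggests, human executes
    SUPERVISED = 1   # human approves each action
    SEMI_AUTO = 2    # human reviews at checkpoints
    AUTONOMOUS = 3   # reports on completion
    FULLY_AUTO = 4   # acts without check-ins

APPROVAL_REQUIRED = {
    Autonomy.ASSISTED: "standard",
    Autonomy.SUPERVISED: "standard",
    Autonomy.SEMI_AUTO: "standard",
    Autonomy.AUTONOMOUS: "eng_lead + it_security",
    Autonomy.FULLY_AUTO: None,  # None = may not be deployed at OpenLM
}

def deployment_allowed(level):
    """Level 4 is categorically blocked; everything else maps to an approval path."""
    return APPROVAL_REQUIRED[level] is not None
```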
7.3 Core Principles
Minimal Footprint
Minimum access, permissions, and resources for the task. An agent that reads 3 tables shouldn't access the entire schema. Prefer reversible over irreversible actions.
Explicit Scope
Before deploying any agent: define what it can do, what it cannot do, document the scope, get it approved. An agent without defined scope will eventually go off-script.
Human Checkpoints
Mandatory human gate before: external communications, data deletion/overwrite, production code execution, financial transactions, public content publishing.
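A mandatory gate like this can be enforced in code rather than by convention. A Python sketch with a hypothetical `checkpoint` decorator, where the `approver` callback would prompt a named human in practice:

```python
# Action categories that always require a human gate (from the list above).
GATED_CATEGORIES = {"external_comms", "data_deletion",
                    "prod_code_exec", "financial", "publishing"}

def checkpoint(category, approver):
    """Wrap an agent action so it only runs when a human-facing
    approver(category, action_name) callback returns True."""
    def wrap(fn):
        def gated(*args, **kwargs):
            if category in GATED_CATEGORIES and not approver(category, fn.__name__):
                raise PermissionError(f"human checkpoint rejected {fn.__name__}")
            return fn(*args, **kwargs)
        return gated
    return wrap

# Simulated approver; in practice this would surface the request to a person.
deny = lambda category, name: False

@checkpoint("external_comms", deny)
def send_proposal(recipient):
    return f"sent to {recipient}"
```

Calling `send_proposal(...)` here raises `PermissionError` because the simulated approver rejects it; swapping in an approving callback lets the action through, which is exactly the Level 1 "human approves each" pattern.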
Fail Safe, Not Fail Open
Agents must stop and alert on ambiguous instructions, unexpected states, errors, or malformed data. Surface failures to humans; don't attempt autonomous workarounds.
Audit Trail
Every significant agent action must be logged: what was done, what triggered it, what data was accessed/modified, what the outcome was, who authorized it. No audit trail = no production approval.
7.4 Deploying Agents
Authorization Required When Agent:
- Connects to production systems
- Sends external communications
- Modifies data or configuration
- Runs on a schedule without human initiation
Submit: purpose, tools/access, trigger mechanism, failure behavior, logging approach.
Agent Credentials
- Dedicated service accounts, not employee personal credentials
- Minimum permissions (same principle as MCP)
- Rotated quarterly, and immediately on any incident
- No access to personal accounts without explicit consent
7.5 Multi-Agent Systems
When multiple agents interact, risks compound.
- Trust chains: Agent A cannot delegate authority it doesn't hold to Agent B
- Cross-agent injection: Data processed by Agent A can inject instructions into Agent B. Treat inter-agent data as untrusted.
- Accountability: Each agent's actions must be individually attributable in audit logs
- Footprint explosion: Review combined footprint of the full chain, not just individual agents
Multi-agent systems on production data require Engineering Lead + IT Security joint sign-off.
7.6 Scheduled Agents
Scheduled AI agents deserve special scrutiny: no human is watching when they fire.
On-call Owner
Named human responsible for each scheduled agent.
Execution Summaries
Report what was done each run to a monitored channel.
Rate Limits
Hard limits on external API calls to prevent runaway loops.
Kill Switch
Single command to stop immediately. Documented.
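The rate-limit and kill-switch requirements can be combined into one per-run guard. A sketch with illustrative names; the kill-file path is arbitrary and would be documented per agent:

```python
import os

class RunGuard:
    """Per-run guard for a scheduled agent: a hard cap on external calls
    plus a file-based kill switch (names and path are illustrative)."""

    def __init__(self, max_calls, kill_file="/tmp/openlm_agent.KILL"):
        self.max_calls = max_calls
        self.kill_file = kill_file
        self.calls = 0

    def allow_call(self):
        if os.path.exists(self.kill_file):  # kill switch: `touch` the file to stop
            return False
        if self.calls >= self.max_calls:    # rate limit: prevents runaway loops
            return False
        self.calls += 1
        return True

guard = RunGuard(max_calls=3)
decisions = [guard.allow_call() for _ in range(5)]
```

The agent checks `allow_call()` before every external API call; once the cap is hit or the kill file appears, every further call is refused and the run can alert its on-call owner.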
Review scheduled agents every 90 days to verify behavior matches expectations.
7.7 Customer Data
Agents processing customer data are subject to GDPR and OpenLM's Privacy Policy. Additional requirements: data minimization (only required fields), retention rules covering intermediate data, consent verification, and immediate incident reporting for data processing errors.
8. Code Generation & Review
Your Code = Your Commit
AI-generated code goes through your review, your commit, your CI. Same quality, security, and compliance standards as hand-written code.
Security Review
Watch for: SQL injection, insecure deserialization, hardcoded secrets, missing input validation. Pay extra attention to auth, database queries, external API calls, and file operations.
Secrets in Code
Never put real secrets in prompts. Use placeholders, describe structure, reference env vars. If AI generates hardcoded creds, replace them before committing.
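A lightweight pre-commit check can catch the most common shapes of hardcoded credentials before they reach review. The patterns below are illustrative, not a substitute for a real secret scanner:

```python
import re

# Illustrative patterns; dedicated scanners cover far more credential shapes.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key shape
    re.compile(r"(?i)(password|secret|api_key)\s*=\s*['\"][^'\"]+['\"]"),
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),    # PEM private keys
]

def find_secrets(text):
    """Return matched snippets so reviewers know what to replace with env vars."""
    hits = []
    for pat in SECRET_PATTERNS:
        hits += [m.group(0) for m in pat.finditer(text)]
    return hits
```

A hook that fails the commit when `find_secrets` returns anything gives engineers a last chance to swap literals for `os.environ` lookups.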
License Compliance
AI may reproduce GPL or restrictively-licensed code. For significant commercial code blocks, flag potential license concerns to Legal.
9. Prohibited Uses
- Misrepresentation: Submitting AI content as independently researched human work where authenticity is required
- Data leakage: Inputting confidential customer data into unapproved AI tools
- Circumvention: Using AI to bypass access controls, approvals, or audit requirements
- Unauthorized agents: Deploying write-access production agents without required approvals
- Credential sharing: Sharing AI tool API keys or credentials between employees
- Personal accounts for work: Using unlicensed personal AI for company tasks with company data
- Impersonation: Using AI to generate communications that impersonate another person
- Hiding actions: Configuring MCP/agents to mask their actions from audit logs
- Legal violations: Using AI in ways that violate GDPR, copyright, or provider ToS
- Training data extraction: Attempting to extract training data or manipulate AI systems against provider terms
10. Incident Reporting
If you suspect an AI-related security or data incident (an agent that acted unexpectedly, an MCP server that accessed the wrong data, an AI tool that received confidential data by mistake), report it immediately.
| Step | Detail |
|---|---|
| Report to | IT Security + your manager |
| Data breaches | Also notify the DPO |
| Response time | 4h initial · 24h full assessment |
OpenLM does not penalize employees for reporting incidents in good faith. Early reporting is always better.
11. Compliance & Enforcement
- Attestation: Annual acknowledgment required. New employees within first 30 days.
- Monitoring: IT Security may audit AI tool usage, MCP connections, and agent activity.
- Violations: Range from access removal to termination, depending on severity. Regulatory reporting if data breach results.
- Updates: Annual review or when significant changes occur. Employees notified of material updates.
12. Definitions
| Term | Definition |
|---|---|
| AI Tool | Any software using ML/LLM to assist with tasks |
| Agentic AI | AI system that autonomously pursues goals across multiple steps and tools |
| MCP | Model Context Protocol, an open standard for AI-to-system integration |
| Prompt Injection | Attack where malicious text in data redirects AI behavior |
| Privilege Minimization | Granting only minimum permissions needed for a task |
| Service Account | Non-personal account used by automated systems for authentication |
| Hallucination | AI-generated content that is factually incorrect but stated as fact |
| Auto-approve | AI executes actions without human confirmation prompt |
| Reversible action | Action whose effects can be undone (e.g., move vs. delete) |
| Read-only access | Permission to view data without create/modify/delete |
OpenLM AI Usage Policy v1.0 · March 2026
Owner: Operations / IT Security · Review cycle: Annual