## What Is LockedCode?
LockedCode is a security-hardened fork of OpenCode, the open-source AI coding agent with over 150,000 GitHub stars and 6.5 million monthly users. OpenCode lets LLMs read files, write code, run shell commands, and autonomously build projects in your workspace. It's fast, flexible, and provider-agnostic.
It's also entirely trust-based. The LLM proposes an action, the developer gets a confirmation prompt, and the action executes. There are no hard boundaries on what the agent can access, no scanning of what it produces, no audit trail of what it did, and no protection against what it sends to the model provider. For solo developers using frontier models from trusted providers, that's fine.
For corporate development teams — especially those using open-source, self-hosted, or offline models to manage token costs — it's a non-starter.
LockedCode fixes this. It adds a comprehensive security layer between the LLM's intent and the agent's execution. Every file write passes through static scanning. Every shell command is parsed and analyzed. Every outbound context payload is checked for secrets and PII. Every action is confined to the project directory at the operating system level. Everything is logged in an immutable audit trail. And every security decision is configurable via declarative policy.
LockedCode is not a wrapper around OpenCode. It is OpenCode — every feature, every provider integration, every TUI capability — with a security layer that makes it safe for real codebases, real teams, and real compliance requirements.
## Why Does This Exist?
The AI coding agent market is moving fast. Enterprises are under pressure from two directions: leadership wants AI-accelerated development, and finance wants to control the token spend that comes with routing everything through frontier model APIs. The obvious solution is running smaller, cheaper, or self-hosted models — Llama, Qwen, Mistral, DeepSeek, CodeGemma, local fine-tunes.
But the moment you step outside the frontier providers, you lose whatever implicit trust came with that relationship. Nobody at Anthropic or OpenAI is deliberately shipping a model that exfiltrates your source code. Can you say the same about a fine-tuned model someone uploaded to Hugging Face last week? Or a quantized variant running on an internal GPU cluster that was trained on data nobody fully audited?
This is the gap LockedCode fills. It treats every LLM output as potentially adversarial — regardless of the model's provenance, reputation, or provider — and enforces security at every layer:
- **Writes**: What the model writes is scanned for malware patterns, encoded payloads, obfuscated network calls, credential theft, and crypto mining before it touches the filesystem.
- **Executes**: What the model executes is parsed, analyzed, and validated against the project boundary before any shell command runs.
- **Sees**: What the model sees is scanned for secrets, PII, and sensitive file content before it leaves the machine.
- **Reads**: What the model reads is checked for prompt injection attacks designed to manipulate the agent's behavior.
- **Reaches**: Where the model can reach is confined to the project directory at the kernel level — no amount of clever LLM output gets past a Landlock jail.
- **Logged**: Everything that happens is recorded in an immutable, content-hashed audit trail that satisfies compliance frameworks.
## Who Is This For?
### Non-Frontier Model Teams
You've chosen to run Llama, Qwen, Mistral, or a self-hosted fine-tune to manage costs or keep code on-premises. LockedCode makes that decision safe.
### Security-Conscious Orgs
Your CISO or security team has blocked AI coding agents because there's no way to verify what the agent produces. LockedCode is what lets them say yes.
### Regulated Industries
Finance, healthcare, government, defense — environments where AI-generated code must have a provable audit trail and compliance evidence.
### Air-Gapped Environments
Running local models on local hardware behind firewalls with no internet access. LockedCode works fully offline with zero cloud dependencies.
### Supply-Chain Security
If you wouldn't run an unaudited third-party script against your production codebase, you shouldn't run an unverified LLM agent against it either.
## Feature Overview
### Directory Confinement
Every file operation is confined to the project directory tree, enforced at the OS level — not just application-level path checks.
- Linux: Landlock LSM (kernel 5.13+) with Bubblewrap fallback
- Windows: Restricted tokens, NTFS ACLs, job objects
- macOS: Process-level sandbox profiles with FSEvents monitoring
- Aggressive path canonicalization defeats symlinks, relative paths, hardlinks, junction points, and mount traversal
- Controlled escape hatch for package managers, Docker, and build tools
- Process tree inheritance — child processes inherit confinement
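
In policy terms, the escape hatch is just a path allow-list. A minimal sketch, reusing the `confinement.pre_approved_paths` key from the Quick Start example further down (the Docker socket entry is an illustrative assumption, not a shipped default):

```yaml
security:
  confinement:
    # Paths outside the project tree that build tooling legitimately needs
    pre_approved_paths:
      - ~/.npm                  # npm cache
      - ~/.bun                  # bun cache
      - /var/run/docker.sock    # illustrative: allow Docker builds through
```

Anything not on the list stays behind the OS-level boundary.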
### Static Scanning Pipeline
Every piece of code the LLM proposes is scanned before it touches the filesystem.
- Semgrep integration with custom rulesets tuned for LLM failure modes
- YARA signature scanning for known malicious patterns
- Entropy analysis catches obfuscated payloads
- Scans fire before every file write and edit — malicious code never touches disk
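
Each scanner can be toggled per project. The four keys below match the Quick Start example; the commented `custom_rules_dir` line is hypothetical, included only to mark where organization-specific rulesets might plug in:

```yaml
security:
  scanning:
    semgrep: true     # Semgrep with LLM-failure-mode rulesets
    yara: true        # YARA signatures for known malicious patterns
    entropy: true     # entropy analysis for obfuscated payloads
    secrets: true     # pre-write secret detection
    # custom_rules_dir: .lockedcode/rules   # hypothetical extension point
```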
### Shell Command Interception
Every shell command is structurally parsed and analyzed before execution.
- Structural command parsing of pipes, redirects, subshells, and command substitution
- Hard-blocked patterns: `curl | bash`, crontab modification, SSH config changes
- Path extraction and validation against confinement boundary
- Environment variable protection and risk scoring
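
As a hedged sketch of what a project-level override might look like: the `shell` section and its keys are assumptions, shown only to illustrate the shape of the policy the interception layer enforces:

```yaml
security:
  shell:                        # hypothetical section, for illustration only
    blocked_patterns:
      - "curl * | bash"         # remote-script piping, hard-blocked by default
      - "crontab *"             # crontab modification
      - "* >> ~/.ssh/config"    # SSH config changes
    max_risk_score: 70          # hypothetical: reject commands scoring higher
```

Because parsing is structural rather than string-based, the intent is that obfuscated variants of a blocked pattern normalize to the same rejected shape.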
### Outbound DLP
Protects what goes TO the model. Prevents the agent from sending sensitive codebase content to the LLM provider.
- Secret scanning on outbound context across 50+ credential formats
- PII detection — emails, phone numbers, SSNs, credit card numbers
- File-level sensitivity policy via configurable glob patterns
- Redaction mode — replace secrets with typed placeholders
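
The `dlp` block from the Quick Start drives this behavior. The before/after comment below is illustrative, and the placeholder format is an assumption:

```yaml
security:
  dlp:
    redaction_mode: true        # redact the match, keep the file in context
    sensitivity_patterns:
      - "**/.env*"
      - "**/credentials.*"
# Illustrative effect of redaction mode (placeholder format is an assumption):
#   before: STRIPE_KEY=sk_live_4eC39HqLyjWDarjtT1zdp7dc
#   after:  STRIPE_KEY=<REDACTED:stripe-secret-key>
```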
### Prompt Injection Detection
Scans files being ingested as context for embedded instructions designed to manipulate the LLM.
- Role-override attempts and system prompt markers in file content
- Unicode manipulation — invisible characters, bidirectional overrides, homoglyphs
- Dependency metadata scanning before the model sees it
- Configurable sensitivity levels per trust boundary
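
A hedged sketch of per-boundary tuning; the `prompt_injection` section and its keys are assumptions, illustrating the configurable sensitivity the last bullet describes:

```yaml
security:
  prompt_injection:             # hypothetical section, for illustration only
    sensitivity: high           # low | medium | high, per trust boundary
    unicode_checks: true        # invisible chars, bidi overrides, homoglyphs
    scan_dependency_metadata: true
```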
### Secret Detection in Generated Code
LLMs sometimes hallucinate realistic-looking credentials or reproduce real secrets from training data.
- Pre-write scanning for 50+ credential formats
- Context-aware — distinguishes secrets from test fixtures
- Git-aware — secrets are caught before entering Git history
### Audit Trail
Every action, every scan result, every approval decision — logged with content hashes and timestamps.
- Append-only, immutable audit records
- SHA-256 content hashing for tamper evidence
- JSON-structured entries, queryable by external tools
- Session correlation from model output to actual execution
- Local SQLite storage, zero cloud dependencies
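
An illustrative record, rendered as YAML for readability. Actual entries are JSON rows in the local SQLite store, and the field names here are assumptions rather than the real schema:

```yaml
timestamp: "2025-06-01T14:32:07Z"
session_id: "b3f2c9"            # correlates model output to actual execution
action: file.write
path: src/auth/token.ts
scans: { semgrep: pass, yara: pass, entropy: pass, secrets: pass }
decision: auto-approved         # risk score fell below the prompt threshold
content_sha256: "9f86d081884c7d65..."   # tamper-evidence hash, truncated here
```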
### Policy Engine
Declarative, configurable security rules that drive every security decision.
- YAML configuration at the project root (`lockedcode.yaml`)
- Policy hierarchy: global → organization → project → session
- Three strictness levels: `strict`, `standard`, `permissive`
- Sensible defaults — works out of the box without a policy file
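
Layers later in the hierarchy override earlier ones. A minimal sketch of the idea, assuming last-writer-wins semantics (two YAML documents shown for contrast):

```yaml
# Organization-level policy, distributed centrally:
security:
  strictness: standard
---
# Project-level lockedcode.yaml, later in the hierarchy, tightens it:
security:
  strictness: strict
```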
### Trust Scoring
Risk assessment for every LLM interaction, driving the approval UX.
- Per-action risk scores based on what the action does
- Score-driven UX — auto-approve, notify, require approval, or hard-block
- Session trust decay for repeated high-risk proposals
- Persistent per-model trust metrics
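
The thresholds come straight from the Quick Start example; the commented decay key is hypothetical, shown only to suggest how session trust decay might be tuned:

```yaml
security:
  trust:
    auto_approve_below: 20      # low-risk actions run without a prompt
    prompt_above: 50            # medium risk requires explicit approval
    block_above: 90             # critical risk is hard-blocked
    # decay_per_flagged_action: 5   # hypothetical: tighten after risky proposals
```

With these values, an action scoring between 20 and 50 would presumably land in the notify band described above.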
### Multi-Agent Security Cascade
Security policies cascade to all sub-agents with no privilege escalation.
- Policy inheritance — child agents inherit parent security
- No privilege escalation — sub-agents can never exceed parent permissions
- Audit trail linkage with parent-child relationship markers
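
A hedged illustration of tighten-only inheritance; the `agents` section is an assumption, not a documented key:

```yaml
agents:                         # hypothetical section, for illustration only
  general:
    strictness: strict          # accepted: stricter than the parent session
    # strictness: permissive    # rejected: would exceed parent permissions
```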
### Air-Gap Mode
Full functionality with zero internet access. No cloud dependencies, no telemetry, no phone-home.
- Bundled Semgrep rules, YARA signatures, and detection patterns
- Zero runtime network dependencies
- Offline rule set updates via versioned packages
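
A sketch of what an explicit air-gap toggle might look like; both keys are assumptions, since the mode's observable guarantee is simply zero network egress:

```yaml
security:
  air_gap: true                              # hypothetical: refuse all egress
  rules_update_dir: /opt/lockedcode/rules    # hypothetical: versioned offline packages
```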
## V2 Roadmap
These features are planned for V2 and are not yet implemented:
- **Runtime Monitoring** — eBPF-based syscall monitoring, process tree analysis, and network activity detection
- **Model Provenance Tracking** — per-file, per-line attribution of which model generated the code
- **Model Registry & Approval Workflow** — centralized control over which models developers can connect to
- **Dependency Vetting** — typosquatting detection, vulnerability checking, and approved package registry enforcement
- **SIEM Integration** — structured event export to Splunk, Datadog, Elastic in CEF, OCSF, and JSON formats
- **Compliance Report Generation** — automated evidence generation for SOC2, ISO 27001, HIPAA, and FedRAMP
- **License Contamination Detection** — code fingerprinting against known open-source bodies to flag license issues
- **Session Recording & Replay** — full session replay with visual timeline interface
- **Custom Rule Authoring** — SDK/DSL for writing organization-specific scanning rules
- **Rollback & Quarantine** — automated response including quarantine to a holding branch
## Installation
```bash
# npm (recommended)
npm i -g lockedcode@latest

# Homebrew (macOS and Linux)
brew install LockedCodeAI/tap/lockedcode

# Scoop (Windows)
scoop install lockedcode

# Direct binary
# Download from https://github.com/LockedCodeAI/lockedcode/releases
```
### Desktop App
LockedCode is also available as a desktop application. Download from the releases page or lockedcode.ai/download.
| Platform | Download |
|---|---|
| macOS (Apple Silicon) | lockedcode-desktop-mac-arm64.dmg |
| macOS (Intel) | lockedcode-desktop-mac-x64.dmg |
| Windows | lockedcode-desktop-windows-x64.exe |
| Linux | .deb, .rpm, or .AppImage |
```bash
# macOS (Homebrew)
brew install --cask lockedcode-desktop

# Windows (Scoop)
scoop bucket add extras
scoop install extras/lockedcode-desktop
```
### Installation Directory
The install script respects the following priority order:
- `$LOCKEDCODE_INSTALL_DIR` — Custom installation directory
- `$XDG_BIN_DIR` — XDG Base Directory Specification compliant path
- `$HOME/bin` — Standard user binary directory
- `$HOME/.lockedcode/bin` — Default fallback
```bash
# Examples
LOCKEDCODE_INSTALL_DIR=/usr/local/bin curl -fsSL https://lockedcode.ai/install | bash
XDG_BIN_DIR=$HOME/.local/bin curl -fsSL https://lockedcode.ai/install | bash
```
### Post-Install: Security Capability Check
After installation, run:
```bash
lockedcode --security-check
```
This reports the security capabilities available on your platform: confinement backend, scanner availability, policy file detection, and air-gap mode status.
## Quick Start: Security Configuration
LockedCode works out of the box with sensible defaults. For custom configuration, create a `lockedcode.yaml` at your project root:
```yaml
security:
  strictness: standard          # strict | standard | permissive

  confinement:
    pre_approved_paths:
      - ~/.npm                  # npm cache
      - ~/.bun                  # bun cache
      - /tmp                    # temp directory

  scanning:
    semgrep: true
    yara: true
    entropy: true
    secrets: true

  dlp:
    redaction_mode: true        # redact secrets instead of blocking files
    sensitivity_patterns:
      - "**/.env*"
      - "**/credentials.*"
      - "**/config/production/**"

  trust:
    auto_approve_below: 20      # auto-approve low-risk actions
    prompt_above: 50            # require approval for medium+ risk
    block_above: 90             # hard-block critical risk

  audit:
    retention_days: 90
```
## Agents
LockedCode includes the same built-in agents as OpenCode, with security policies applied to all of them:
- **build**: Default, full-access agent for development work. All actions pass through the security layer.
- **plan**: Read-only agent for analysis and code exploration. Denies file edits by default. Asks permission before running bash commands. Security scanning still applies to any approved operations.
- **general**: Subagent for complex searches and multistep tasks. Invoked using `@general` in messages. Security policies cascade with no privilege escalation.
## Relationship to OpenCode
LockedCode is a fork of OpenCode (MIT License). It tracks upstream development and selectively merges new features and fixes. All original OpenCode capabilities — provider integrations, TUI, LSP support, MCP, client/server architecture, plugin system — are fully preserved.
The fork diverges in one dimension: LockedCode adds a security layer that OpenCode's maintainers are unlikely to add as a core feature, because deep security confinement adds friction by design.
LockedCode is for organizations and developers who need that friction — who need verifiable evidence that the agent can't do anything it wasn't supposed to.
### vs OpenCode
Everything OpenCode does, plus:
- OS-level directory confinement (Landlock, sandbox-exec, restricted tokens)
- Static scanning of all LLM-generated code before it touches the filesystem
- Shell command parsing and analysis before execution
- Outbound DLP preventing secrets and PII from reaching the model
- Prompt injection detection on ingested files
- Secret detection in generated code
- Immutable, content-hashed audit trail
- Declarative policy engine with configurable strictness
- Trust scoring with score-driven approval UX
- Multi-agent security cascade
- Full air-gap mode with bundled rule sets
### vs Claude Code
Everything that differentiates OpenCode from Claude Code, plus the full security layer above:
- 100% open source
- Not coupled to any provider — works with Claude, OpenAI, Google, or local models
- Security layer that treats every model as potentially adversarial
- Built-in opt-in LSP support
- Terminal-first TUI, built by neovim users
- Client/server architecture allowing remote operation from a mobile app
## Documentation
For more info on how to configure LockedCode, including detailed security configuration, policy authoring, and platform-specific confinement setup, head over to our docs.
## Contributing
If you're interested in contributing to LockedCode, please read our contributing docs before submitting a pull request. Security-related contributions — new scanner rules, confinement improvements, and audit trail enhancements — are especially welcome.
## License
LockedCode is licensed under the MIT License. It includes attribution to the original OpenCode project by anomalyco.