# envibe: Stop Leaking Secrets to AI Assistants
AI coding assistants like Claude Code, Cursor, and Copilot are incredibly powerful. They can write code, run tests, debug issues, and iterate on features faster than ever before.
But there's a problem: they need access to your `.env` file.
## The All-or-Nothing Problem
To run your code, AI assistants need environment variables. Database URLs, API endpoints, feature flags - the assistant can't test anything without them.
Currently, it's all-or-nothing. Give the AI access to `.env` and it sees everything:
```
NODE_ENV=development          # Fine, not sensitive
DATABASE_URL=postgres://...   # Hmm, has credentials
OPENAI_API_KEY=sk-proj-...    # Uh oh
STRIPE_SECRET_KEY=sk_live_... # Oh no
```
Your Stripe secret key. Your database password. Your API tokens. All visible to the AI, included in context, potentially logged or leaked.
This is a security nightmare hiding in plain sight.
## Introducing envibe
envibe is a permission layer for AI coding assistants. It gives you granular, per-variable control over what the AI can see and modify.
Instead of all-or-nothing, you get five access levels:
| Level | AI Sees | AI Can Modify | Use Case |
|---|---|---|---|
| `full` | Actual value | Yes | `NODE_ENV`, `PORT` |
| `read-only` | Actual value | No | DB URLs, endpoints |
| `placeholder` | `<VAR_NAME>` | No | API keys |
| `schema-only` | Format hint | No | Complex config |
| `hidden` | Nothing | No | Payment secrets |
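As an illustration, here's what the AI would see for the same variable at each level (the key value is hypothetical):

```
full:         OPENAI_API_KEY=sk-proj-abc123      # real value, writable
read-only:    OPENAI_API_KEY=sk-proj-abc123      # real value, writes rejected
placeholder:  OPENAI_API_KEY=<OPENAI_API_KEY>    # stand-in, no value
schema-only:  OPENAI_API_KEY=                    # format hint only, no value
hidden:       (variable is omitted entirely)
```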
## How It Works
envibe creates a simple flow:
```
.env               (your secrets, gitignored)
  ↓
.env.manifest.yaml (access rules, safe to commit)
  ↓
.env.ai            (filtered view for AI, auto-generated)
```
The manifest file defines rules, not values:
```yaml
version: 1
variables:
  NODE_ENV:
    access: full
    description: "Environment mode"
  DATABASE_URL:
    access: read-only
    description: "Postgres connection"
  OPENAI_API_KEY:
    access: placeholder
    description: "OpenAI API key for embeddings"
  STRIPE_SECRET_KEY:
    access: hidden
    description: "Payment processing - never expose"
```
The AI reads `.env.ai`, which contains only what you've allowed it to see.
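Given the manifest above, the generated `.env.ai` would look something like this (exact formatting is illustrative):

```
NODE_ENV=development
DATABASE_URL=postgres://...        # read-only
OPENAI_API_KEY=<OPENAI_API_KEY>
# STRIPE_SECRET_KEY omitted entirely
```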
## Smart Auto-Classification
You don't have to manually classify every variable. envibe includes 40+ patterns that automatically categorize variables by name:
| Pattern | Access Level | Reason |
|---|---|---|
| `STRIPE_SECRET*`, `*_PRIVATE_KEY` | hidden | Payment/crypto secrets |
| `*_API_KEY`, `*_TOKEN`, `*_SECRET` | placeholder | API credentials |
| `DATABASE_URL`, `REDIS_URL` | placeholder | May contain passwords |
| `*_HOST`, `*_ENDPOINT` | read-only | Non-sensitive URLs |
| `NODE_ENV`, `DEBUG`, `PORT` | full | Safe config values |
Unknown variables default to `placeholder` - fail-safe by design.
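A minimal sketch of how name-based classification like this can work (pattern list abridged and hypothetical, not envibe's actual source):

```typescript
type AccessLevel = "full" | "read-only" | "placeholder" | "schema-only" | "hidden";

// Abridged rule table; envibe ships 40+ patterns.
// Order matters: first match wins, most-specific rules first.
const RULES: Array<{ pattern: RegExp; access: AccessLevel }> = [
  { pattern: /^STRIPE_SECRET|_PRIVATE_KEY$/, access: "hidden" },
  { pattern: /_API_KEY$|_TOKEN$|_SECRET$/,   access: "placeholder" },
  { pattern: /^(DATABASE|REDIS)_URL$/,       access: "placeholder" },
  { pattern: /_HOST$|_ENDPOINT$/,            access: "read-only" },
  { pattern: /^(NODE_ENV|DEBUG|PORT)$/,      access: "full" },
];

function classify(name: string): AccessLevel {
  for (const rule of RULES) {
    if (rule.pattern.test(name)) return rule.access;
  }
  return "placeholder"; // fail-safe default for unknown variables
}

classify("STRIPE_SECRET_KEY"); // "hidden"
classify("SENDGRID_API_KEY");  // "placeholder"
classify("PORT");              // "full"
```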
## Getting Started
For Claude Code, it's one command:
```bash
claude mcp add envibe npx envibe-mcp
```
Then run setup in your project:
```bash
npx envibe setup
```
This creates a manifest from your existing `.env.example` (or an empty one if you don't have one), auto-classifies your variables, and configures everything.
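For example, given a hypothetical `.env.example` containing `PORT=` and `SENDGRID_API_KEY=`, the auto-classifier would produce roughly this manifest (descriptions omitted):

```yaml
version: 1
variables:
  PORT:
    access: full         # matches the PORT safe-config pattern
  SENDGRID_API_KEY:
    access: placeholder  # matches *_API_KEY
```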
## Works Everywhere
envibe integrates with all major AI coding tools via MCP (Model Context Protocol):
- Claude Code: `claude mcp add envibe npx envibe-mcp`
- Cursor: add to Cursor's MCP settings
- VS Code/Copilot: MCP configuration
- Windsurf: `~/.codeium/windsurf/mcp_config.json` (see the example below)
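For the JSON-based configs, the entry follows the standard MCP server config shape (the `envibe` key name is your choice):

```json
{
  "mcpServers": {
    "envibe": {
      "command": "npx",
      "args": ["envibe-mcp"]
    }
  }
}
```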
The MCP server provides tools like `env_list`, `env_get`, `env_set`, and `env_describe` - all respecting your permission rules.
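Conceptually, each tool checks the manifest before answering. Here's a sketch of what `env_get` enforcement might look like (not envibe's actual source; MCP wiring omitted, and the schema-only behavior is an assumption):

```typescript
interface ManifestEntry {
  access: "full" | "read-only" | "placeholder" | "schema-only" | "hidden";
  description?: string;
}

// What env_get returns for a variable, given its manifest rule and real value.
function envGet(
  name: string,
  value: string | undefined,
  manifest: Record<string, ManifestEntry>
): string | null {
  const entry = manifest[name] ?? { access: "placeholder" }; // fail-safe default
  switch (entry.access) {
    case "full":
    case "read-only":
      return value ?? null;        // real value is visible
    case "placeholder":
      return `<${name}>`;          // stand-in the AI can reference but not read
    case "schema-only":
      return entry.description ?? "(format hint only)"; // assumed: shape, not value
    case "hidden":
      return null;                 // variable doesn't exist as far as the AI knows
  }
}

// env_set would similarly reject writes unless access is "full".
```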
## The Missing Security Layer
We're in a new era of AI-assisted development. The productivity gains are real. But so are the security implications.
envibe gives you the missing layer between "AI can't do anything useful" and "AI sees all my secrets." It's the principle of least privilege applied to AI coding assistants.
Check out the project on GitHub. Feedback and contributions welcome.