envibe: Stop Leaking Secrets to AI Assistants

AI coding assistants like Claude Code, Cursor, and Copilot are incredibly powerful. They can write code, run tests, debug issues, and iterate on features faster than ever before.

But there's a problem: they need access to your .env file.

The All-or-Nothing Problem

To run your code, AI assistants need environment variables. Database URLs, API endpoints, feature flags - the assistant can't test anything without them.

Currently, it's all-or-nothing. Give the AI access to .env and it sees everything:

NODE_ENV=development          # Fine, not sensitive
DATABASE_URL=postgres://...   # Hmm, has credentials
OPENAI_API_KEY=sk-proj-...    # Uh oh
STRIPE_SECRET_KEY=sk_live_... # Oh no

Your Stripe secret key. Your database password. Your API tokens. All visible to the AI, included in context, potentially logged or leaked.

This is a security nightmare hiding in plain sight.

Introducing envibe

envibe is a permission layer for AI coding assistants. It gives you granular, per-variable control over what the AI can see and modify.

Instead of all-or-nothing, you get five access levels:

Level          AI Sees         AI Can Modify   Use Case
full           Actual value    Yes             NODE_ENV, PORT
read-only      Actual value    No              DB URLs, endpoints
placeholder    <VAR_NAME>      No              API keys
schema-only    Format hint     No              Complex config
hidden         Nothing         No              Payment secrets

Here's the filtered view the AI gets for each access level:

# .env.ai (what the AI sees)
NODE_ENV=development                                  # full
PORT=3000                                             # full
DATABASE_URL=postgres://user:pass@localhost:5432/db   # read-only
OPENAI_API_KEY=<OPENAI_API_KEY>                       # placeholder
# STRIPE_SECRET_KEY is hidden and omitted entirely
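
Conceptually, each level is just a different transformation applied to the original value before it lands in .env.ai. Here's a minimal TypeScript sketch of that mapping - the filterValue and describeFormat helpers are hypothetical illustrations, not envibe's actual code:

type AccessLevel = "full" | "read-only" | "placeholder" | "schema-only" | "hidden";

// Sketch of how each access level could transform a value before the AI sees it.
// Hypothetical helpers for illustration; envibe's real implementation may differ.
function filterValue(name: string, value: string, level: AccessLevel): string | null {
  switch (level) {
    case "full":
    case "read-only":
      return value;                 // real value visible (read-only restricts writes, not reads)
    case "placeholder":
      return `<${name}>`;           // name-shaped stand-in, no secret material
    case "schema-only":
      return describeFormat(value); // e.g. "sk-proj-<48 chars>": shape, not content
    case "hidden":
      return null;                  // omitted from .env.ai entirely
  }
}

function describeFormat(value: string): string {
  const prefix = value.slice(0, value.lastIndexOf("-") + 1); // keep a recognizable prefix, if any
  return `${prefix}<${value.length - prefix.length} chars>`;
}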

How It Works

envibe creates a simple flow:

.env (your secrets, gitignored)
        ↓
.env.manifest.yaml (access rules, safe to commit)
        ↓
.env.ai (filtered view for AI, auto-generated)

The manifest file defines rules, not values:

version: 1
variables:
  NODE_ENV:
    access: full
    description: "Environment mode"

  DATABASE_URL:
    access: read-only
    description: "Postgres connection"

  OPENAI_API_KEY:
    access: placeholder
    description: "OpenAI API key for embeddings"

  STRIPE_SECRET_KEY:
    access: hidden
    description: "Payment processing - never expose"

The AI reads .env.ai which contains only what you've allowed it to see.
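
To make the pipeline concrete, here's a rough TypeScript sketch of the .env + manifest → .env.ai step. It assumes the dotenv and js-yaml packages and a generateEnvAi helper invented for this post; it is not envibe's source code:

import { readFileSync, writeFileSync } from "node:fs";
import { parse } from "dotenv";   // KEY=VALUE parsing
import { load } from "js-yaml";   // manifest parsing

type AccessLevel = "full" | "read-only" | "placeholder" | "schema-only" | "hidden";
interface Manifest {
  version: number;
  variables: Record<string, { access: AccessLevel; description?: string }>;
}

// Sketch: real values pass through, sensitive ones are masked or dropped,
// and anything missing from the manifest falls back to the safe default.
function generateEnvAi(envPath = ".env", manifestPath = ".env.manifest.yaml"): void {
  const env = parse(readFileSync(envPath, "utf8"));
  const manifest = load(readFileSync(manifestPath, "utf8")) as Manifest;

  const lines = ["# .env.ai (auto-generated; do not edit)"];
  for (const [name, value] of Object.entries(env)) {
    const access = manifest.variables[name]?.access ?? "placeholder"; // unknown -> placeholder
    if (access === "hidden") continue;                                // never written out
    const masked = access === "placeholder" || access === "schema-only"; // schema-only simplified here
    lines.push(`${name}=${masked ? `<${name}>` : value}`);
  }
  writeFileSync(".env.ai", lines.join("\n") + "\n");
}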

Smart Auto-Classification

You don't have to manually classify every variable. envibe includes 40+ patterns that automatically categorize variables by name:

Pattern                         Access Level   Reason
STRIPE_SECRET*, *_PRIVATE_KEY   hidden         Payment/crypto secrets
*_API_KEY, *_TOKEN, *_SECRET    placeholder    API credentials
DATABASE_URL, REDIS_URL         placeholder    May contain passwords
*_HOST, *_ENDPOINT              read-only      Non-sensitive URLs
NODE_ENV, DEBUG, PORT           full           Safe config values

Unknown variables default to placeholder - fail-safe by design.
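
In code, that classification step can be as simple as a first-match-wins list of name patterns. The rules below are a hand-written subset mirroring the table above, not envibe's actual pattern set:

type AccessLevel = "full" | "read-only" | "placeholder" | "schema-only" | "hidden";

// A few illustrative rules; the real list has 40+ patterns. First match wins.
const RULES: Array<[RegExp, AccessLevel]> = [
  [/^STRIPE_SECRET|_PRIVATE_KEY$/, "hidden"],
  [/_API_KEY$|_TOKEN$|_SECRET$/, "placeholder"],
  [/^(DATABASE|REDIS)_URL$/, "placeholder"],
  [/_HOST$|_ENDPOINT$/, "read-only"],
  [/^(NODE_ENV|DEBUG|PORT)$/, "full"],
];

function classify(name: string): AccessLevel {
  for (const [pattern, level] of RULES) {
    if (pattern.test(name)) return level;
  }
  return "placeholder"; // unknown variables: fail-safe default
}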

Getting Started

For Claude Code, it's one command:

claude mcp add envibe npx envibe-mcp

Then run setup in your project:

npx envibe setup

This creates a manifest from your existing .env.example (or an empty one if you don't have one), auto-classifies variables, and configures everything.
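
One detail worth double-checking afterwards (my own convention, not something envibe enforces): the manifest holds rules rather than values and is safe to commit, while .env and the generated .env.ai can both contain real values and belong in .gitignore:

# .gitignore
.env        # real secrets
.env.ai     # generated view; still contains real values for full/read-only variables
# .env.manifest.yaml stays committed - it holds rules, not values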

Works Everywhere

envibe integrates with all major AI coding tools via MCP (Model Context Protocol).

The MCP server provides tools like env_list, env_get, env_set, and env_describe - all respecting your permission rules.
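
For a sense of what that looks like from a client's side, here's a minimal sketch using the official MCP TypeScript SDK (@modelcontextprotocol/sdk). The tool names come from envibe's list above, but the argument shape and response contents are assumptions for illustration:

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Start the envibe MCP server over stdio and connect a throwaway client to it.
const transport = new StdioClientTransport({ command: "npx", args: ["envibe-mcp"] });
const client = new Client({ name: "envibe-demo", version: "0.0.1" });
await client.connect(transport);

console.log(await client.listTools()); // env_list, env_get, env_set, env_describe, ...

// Assumed argument shape; with access: placeholder the response should contain
// <OPENAI_API_KEY>, never the real key.
const result = await client.callTool({
  name: "env_get",
  arguments: { name: "OPENAI_API_KEY" },
});
console.log(result);

await client.close();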

The Missing Security Layer

We're in a new era of AI-assisted development. The productivity gains are real. But so are the security implications.

envibe gives you the missing layer between "AI can't do anything useful" and "AI sees all my secrets." It's the principle of least privilege applied to AI coding assistants.


Check out the project on GitHub. Feedback and contributions welcome.