# Building envibe, Part 1: The Idea
## Where It Started
It was December 2025, and I was thinking about vibe coding. Not doing it. Thinking about it. Specifically, thinking about what happens when you hand an AI assistant the keys to your project and tell it to build.
The thing that kept nagging me was .env. Every project has one. Every .env file is a flat list of secrets and config sitting side by side: your NODE_ENV next to your STRIPE_SECRET_KEY, your PORT next to your signing secrets. When an AI coding assistant reads your project, it reads all of it. That's the point. But .env has no concept of "this variable is safe to share" versus "this variable is a production credential."
The AI needs env vars to be useful. It needs NODE_ENV to run the dev server and PORT to configure networking. But it doesn't need the value of STRIPE_SECRET_KEY. It just needs to know the variable exists and that it shouldn't try to log it. The problem isn't that AI reads .env. The problem is that .env is all-or-nothing.
I wanted a permission layer. Not a vault, not encryption. Just a way to say "you can see this, but not that."
## The Research
Before writing any code, I looked at what already existed.
dotenvx encrypts .env files. Good for securing secrets at rest, but encryption is binary. The AI either decrypts everything or sees nothing. No per-variable control.
Doppler, Infisical, and HashiCorp Vault are secrets managers. They're built for teams and infrastructure: syncing secrets across environments, rotating keys, keeping audit logs. All valuable, but heavy. They require a running service, accounts, and configuration. I wasn't looking for a secrets platform. I was looking for something a solo developer could add to a project in five minutes.
AWS Secrets Manager and GCP Secret Manager fall into the same category: cloud-native, team-oriented, and overkill for "don't show my Stripe key to Claude."
None of these tools offered what I actually wanted: per-variable AI access control. A lightweight way to say "this variable is fine, this one is sensitive, this one should be completely invisible." No running service, no changing how .env works, no encryption.
The gap was clear. Not a vault. A permission layer.
## The Design
The architecture came together as three files, each with a clear job:
- `.env` → your real secrets (gitignored, as always)
- `.env.manifest.yaml` → access rules per variable (safe to commit)
- `.env.ai` → filtered view for AI (auto-generated)
The manifest is the important piece. It doesn't contain values, just rules. You commit it to your repo, and anyone on the team gets the same access control for their AI tools.
Five access levels, each designed around what an AI actually needs in different scenarios:
| Level | AI Sees | AI Can Modify | When You'd Use It |
|---|---|---|---|
| `full` | Real value | Yes | `NODE_ENV`, `PORT`, `DEBUG` |
| `read-only` | Real value | No | API endpoints, webhook URLs |
| `placeholder` | `<VAR_NAME>` | No | API keys, tokens |
| `schema-only` | Format hint | No | Complex config objects |
| `hidden` | Nothing | No | Payment secrets, signing keys |
The critical design decision was default-to-safe: any variable not explicitly classified defaults to placeholder. The AI knows it exists, but can't see the value. You opt in to sharing, not out. This matters because developers forget things. A permission system that fails safe when you forget to configure it is fundamentally different from one that fails open.
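The default-to-safe rule is small enough to sketch in a few lines of TypeScript. This is illustrative only; the names (`filterVar`, `Rule`) and the exact placeholder format are my assumptions, not aienv's actual API:

```typescript
// The five access levels from the table above.
type Access = "full" | "read-only" | "placeholder" | "schema-only" | "hidden";

interface Rule {
  access?: Access;
  format?: string; // used by schema-only as a format hint
}

// Returns the line to emit into the AI-visible view, or null to omit the
// variable entirely.
function filterVar(name: string, value: string, rule?: Rule): string | null {
  // Default-to-safe: an unclassified variable falls back to placeholder,
  // so forgetting to configure a rule never leaks a value.
  const access = rule?.access ?? "placeholder";
  switch (access) {
    case "full":
    case "read-only":
      return `${name}=${value}`; // real value visible (modifiability is enforced elsewhere)
    case "placeholder":
      return `${name}=<${name}>`; // existence visible, value masked
    case "schema-only":
      return `${name}=<${name}:${rule?.format ?? "unknown"}>`; // format hint only
    case "hidden":
      return null; // invisible: no name, no value
  }
}

console.log(filterVar("NODE_ENV", "development", { access: "full" })); // NODE_ENV=development
console.log(filterVar("API_TOKEN", "sk-123")); // no rule → API_TOKEN=<API_TOKEN>
console.log(filterVar("STRIPE_SECRET_KEY", "sk_live_x", { access: "hidden" })); // null
```

Note that the unclassified `API_TOKEN` comes out masked without any configuration at all, which is the fail-safe behavior the design hinges on.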
### What does the AI see?
Here's what a manifest looks like in practice:
```yaml
version: 1
variables:
  NODE_ENV:
    access: full
    description: "Application environment"
    required: true
  DATABASE_URL:
    access: placeholder
    description: "PostgreSQL connection string"
    format: "url"
    example: "postgres://user:pass@localhost:5432/db"
  STRIPE_SECRET_KEY:
    access: hidden
    description: "Payment processing - never expose"
```
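Given that manifest and a hypothetical local `.env` (say, `NODE_ENV=development`), the generated `.env.ai` would look roughly like this. The exact output format here is my sketch, not aienv's verbatim output:

```
NODE_ENV=development
DATABASE_URL=<DATABASE_URL>
```

`STRIPE_SECRET_KEY` doesn't appear at all: `hidden` means the AI never sees the name or the value.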
## Naming: dotenv-guard to aienv
The first name was dotenv-guard. It described the function (guarding dotenv files) but it had a problem. The dotenv ecosystem is well-established: dotenv, dotenvx, dotenv-vault, dotenv-expand. Naming it dotenv-guard implied it was part of that family or built on top of dotenv. It wasn't. It was its own thing, a permission layer for AI, not a dotenv extension.
I dropped dotenv-guard to avoid brand confusion and settled on aienv. "AI" plus "env." The name was short, the npm package was available, and it described the concept: environment variables, but AI-aware.
The design was done. Time to build.
This is Part 1 of a 3-part series. Next up: Part 2: The Build, from design to first commit with Claude Code.
For the what-it-does version, read the product overview.