I Think I Need Two Computers
Contents
I'm building newsagent.fyi, an internet reader, with AI assistance. This post is about the security tradeoffs that keep me up at night.
I've been running AI coding agents for months now. Cursor, Copilot, Claude Code, Codex. The productivity gains are real. But I'm starting to think I need two computers.
Not for the usual reasons. Not because one's for work and one's for gaming. Because I can't figure out where the security boundary should be.
The Permission Fatigue Problem
Every time an agent wants to read a file, run a command, or access something, it asks permission. This is good. This is the safe thing to do. But here's what happens in practice:
I give the agent a task. I go make tea. I come back expecting progress. Instead it's waiting: "Can I read this file?"
Of course you can. That's in the directory I told you to work in.
So I press yes. Next time: "Can I run this command?" Yes. "Can I access this?" Yes.
I'm now conditioned to press yes. I want to let it do its job. The prompts are friction that slow down the work I'm trying to get done. 99% of the time the answer is obviously yes.
But at some point, the answer should be no. At some point, something will ask for access to something it shouldn't have. And I'll be in the habit of pressing yes.
The Sandbox Paradox
The obvious solution is sandboxing. Restrict the agent to a specific directory. Don't let it touch anything else.
But this kills half the usefulness.
Yesterday I was starting a new project. I said "look at this other Android project I built, understand the architecture, we're rebuilding it for a different platform." The agent went and scanned it, understood the patterns, came back ready to work.
If I was in a locked-down sandbox, that wouldn't work. I'd have to manually copy files in. Which files? All of them? Some of them? Now I'm doing the thinking the agent should be doing.
The useful agent can roam. The safe agent can't. Pick one.
The Supply Chain Problem
This is where it gets properly scary.
A friend sent me a tweet about skills. Skills are like plugins for agents. Someone wrote one, you install it, now your agent can do new things. Sounds great.
But think about what a skill actually is. It's instructions that tell the agent how to behave. It's code that runs with whatever permissions you've given the agent.
Example: someone writes a skill for creating a token on Solana. Lots of people want to do that. The skill asks you to configure the wallet address where proceeds should go. You enter your address. You run it. Code gets generated.
The code sends all the money to the skill author's address.
The skill gave you configuration theatre. It asked for your wallet address to make you feel in control. But the code it generated never checks that address. It's just hardcoded to the attacker.
This isn't hypothetical. This is how scams work. And now we're running code that generates code, using skills written by strangers, on machines that have access to our entire development environments.
The Moltbook Problem
There was a story recently about an app built with AI that had its entire database exposed. Row-level security wasn't configured properly. Everything was public.
The response from some people was "skill issue, should have known better." But that misses the point.
AI-generated code moves fast. You can go from idea to deployed app in hours. The security review that would normally catch these issues? It doesn't happen at the same pace. The AI writes plausible-looking security configuration. It sets things up in a way that seems correct. Unless you know exactly what to check, it works until it doesn't.
Meanwhile, the Next.js ecosystem has had serious vulnerabilities. One let you skip middleware with a header. Another was a deserialization attack that rated 10/10 severity. Vercel patched these if you're deployed on their platform, but self-hosted apps remained vulnerable.
We're building fast on foundations that are moving fast. The attack surface is expanding faster than our ability to audit it.
What I Actually Want
I don't want the agent to ask permission for everything. That defeats the purpose.
I don't want the agent to have permission for everything. That's insane.
What I want is a clear boundary. This directory is fair game. Everything in here, read it, write it, run whatever you need. Don't ask.
Everything outside that boundary? Ask. Be suspicious. Make me think about it.
The current model is: ask about everything, teach user to say yes, hope they remember to say no when it matters.
A better model would be: define the boundary explicitly, grant full access inside it, block access outside it by default.
This is basically what sandboxes do, but with more granularity. Not "this one folder" but "this folder, and that folder, and read-only access to this other place where my reference code lives."
The Two Computers Solution
Until something like that exists, I keep coming back to physical separation.
One machine for personal stuff. Email, banking, private documents, passwords.
One machine for code. All my projects, all my experiments, fair game for whatever agent I'm running that day. I don't care if it reads my old bootcamp homework or experimental prototypes. There's nothing sensitive there.
Use something like Synergy or the native macOS screen sharing to flip between them. Never the twain shall meet.
Is this overkill? Maybe. But I also can't explain exactly where the current risk boundary is. The skills I'm using, the tools I've installed, the permissions I've granted while distracted. It's a blur.
At least with two machines, the blast radius is contained.
The Product Question
Is there a product here? Some kind of permission manager for AI agents that thinks in terms of zones rather than individual prompts?
Define your zones: personal (locked), code (open), reference (read-only). Agent operates freely within its zone. Crossing zone boundaries requires explicit approval with clear context about what's being accessed and why.
Maybe this exists. Maybe it's being built. If you know of it, tell me.
Until then, I'm looking at laptops.