First impressions of Claude Cowork, Anthropic's general agent
Anthropic's new Claude Cowork brings powerful coding agents to non-developers while admitting its prompt injection defenses aren't foolproof, and Fly's Sprites.dev serves both developer sandboxes and API-driven code-execution sandboxes with the same persistent-VM architecture.
TLDR
• Claude Cowork is Claude Code rebranded for regular users - same agent capabilities, friendlier UI, but Anthropic admits it can't guarantee protection against prompt injection attacks
• Sprites.dev tackles two problems at once: safe sandboxes for running YOLO-mode coding agents AND a JSON API for executing untrusted code in isolated VMs
• Sprites uses persistent VMs with checkpoints (300ms to snapshot), copy-on-write storage, scale-to-zero billing (~46 cents for 4 hours), and pre-installed dev tools
• On AI-generated code ports: keep original licenses, treat them as derivative works, publish as "alpha slop" until battle-tested - this is how open source is supposed to work
• The sandbox problem is finally getting production-ready solutions, unlocking entire categories of applications that were previously too risky to build
In Detail
Simon Willison examines Anthropic's Claude Cowork launch and Fly.io's new Sprites.dev service, both addressing critical infrastructure needs for AI agents. Cowork is essentially Claude Code with a less intimidating interface - it runs in containerized sandboxes (mounting files at paths like /sessions/zealous-bold-ramanujan/mnt/) and uses pre-installed Claude Skills to teach the agent about its own capabilities. Anthropic is refreshingly honest about security limitations, warning users about prompt injection risks but admitting "agent safety is still an active area of development." The problem: telling non-technical users to "monitor Claude for suspicious actions" isn't realistic, and we're likely headed for a "Challenger disaster" moment when something goes seriously wrong.
Sprites.dev solves two distinct problems with one architecture: safe environments for running coding agents in YOLO mode, and a production-ready API for executing untrusted code. Each Sprite is a persistent VM (8GB RAM, 8 CPUs) with pre-installed tools (Claude, Codex, Python 3.13, Node.js 22.20), automatic port forwarding, and scale-to-zero billing. The clever part is checkpoints - you can snapshot disk state in 300ms using copy-on-write, run untrusted code, then roll back to the clean state. Storage uses fast NVMe with background writes to object storage, and you only pay for blocks actually written. The API lets you configure network policies with DNS-based allow/deny lists. At ~46 cents for a 4-hour coding session, it's positioned as both a developer tool and infrastructure for building sandboxed applications.
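The snapshot-then-rollback workflow described above is classic copy-on-write: a checkpoint only freezes a reference to the current block map, and you pay (in time and storage) only for blocks that later writes actually touch. Here is a toy Python sketch of that idea - an illustration of the general technique, not Sprites' actual implementation; the `CowDisk` class and its methods are invented for this example:

```python
class CowDisk:
    """Toy block device illustrating copy-on-write checkpoints.

    A checkpoint is cheap: it freezes the current block map rather
    than copying block data. The immutable bytes objects standing in
    for block data are shared between the live disk and its
    checkpoints until a write replaces them - so storage cost tracks
    blocks actually written, mirroring the billing model described
    above.
    """

    def __init__(self):
        self.blocks = {}        # block index -> bytes
        self.checkpoints = {}   # checkpoint name -> frozen block map

    def write(self, index, data):
        # Replaces the map entry; never mutates shared block data.
        self.blocks[index] = data

    def read(self, index):
        return self.blocks.get(index, b"\x00")

    def checkpoint(self, name):
        # Freeze the current map, then give the live disk a shallow
        # copy: block data stays shared until overwritten.
        self.checkpoints[name] = self.blocks
        self.blocks = dict(self.blocks)

    def rollback(self, name):
        # Restore the frozen map (again as a shallow copy).
        self.blocks = dict(self.checkpoints[name])


disk = CowDisk()
disk.write(0, b"clean state")
disk.checkpoint("before-untrusted-code")
disk.write(0, b"untrusted code scribbled here")  # agent runs amok
disk.rollback("before-untrusted-code")
print(disk.read(0))  # b'clean state'
```

A real implementation operates at the filesystem or block-device layer (and Sprites adds background writes to object storage), but the contract is the same: snapshot before running untrusted code, roll back to the clean state afterward.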
On the ethics of LLM-generated code ports: Willison argues that porting open source code with AI assistance is legitimate if you maintain original licenses and copyright statements, treating the result as a derivative work. He advocates publishing AI-generated libraries as "alpha slop" until they're battle-tested in production, then removing the alpha label when you'd stake your reputation on them. The bigger concern isn't the ethics of porting - it's that LLMs are reducing demand for existing open source libraries because developers can now generate custom implementations faster than searching for and learning existing tools. This will "quite radically impact the shape of the open source library world over the next few years."