Steward is the simplest coding agent harness that could possibly work. A Python CLI that connects to an LLM, gives it tools (files, shell, web search), and runs a conversation loop. That's it. No framework, no plugin system, no abstractions — just a streaming tool-use loop in readable Python. Bootstrapped from an earlier Bun prototype, then rewritten in Python because sometimes you just want pip install and go.
One conversation loop: the user types a prompt, Steward sends it to the LLM with a list of available tools, the model calls tools, Steward executes them and feeds results back. Repeat until the model is done. Tools are plain Python functions with type annotations — the JSON schema the model sees is generated automatically. Configuration is a .env file. The whole thing fits in your head.
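The schema-from-annotations idea can be sketched in a few lines. This is an illustrative reimplementation, not Steward's actual code; the helper name `tool_schema` and the sample `read_file` tool are made up for the example.

```python
import inspect
import typing

def tool_schema(fn):
    """Build an OpenAI-style JSON schema for a tool from its type annotations."""
    type_map = {str: "string", int: "integer", float: "number", bool: "boolean"}
    hints = typing.get_type_hints(fn)
    sig = inspect.signature(fn)
    props, required = {}, []
    for name, param in sig.parameters.items():
        props[name] = {"type": type_map.get(hints.get(name, str), "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # parameters without defaults are required
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {"type": "object", "properties": props, "required": required},
    }

def read_file(path: str, max_bytes: int = 65536) -> str:
    """Read a text file and return its contents."""
    with open(path, encoding="utf-8") as f:
        return f.read(max_bytes)

schema = tool_schema(read_file)
```

The docstring becomes the tool description the model sees, and only parameters without defaults end up in `required`.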
- Providers: Azure OpenAI, OpenAI, any OpenAI-compatible host, plus a local echo provider for testing.
- Tools: file read/write, shell execution, web search, and code running, the practical toolset for a coding agent.
- Install: no build step, no Node.js, nothing beyond Python.
- Output: real-time streaming with tool calls interleaved as they happen.
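The conversation loop at the core of all this can be sketched as below. Everything here is a stand-in, assuming hypothetical names: `EchoProvider` mimics the local echo provider by "calling" one tool and then answering, and the message shapes are simplified from what a real OpenAI-compatible API returns.

```python
# Illustrative sketch of the tool-use loop, not Steward's actual code.
class EchoProvider:
    """Stand-in for a model: one tool call on turn 1, a final answer on turn 2."""
    def __init__(self):
        self.turn = 0

    def complete(self, messages, tools):
        self.turn += 1
        if self.turn == 1:
            # Pretend the model wants to run a tool first.
            return {"tool_calls": [{"name": "shell", "arguments": {"cmd": "echo hi"}}]}
        # No tool calls this time: the model is done.
        return {"content": f"done after {len(messages)} messages"}

TOOLS = {"shell": lambda cmd: f"$ {cmd}\nhi"}  # toy tool registry

def agent_loop(provider, prompt):
    messages = [{"role": "user", "content": prompt}]
    while True:
        reply = provider.complete(messages, tools=list(TOOLS))
        calls = reply.get("tool_calls")
        if not calls:  # no tool calls means the model has finished
            messages.append({"role": "assistant", "content": reply["content"]})
            return messages
        for call in calls:  # execute each tool and feed the result back
            result = TOOLS[call["name"]](**call["arguments"])
            messages.append({"role": "tool", "content": result})

history = agent_loop(EchoProvider(), "say hi")
```

Swapping `EchoProvider` for a real client is the only change the loop needs, which is what makes the echo provider useful for testing.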