
python-steward

experimental

The simplest coding agent harness that could possibly work — Python, tool use, done.

Overview

Steward is a coding agent harness reduced to its essentials: a Python CLI that connects to an LLM, gives it tools (files, shell, web search), and runs a conversation loop. That's it. No framework, no plugin system, no abstractions, just a streaming tool-use loop in readable Python. Bootstrapped from an earlier Bun prototype, then rewritten in Python because sometimes you just want `pip install` and go.

How it works

One conversation loop: the user types a prompt, Steward sends it to the LLM along with the list of available tools, the model calls tools, Steward executes them and feeds the results back. Repeat until the model stops calling tools. Tools are plain Python functions with type annotations; the JSON schema the model sees is generated automatically from the signatures. Configuration lives in a .env file. The whole thing fits in your head.
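The annotation-to-schema step can be sketched like this. It is a simplified stand-in, not Steward's actual code: the helper names are hypothetical, and the real mapping presumably covers more types than the four shown.

```python
import inspect
from typing import get_type_hints

# Minimal Python-annotation -> JSON Schema mapping (a sketch; Steward's
# real mapping likely handles lists, optionals, enums, etc.).
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool_schema(fn):
    """Build an OpenAI-style tool schema from a plain typed function."""
    hints = get_type_hints(fn)
    hints.pop("return", None)
    properties = {name: {"type": PY_TO_JSON[tp]} for name, tp in hints.items()}
    # Parameters without defaults are required.
    required = [
        name for name, p in inspect.signature(fn).parameters.items()
        if p.default is inspect.Parameter.empty
    ]
    return {
        "type": "function",
        "function": {
            "name": fn.__name__,
            "description": (fn.__doc__ or "").strip(),
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": required,
            },
        },
    }

def read_file(path: str, max_bytes: int = 65536) -> str:
    """Read a text file and return its contents."""
    with open(path, "r", encoding="utf-8") as f:
        return f.read(max_bytes)

schema = tool_schema(read_file)
# schema["function"]["parameters"]["required"] == ["path"]
```

The point is that registering a tool costs nothing beyond writing a typed, docstring-ed function; the model-facing schema falls out of introspection.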

Features
🧠
The simplest thing that works

No framework, no plugin system, no abstractions. One file, one loop, readable Python. The whole agent fits in your head.

🤖
Multi-provider

Azure OpenAI, OpenAI, any OpenAI-compatible host, and a local echo provider for testing.

🔧
Copilot-style tools

File read/write, shell execution, web search, code execution: the practical toolset for a coding agent.

📦
pip installable

No build step, no Node.js, nothing to install beyond Python itself.

🔄
Streaming

Real-time streaming output with tool call interleaving.
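Streaming with tool call interleaving boils down to routing deltas as they arrive: text deltas print immediately, while tool-call argument fragments are buffered until the stream ends. A minimal sketch, using a simplified chunk shape rather than any real SDK's types:

```python
# Consume a stream of deltas: print text as it arrives, accumulate
# tool-call argument fragments for execution after the stream closes.
# The dict chunk shape is a hypothetical stand-in for OpenAI-style deltas.
def consume_stream(chunks):
    text_parts, tool_arg_parts = [], []
    for delta in chunks:
        if "content" in delta:
            print(delta["content"], end="", flush=True)  # real-time output
            text_parts.append(delta["content"])
        if "tool_call" in delta:
            tool_arg_parts.append(delta["tool_call"])    # buffered JSON fragments
    return "".join(text_parts), "".join(tool_arg_parts)

# A fake stream standing in for a live LLM response.
fake_chunks = [
    {"content": "Reading "}, {"content": "file... "},
    {"tool_call": '{"path": '}, {"tool_call": '"a.py"}'},
]
text, args = consume_stream(fake_chunks)
# text == "Reading file... ", args == '{"path": "a.py"}'
```

Arguments stream in as partial JSON, so they can only be parsed and dispatched once the final fragment lands; the user still sees text output with no buffering delay.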

Architecture
User (CLI prompt) → Steward (conversation loop) → LLM (Azure / OpenAI)
Steward ↔ Tools (files · shell · web)

A streaming tool-use loop in a Python CLI.
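The flow above can be sketched end to end. This is a minimal illustration, not Steward's source: the provider is injected as a plain function, and the fake provider below plays the role of the local echo provider mentioned under Features.

```python
import json

def run_loop(provider, tools, prompt):
    """Minimal tool-use loop: send messages, execute tool calls, repeat."""
    messages = [{"role": "user", "content": prompt}]
    while True:
        reply = provider(messages, tools)      # one LLM turn
        messages.append(reply)
        calls = reply.get("tool_calls") or []
        if not calls:                          # no tool calls: model is done
            return reply["content"]
        for call in calls:
            result = tools[call["name"]](**json.loads(call["arguments"]))
            messages.append({"role": "tool",
                             "tool_call_id": call["id"],
                             "content": str(result)})

# Hypothetical stand-in for a real provider: calls one tool, then answers.
def fake_provider(messages, tools):
    if messages[-1]["role"] == "user":
        return {"role": "assistant", "content": None,
                "tool_calls": [{"id": "1", "name": "add",
                                "arguments": json.dumps({"a": 2, "b": 3})}]}
    return {"role": "assistant",
            "content": f"The result is {messages[-1]['content']}"}

def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

answer = run_loop(fake_provider, {"add": add}, "What is 2 + 3?")
# answer == "The result is 5"
```

Swapping `fake_provider` for a real Azure or OpenAI call is the only change needed to go live, which is the whole architecture: one loop, injected provider, plain-function tools.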