An Agentic AI Harness is a minimal, extensible runtime that gives an LLM tools to read, write, edit, and execute code — then gets out of the way. You shape the workflow. It powers the work.
Connects a Large Language Model to your codebase with a standardized set of tools. Extend, customize, or restrict every aspect of its behavior — without forking the core.
Gives the model read, write, edit, and bash primitives. Sub-agents, plan mode, permission gates — extensions you choose to add.
Skills, Extensions, Themes, Prompt Templates. All hot-reloadable. Drop them in, take them out.
Every interaction is a node in a tree. Branch, fork, clone, or compact without losing history. Like Git for conversations.
Bundle extensions, skills, prompts, and themes into Pi Packages. Share via npm or Git in minutes.
Drop an AGENTS.md in your project. Auto-loaded as context with project-specific instructions and conventions.
Adaptable primitives that compose into any workflow without baking opinions into the core.
TypeScript modules that hook into lifecycle events, register custom tools the LLM can call,
add slash commands like /deploy, create permission gates, or replace the editor UI entirely.
Loaded from ~/.pi/agent/extensions/ or project-local .pi/extensions/.
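A minimal sketch of what an Extension module might look like. The ExtensionApi interface below is invented for illustration; the real extension API will differ in its exact names and signatures.

```typescript
// Illustrative only: a minimal stand-in for the harness's extension API.
interface ToolDef {
  name: string;
  description: string;
  handler: (args: Record<string, string>) => Promise<string>;
}

interface ExtensionApi {
  registerTool(tool: ToolDef): void;
  registerCommand(name: string, run: (args: string) => Promise<void>): void;
}

// An extension module exports a setup function that receives the API.
export default function setup(api: ExtensionApi) {
  // Register a custom tool the LLM can call.
  api.registerTool({
    name: "word_count",
    description: "Count the words in a string of text.",
    handler: async (args) => String(args.text.trim().split(/\s+/).length),
  });

  // Register a /deploy slash command for the editor.
  api.registerCommand("deploy", async () => {
    console.log("deploying...");
  });
}
```

The same shape covers permission gates and lifecycle hooks: the harness calls into your module, your module calls back into the API.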
Self-contained capability packages following the Agent Skills standard. A Markdown file with frontmatter and step-by-step instructions. Loaded on demand when a task matches — descriptions stay small in context; full instructions load only when needed.
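A sketch of a skill file, assuming frontmatter with name and description fields; the skill name and steps below are made up for illustration:

```markdown
---
name: release-notes
description: Generate release notes from recent git history.
---

1. Run `git log --oneline` since the last tag.
2. Group commits by type (feat, fix, chore).
3. Write a summary in CHANGELOG format.
```

Only the description enters the model's context up front; the numbered steps load when the task matches.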
Markdown files that act as reusable prompt snippets.
Type /review in the editor and it expands to a full prompt with variables filled in.
Great for code reviews, test generation, documentation, or any repeated workflow.
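Expansion itself is plain string substitution. A sketch, assuming {{variable}} placeholders; the actual template syntax may differ:

```typescript
// Fill {{name}} placeholders in a template; unknown names are left intact.
function expandTemplate(
  template: string,
  vars: Record<string, string>,
): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match: string, name: string) =>
    name in vars ? vars[name] : match,
  );
}

const reviewTemplate =
  "Review {{file}} for correctness, style, and missing tests.";

expandTemplate(reviewTemplate, { file: "src/auth.ts" });
// → "Review src/auth.ts for correctness, style, and missing tests."
```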
JSON files defining every color token in the TUI.
Modify the active theme file and see changes instantly — no restart.
Custom themes live in ~/.pi/agent/themes/.
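A theme file might look like this; the token names below are illustrative, not the actual schema:

```json
{
  "name": "midnight",
  "colors": {
    "background": "#0d1117",
    "foreground": "#c9d1d9",
    "accent": "#58a6ff",
    "error": "#f85149"
  }
}
```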
Every conversation stored as JSONL with a tree structure.
Each entry has an id and a parentId.
Use /tree to jump to any point, /fork to spin off a new session,
or /compact to summarize when context windows fill up.
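The id/parentId structure makes branch recovery a simple parent walk. A sketch in TypeScript using only the fields described above; the on-disk schema may carry more:

```typescript
// Each JSONL line is one entry in the conversation tree.
interface Entry {
  id: string;
  parentId: string | null;
  role: "user" | "assistant";
  text: string;
}

// Recover the branch from the root down to a given leaf.
function branchTo(leafId: string, jsonl: string): Entry[] {
  const byId = new Map<string, Entry>();
  for (const line of jsonl.trim().split("\n")) {
    const entry = JSON.parse(line) as Entry;
    byId.set(entry.id, entry);
  }
  // Follow parentId links from the leaf to the root, then reverse.
  const path: Entry[] = [];
  for (
    let cur = byId.get(leafId);
    cur !== undefined;
    cur = cur.parentId ? byId.get(cur.parentId) : undefined
  ) {
    path.push(cur);
  }
  return path.reverse();
}

const session = [
  '{"id":"1","parentId":null,"role":"user","text":"add tests"}',
  '{"id":"2","parentId":"1","role":"assistant","text":"done"}',
  '{"id":"3","parentId":"1","role":"assistant","text":"alternate branch"}',
].join("\n");

branchTo("2", session).map((e) => e.id); // → ["1", "2"]
```

Forking is just writing new entries that point at an existing parentId; nothing already in the file ever changes.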
Instead of shipping every possible feature and forcing a one-size-fits-all experience, the harness says: "Here's the engine. You build the car."
Build CLI tools with READMEs (Skills) or write an Extension that adds MCP support. The core doesn't need to know about MCP.
Spawn instances via tmux, build your own orchestration with Extensions, or install a package. Many valid approaches exist.
Run in a container for sandboxing, or build inline confirmations with an Extension that matches your security model.
Write plans to files, build plan-mode with Extensions, or use a TODO.md. Built-in to-do lists confuse models more than they help.
Register any tool the LLM can call via Extensions. From deploying to production to querying a database — if you can code it, the model can use it.
AGENTS.md and CLAUDE.md auto-load from your project and parent directories, giving the model persistent context without polluting every prompt.
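An AGENTS.md can be as simple as a few conventions; the contents below are purely illustrative:

```markdown
# Project conventions

- TypeScript strict mode; avoid `any`.
- Run `npm test` before committing.
- API handlers live in src/routes/; mirror the path for new tests.
```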
The core handles LLM communication, tool dispatch, and session persistence — nothing more. Every feature you use is either removable or added by you.
Total control over the surface area. Add what you need, remove what you don't. The harness adapts to your workflow.
You speak, the model thinks, tools run, results return — repeat.
Type in the editor, paste images, reference files with @filename, or run bash inline with !command. The harness packages your input with system prompts, context files, and available skill descriptions.
The LLM receives the full context and decides whether to respond directly or call tools. It sees schemas for every available tool — built-in, extension-provided, and custom.
The harness dispatches tool calls in parallel where safe. Each tool runs with full filesystem and shell access — unless gated by an Extension. Results stream back in real time.
The model receives tool outputs and can issue follow-up calls or deliver its final response. Queue steering messages mid-flight or follow-ups for after current work completes.
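The whole loop fits in a few lines. A toy sketch, with invented types standing in for the real model and tool interfaces:

```typescript
// Illustrative only: the model either answers or requests tool calls;
// tool results are fed back until it delivers a final response.
type ModelTurn =
  | { kind: "reply"; text: string }
  | { kind: "toolCalls"; calls: { tool: string; args: string }[] };

type Tool = (args: string) => Promise<string>;
type Model = (history: string[]) => Promise<ModelTurn>;

async function runLoop(
  model: Model,
  tools: Record<string, Tool>,
  input: string,
): Promise<string> {
  const history = [input];
  while (true) {
    const turn = await model(history);
    if (turn.kind === "reply") return turn.text; // final answer
    // Dispatch requested tool calls in parallel, append results, repeat.
    const results = await Promise.all(
      turn.calls.map((c) => tools[c.tool](c.args)),
    );
    history.push(...results);
  }
}
```

Permission gates and steering messages slot into this loop between the model's turn and tool dispatch.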
The fastest way to experience an Agentic AI Harness. Install pi, set your API key, start coding.
Then add an AGENTS.md to your project, write your first Extension,
or install a package from the community.