A desktop app to command OpenAI Codex and other agents. Work in progress.

Built with:
- Rust
- Tauri
- Leptos
To run the app locally, you'll need to set up a few dependencies:
- Install Rust (if not already installed):
  curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
  source ~/.cargo/env
- Install Tauri CLI:
  cargo install tauri-cli
- Install Trunk (for WebAssembly frontend builds):
  cargo install trunk
- Add the WebAssembly target:
  rustup target add wasm32-unknown-unknown
Once you have all dependencies installed, you can run the development server:
cargo tauri dev
This will start both the Rust backend and the Leptos frontend with hot reload enabled.
- Overview of Codex systems docs: docs/codex/README.md
- Building a Chat UI with streaming: docs/codex-chat-ui.md
- Architecture: docs/codex/architecture.md
- Authentication: docs/codex/authentication.md
- Protocol overview: docs/codex/protocol-overview.md
- Prompts: docs/codex/prompts.md
- Sandbox: docs/codex/sandbox.md
- Tools: docs/codex/tools.md
- Testing: docs/codex/testing.md
- Purpose: Desktop chat UI that drives Codex via a streaming protocol.
- Crates: `openagents-ui` (Leptos/WASM, root crate) and `openagents` (Tauri v2, `src-tauri/`) in a Cargo workspace.
- Frontend:
  - Entry points: `src/main.rs` mounts `App` from `src/app.rs`.
  - UI: Sidebar shows workspace/account/model/client/token usage, a raw event log, and recent chats. The main pane renders transcript blocks (User, Assistant, Reasoning, Tool) with autoscroll.
  - Controls: A reasoning level selector (Minimal/Low/Medium/High) invokes `set_reasoning_effort`; the chat bar sends prompts via `submit_chat`.
  - Markdown: `pulldown_cmark` for rendering; styling via the Tailwind Play CDN and Berkeley Mono (see `index.html` and `public/fonts/`).
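The reasoning level selector can be modeled as a small enum shared between the UI and the `set_reasoning_effort` command. A minimal std-only sketch; the type name and the lowercase wire strings are assumptions, not the app's actual definitions:

```rust
// Hypothetical model of the Minimal/Low/Medium/High selector; the real
// app's type and string mapping may differ.
#[derive(Clone, Copy, Debug, PartialEq)]
enum ReasoningEffort {
    Minimal,
    Low,
    Medium,
    High,
}

impl ReasoningEffort {
    // Value sent to the protocol process (assumed lowercase strings).
    fn as_str(self) -> &'static str {
        match self {
            ReasoningEffort::Minimal => "minimal",
            ReasoningEffort::Low => "low",
            ReasoningEffort::Medium => "medium",
            ReasoningEffort::High => "high",
        }
    }

    // Parse a selector value back; unknown input falls back to Medium.
    fn parse(s: &str) -> ReasoningEffort {
        match s {
            "minimal" => ReasoningEffort::Minimal,
            "low" => ReasoningEffort::Low,
            "high" => ReasoningEffort::High,
            _ => ReasoningEffort::Medium,
        }
    }
}

fn main() {
    let effort = ReasoningEffort::parse("high");
    println!("{}", effort.as_str()); // prints "high"
}
```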
- Desktop (Tauri):
  - Entry: `src-tauri/src/main.rs` → `openagents_lib::run()` in `src-tauri/src/lib.rs`.
  - Commands exposed to the UI: `get_full_status`, `list_recent_chats`, `load_chat`, `submit_chat`, `set_reasoning_effort`, `greet`.
  - Protocol process: Spawns `cargo run -p codex-cli -- proto` from `codex-rs/` if present, else `codex proto`. Forces `approval_policy=never`, `sandbox_mode=danger-full-access`, `model=gpt-5`, and the selected reasoning effort.
  - Streaming: Maps protocol JSON lines to UI events (assistant deltas, reasoning deltas/summaries, tool begin/delta/end, token counts).
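The streaming step amounts to reading the child process's stdout line by line and dispatching on each JSON event's type. A heavily simplified std-only sketch — real code would use a proper JSON parser such as `serde_json`, and the event-type strings here are illustrative, not the protocol's actual schema:

```rust
// Simplified dispatcher: pull a `"type":"..."` value out of a JSON line
// with plain string matching and map it to a UI-facing event. Illustrative
// only; a real implementation would deserialize the full event.
#[derive(Debug, PartialEq)]
enum UiEvent {
    AssistantDelta(String),
    ReasoningDelta(String),
    ToolEvent(String),
    Unknown(String),
}

// Extract the string value of a top-level "type" field. Naive: assumes the
// field is present and contains no escapes, which a real parser would not.
fn event_type(line: &str) -> Option<&str> {
    let key = "\"type\":\"";
    let start = line.find(key)? + key.len();
    let end = line[start..].find('"')? + start;
    Some(&line[start..end])
}

fn dispatch(line: &str) -> UiEvent {
    match event_type(line) {
        Some("agent_message_delta") => UiEvent::AssistantDelta(line.to_string()),
        Some("agent_reasoning_delta") => UiEvent::ReasoningDelta(line.to_string()),
        Some(t) if t.starts_with("tool_") => UiEvent::ToolEvent(line.to_string()),
        _ => UiEvent::Unknown(line.to_string()),
    }
}

fn main() {
    let line = r#"{"type":"agent_message_delta","delta":"Hi"}"#;
    println!("{:?}", dispatch(line));
}
```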
- Auth & Sessions:
  - Auth: Reads `~/.codex/auth.json` to detect ApiKey or ChatGPT; extracts email/plan from `id_token`.
  - Sessions: Scans `~/.codex/sessions` and `~/.codex/archived_sessions` for `rollout-*.jsonl`, parses metadata (cwd, approval, sandbox, CLI version), and reconstructs chat items.
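The session scan can be sketched with std only: list a directory and keep files matching the `rollout-*.jsonl` pattern. The directory layout is as described above; the helper itself is hypothetical:

```rust
use std::fs;
use std::path::{Path, PathBuf};

// Return paths in `dir` whose file names look like `rollout-*.jsonl`,
// mirroring the session scan described above. Hypothetical helper; the
// crate's real scanner also parses each file's metadata.
fn find_rollouts(dir: &Path) -> Vec<PathBuf> {
    let mut out = Vec::new();
    if let Ok(entries) = fs::read_dir(dir) {
        for entry in entries.flatten() {
            let path = entry.path();
            let name = match path.file_name().and_then(|n| n.to_str()) {
                Some(n) => n.to_string(),
                None => continue,
            };
            if name.starts_with("rollout-") && name.ends_with(".jsonl") {
                out.push(path);
            }
        }
    }
    out.sort();
    out
}

fn main() {
    // In the app this would point at ~/.codex/sessions and
    // ~/.codex/archived_sessions.
    for p in find_rollouts(Path::new(".")) {
        println!("{}", p.display());
    }
}
```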
- Config & Build:
  - Trunk: `Trunk.toml` targets `index.html`; the dev server runs on port 1420.
  - Tauri: `src-tauri/tauri.conf.json` runs Trunk in dev and uses `../dist` in builds.
  - Workspace: The root `Cargo.toml` lists `src-tauri` as a member.
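For reference, a minimal `Trunk.toml` consistent with the settings above might look like this — a sketch, not the repo's actual file:

```toml
# Build the frontend from index.html into dist/ (Trunk's default output dir).
[build]
target = "index.html"

# Dev server port that the Tauri dev config points at.
[serve]
port = 1420
```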
- Vendored tooling: `codex-rs/` contains the TUI and supporting crates used by the protocol runner.
- Dev (web): `trunk serve` → http://localhost:1420
- Dev (desktop): `cd src-tauri && cargo tauri dev`
- Build (web): `trunk build --release` → `dist/`
- Build (desktop): `cd src-tauri && cargo tauri build`
- Tests (workspace): `cargo test` or `cargo test -p openagents`
Run these checks before committing:
- UI: `cargo check --target wasm32-unknown-unknown`
- Tauri: `cd src-tauri && cargo check`
You can exercise the Master Task flow without launching the desktop app, using a small CLI that ships with the Tauri crate.
Prereqs:
- Rust toolchain installed
Useful commands (from repo root):
- Create a task (read-only sandbox):
  cargo run -p openagents_lib --bin master_headless -- create "Readonly – Flow Test" read-only
- Plan with a simple goal (fallback planner):
  cargo run -p openagents_lib --bin master_headless -- plan <task_id> "List top-level files; Summarize crates"
- Run one budgeted turn:
  cargo run -p openagents_lib --bin master_headless -- run-once <task_id>
- Run until done (capped at N steps):
  cargo run -p openagents_lib --bin master_headless -- run-until-done <task_id> 10
- List / show tasks:
  cargo run -p openagents_lib --bin master_headless -- list
  cargo run -p openagents_lib --bin master_headless -- show <task_id>
Notes:
- Headless mode uses a fallback planner and a simulated runner turn that enforces budgets and updates metrics without contacting the protocol.
- Real protocol-driven runs and UI streaming remain available via the desktop app.
- A live CLI (proto-backed) is available: `cargo run -p openagents --bin master_live -- <label> [max_seconds]`. Defaults to model `gpt-5`; override with `CODEX_MODEL`.
- Logs: Headless operations append to `$(CODEX_HOME)/master-tasks/<task_id>.log`; live runs append to `$(CODEX_HOME)/master-tasks/live-<label>-<timestamp>.log`.
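The budgeted run-until-done behavior described in the notes above — a simulated turn under a step cap — can be sketched as a simple loop. Everything here (the task struct, the budget fields) is illustrative, not the crate's actual API:

```rust
// Illustrative model of a budgeted headless run: each simulated turn
// consumes budget and updates metrics; the loop stops at completion or
// at the caller-supplied step cap.
struct Task {
    steps_done: u32,
    budget_steps: u32,
    done: bool,
}

// One simulated runner turn: spend a step and mark the task done once the
// budget is exhausted. Stands in for the real protocol-driven turn.
fn run_once(task: &mut Task) {
    task.steps_done += 1;
    if task.steps_done >= task.budget_steps {
        task.done = true;
    }
}

// Rough equivalent of `run-until-done <task_id> N`: loop turns, capped at
// `max_steps`, and report how many turns were taken.
fn run_until_done(task: &mut Task, max_steps: u32) -> u32 {
    let mut taken = 0;
    while !task.done && taken < max_steps {
        run_once(task);
        taken += 1;
    }
    taken
}

fn main() {
    let mut task = Task { steps_done: 0, budget_steps: 3, done: false };
    let taken = run_until_done(&mut task, 10);
    println!("took {taken} turns, done = {}", task.done); // took 3 turns, done = true
}
```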
See also:
- QA scenarios: docs/qa/master-task-qa.md
- Sample read-only config idea: docs/samples/master-task.json