CI has been red since be561bf ('Use Anthropic count tokens for preflight') because that commit replaced the free function preflight_message_request (the byte-estimate guard) with an instance method that silently returns Ok on any count_tokens failure:

    let counted_input_tokens = match self.count_tokens(request).await {
        Ok(count) => count,
        Err(_) => return Ok(()), // <-- silent bypass
    };

Two consequences:

1. client_integration::send_message_blocks_oversized_requests_before_the_http_call has been failing on every CI run since be561bf. The mock server in that test has only one HTTP response queued (a bare '{}' to satisfy the main request), so the count_tokens POST receives an empty body that fails to deserialize into CountTokensResponse -> Err -> silent bypass -> the oversized 600k-char request proceeds to the mock instead of being rejected with ContextWindowExceeded as the test expects.

2. In production, any third-party Anthropic-compatible gateway that doesn't implement /v1/messages/count_tokens (OpenRouter, Cloudflare AI Gateway, etc.) would silently disable the preflight guard entirely, letting oversized requests hit the upstream only to fail there with a provider-side context-window error. This is exactly the 'opaque failure surface' ROADMAP #22 asked us to avoid.

Fix: call the free function super::preflight_message_request(request)? as the first step of the instance method, before any network round trip. This guarantees the byte-estimate guard always fires, whether or not the remote count_tokens endpoint is reachable. The count_tokens refinement still runs afterward when available, for more precise token counting, but it is now strictly additive: it can only catch more cases, never silently skip the guard.
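A minimal, self-contained sketch of the ordering fix. The types and sizes here are stubs (Client, max_bytes, the synchronous count_tokens, and the error enum are all hypothetical stand-ins for the real async client), but the control flow mirrors the fix: the byte-estimate guard runs unconditionally first, so a count_tokens failure can no longer bypass it.

```rust
#[derive(Debug, PartialEq)]
enum PreflightError {
    ContextWindowExceeded,
}

// Free-function byte-estimate guard (stands in for super::preflight_message_request).
fn preflight_message_request(request_bytes: usize, max_bytes: usize) -> Result<(), PreflightError> {
    if request_bytes > max_bytes {
        return Err(PreflightError::ContextWindowExceeded);
    }
    Ok(())
}

struct Client {
    max_bytes: usize,
}

impl Client {
    // Stub for the remote count_tokens call; Err models an unreachable or
    // non-Anthropic endpoint whose response fails to deserialize.
    fn count_tokens(&self) -> Result<u64, ()> {
        Err(())
    }

    fn preflight(&self, request_bytes: usize) -> Result<(), PreflightError> {
        // The guard fires before any network round trip.
        preflight_message_request(request_bytes, self.max_bytes)?;

        // The refinement is strictly additive: on failure we simply keep
        // the guard's verdict instead of silently returning Ok early.
        match self.count_tokens() {
            Ok(_counted) => { /* optionally compare against the model's token limit */ }
            Err(_) => { /* endpoint unavailable: the guard above already ran */ }
        }
        Ok(())
    }
}

fn main() {
    let client = Client { max_bytes: 500_000 };
    // Oversized request is rejected even though count_tokens fails.
    assert_eq!(
        client.preflight(600_000),
        Err(PreflightError::ContextWindowExceeded)
    );
    // A small request still passes despite the unreachable endpoint.
    assert_eq!(client.preflight(100), Ok(()));
}
```

The key design point is that the `?` on the free function makes the early-return path an error, not a success, so the silent-bypass shape is no longer expressible.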
Test results:
- cargo test -p api --lib: 89 passed, 0 failed
- cargo test --release -p api (all test binaries): 118 passed, 0 failed
- cargo test --release -p api --test client_integration send_message_blocks_oversized_requests_before_the_http_call: passes
- cargo fmt --check: clean

This unblocks the Rust CI workflow, which has been red on every push since be561bf landed.
Claw Code
Claw Code is the public Rust implementation of the claw CLI agent harness.
The canonical implementation lives in rust/, and the current source of truth for this repository is ultraworkers/claw-code.
Important
Start with USAGE.md for build, auth, CLI, session, and parity-harness workflows. Make claw doctor your first health check after building, use rust/README.md for crate-level details, read PARITY.md for the current Rust-port checkpoint, and see docs/container.md for the container-first workflow.
Current repository shape
- rust/ — canonical Rust workspace and the claw CLI binary
- USAGE.md — task-oriented usage guide for the current product surface
- PARITY.md — Rust-port parity status and migration notes
- ROADMAP.md — active roadmap and cleanup backlog
- PHILOSOPHY.md — project intent and system-design framing
- src/ + tests/ — companion Python/reference workspace and audit helpers; not the primary runtime surface
Quick start
cd rust
cargo build --workspace
./target/debug/claw --help
./target/debug/claw prompt "summarize this repository"
Authenticate with either an API key or the built-in OAuth flow:
export ANTHROPIC_API_KEY="sk-ant-..."
# or
cd rust
./target/debug/claw login
Run the workspace test suite:
cd rust
cargo test --workspace
Documentation map
- USAGE.md — quick commands, auth, sessions, config, parity harness
- rust/README.md — crate map, CLI surface, features, workspace layout
- PARITY.md — parity status for the Rust port
- rust/MOCK_PARITY_HARNESS.md — deterministic mock-service harness details
- ROADMAP.md — active roadmap and open cleanup work
- PHILOSOPHY.md — why the project exists and how it is operated
Ecosystem
Claw Code is built in the open alongside the broader UltraWorkers toolchain.
Ownership / affiliation disclaimer
- This repository does not claim ownership of the original Claude Code source material.
- This repository is not affiliated with, endorsed by, or maintained by Anthropic.
