Retire the stale bare-skill dispatch backlog item

ROADMAP #36 remained open even though current main already dispatches bare
skill names in the REPL through skill resolution instead of forwarding them
to the model. This change adds a direct regression test for that behavior
and marks the backlog item done with fresh verification evidence.

Constraint: User required fresh cargo fmt, cargo clippy --workspace --all-targets -- -D warnings, and cargo test --workspace before closeout
Rejected: Leave #36 open because the implementation already existed | keeps the immediate backlog inaccurate and invites duplicate work
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Reopen #36 only with a fresh repro showing a listed project skill still falls through to plain prompt handling on current main
Tested: cargo fmt --all --check; cargo clippy --workspace --all-targets -- -D warnings; cargo test --workspace
Not-tested: No interactive manual REPL session beyond the new bare-skill unit coverage
This commit is contained in:
Yeachan-Heo
2026-04-11 22:50:28 +00:00
parent a4921cb262
commit 264fdc214e
2 changed files with 70 additions and 21 deletions
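For context, the shape check that gates bare-skill dispatch in the diff below can be sketched in isolation. This is a minimal standalone sketch, not the exact helper on `main`: `looks_like_skill_name` here is a free function introduced for illustration.

```rust
/// Returns true when the first whitespace-separated token of the REPL input
/// has the shape of a skill name: non-empty, not a slash command, and made
/// only of alphanumerics, '-' and '_'. Mirrors the guard that the new
/// `try_resolve_bare_skill_prompt` helper applies before skill resolution.
fn looks_like_skill_name(input: &str) -> bool {
    let first = input.split_whitespace().next().unwrap_or_default();
    !first.is_empty()
        && !first.starts_with('/')
        && first
            .chars()
            .all(|c| c.is_alphanumeric() || c == '-' || c == '_')
}

fn main() {
    assert!(looks_like_skill_name("caveman sharpen club")); // candidate skill token
    assert!(!looks_like_skill_name("/status")); // slash commands bypass dispatch
    assert!(!looks_like_skill_name("what's up?")); // apostrophe fails the charset check
    assert!(!looks_like_skill_name("")); // empty input never dispatches
    println!("ok");
}
```

Note the check is only a cheap pre-filter; actual dispatch still requires the token to resolve against discovered skills.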


@@ -437,7 +437,7 @@ Model name prefix now wins unconditionally over env-var presence. Regression tes
 35. **OpenAI gpt-5.x requires max_completion_tokens not max_tokens** **done (verified 2026-04-11):** current `main` already carries the Rust-side OpenAI-compat fix. `build_chat_completion_request()` in `rust/crates/api/src/providers/openai_compat.rs` switches the emitted key to `"max_completion_tokens"` whenever the wire model starts with `gpt-5`, while older models still use `"max_tokens"`. Regression test `gpt5_uses_max_completion_tokens_not_max_tokens()` proves `gpt-5.2` emits `max_completion_tokens` and omits `max_tokens`. Re-verified against current `origin/main` `d40929ca`: `cargo test -p api gpt5_uses_max_completion_tokens_not_max_tokens -- --nocapture` passes. Historical proof: `eb044f0a` landed the request-field switch plus regression test on 2026-04-09. Source: rklehm in #claw-code 2026-04-09.
-36. **Custom/project skill invocation disconnected from skill discovery** -- dogfooded 2026-04-09. /skills lists custom skills (e.g. caveman) but bare skill-name invocation does not dispatch them; falls through to plain model prompt. Fix: audit classify_skills_slash_command, ensure any skill listed by /skills has a deterministic invocation path, or document the correct syntax. Source: gaebal-gajae dogfood 2026-04-09.
+36. **Custom/project skill invocation disconnected from skill discovery** **done (verified 2026-04-11):** current `main` already routes bare-word skill input in the REPL through `resolve_skill_invocation()` instead of forwarding it to the model. `rust/crates/rusty-claude-cli/src/main.rs` now treats a leading bare token that matches a known skill name as `/skills <input>`, while `rust/crates/commands/src/lib.rs` validates the skill against discovered project/user skill roots and reports available-skill guidance on miss. Fresh regression coverage proves the known-skill dispatch path and the unknown/non-skill bypass. Historical proof: `8d0308ee` landed the REPL dispatch fix. Source: gaebal-gajae dogfood 2026-04-09.
 37. **Claude subscription login path should be removed, not deprecated** -- dogfooded 2026-04-09. Official auth should be API key only (`ANTHROPIC_API_KEY`) or OAuth bearer token via `ANTHROPIC_AUTH_TOKEN`; the local `claw login` / `claw logout` subscription-style flow created legal/billing ambiguity and a misleading saved-OAuth fallback. **Done (verified 2026-04-11):** removed the direct `claw login` / `claw logout` CLI surface, removed `/login` and `/logout` from shared slash-command discovery, changed both CLI and provider startup auth resolution to ignore saved OAuth credentials, and updated auth diagnostics to point only at `ANTHROPIC_API_KEY` / `ANTHROPIC_AUTH_TOKEN`. Verification: targeted `commands`, `api`, and `rusty-claude-cli` tests for removed login/logout guidance and ignored saved OAuth all pass, and `cargo check -p api -p commands -p rusty-claude-cli` passes. Source: gaebal-gajae policy decision 2026-04-09.
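The hit/miss contract that entry 36 describes can be summarized in shorthand: a known bare skill name is rewritten into a skill prompt, while anything else falls through to plain prompt handling. A sketch under assumptions: `KNOWN_SKILLS` is a hypothetical stand-in for real skill discovery under the skill roots, and the `$<name> <args>` prompt form is taken from the regression test in this commit.

```rust
// Hypothetical stand-in for skills discovered under the project skill root.
const KNOWN_SKILLS: &[&str] = &["caveman"];

/// Sketch of the dispatch contract: a recognized first token turns the whole
/// input into a `$<name> <args>` skill prompt; an unrecognized token returns
/// None so the input is forwarded to the model unchanged.
fn resolve_bare_skill(input: &str) -> Option<String> {
    let first = input.split_whitespace().next()?;
    if KNOWN_SKILLS.contains(&first) {
        Some(format!("${input}"))
    } else {
        None
    }
}

fn main() {
    assert_eq!(
        resolve_bare_skill("caveman sharpen club").as_deref(),
        Some("$caveman sharpen club")
    );
    assert_eq!(resolve_bare_skill("not-a-known-skill do thing"), None);
    println!("ok");
}
```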


@@ -778,6 +778,22 @@ fn removed_auth_surface_error(command_name: &str) -> String {
     )
 }
 
+fn try_resolve_bare_skill_prompt(cwd: &Path, trimmed: &str) -> Option<String> {
+    let bare_first_token = trimmed.split_whitespace().next().unwrap_or_default();
+    let looks_like_skill_name = !bare_first_token.is_empty()
+        && !bare_first_token.starts_with('/')
+        && bare_first_token
+            .chars()
+            .all(|c| c.is_alphanumeric() || c == '-' || c == '_');
+    if !looks_like_skill_name {
+        return None;
+    }
+    match resolve_skill_invocation(cwd, Some(trimmed)) {
+        Ok(SkillSlashDispatch::Invoke(prompt)) => Some(prompt),
+        _ => None,
+    }
+}
+
 fn join_optional_args(args: &[String]) -> Option<String> {
     let joined = args.join(" ");
     let trimmed = joined.trim();
@@ -2977,23 +2993,13 @@ fn run_repl(
         // Bare-word skill dispatch: if the first token of the input
         // matches a known skill name, invoke it as `/skills <input>`
         // rather than forwarding raw text to the LLM (ROADMAP #36).
-        let bare_first_token = trimmed.split_whitespace().next().unwrap_or_default();
-        let looks_like_skill_name = !bare_first_token.is_empty()
-            && !bare_first_token.starts_with('/')
-            && bare_first_token
-                .chars()
-                .all(|c| c.is_alphanumeric() || c == '-' || c == '_');
-        if looks_like_skill_name {
-            let cwd = std::env::current_dir().unwrap_or_default();
-            if let Ok(SkillSlashDispatch::Invoke(prompt)) =
-                resolve_skill_invocation(&cwd, Some(&trimmed))
-            {
-                editor.push_history(input);
-                cli.record_prompt_history(&trimmed);
-                cli.run_turn(&prompt)?;
-                continue;
-            }
-        }
+        let cwd = std::env::current_dir().unwrap_or_default();
+        if let Some(prompt) = try_resolve_bare_skill_prompt(&cwd, &trimmed) {
+            editor.push_history(input);
+            cli.record_prompt_history(&trimmed);
+            cli.run_turn(&prompt)?;
+            continue;
+        }
         editor.push_history(input);
         cli.record_prompt_history(&trimmed);
         cli.run_turn(&trimmed)?;
@@ -8176,10 +8182,11 @@ mod tests {
         resolve_repl_model, resolve_session_reference, response_to_events,
         resume_supported_slash_commands, run_resume_command, short_tool_id,
         slash_command_completion_candidates_with_sessions, status_context,
-        summarize_tool_payload_for_markdown, validate_no_args, write_mcp_server_fixture, CliAction,
-        CliOutputFormat, CliToolExecutor, GitWorkspaceSummary, InternalPromptProgressEvent,
-        InternalPromptProgressState, LiveCli, LocalHelpTopic, PromptHistoryEntry, SlashCommand,
-        StatusUsage, DEFAULT_MODEL, LATEST_SESSION_REFERENCE, STUB_COMMANDS,
+        summarize_tool_payload_for_markdown, try_resolve_bare_skill_prompt, validate_no_args,
+        write_mcp_server_fixture, CliAction, CliOutputFormat, CliToolExecutor, GitWorkspaceSummary,
+        InternalPromptProgressEvent, InternalPromptProgressState, LiveCli, LocalHelpTopic,
+        PromptHistoryEntry, SlashCommand, StatusUsage, DEFAULT_MODEL, LATEST_SESSION_REFERENCE,
+        STUB_COMMANDS,
     };
     use api::{ApiError, MessageResponse, OutputContentBlock, Usage};
     use plugins::{
@@ -8420,6 +8427,16 @@ mod tests {
         }
     }
 
+    fn write_skill_fixture(root: &Path, name: &str, description: &str) {
+        let skill_dir = root.join(name);
+        fs::create_dir_all(&skill_dir).expect("skill dir should exist");
+        fs::write(
+            skill_dir.join("SKILL.md"),
+            format!("---\nname: {name}\ndescription: {description}\n---\n\n# {name}\n"),
+        )
+        .expect("skill file should write");
+    }
+
     fn write_plugin_fixture(root: &Path, name: &str, include_hooks: bool, include_lifecycle: bool) {
         fs::create_dir_all(root.join(".claude-plugin")).expect("manifest dir");
         if include_hooks {
@@ -9713,6 +9730,38 @@ mod tests {
         assert!(help.contains("works with --resume SESSION.jsonl"));
     }
 
+    #[test]
+    fn bare_skill_dispatch_resolves_known_project_skill_to_prompt() {
+        let _guard = env_lock();
+        let workspace = temp_dir();
+        write_skill_fixture(
+            &workspace.join(".codex").join("skills"),
+            "caveman",
+            "Project skill fixture",
+        );
+        let prompt = try_resolve_bare_skill_prompt(&workspace, "caveman sharpen club")
+            .expect("known bare skill should dispatch");
+        assert_eq!(prompt, "$caveman sharpen club");
+        fs::remove_dir_all(workspace).expect("workspace should clean up");
+    }
+
+    #[test]
+    fn bare_skill_dispatch_ignores_unknown_or_non_skill_input() {
+        let _guard = env_lock();
+        let workspace = temp_dir();
+        fs::create_dir_all(&workspace).expect("workspace should exist");
+        assert_eq!(
+            try_resolve_bare_skill_prompt(&workspace, "not-a-known-skill do thing"),
+            None
+        );
+        assert_eq!(try_resolve_bare_skill_prompt(&workspace, "/status"), None);
+        fs::remove_dir_all(workspace).expect("workspace should clean up");
+    }
+
     #[test]
     fn repl_help_includes_shared_commands_and_exit() {
        let help = render_repl_help();