US-011: Performance optimization for API request serialization

Added criterion benchmarks and optimized flatten_tool_result_content:
- Added criterion dev-dependency and request_building benchmark suite
- Optimized flatten_tool_result_content to pre-allocate String capacity and
  push directly into a single buffer instead of collecting into a Vec<String>
  and joining (the previous intermediate-Vec approach)
- Made key functions public for benchmarking: translate_message,
  build_chat_completion_request, flatten_tool_result_content,
  is_reasoning_model, model_rejects_is_error_field
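The pre-allocation optimization described above can be sketched as follows. This is a hypothetical, simplified stand-in: the real flatten_tool_result_content operates on the crate's content-block types, while here plain &str slices stand in for them.

```rust
// Before: collect into an intermediate Vec<String>, then join.
fn flatten_before(blocks: &[&str]) -> String {
    blocks
        .iter()
        .map(|b| b.to_string())
        .collect::<Vec<String>>()
        .join("\n")
}

// After: pre-compute the final length and push into one String directly,
// so the buffer is allocated once and never reallocated.
fn flatten_after(blocks: &[&str]) -> String {
    // Capacity = total block bytes + one separator byte per gap.
    let cap = blocks.iter().map(|b| b.len()).sum::<usize>()
        + blocks.len().saturating_sub(1);
    let mut out = String::with_capacity(cap);
    for (i, b) in blocks.iter().enumerate() {
        if i > 0 {
            out.push('\n');
        }
        out.push_str(b);
    }
    out
}

fn main() {
    let blocks = ["first", "second", "third"];
    assert_eq!(flatten_before(&blocks), flatten_after(&blocks));
    println!("{}", flatten_after(&blocks));
}
```

Both versions produce identical output; the "after" form simply avoids the per-block String allocations and the intermediate Vec.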

Benchmark results:
- flatten_tool_result_content/single_text: ~17ns
- translate_message/text_only: ~200ns
- build_chat_completion_request/10 messages: ~16.4µs
- is_reasoning_model detection: ~26-42ns
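A criterion suite like the one added here is typically wired up with a dev-dependency and a [[bench]] entry that disables the default test harness. This is a sketch of what the Cargo.toml changes likely look like; the exact version requirement is an assumption, not taken from the commit.

```toml
# Hypothetical fragment of rust/crates/api/Cargo.toml (version assumed)
[dev-dependencies]
criterion = "0.5"

[[bench]]
name = "request_building"
harness = false
```

With harness = false, criterion supplies its own main via criterion_main!, and the suite runs with `cargo bench`.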

All 119 unit tests and 29 integration tests pass.
cargo clippy passes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Author: Yeachan-Heo
Date: 2026-04-16 11:11:45 +00:00
parent f65d15fb2f
commit 87b982ece5
6 changed files with 414 additions and 13 deletions

@@ -107,3 +107,27 @@ US-010 COMPLETED (Add model compatibility documentation)
- Added testing section with example commands
- Cross-referenced with existing code comments in openai_compat.rs
- cargo clippy passes
US-011 COMPLETED (Performance optimization: reduce API request serialization overhead)
- Files:
- rust/crates/api/Cargo.toml (added criterion dev-dependency and bench config)
- rust/crates/api/benches/request_building.rs (new benchmark suite)
- rust/crates/api/src/providers/openai_compat.rs (optimizations)
- rust/crates/api/src/lib.rs (public exports for benchmarks)
- Optimizations implemented:
1. flatten_tool_result_content: Pre-allocate String capacity and avoid intermediate Vec
- Before: collected to Vec<String> then joined
- After: single String with pre-calculated capacity, push directly
2. Made key functions public for benchmarking: translate_message, build_chat_completion_request,
flatten_tool_result_content, is_reasoning_model, model_rejects_is_error_field
- Benchmark results:
- flatten_tool_result_content/single_text: ~17ns
- flatten_tool_result_content/multi_text (10 blocks): ~46ns
- flatten_tool_result_content/large_content (50 blocks): ~11.7µs
- translate_message/text_only: ~200ns
- translate_message/tool_result: ~348ns
- build_chat_completion_request/10 messages: ~16.4µs
- build_chat_completion_request/100 messages: ~209µs
- is_reasoning_model detection: ~26-42ns depending on model
- All tests pass (119 unit tests + 29 integration tests)
- cargo clippy passes
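A benchmark suite like benches/request_building.rs plausibly follows the standard criterion shape sketched below. The flatten_tool_result_content here is a trivial stub standing in for the real implementation; the actual suite would import the now-public functions from the api crate, and only the benchmark names are taken from the results above.

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Stub standing in for the real crate function so the sketch is self-contained.
fn flatten_tool_result_content(blocks: &[&str]) -> String {
    blocks.join("\n")
}

fn bench_flatten(c: &mut Criterion) {
    let single = vec!["hello"];
    c.bench_function("flatten_tool_result_content/single_text", |b| {
        b.iter(|| flatten_tool_result_content(black_box(&single)))
    });

    let multi = vec!["block"; 10];
    c.bench_function("flatten_tool_result_content/multi_text", |b| {
        b.iter(|| flatten_tool_result_content(black_box(&multi)))
    });
}

criterion_group!(benches, bench_flatten);
criterion_main!(benches);
```

black_box prevents the compiler from constant-folding the input away, which would otherwise make the ~ns-scale measurements meaningless.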