Smart Responses
How CodeSpar handles open-ended questions using Claude — context-aware, multilingual answers that leverage agent memory, audit history, and project state.
When you ask CodeSpar an open-ended question instead of a structured command, the agent uses Smart Responses — context-aware answers powered by Claude that draw on the agent's full knowledge of your project.
When Smart Responses Activate
The agent detects an open-ended question when all three conditions are met:
- Contains a question mark — the message includes "?"
- Longer than 25 characters — short messages like "status?" are handled as commands
- NLU confidence below 1.0 — the natural language understanding engine is not fully confident that the message maps to a structured command
When these conditions are met, the agent routes the message to the Smart Response handler instead of the command parser.
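The three-condition check can be sketched as a small predicate. This is an illustration of the routing rule described above, not CodeSpar's actual internals; the function and parameter names are assumptions.

```python
def is_open_ended(message: str, nlu_confidence: float) -> bool:
    """Return True when a message should route to the Smart Response
    handler instead of the command parser. All three documented
    conditions must hold."""
    return (
        "?" in message                # contains a question mark
        and len(message) > 25         # longer than 25 characters
        and nlu_confidence < 1.0      # NLU is not fully confident
    )

# "status?" is short, so it stays a structured command
assert not is_open_ended("status?", nlu_confidence=0.4)
# a longer open-ended question routes to Smart Responses
assert is_open_ended("Why did the last build fail on main?", nlu_confidence=0.6)
```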
Examples That Trigger Smart Responses
- "Why did the last build fail, and what should we do about it?"
- "What has the agent been working on this week?"
Examples That Do NOT Trigger Smart Responses
- "status?" (too short; handled as a command)
- "Deploy the staging branch" (no question mark; parsed as a structured command)
How It Works
When a Smart Response is triggered, the agent follows this process:
Step 1: Build Agent Context
The agent assembles a comprehensive context object from its current state:
Step 2: Send to Claude
The context and the user's question are sent to Claude Sonnet (configurable via SMART_MODEL):
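A minimal sketch of the request, assuming the Anthropic Messages API. Only the SMART_MODEL environment variable and its default come from this page; the prompt wording, token limit, and helper name are assumptions.

```python
import os

def build_smart_request(context: dict, question: str) -> dict:
    """Build a Claude Messages API payload for a Smart Response.
    The model is read from SMART_MODEL, falling back to the
    documented default."""
    model = os.environ.get("SMART_MODEL", "claude-sonnet-4-20250514")
    system = (
        "You are CodeSpar's assistant. Answer using the agent context "
        "provided, and respond in the same language the user used."
    )
    return {
        "model": model,
        "max_tokens": 1024,
        "system": system,
        "messages": [{
            "role": "user",
            "content": f"Agent context:\n{context}\n\nQuestion: {question}",
        }],
    }

# The actual call would go through the Anthropic SDK, e.g.:
#   import anthropic
#   reply = anthropic.Anthropic().messages.create(**build_smart_request(ctx, q))
```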
Step 3: Return Response
Claude's response is sent back to the user through whatever channel they used (Slack, WhatsApp, Discord, Telegram, CLI).
Multilingual Support
Smart Responses work in any language. The system prompt instructs Claude to respond in the same language the user used: a question asked in English gets an English answer, one asked in Portuguese gets a Portuguese answer, and one asked in Spanish gets a Spanish answer.
Context Sources
Smart Responses are more useful than generic AI chat because they draw on real project data:
| Source | What It Provides | Example Use |
|---|---|---|
| Agent status | Current state, uptime, autonomy level | "Is the agent running?" |
| Recent audit | Last 20 actions (tasks, reviews, deploys) | "What happened in the last hour?" |
| Build history | CI results, failure details | "Why did the last build fail?" |
| PR data | Open/merged PRs, review status | "What PRs are open?" |
| Memory stats | Vector count, indexed files, modules | "What does the agent know about our codebase?" |
| Task results | Recent coding tasks, file changes | "What was the last thing the agent did?" |
Memory Integration
Smart Response conversations are stored in the agent's vector memory. This means:
- Context builds over time — the agent remembers previous questions and can reference them
- Team patterns emerge — the agent learns what questions the team frequently asks
- Knowledge compounds — answers about the codebase improve as the agent indexes more code
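The record a Smart Response conversation might leave in memory can be sketched as follows. This is a toy in-memory stand-in; CodeSpar's real vector store embeds entries for similarity search, and the class and field names here are hypothetical.

```python
from datetime import datetime, timezone

class ConversationMemory:
    """Toy stand-in for the agent's vector memory, showing only the
    record shape a Smart Response conversation might save."""
    def __init__(self):
        self.records = []

    def add(self, question: str, answer: str) -> dict:
        record = {
            "kind": "smart_response",
            "question": question,
            "answer": answer,
            "stored_at": datetime.now(timezone.utc).isoformat(),
        }
        self.records.append(record)
        return record

memory = ConversationMemory()
memory.add("Why did the last build fail?", "The lint step failed on main.")
```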
Configuration
| Variable | Default | Description |
|---|---|---|
| SMART_MODEL | claude-sonnet-4-20250514 | Claude model for Smart Responses |
Model Selection
The default model (Claude Sonnet) balances speed and quality for conversational responses. For deployments where latency is critical, you can use a faster model. For complex analytical questions, a more capable model may provide better answers.
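For example, SMART_MODEL could be pointed at a different Claude model per deployment. The non-default model IDs below are illustrative; check Anthropic's current model list before using them.

```shell
# Default: balanced speed and quality
SMART_MODEL=claude-sonnet-4-20250514

# Latency-critical deployments: a faster model
SMART_MODEL=claude-3-5-haiku-20241022

# Complex analytical questions: a more capable model
SMART_MODEL=claude-opus-4-20250514
```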
Limitations
| Limitation | Detail |
|---|---|
| Context window | Only the last 20 audit entries and 10 task results are included |
| No real-time data | Cannot query external systems (databases, monitoring tools) directly |
| Latency | Response time depends on Claude API latency (typically 2-5 seconds) |
| Cost | Each Smart Response requires a Claude API call |
Next Steps
- Natural Language Commands — how CodeSpar parses structured commands from natural language
- Autonomy Levels — configure agent proactivity
- Agent Architecture — how agents process messages