# v0.5.0-alpha
Released: March 2026
Agent refactored from activity to durable child workflow, multi-model LLM support, agent observability improvements, and Slack report notifications.
## Breaking Changes
### Agent: Activity to Child Workflow
The agent no longer runs as a single Temporal activity. It now runs as a Temporal child workflow (agent.workflow), giving it durable execution, per-iteration heartbeating, and signal-based observer integration.
Before (v0.4.0):

```go
agent.Run(agent.RunInput{
	ProviderType: "anthropic",
	Model:        "claude-sonnet-4-6",
	MaxTokens:    8192,
	SystemPrompt: prompt,
	UserPrompt:   core.Output("webhook.UserPrompt"),
	MaxTurns:     30,
	MCPServers:   mcpServers,
	CostLimits:   costLimits,
})
```

After (v0.5.0):
```go
agent.Node("reviewer", agent.NodeConfig{
	LLM: agent.LLMConfig{
		ProviderType: "anthropic",
		Model:        "claude-sonnet-4-6",
		MaxTokens:    8192,
	},
	SystemPrompt:  prompt,
	UserPrompt:    core.Output("webhook.UserPrompt"),
	MaxIterations: 30,
	Tools: []agent.Tool{
		agent.MCPTool("pagerduty", pagerdutyMCP),
	},
	CostLimits: costLimits,
	Compaction: agent.CompactionConfig{
		ThresholdTokens: 80000,
		KeepRecent:      4,
	},
})
```

Key changes:

- `agent.Run()` replaced by `agent.Node(name, config)`
- `RunInput` replaced by `NodeConfig` with an `LLMConfig` sub-struct
- `MCPServers` replaced by `Tools` (supports both MCP servers and provider activities)
- `MaxTurns` renamed to `MaxIterations`
- New: `Compaction`, `Observer`, `CustomPricing`, `LLMTimeout`, `ToolTimeout`
### Tool Sources
Tools are now declared via typed constructors instead of raw `MCPServerConfig` slices:
```go
Tools: []agent.Tool{
	agent.MCPTool("pagerduty", agent.MCPServerConfig{...}),
	agent.ProviderTool("slack", slack.Provider()),
}
```

ProviderTool exposes all activities from a resolute provider as LLM-callable tools, with JSON Schema auto-generated from Go struct types.
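To make the schema generation concrete, here is a hypothetical sketch of how a flat Go input struct could be mapped to a JSON Schema object via reflection. The helper and type names (`structToSchema`, `NotifyInput`) are illustrative assumptions, not the library's actual API.

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
	"strings"
)

// structToSchema builds a minimal JSON Schema object for a flat struct,
// mapping Go kinds to JSON Schema types and honoring `json` tags.
// Nested structs and slices are omitted for brevity.
func structToSchema(v any) map[string]any {
	t := reflect.TypeOf(v)
	props := map[string]any{}
	var required []string
	for i := 0; i < t.NumField(); i++ {
		f := t.Field(i)
		name := strings.Split(f.Tag.Get("json"), ",")[0]
		if name == "" {
			name = f.Name
		}
		var jsType string
		switch f.Type.Kind() {
		case reflect.String:
			jsType = "string"
		case reflect.Int, reflect.Int64:
			jsType = "integer"
		case reflect.Float64:
			jsType = "number"
		case reflect.Bool:
			jsType = "boolean"
		default:
			jsType = "object"
		}
		props[name] = map[string]any{"type": jsType}
		required = append(required, name)
	}
	return map[string]any{
		"type":       "object",
		"properties": props,
		"required":   required,
	}
}

// NotifyInput is a hypothetical activity input struct.
type NotifyInput struct {
	Channel string `json:"channel"`
	Text    string `json:"text"`
}

func main() {
	b, _ := json.Marshal(structToSchema(NotifyInput{}))
	fmt.Println(string(b))
}
```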
## New Features
### Observer Pattern
The observer function evaluates the agent’s progress after each iteration from the parent workflow context. It communicates via Temporal signals — fully durable.
```go
agent.Node("reviewer", agent.NodeConfig{
	// ...
	Observer: func(ctx agent.ObserverContext) agent.Verdict {
		if ctx.TotalCost > 5.0 {
			return agent.VerdictFail
		}
		return agent.VerdictContinue
	},
})
```

Verdicts: `VerdictContinue`, `VerdictSucceed`, `VerdictFail`, `VerdictEscalate`.
### Context Compaction
Automatic context summarization when token count exceeds a configurable threshold. Older messages are summarized into a single message, preserving the most recent messages intact.
```go
Compaction: agent.CompactionConfig{
	ThresholdTokens: 80000,
	KeepRecent:      4,
	Model:           "claude-haiku-4-5", // optional, defaults to agent model
},
```

Compaction ROI is logged: `tokens_before`, `tokens_after`, `tokens_saved`.
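The compaction decision described above can be sketched in a few lines: once the running token count exceeds `ThresholdTokens`, everything except the `KeepRecent` newest messages is handed to the summarizer. This is an illustrative model of the behavior, not the library's internal code; the `message` type and `splitForCompaction` helper are assumptions.

```go
package main

import "fmt"

// message is a simplified stand-in for a conversation turn.
type message struct {
	Role   string
	Tokens int
}

// splitForCompaction returns the messages to summarize and the ones to keep
// intact. If the total token count is at or under the threshold (or there are
// too few messages), nothing is summarized.
func splitForCompaction(msgs []message, thresholdTokens, keepRecent int) (toSummarize, kept []message) {
	total := 0
	for _, m := range msgs {
		total += m.Tokens
	}
	if total <= thresholdTokens || len(msgs) <= keepRecent {
		return nil, msgs
	}
	cut := len(msgs) - keepRecent
	return msgs[:cut], msgs[cut:]
}

func main() {
	msgs := []message{
		{"user", 50000}, {"assistant", 40000},
		{"user", 1000}, {"assistant", 2000},
	}
	// Total is 93000 tokens, over the 80000 threshold: the two oldest
	// messages are summarized, the two newest kept verbatim.
	old, recent := splitForCompaction(msgs, 80000, 2)
	fmt.Printf("summarize %d messages, keep %d\n", len(old), len(recent))
}
```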
### Agent Observability
New structured logging and output fields for production monitoring:
| Feature | Description |
|---|---|
| Loop detection | Hash-based detection of consecutive identical tool calls. Warns after 3 consecutive failures on the same tool+input. |
| Per-turn token tracking | PerTurnInputTokens array in output tracks input tokens per iteration. Warns on >2x growth between turns. |
| Compaction ROI | TokensSavedByCompact in output reports total tokens reclaimed. |
| Tool schema overhead | Logs schema_bytes and estimated_tokens for MCP and provider tools at discovery time. |
| Custom pricing | CustomPricing *ModelPricing on NodeConfig for cost tracking with non-built-in models. |
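The hash-based loop detection in the table above can be sketched as follows: hash each tool call (name plus input), count consecutive repeats, and warn once the streak reaches 3. The `loopDetector` type and `Observe` method are illustrative names, not the library's internals.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// loopDetector tracks the hash of the most recent tool call and how many
// times in a row it has been seen.
type loopDetector struct {
	lastHash string
	streak   int
}

// Observe records one tool call and reports whether the identical tool+input
// pair has now occurred three or more times consecutively.
func (d *loopDetector) Observe(tool, input string) bool {
	h := sha256.Sum256([]byte(tool + "\x00" + input))
	key := hex.EncodeToString(h[:])
	if key == d.lastHash {
		d.streak++
	} else {
		d.lastHash, d.streak = key, 1
	}
	return d.streak >= 3
}

func main() {
	var d loopDetector
	for i := 0; i < 3; i++ {
		if d.Observe("pagerduty.list_incidents", `{"status":"open"}`) {
			fmt.Println("loop detected: same tool+input 3x in a row")
		}
	}
}
```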
### Multi-Model Support
The agent supports Anthropic, Ollama, and any OpenAI-compatible endpoint. Switch models via configuration without code changes:
```go
LLM: agent.LLMConfig{
	ProviderType: "ollama",
	BaseURL:      "http://localhost:11434/v1",
	Model:        "qwen3.5:32b",
	MaxTokens:    16384,
},
CustomPricing: &agent.ModelPricing{
	InputPerMillionTokens:  0.50,
	OutputPerMillionTokens: 1.50,
},
```

### Slack: NotifyReport Activity
New activity for posting structured Block Kit reports with LLM metadata:
```go
slack.NotifyReport(slack.NotifyReportInput{
	WebhookURL:  os.Getenv("SLACK_WEBHOOK_URL"),
	Header:      "Review Complete",
	Body:        core.Output("review.Response"),
	CostUSD:     core.Output("review.TotalCost"),
	Duration:    core.Output("review.Duration"),
	Succeeded:   core.Output("review.Succeeded"),
	LLMProvider: "anthropic",
	LLMModel:    "claude-sonnet-4-6",
	FailHeader:  "Review Failed",
	FailMessage: "Check Temporal UI for details.",
})
```

Features:
- Automatic markdown-to-Slack-mrkdwn conversion
- Long body split into multiple section blocks (3000 char limit)
- Handles both success and failure states
- Caps output at 50 Slack blocks
## New Output Fields
`NodeOutput` now includes:
| Field | Type | Description |
|---|---|---|
| `Verdict` | `Verdict` | Final observer verdict |
| `Summaries` | `[]string` | Compaction summaries generated during the run |
| `PerTurnInputTokens` | `[]int64` | Input token count per iteration |
| `TokensSavedByCompact` | `int64` | Total tokens reclaimed by compaction |
## Installation
```sh
go get github.com/resolute-sh/resolute@v0.5.1-alpha
go get github.com/resolute-sh/resolute-agent@v0.3.1-alpha
go get github.com/resolute-sh/resolute-pagerduty@v0.2.1-alpha
go get github.com/resolute-sh/resolute-slack@v0.2.1-alpha
```

## Migration from v0.4.0-alpha
- Replace `agent.Run(RunInput{...})` with `agent.Node(name, NodeConfig{...})`
- Move `ProviderType`, `BaseURL`, `APIKey`, `Model`, `MaxTokens` into the `LLMConfig` sub-struct
- Replace `MCPServers: []MCPServerConfig{...}` with `Tools: []Tool{MCPTool(...), ...}`
- Rename `MaxTurns` to `MaxIterations`
- Add a `Compaction` config if needed (recommended for workflows with many tools)
- Replace `agent.Run(input).As("review")` with `agent.Node("reviewer", config).As("review")`
- Update downstream references: `review.TurnsUsed` → `review.Iterations`, `review.CostUSD` → `review.TotalCost`
## Full Changelog
v0.4.0-alpha…v0.5.0-alpha