v0.5.0-alpha

Released: March 2026

Agent refactored from activity to durable child workflow, multi-model LLM support, agent observability improvements, and Slack report notifications.

Breaking Changes

Agent: Activity to Child Workflow

The agent no longer runs as a single Temporal activity. It now runs as a Temporal child workflow (agent.workflow), giving it durable execution, per-iteration heartbeating, and signal-based observer integration.

Before (v0.4.0):

agent.Run(agent.RunInput{
    ProviderType: "anthropic",
    Model:        "claude-sonnet-4-6",
    MaxTokens:    8192,
    SystemPrompt: prompt,
    UserPrompt:   core.Output("webhook.UserPrompt"),
    MaxTurns:     30,
    MCPServers:   mcpServers,
    CostLimits:   costLimits,
})

After (v0.5.0):

agent.Node("reviewer", agent.NodeConfig{
    LLM: agent.LLMConfig{
        ProviderType: "anthropic",
        Model:        "claude-sonnet-4-6",
        MaxTokens:    8192,
    },
    SystemPrompt:  prompt,
    UserPrompt:    core.Output("webhook.UserPrompt"),
    MaxIterations: 30,
    Tools: []agent.Tool{
        agent.MCPTool("pagerduty", pagerdutyMCP),
    },
    CostLimits: costLimits,
    Compaction: agent.CompactionConfig{
        ThresholdTokens: 80000,
        KeepRecent:      4,
    },
})

Key changes:

  • agent.Run() replaced by agent.Node(name, config)
  • RunInput replaced by NodeConfig with LLMConfig sub-struct
  • MCPServers replaced by Tools (supports both MCP servers and provider activities)
  • MaxTurns renamed to MaxIterations
  • New: Compaction, Observer, CustomPricing, LLMTimeout, ToolTimeout

Tool Sources

Tools are now declared via typed constructors instead of raw MCPServerConfig slices:

Tools: []agent.Tool{
    agent.MCPTool("pagerduty", agent.MCPServerConfig{...}),
    agent.ProviderTool("slack", slack.Provider()),
}

ProviderTool exposes all activities from a resolute provider as LLM-callable tools, with JSON Schema auto-generated from Go struct types.

New Features

Observer Pattern

The observer function evaluates the agent’s progress after each iteration from the parent workflow context. It communicates via Temporal signals — fully durable.

agent.Node("reviewer", agent.NodeConfig{
    // ...
    Observer: func(ctx agent.ObserverContext) agent.Verdict {
        if ctx.TotalCost > 5.0 {
            return agent.VerdictFail
        }
        return agent.VerdictContinue
    },
})

Verdicts: VerdictContinue, VerdictSucceed, VerdictFail, VerdictEscalate.

Context Compaction

Automatic context summarization when token count exceeds a configurable threshold. Older messages are summarized into a single message, preserving the most recent messages intact.

Compaction: agent.CompactionConfig{
    ThresholdTokens: 80000,
    KeepRecent:      4,
    Model:           "claude-haiku-4-5", // optional, defaults to agent model
},

Compaction ROI is logged: tokens_before, tokens_after, tokens_saved.
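The compaction mechanics can be sketched in isolation. This is a minimal stand-alone approximation: the real agent produces the summary with an LLM call, while here the summary is a placeholder, and the `message` type is invented for illustration.

```go
package main

import "fmt"

// message is a stand-in for one conversation turn; Tokens is its estimated size.
type message struct {
	Role   string
	Text   string
	Tokens int64
}

// compact collapses everything except the last keepRecent messages into a
// single summary message once total tokens exceed threshold. It returns the
// new message slice and the tokens saved (the compaction ROI).
func compact(msgs []message, threshold int64, keepRecent int) ([]message, int64) {
	var total int64
	for _, m := range msgs {
		total += m.Tokens
	}
	if total <= threshold || len(msgs) <= keepRecent {
		return msgs, 0 // under threshold: nothing to do
	}
	old := msgs[:len(msgs)-keepRecent]
	var oldTokens int64
	for _, m := range old {
		oldTokens += m.Tokens
	}
	// Placeholder summary; the real agent generates this with the compaction model.
	summary := message{Role: "user", Text: fmt.Sprintf("[summary of %d earlier messages]", len(old)), Tokens: 200}
	out := append([]message{summary}, msgs[len(msgs)-keepRecent:]...)
	return out, oldTokens - summary.Tokens
}

func main() {
	msgs := []message{
		{"user", "incident details", 40000},
		{"assistant", "tool call", 30000},
		{"user", "tool result", 20000},
		{"assistant", "analysis", 5000},
	}
	compacted, saved := compact(msgs, 80000, 2)
	fmt.Println(len(compacted), saved) // 3 69800
}
```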

Agent Observability

New structured logging and output fields for production monitoring:

  • Loop detection: Hash-based detection of consecutive identical tool calls. Warns after 3 consecutive failures on the same tool+input.
  • Per-turn token tracking: PerTurnInputTokens array in output tracks input tokens per iteration. Warns on >2x growth between turns.
  • Compaction ROI: TokensSavedByCompact in output reports total tokens reclaimed.
  • Tool schema overhead: Logs schema_bytes and estimated_tokens for MCP and provider tools at discovery time.
  • Custom pricing: CustomPricing *ModelPricing on NodeConfig for cost tracking with non-built-in models.
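The loop-detection idea can be sketched as a small hash comparator. This is an illustrative approximation, not the agent's internal code; the `loopDetector` type and threshold handling are invented for the example.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// loopDetector flags when the same tool is called with identical input
// several times in a row, a common symptom of a stuck agent.
type loopDetector struct {
	lastHash [32]byte
	repeats  int
}

// Observe hashes the tool name plus input and returns true once the same
// call has repeated at least limit consecutive times.
func (d *loopDetector) Observe(tool, input string, limit int) bool {
	h := sha256.Sum256([]byte(tool + "\x00" + input))
	if h == d.lastHash {
		d.repeats++
	} else {
		d.lastHash = h
		d.repeats = 1
	}
	return d.repeats >= limit
}

func main() {
	var d loopDetector
	calls := []string{`{"id":"P123"}`, `{"id":"P123"}`, `{"id":"P123"}`}
	for i, in := range calls {
		if d.Observe("pagerduty.get_incident", in, 3) {
			fmt.Printf("loop detected at call %d\n", i+1)
		}
	}
}
```

Hashing tool+input rather than storing full payloads keeps the detector O(1) in memory regardless of how large tool inputs get.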

Multi-Model Support

The agent supports Anthropic, Ollama, and any OpenAI-compatible endpoint. Switch models via configuration without code changes:

LLM: agent.LLMConfig{
    ProviderType: "ollama",
    BaseURL:      "http://localhost:11434/v1",
    Model:        "qwen3.5:32b",
    MaxTokens:    16384,
},
CustomPricing: &agent.ModelPricing{
    InputPerMillionTokens:  0.50,
    OutputPerMillionTokens: 1.50,
},

Slack: NotifyReport Activity

New activity for posting structured Block Kit reports with LLM metadata:

slack.NotifyReport(slack.NotifyReportInput{
    WebhookURL:  os.Getenv("SLACK_WEBHOOK_URL"),
    Header:      "Review Complete",
    Body:        core.Output("review.Response"),
    CostUSD:     core.Output("review.TotalCost"),
    Duration:    core.Output("review.Duration"),
    Succeeded:   core.Output("review.Succeeded"),
    LLMProvider: "anthropic",
    LLMModel:    "claude-sonnet-4-6",
    FailHeader:  "Review Failed",
    FailMessage: "Check Temporal UI for details.",
})

Features:

  • Automatic markdown-to-Slack-mrkdwn conversion
  • Long bodies are split into multiple section blocks (Slack's 3000-character limit per section)
  • Handles both success and failure states
  • Caps output at 50 Slack blocks
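The splitting behavior can be sketched with a standalone chunker. This is an approximation for illustration only: it splits on byte boundaries, whereas a real implementation must respect UTF-8 rune boundaries and avoid breaking mrkdwn formatting mid-token.

```go
package main

import "fmt"

// splitBlocks chunks a long body into section-sized pieces, honoring a
// per-section character limit and capping the total number of blocks.
func splitBlocks(body string, maxLen, maxBlocks int) []string {
	var blocks []string
	for len(body) > 0 && len(blocks) < maxBlocks {
		n := maxLen
		if len(body) < n {
			n = len(body)
		}
		blocks = append(blocks, body[:n])
		body = body[n:]
	}
	return blocks
}

func main() {
	long := make([]byte, 7500)
	for i := range long {
		long[i] = 'x'
	}
	blocks := splitBlocks(string(long), 3000, 50)
	fmt.Println(len(blocks), len(blocks[0]), len(blocks[2])) // 3 3000 1500
}
```

With the 50-block cap, anything past 50 sections is silently dropped, so very long reports should link out to the full result rather than inline it.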

New Output Fields

NodeOutput now includes:

  • Verdict (Verdict): Final observer verdict
  • Summaries ([]string): Compaction summaries generated during run
  • PerTurnInputTokens ([]int64): Input token count per iteration
  • TokensSavedByCompact (int64): Total tokens reclaimed by compaction

Installation

go get github.com/resolute-sh/resolute@v0.5.1-alpha
go get github.com/resolute-sh/resolute-agent@v0.3.1-alpha
go get github.com/resolute-sh/resolute-pagerduty@v0.2.1-alpha
go get github.com/resolute-sh/resolute-slack@v0.2.1-alpha

Migration from v0.4.0-alpha

  1. Replace agent.Run(RunInput{...}) with agent.Node(name, NodeConfig{...})
  2. Move ProviderType, BaseURL, APIKey, Model, MaxTokens into LLMConfig sub-struct
  3. Replace MCPServers: []MCPServerConfig{...} with Tools: []Tool{MCPTool(...), ...}
  4. Rename MaxTurns to MaxIterations
  5. Add Compaction config if needed (recommended for workflows with many tools)
  6. Replace agent.Run(input).As("review") with agent.Node("reviewer", config).As("review")
  7. Update downstream references: review.TurnsUsed → review.Iterations, review.CostUSD → review.TotalCost

Full Changelog

v0.4.0-alpha…v0.5.0-alpha