Introduction
Sage is a programming language where agents are first-class citizens.
Instead of building agents using Python frameworks like LangChain or CrewAI, you write agents as naturally as you write functions. Agents, their state, and their interactions are semantic primitives baked into the compiler and runtime.
agent Researcher {
topic: String
on start {
let summary = try divine(
"Write a concise 2-sentence summary of: {self.topic}"
);
yield(summary);
}
on error(e) {
yield("Research unavailable");
}
}
agent Coordinator {
on start {
let r1 = summon Researcher { topic: "quantum computing" };
let r2 = summon Researcher { topic: "CRISPR gene editing" };
let s1 = try await r1;
let s2 = try await r2;
print(s1);
print(s2);
yield(0);
}
on error(e) {
print("A researcher failed");
yield(1);
}
}
run Coordinator;
Why Sage?
Agents as primitives, not patterns. Most agent frameworks are libraries that impose patterns on top of a general-purpose language. Sage makes agents a first-class concept — the compiler understands what an agent is, what state it holds, and how agents communicate.
Type-safe LLM integration. The divine expression lets you call LLMs with structured output. The type system ensures you handle divination results correctly.
Compiles to native binaries and WebAssembly. Sage compiles to Rust, then to native code or WebAssembly. Your agent programs are fast, self-contained binaries — or run directly in the browser. Try it now in the online playground.
Concurrent by default. Spawned agents run concurrently. The runtime handles scheduling and message passing.
Built-in testing with LLM mocking. Test your agents with deterministic mocks — no network calls, fast feedback, reliable CI.
What You’ll Learn
This guide covers:
- Getting Started — Install Sage and write your first program
- Language Guide — Syntax, types, and control flow
- Agents — State, handlers, summoning, and messaging
- LLM Integration — Using `divine` to call language models
- Tools — Built-in tools like HTTP, and MCP integration for external services
- Testing — Write tests with first-class LLM mocking
- WebAssembly — Compile agents for the browser and use the online playground
- Reference — CLI commands, environment variables, error codes
Prefer to learn by asking questions? Chat with Oswyn, the AI-powered Sage companion that runs in your browser.
Let’s get started with installation.
Installation
Prerequisites
Sage requires a C linker and OpenSSL headers for compilation. Rust is not required.
macOS:
xcode-select --install
Debian/Ubuntu:
sudo apt install gcc libssl-dev
Fedora/RHEL:
sudo dnf install gcc openssl-devel
Arch:
sudo pacman -S gcc openssl
Install Sage
Homebrew (macOS)
brew install sagelang/sage/sage
Quick Install (macOS/Linux)
curl -fsSL https://raw.githubusercontent.com/sagelang/sage/main/scripts/install.sh | bash
Cargo (if you have Rust)
cargo install sage-lang
Nix
nix profile install github:sagelang/sage
Verify Installation
sage --version
You should see output like:
sage 2.0.2
Next Steps
Now that Sage is installed, let’s write your first program: Hello World.
Hello World
Let’s write the simplest possible Sage program.
Create a File
Create a file called hello.sg:
agent Main {
on start {
print("Hello from Sage!");
yield(0);
}
}
run Main;
Run It
sage run hello.sg
Output:
Hello from Sage!
0
What’s Happening?
Let’s break down this program:
- `agent Main { ... }` — Declares an agent named `Main`. Agents are the basic unit of computation in Sage.
- `on start { ... }` — The `start` handler runs when the agent is spawned. Every agent needs at least one handler.
- `print("Hello from Sage!")` — Prints a message to the console.
- `yield(0)` — Emits a value, signaling that the agent has finished. The emitted value becomes the agent's result.
- `run Main` — Tells the compiler which agent to start. Every Sage program needs exactly one `run` statement.
Build a Binary
Instead of running directly, you can compile to a standalone binary:
sage build hello.sg -o out/
./out/hello/hello
The binary is self-contained — no Sage installation needed to run it.
Next Steps
Now let’s write something more interesting: Your First Agent.
Your First Agent
Let’s build an agent that does something useful — fetching information from an LLM.
Setup
First, set your OpenAI API key:
export SAGE_API_KEY="your-openai-api-key"
Or create a .env file in your project directory:
SAGE_API_KEY=your-openai-api-key
The Program
Create researcher.sg:
agent Researcher {
topic: String
on start {
let summary = try divine(
"Write a concise 2-sentence summary of: {self.topic}"
);
print(summary);
yield(summary);
}
on error(e) {
yield("Research failed");
}
}
agent Main {
on start {
let r = summon Researcher { topic: "the Rust programming language" };
let result = try await r;
print("Research complete!");
yield(0);
}
on error(e) {
print("Something went wrong");
yield(1);
}
}
run Main;
Run It
sage run researcher.sg
Output (will vary based on LLM response):
Rust is a systems programming language focused on safety, concurrency, and performance. It achieves memory safety without garbage collection through its ownership system.
Research complete!
0
What’s Happening?
- `topic: String` — The `Researcher` agent has a field called `topic`. Fields are the agent's state, initialized when spawned.
- `try divine("...")` — Calls the LLM with the given prompt. The `{self.topic}` syntax interpolates the agent's field into the prompt. The `try` propagates errors to `on error`.
- `on error(e)` — Handles errors from `try` expressions. Without this, the agent would panic on failure.
- `summon Researcher { topic: "..." }` — Creates a new `Researcher` agent with the given field value.
- `try await r` — Waits for the agent to yield its result. The summoned agent runs concurrently until awaited.
Multiple Agents
Let’s summon multiple researchers in parallel:
agent Researcher {
topic: String
on start {
let summary = try divine(
"One sentence about: {self.topic}"
);
yield(summary);
}
on error(e) {
yield("Research unavailable");
}
}
agent Main {
on start {
let r1 = summon Researcher { topic: "quantum computing" };
let r2 = summon Researcher { topic: "machine learning" };
let r3 = summon Researcher { topic: "blockchain" };
// All three run concurrently
let s1 = try await r1;
let s2 = try await r2;
let s3 = try await r3;
print(s1);
print(s2);
print(s3);
yield(0);
}
on error(e) {
yield(1);
}
}
run Main;
The three Researcher agents run concurrently, making parallel LLM calls.
Next Steps
Now that you’ve built your first agent, explore the Language Guide to learn more about Sage’s syntax and features.
Basic Syntax
Sage syntax is designed to be familiar to developers coming from Rust, TypeScript, or Go.
Comments
// Single-line comment
Multi-line comments (`/* ... */`) are not yet supported.
Variables
Variables are declared with let:
let x = 42;
let name = "Sage";
let numbers = [1, 2, 3];
Variables are immutable by default; assigning to an existing name creates a new binding rather than mutating the value in place:
let x = 1;
x = 2; // Reassigns x
Operators
Arithmetic
let sum = 1 + 2;
let diff = 5 - 3;
let product = 4 * 2;
let quotient = 10 / 2;
Comparison
let eq = x == y;
let neq = x != y;
let lt = x < y;
let gt = x > y;
let lte = x <= y;
let gte = x >= y;
Logical
let and = a && b;
let or = a || b;
let not = !a;
String Concatenation
let greeting = "Hello, " ++ name ++ "!";
String Interpolation
Strings support interpolation with {identifier}:
let name = "World";
let greeting = "Hello, {name}!"; // "Hello, World!"
Semicolons
Following Rust conventions:
- Required after: `let`, `return`, assignments, expression statements, `run`
- Not required after: `if`/`else`, `for`, `while` blocks
let x = 1; // semicolon required
if x > 0 { // no semicolon after block
print("positive");
}
Types
Sage has a simple but expressive type system.
Primitive Types
| Type | Description | Example |
|---|---|---|
| `Int` | 64-bit signed integer | `42`, `-17` |
| `Float` | 64-bit floating point | `3.14`, `-0.5` |
| `Bool` | Boolean | `true`, `false` |
| `String` | UTF-8 string | `"hello"` |
| `Unit` | No value (like Rust’s `()`) | — |
Compound Types
List<T>
Ordered collection of elements:
let numbers: List<Int> = [1, 2, 3];
let names: List<String> = ["Alice", "Bob"];
let empty: List<Int> = [];
Map<K, V>
Key-value collections:
let ages: Map<String, Int> = {"alice": 30, "bob": 25};
let alice_age = map_get(ages, "alice"); // Option<Int>
map_set(ages, "charlie", 35);
let has_bob = map_has(ages, "bob"); // true
let keys = map_keys(ages); // List<String>
Tuples
Fixed-size heterogeneous collections:
let pair: (Int, String) = (42, "hello");
let first = pair.0; // 42
let second = pair.1; // "hello"
// Tuple destructuring
let (x, y) = pair;
// Three-element tuple
let triple: (Int, String, Bool) = (1, "test", true);
Option<T>
Optional values:
let some_value: Option<Int> = Some(42);
let no_value: Option<Int> = None;
// Pattern matching on Option
match some_value {
Some(n) => print("Got: " ++ str(n)),
None => print("Nothing"),
}
Result<T, E>
Success or error values:
let success: Result<Int, String> = Ok(42);
let failure: Result<Int, String> = Err("not found");
match success {
Ok(value) => print("Value: " ++ str(value)),
Err(msg) => print("Error: " ++ msg),
}
Fn(A, B) -> C
Function types for closures and higher-order functions:
let add: Fn(Int, Int) -> Int = |x: Int, y: Int| x + y;
let double: Fn(Int) -> Int = |x: Int| x * 2;
fn apply(f: Fn(Int) -> Int, x: Int) -> Int {
return f(x);
}
let result = apply(double, 21); // 42
User-Defined Types
Records
Define structured data with named fields:
record Point {
x: Int,
y: Int,
}
record Person {
name: String,
age: Int,
}
Construct records and access fields:
let p = Point { x: 10, y: 20 };
let sum = p.x + p.y;
let person = Person { name: "Alice", age: 30 };
print(person.name);
Records can also be generic. See Generics for details:
record Pair<A, B> {
first: A,
second: B,
}
let pair = Pair { first: 42, second: "hello" };
Enums
Define types with a fixed set of variants:
enum Status {
Active,
Inactive,
Pending,
}
enum Direction {
North,
South,
East,
West,
}
Use enum variants directly:
let s = Active;
let d = North;
Enum Payloads
Enums can carry data:
enum Result {
Ok(Int),
Err(String),
}
enum Message {
Text(String),
Number(Int),
Pair(Int, String),
}
// Construct variants with payloads
let success = Result::Ok(42);
let failure = Result::Err("not found");
let msg = Message::Pair(1, "hello");
Enums can also be generic. See Generics for details:
enum Either<L, R> {
Left(L),
Right(R),
}
let e = Either::<String, Int>::Left("error");
Match Expressions
Pattern match on enums and other values:
fn describe(s: Status) -> String {
return match s {
Active => "running",
Inactive => "stopped",
Pending => "waiting",
};
}
Match on integers with a wildcard:
fn classify(n: Int) -> String {
return match n {
0 => "zero",
1 => "one",
_ => "many",
};
}
Pattern Matching with Payloads
Bind payload values in match arms:
fn unwrap_result(r: Result) -> String {
return match r {
Ok(value) => str(value),
Err(msg) => msg,
};
}
fn handle_message(m: Message) -> String {
return match m {
Text(s) => s,
Number(n) => str(n),
Pair(n, s) => str(n) ++ ": " ++ s,
};
}
The compiler checks that all variants are covered (exhaustiveness checking).
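To see exhaustiveness checking in action, here is a sketch of a match that fails to compile because a variant is missing (the error comment is illustrative; the compiler's exact wording may differ):

```
enum Status {
    Active,
    Inactive,
    Pending,
}

fn describe(s: Status) -> String {
    return match s {
        Active => "running",
        Inactive => "stopped",
        // Compile error: variant `Pending` is not covered
    };
}
```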
Constants
Define compile-time constants:
const MAX_RETRIES: Int = 3;
const DEFAULT_NAME: String = "anonymous";
Agent Types
Agent<T>
A handle to a summoned agent that will yield a value of type T:
agent Worker {
on start {
yield(42);
}
}
agent Main {
on start {
let w: Agent<Int> = summon Worker {};
let result: Int = try await w;
yield(result);
}
on error(e) {
yield(0);
}
}
run Main;
Oracle<T>
The result of a divine call:
let summary = try divine("Summarize: {topic}");
Oracle<T> can be used anywhere T is expected — the type coerces automatically.
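A minimal sketch of this coercion, assuming an illustrative prompt and variable names:

```
agent Main {
    on start {
        // divine produces an Oracle<String> here
        let answer = try divine("Name one prime number");
        // Oracle<String> coerces wherever a String is expected
        let text: String = answer;
        print(text);
        yield(0);
    }
    on error(e) {
        yield(1);
    }
}
run Main;
```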
Type Inference
Sage infers types when possible:
let x = 42; // Int
let name = "Sage"; // String
let list = [1, 2, 3]; // List<Int>
Explicit annotations are required for:
- Function parameters
- Agent state fields
- Closure parameters
- Ambiguous cases
Type Annotations
Use : Type syntax:
let x: Int = 42;
let items: List<String> = [];
fn double(n: Int) -> Int {
return n * 2;
}
agent Worker {
count: Int
on start {
yield(self.count * 2);
}
}
Functions
Functions in Sage are defined at the top level and can be called from anywhere.
Defining Functions
fn greet(name: String) -> String {
return "Hello, " ++ name ++ "!";
}
fn add(a: Int, b: Int) -> Int {
return a + b;
}
Calling Functions
let message = greet("World");
let sum = add(1, 2);
Return Types
All functions must declare their return type:
fn double(n: Int) -> Int {
return n * 2;
}
fn print_message(msg: String) -> Unit {
print(msg);
return;
}
Use Unit for functions that don’t return a meaningful value.
Generic Functions
Functions can have type parameters, making them work with any type:
fn identity<T>(x: T) -> T {
return x;
}
fn swap<A, B>(pair: (A, B)) -> (B, A) {
return (pair.1, pair.0);
}
let x = identity(42); // T inferred as Int
let y = identity("hello"); // T inferred as String
See Generics for comprehensive coverage.
Recursion
Functions can call themselves:
fn factorial(n: Int) -> Int {
if n <= 1 {
return 1;
}
return n * factorial(n - 1);
}
fn fibonacci(n: Int) -> Int {
if n <= 1 {
return n;
}
return fibonacci(n - 1) + fibonacci(n - 2);
}
Closures
Sage supports first-class functions and closures:
// Closure with typed parameters
let add = |x: Int, y: Int| x + y;
// Empty parameter closure
let get_value = || 42;
// Multi-statement closure with block
let greet = |name: String| {
let msg = "Hello, " ++ name ++ "!";
return msg;
};
Closure parameters require explicit type annotations.
Function Types
Use Fn(A, B) -> C to describe function types:
fn apply(f: Fn(Int) -> Int, x: Int) -> Int {
return f(x);
}
let double = |x: Int| x * 2;
let result = apply(double, 21); // 42
Higher-Order Functions
Functions can return closures:
fn make_multiplier(n: Int) -> Fn(Int) -> Int {
return |x: Int| x * n;
}
let triple = make_multiplier(3);
let result = triple(10); // 30
Fallible Functions
Functions that can fail are marked with fails:
fn risky_operation() -> Int fails {
let value = try divine("Give me a number");
return parse_int(value);
}
Callers must handle errors with try or catch:
agent Main {
on start {
let result = try risky_operation();
yield(result);
}
on error(e) {
yield(0);
}
}
run Main;
Built-in Functions
Sage provides several built-in functions:
| Function | Signature | Description |
|---|---|---|
| `print` | `(String) -> Unit` | Print to console |
| `str` | `(T) -> String` | Convert any value to string |
| `len` | `(List<T>) -> Int` | Get list or map length |
| `push` | `(List<T>, T) -> List<T>` | Append to list |
| `join` | `(List<String>, String) -> String` | Join strings |
| `int_to_str` | `(Int) -> String` | Convert int to string |
| `str_contains` | `(String, String) -> Bool` | Check substring |
| `sleep_ms` | `(Int) -> Unit` | Sleep for milliseconds |
| `map_get` | `(Map<K,V>, K) -> Option<V>` | Get value from map |
| `map_set` | `(Map<K,V>, K, V) -> Unit` | Set key-value in map |
| `map_has` | `(Map<K,V>, K) -> Bool` | Check if key exists |
| `map_delete` | `(Map<K,V>, K) -> Unit` | Remove key from map |
| `map_keys` | `(Map<K,V>) -> List<K>` | Get all keys as list |
| `map_values` | `(Map<K,V>) -> List<V>` | Get all values as list |
Example
fn summarize_list(items: List<String>) -> String {
let count = len(items);
let joined = join(items, ", ");
return "Found " ++ str(count) ++ " items: " ++ joined;
}
agent Main {
on start {
let result = summarize_list(["apple", "banana", "cherry"]);
print(result);
yield(0);
}
}
run Main;
Output:
Found 3 items: apple, banana, cherry
Generics
Sage supports parametric polymorphism (generics), allowing you to write functions, records, and enums that work with any type.
Generic Functions
Declaration
Type parameters are declared in angle brackets after the function name:
fn identity<T>(x: T) -> T {
return x;
}
fn swap<A, B>(pair: (A, B)) -> (B, A) {
return (pair.1, pair.0);
}
fn map<T, U>(list: List<T>, f: Fn(T) -> U) -> List<U> {
let result: List<U> = [];
for item in list {
result = push(result, f(item));
}
return result;
}
Type parameters are typically single uppercase letters (T, U, A, B), but any identifier is valid (Item, Key, Value).
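For example, a sketch using a descriptive type parameter name instead of a single letter (the function is hypothetical):

```
fn duplicate<Item>(value: Item) -> List<Item> {
    let result: List<Item> = [];
    result = push(result, value);
    result = push(result, value);
    return result;
}

let pair = duplicate(7); // pair: List<Int> = [7, 7]
```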
Calling Generic Functions
Type arguments are usually inferred from the arguments:
let x = identity(42); // T inferred as Int
let y = identity("hello"); // T inferred as String
let nums = [1, 2, 3];
let doubled = map(nums, |n: Int| n * 2); // T=Int, U=Int
When inference fails or is ambiguous, use turbofish syntax (::<...>):
let empty: List<Int> = [];
let mapped = map::<Int, String>(empty, |n: Int| str(n));
What You Can Do with Type Parameters
Because type parameters are unconstrained, you can only perform operations that work on all types:
Allowed:
- Assign values to variables of the same type
- Pass values to other generic functions
- Return values
- Store in generic containers (`List<T>`, `Option<T>`, etc.)
- Use in tuples or record fields

Not allowed:
- Use operators like `==`, `+`, `-` on type parameters
- Print type parameters directly (use concrete types)
// Valid - just moves values around
fn first<T>(list: List<T>) -> Option<T> {
if len(list) == 0 {
return None;
}
return Some(list[0]);
}
// Invalid - cannot compare unconstrained types
fn contains<T>(list: List<T>, target: T) -> Bool {
for item in list {
if item == target { // Error: cannot apply == to T
return true;
}
}
return false;
}
Generic Records
Declaration
Type parameters are declared after the record name:
record Pair<A, B> {
first: A,
second: B,
}
record Page<T> {
items: List<T>,
total: Int,
page: Int,
page_size: Int,
}
record Timestamped<T> {
value: T,
created_at: String,
updated_at: String,
}
Construction
Type arguments are inferred from field values:
// Type arguments inferred from field values
let pair = Pair { first: 42, second: "hello" };
// pair: Pair<Int, String>
let page: Page<String> = Page {
items: ["a", "b", "c"],
total: 100,
page: 1,
page_size: 10,
};
Field Access
Field access works the same as non-generic records:
let pair = Pair { first: 42, second: "hello" };
let n: Int = pair.first;
let s: String = pair.second;
Generic Records as Parameters
fn unwrap_timestamped<T>(ts: Timestamped<T>) -> T {
return ts.value;
}
fn paginate<T>(items: List<T>, page: Int, page_size: Int) -> Page<T> {
let start = (page - 1) * page_size;
// ... slice items ...
return Page {
items: sliced_items,
total: len(items),
page: page,
page_size: page_size,
};
}
Generic Enums
Declaration
Type parameters are declared after the enum name:
enum Either<L, R> {
Left(L),
Right(R),
}
enum Tree<T> {
Leaf(T),
Node(Tree<T>, Tree<T>),
}
enum Loadable<T, E> {
Loading,
Loaded(T),
Failed(E),
}
Construction
When constructing a variant, if the type cannot be fully inferred, use turbofish:
// Type can be inferred from context
let e: Either<String, Int> = Either::Left("error");
// Explicit turbofish when inference fails
let e = Either::<String, Int>::Left("error");
let e2 = Either::<String, Int>::Right(42);
// Tree example
let leaf: Tree<Int> = Tree::Leaf(42);
let tree = Tree::<Int>::Node(Tree::Leaf(1), Tree::Leaf(2));
Pattern Matching
Pattern matching works the same as non-generic enums:
fn tree_sum(tree: Tree<Int>) -> Int {
return match tree {
Leaf(n) => n,
Node(left, right) => tree_sum(left) + tree_sum(right),
};
}
fn describe_either<L, R>(e: Either<L, R>) -> String {
return match e {
Left(_) => "left",
Right(_) => "right",
};
}
Type Inference
How It Works
Sage infers type arguments from usage:
fn identity<T>(x: T) -> T { return x; }
let y = identity(42);
// Constraint: T = Int (from argument)
// Result: y: Int
Bidirectional Inference
Type information flows from both arguments and expected return type:
fn first<T>(list: List<T>) -> Option<T> { ... }
// Inference from argument
let x = first([1, 2, 3]);
// List<T> = List<Int> => T = Int
// Result: x: Option<Int>
// Inference from expected type
let y: Option<String> = first([]);
// Option<T> = Option<String> => T = String
When Inference Fails
Use type annotations or turbofish when inference can’t determine the type:
// Empty list - type unknown
let empty: List<Int> = []; // Annotation required
// Turbofish on function call
let result = parse::<Int>(json_string);
Using with Built-in Types
The built-in generic types (List<T>, Option<T>, Map<K, V>, Result<T, E>) work seamlessly with user-defined generics:
fn process<T>(items: List<T>) -> Int {
return len(items);
}
record MyData { value: Int }
let my_items: List<MyData> = [MyData { value: 1 }];
let count = process(my_items); // T = MyData
Generic Agents
Generic functions can be called from agent handlers:
fn transform_all<T>(items: List<T>, f: Fn(T) -> T) -> List<T> {
return map(items, f);
}
agent Processor {
on start {
let nums = [1, 2, 3];
let result = transform_all(nums, |n: Int| n * 2);
print(str(result)); // [2, 4, 6]
yield(0);
}
}
run Processor;
Common Patterns
Wrapper Types
record Validated<T> {
value: T,
is_valid: Bool,
errors: List<String>,
}
fn validate<T>(value: T, validator: Fn(T) -> List<String>) -> Validated<T> {
let errors = validator(value);
return Validated {
value: value,
is_valid: len(errors) == 0,
errors: errors,
};
}
Either for Error Handling
enum Either<L, R> {
Left(L),
Right(R),
}
fn safe_divide(a: Int, b: Int) -> Either<String, Int> {
if b == 0 {
return Either::<String, Int>::Left("division by zero");
}
return Either::<String, Int>::Right(a / b);
}
Pair and Triple
record Pair<A, B> {
first: A,
second: B,
}
fn zip_with_index<T>(items: List<T>) -> List<Pair<Int, T>> {
let result: List<Pair<Int, T>> = [];
let i = 0;
for item in items {
result = push(result, Pair { first: i, second: item });
i = i + 1;
}
return result;
}
Summary
| Feature | Syntax | Example |
|---|---|---|
| Generic function | fn name<T>(...) | fn identity<T>(x: T) -> T |
| Generic record | record Name<T> {...} | record Box<T> { value: T } |
| Generic enum | enum Name<T> {...} | enum Option<T> { Some(T), None } |
| Turbofish (function) | name::<Type>(...) | parse::<Int>(str) |
| Turbofish (enum) | Enum::<Type>::Variant(...) | Either::<A, B>::Left(x) |
| Type annotation | let x: Type<T> = ... | let list: List<Int> = [] |
Control Flow
Sage provides standard control flow constructs.
If/Else
if x > 0 {
print("positive");
} else if x < 0 {
print("negative");
} else {
print("zero");
}
Conditions must be Bool — no implicit truthy/falsy coercion.
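Because there is no truthiness, checks such as "is this list non-empty" must be written as explicit comparisons. A small sketch:

```
let items = [1, 2, 3];

// if items { ... }        // Error: condition has type List<Int>, not Bool
if len(items) > 0 {
    print("non-empty");
}
```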
For Loops
Iterate over lists:
let numbers = [1, 2, 3, 4, 5];
for n in numbers {
print(str(n));
}
With index tracking:
let names = ["Alice", "Bob", "Charlie"];
let i = 0;
for name in names {
print(str(i) ++ ": " ++ name);
i = i + 1;
}
Iterate over maps with tuple destructuring:
let scores = {"alice": 100, "bob": 85, "charlie": 92};
for (name, score) in scores {
print(name ++ ": " ++ str(score));
}
While Loops
let count = 0;
while count < 5 {
print(str(count));
count = count + 1;
}
Infinite Loops
Use loop for indefinite iteration, and break to exit:
loop {
let input = get_input();
if input == "quit" {
break;
}
process(input);
}
This is particularly useful for agents that process messages:
agent Worker receives WorkerMsg {
on start {
loop {
let msg: WorkerMsg = receive();
match msg {
Shutdown => break,
Task => process_task(),
}
}
yield(0);
}
}
Early Return
Use return to exit a function early:
fn find_first_positive(numbers: List<Int>) -> Int {
for n in numbers {
if n > 0 {
return n;
}
}
return -1;
}
Example: FizzBuzz
fn fizzbuzz(n: Int) -> String {
if n % 15 == 0 {
return "FizzBuzz";
}
if n % 3 == 0 {
return "Fizz";
}
if n % 5 == 0 {
return "Buzz";
}
return str(n);
}
agent Main {
on start {
let i = 1;
while i <= 20 {
print(fizzbuzz(i));
i = i + 1;
}
yield(0);
}
}
run Main;
Error Handling
Sage has a robust error handling system designed for the realities of AI-native applications, where LLM calls can fail, agents can crash, and network operations are inherently unreliable.
The Error Model
In Sage, errors are values. Operations that can fail are marked with fails and must be explicitly handled. This prevents silent failures and makes error paths visible in your code.
Fallible operations in Sage:
- `divine` — LLM calls
- `await` — waiting for agents
- `send` — sending messages to agents
- Functions marked with `fails`
- Tool calls (e.g., `Http.get`)
Handling Errors with try
The try keyword propagates errors to the enclosing on error handler:
agent Researcher {
topic: String
on start {
let summary = try divine("Summarise: {self.topic}");
yield(summary);
}
on error(e) {
print("Research failed: " ++ e.message);
yield("Unable to research topic");
}
}
run Researcher { topic: "quantum computing" };
When the `divine` call fails, execution jumps to `on error`. The error `e` contains:
- `message` — human-readable description
- `kind` — error category (see Error Kinds below)
Inline Recovery with catch
For fine-grained control, use catch to handle errors inline:
agent Main {
on start {
let result = catch divine("What is 2+2?") {
"I don't know"
};
print(result);
yield(0);
}
}
run Main;
If divine fails, the catch block runs and its value becomes the result. This is useful when you want to provide a fallback without involving the agent’s error handler.
Catch with Error Binding
You can bind the error to inspect it:
let result = catch divine("prompt") as err {
print("Failed: " ++ err.message);
"fallback value"
};
Explicit Failure with fail
Use fail to raise errors explicitly:
fn validate_age(age: Int) -> Int fails {
if age < 0 {
fail "Age cannot be negative";
}
if age > 150 {
fail "Age seems unrealistic";
}
return age;
}
The fail expression:
- Immediately returns an error from the current function
- Requires the enclosing function to be marked with `fails`
- Takes a string message
Retrying Operations
For transient failures, use retry:
agent Fetcher {
url: String
on start {
// Retry up to 3 times
let response = retry(3) {
try Http.get(self.url)
};
yield(response.body);
}
on error(e) {
yield("Failed after retries");
}
}
Retry with Delay
Add a delay between attempts:
let result = retry(3, delay: 1000) {
try divine("Generate a haiku")
};
This waits 1000ms between each retry attempt.
Retry with Error Filtering
Only retry on specific error kinds:
let result = retry(3, on: [ErrorKind.Network, ErrorKind.Timeout]) {
try Http.get(url)
};
Other errors (like ErrorKind.User) will fail immediately without retrying.
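For instance, a user-raised error is typically not transient, so it should not be retried. A hypothetical sketch combining `fail` with a filtered `retry`:

```
fn parse_positive(n: Int) -> Int fails {
    if n <= 0 {
        fail "expected a positive number"; // surfaces as ErrorKind.User
    }
    return n;
}

agent Main {
    on start {
        // Network errors would be retried; the User error above fails immediately
        let value = retry(3, on: [ErrorKind.Network]) {
            try parse_positive(-5)
        };
        yield(value);
    }
    on error(e) {
        print("Gave up: " ++ e.message);
        yield(1);
    }
}
run Main;
```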
Error Kinds
Sage categorises errors into kinds for programmatic handling:
| Kind | Description | Examples |
|---|---|---|
| `Llm` | LLM-related failures | API errors, parse failures, empty responses |
| `Agent` | Agent lifecycle errors | Spawn failures, await timeouts |
| `Runtime` | Internal runtime errors | Type mismatches |
| `Tool` | Tool call failures | HTTP errors, file I/O errors |
| `User` | User-raised errors | From `fail` expressions |
Matching on Error Kind
on error(e) {
match e.kind {
ErrorKind.Llm => {
print("LLM failed, using fallback");
yield(fallback_response());
}
ErrorKind.Network => {
print("Network issue, please retry");
yield(1);
}
_ => {
print("Unexpected error: " ++ e.message);
yield(1);
}
}
}
Fallible Functions
Mark functions that can fail with fails:
fn fetch_user(id: Int) -> User fails {
let response = try Http.get("/users/" ++ str(id));
if response.status != 200 {
fail "User not found";
}
return parse_user(response.body);
}
Callers must handle the error:
// With try
let user = try fetch_user(42);
// With catch
let user = catch fetch_user(42) {
User { name: "Unknown", id: 0 }
};
Best Practices
1. Handle errors at the right level
Use try for errors that should bubble up to the agent’s error handler. Use catch for errors you want to handle locally with a fallback.
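A sketch contrasting the two levels in one agent (the prompts and agent name are illustrative):

```
agent Summarizer {
    topic: String
    on start {
        // Local recovery: a missing title is acceptable, so catch inline
        let title = catch divine("A short title for: {self.topic}") {
            self.topic
        };
        // Critical step: let failure bubble up to the error handler
        let body = try divine("Summarize: {self.topic}");
        yield(title ++ "\n" ++ body);
    }
    on error(e) {
        yield("Summary unavailable");
    }
}
```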
2. Provide meaningful fallbacks
// Good: meaningful fallback
let summary = catch divine("Summarise: {topic}") {
"Summary unavailable for " ++ topic
};
// Avoid: silent failures
let summary = catch divine("Summarise: {topic}") {
""
};
3. Use retry for transient failures
LLM calls and network requests often fail transiently. Use retry with appropriate delays:
let result = retry(3, delay: 500) {
try divine("Generate response")
};
4. Log errors in on error
on error(e) {
print("Error [" ++ str(e.kind) ++ "]: " ++ e.message);
yield(error_response);
}
5. Fail fast on unrecoverable errors
fn validate_config(config: Config) -> Config fails {
if is_empty(config.api_key) {
fail "API key is required";
}
return config;
}
Summary
| Construct | Purpose |
|---|---|
| `try expr` | Propagate error to `on error` handler |
| `catch expr { fallback }` | Handle error inline with fallback |
| `fail "message"` | Raise an explicit error |
| `retry(n) { expr }` | Retry operation up to n times |
| `on error(e) { ... }` | Agent-level error handler |
| `fails` | Mark function as fallible |
Extern Functions (Rust FFI)
Sage can call Rust functions directly via extern fn declarations. This lets you drop into Rust for performance-critical code, system integration, or access to the Rust ecosystem.
Declaring Extern Functions
Declare an extern function in Sage with the types it expects and returns:
extern fn now_iso() -> String
extern fn prompt(msg: String) -> String fails
extern fn clear_screen()
These declarations tell the compiler that the function is implemented in Rust and will be linked at compile time. You call them like any other Sage function:
let time = now_iso();
let input = try prompt("Enter your name:");
clear_screen();
The fails Modifier
Functions marked fails can return errors. On the Rust side they return Result<T, String>, and in Sage they must be called with try or catch:
extern fn read_config(path: String) -> String fails
agent Main {
on start {
let config = try read_config("settings.toml");
print(config);
yield(0);
}
on error(e) {
print("Failed to read config: " ++ e.message);
yield(1);
}
}
run Main;
Implementing in Rust
Create a Rust source file (e.g., src/sage_extern.rs) with the function implementations:
// src/sage_extern.rs
pub fn now_iso() -> String {
    chrono::Utc::now().to_rfc3339()
}

pub fn prompt(msg: String) -> Result<String, String> {
    print!("{}", msg);
    std::io::Write::flush(&mut std::io::stdout())
        .map_err(|e| e.to_string())?;
    let mut input = String::new();
    std::io::stdin()
        .read_line(&mut input)
        .map_err(|e| e.to_string())?;
    Ok(input.trim().to_string())
}

pub fn clear_screen() {
    print!("\x1b[2J\x1b[H");
}
Rules:
- Each `extern fn` must have a corresponding `pub fn` in the Rust module
- Functions without `fails` return their type directly
- Functions with `fails` return `Result<T, String>`
- Functions returning nothing (`extern fn foo()`) map to `pub fn foo()` in Rust
Type Mapping
| Sage Type | Rust Type |
|---|---|
| `String` | `String` |
| `Int` | `i64` |
| `Float` | `f64` |
| `Bool` | `bool` |
| `Unit` (no return) | `()` |
Configuring grove.toml
Register your extern modules and any Cargo dependencies they need:
[project]
name = "my_project"
entry = "src/main.sg"
[extern]
modules = ["src/sage_extern.rs"]
[extern.dependencies]
chrono = "0.4"
reqwest = { version = "0.12", features = ["blocking"] }
- `modules` — list of Rust source files to compile and link
- `[extern.dependencies]` — additional Cargo dependencies needed by your extern code
The Sage compiler copies the extern modules into the generated Rust project and adds the dependencies to its Cargo.toml.
Complete Example
grove.toml:
[project]
name = "greeter"
entry = "src/main.sg"
[extern]
modules = ["src/sage_extern.rs"]
[extern.dependencies]
chrono = "0.4"
src/sage_extern.rs:
pub fn now_iso() -> String {
    chrono::Utc::now().to_rfc3339()
}

pub fn styled(text: String, hex: String) -> String {
    let r = u8::from_str_radix(&hex[0..2], 16).unwrap_or(255);
    let g = u8::from_str_radix(&hex[2..4], 16).unwrap_or(255);
    let b = u8::from_str_radix(&hex[4..6], 16).unwrap_or(255);
    format!("\x1b[38;2;{};{};{}m{}\x1b[0m", r, g, b, text)
}
src/main.sg:
extern fn now_iso() -> String
extern fn styled(text: String, hex: String) -> String
agent Main {
on start {
let greeting = styled("Hello from Sage!", "4ECDC4");
let time = now_iso();
print(greeting);
print("Current time: " ++ time);
yield(0);
}
on error(e) {
yield(1);
}
}
run Main;
When to Use Extern Functions
Extern functions are ideal for:
- System integration — terminal I/O, filesystem operations beyond the built-in `Fs` tool
- Performance-critical code — algorithms that benefit from direct Rust
- Rust ecosystem access — using any crate from crates.io
- Custom tooling — building domain-specific primitives for your agents
For most tasks, Sage’s built-in tools (Http, Database, Fs, Shell) and standard library are sufficient. Use extern functions when you need something they don’t cover.
What Are Agents?
Agents are the core abstraction in Sage — autonomous units of computation with state and behavior.
The Mental Model
Think of an agent as a small, focused worker:
- It has state (its private fields)
- It responds to events (start, messages, errors)
- It can summon other agents
- It yields a result when done
agent Worker {
task: String // State
on start { // Event handler
let result = do_work(self.task);
yield(result); // Result
}
}
Why Agents?
vs. Functions
Functions are synchronous and stateless. Agents are asynchronous and maintain state across their lifetime.
vs. Objects
Objects bundle state and methods. Agents bundle state and event handlers — they react to events rather than being called directly.
vs. Threads
Threads are low-level and share memory. Agents are high-level and communicate through messages. No locks, no races.
Agent Lifecycle
- Summon — Agent is created with initial state
- Start — The `on start` handler runs
- Running — Agent can receive messages, summon other agents
- Yield — Agent produces its result
- Done — Agent terminates
summon Worker { task: "..." }
│
▼
┌───────┐
│ start │ ─── on start { ... }
└───┬───┘
│
▼
┌────────┐
│running │ ─── on message { ... }
└───┬────┘
│
▼
┌──────┐
│yield │ ─── yield(value)
└──────┘
A Complete Example
agent Counter {
initial: Int
on start {
let count = self.initial;
let i = 0;
while i < 5 {
count = count + 1;
i = i + 1;
}
yield(count);
}
}
agent Main {
on start {
let c1 = summon Counter { initial: 0 };
let c2 = summon Counter { initial: 100 };
let r1 = try await c1; // 5
let r2 = try await c2; // 105
print("Results: " ++ str(r1) ++ ", " ++ str(r2));
yield(0);
}
on error(e) {
print("A counter failed");
yield(1);
}
}
run Main;
Both counters run concurrently. The main agent waits for both results.
Next
- State — Agent fields
- Event Handlers — Responding to events
- Summoning & Awaiting — Creating and coordinating agents
- Messaging — Communication between agents
Agent State
Agent fields are private state. They’re initialized when the agent is summoned and can be accessed throughout the agent’s lifetime.
Declaring Fields
Agent state uses record-style field declarations:
agent Person {
name: String
age: Int
}
Fields must have explicit type annotations.
Initializing Fields
When summoning an agent, provide values for all fields:
let p = summon Person { name: "Alice", age: 30 };
Missing fields cause a compile error:
// Error: missing field `age` in summon
let p = summon Person { name: "Alice" };
Accessing Fields
Use self.fieldName inside the agent:
agent Greeter {
name: String
on start {
print("Hello, " ++ self.name ++ "!");
yield(0);
}
}
Fields Are Immutable
Fields cannot be reassigned after initialization:
agent Counter {
count: Int
on start {
// This won't work — fields are immutable
// self.count = self.count + 1;
// Use a local variable instead
let count = self.count;
count = count + 1;
yield(count);
}
}
Entry Agent Fields
The entry agent (the one in run) cannot have required fields:
// Error: entry agent cannot have required fields
agent Main {
config: String
on start {
yield(0);
}
}
run Main; // How would we provide `config`?
Design Pattern: Configuration
Use fields to configure agent behavior:
agent Fetcher {
url: String
timeout: Int
on start {
// Use self.url and self.timeout
yield("done");
}
}
agent Main {
on start {
let f1 = summon Fetcher {
url: "https://api.example.com/a",
timeout: 5000
};
let f2 = summon Fetcher {
url: "https://api.example.com/b",
timeout: 3000
};
let r1 = try await f1;
let r2 = try await f2;
yield(0);
}
on error(e) {
yield(1);
}
}
run Main;
Persistent Beliefs
Sage agents are, by default, ephemeral. When an agent completes or crashes, its state is gone. For task agents this is fine — they do their work and vanish. But steward agents — long-lived agents that maintain a domain over time — need to survive restarts.
Persistent beliefs solve this. Mark a field with @persistent and Sage will checkpoint it to durable storage. When the agent restarts, its state is recovered automatically.
Basic Usage
agent Counter {
@persistent count: Int
on start {
let current = self.count.get();
print("Starting at count: {current}");
self.count.set(current + 1);
yield(current);
}
}
run Counter;
Run this program multiple times. You’ll see the count increment across restarts:
$ sage run counter.sg
Starting at count: 0
$ sage run counter.sg
Starting at count: 1
$ sage run counter.sg
Starting at count: 2
The @persistent Annotation
Add @persistent before any agent field to enable checkpointing:
agent DatabaseSteward {
@persistent schema_version: Int
@persistent migration_log: List<String>
@persistent last_sync: String
// Non-persistent — recomputed on every start
active_connections: Int
on start {
// schema_version, migration_log, and last_sync are already
// populated from the last checkpoint (or zero-valued on first run)
print("Schema at version {self.schema_version.get()}");
yield(0);
}
}
Accessing Persistent Fields
Persistent fields use a wrapper that provides .get() and .set() methods:
// Read the current value
let version = self.schema_version.get();
// Update and checkpoint atomically
self.schema_version.set(version + 1);
Every .set() call immediately checkpoints the value. A crash after .set() will not lose that update.
Serialisable Types
Only serialisable types can be @persistent. These are:
- Primitives: `Int`, `Float`, `Bool`, `String`
- Collections: `List<T>`, `Map<K, V>` (where `T`, `K`, `V` are serialisable)
- `Option<T>` and `Result<T, E>` (where inner types are serialisable)
- Records (where all fields are serialisable)
- Enums (including payload-carrying variants)
Function types and agent handles cannot be persisted — this is a compile error:
agent Invalid {
@persistent callback: Fn(Int) -> Int // Error E052: not serialisable
}
First-Run Detection
A common pattern is detecting whether an agent is starting fresh or recovering from a checkpoint:
agent APISteward {
@persistent initialised: Bool
on start {
if !self.initialised.get() {
// First run — do expensive setup
print("First run: generating routes...");
generate_routes();
self.initialised.set(true);
} else {
// Subsequent run — state already loaded
print("Recovered from checkpoint");
}
yield(0);
}
}
For more complex cases, check if specific fields have meaningful values:
agent ConfigManager {
@persistent config_hash: String
on start {
if self.config_hash.get() == "" {
// No config loaded yet
let config = load_config_file();
self.config_hash.set(hash(config));
}
yield(0);
}
}
The on waking Lifecycle Hook
When an agent with persistent fields restarts, you often need to validate or act on the recovered state before normal operation begins. The on waking hook runs after persistent state is loaded but before on start:
agent DatabaseSteward {
@persistent schema_version: Int
@persistent connection_string: String
on waking {
// State is already loaded — validate it
print("Recovered at schema version {self.schema_version.get()}");
// Reconnect to resources
if self.connection_string.get() != "" {
reconnect_database();
}
}
on start {
// Normal operation begins
yield(0);
}
}
The lifecycle sequence is:
Process start / Restart
│
▼
Load checkpoint
│
▼
┌─────────────┐
│ on waking │ ← Persistent state available, validate/reconnect
└──────┬──────┘
│
▼
┌─────────────┐
│ on start │ ← Normal agent logic
└──────┬──────┘
│
▼
... run ...
│
▼
┌─────────────┐
│ on resting │ ← Cleanup before exit
└─────────────┘
Storage Backends
Configure the persistence backend in grove.toml:
SQLite (Default)
Best for local development and single-machine deployments:
[persistence]
backend = "sqlite"
path = ".sage/checkpoints.db"
PostgreSQL
Recommended for production steward programs:
[persistence]
backend = "postgres"
url = "postgresql://user:pass@localhost/myapp"
File
JSON files, useful for debugging:
[persistence]
backend = "file"
path = ".sage/state"
Each agent gets a separate JSON file in the directory.
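For illustration only (the on-disk layout and key names below are assumptions, not a documented format), a checkpoint file for the `Counter` agent from earlier in this page might contain something like:

```json
{
  "agent": "Counter",
  "fields": {
    "count": 3
  }
}
```

The file backend is meant for debugging, so inspecting these files by hand is the expected workflow.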
Checkpoint Namespacing
Each agent instance has a unique checkpoint namespace derived from:
- The agent name
- Its initial belief values
This means two agents of the same type with different initial beliefs have independent checkpoints:
supervisor TwoCounters {
strategy: OneForOne
children {
Counter { restart: Permanent, count: 0 } // Checkpoint key: Counter_abc123
Counter { restart: Permanent, count: 100 } // Checkpoint key: Counter_def456
}
}
Integration with Supervision
Persistent beliefs and supervision work together to provide crash recovery with state:
supervisor AppSupervisor {
strategy: OneForOne
children {
DatabaseSteward {
restart: Permanent
schema_version: 0
migration_log: []
}
}
}
When a Permanent agent crashes and restarts:
- The supervisor respawns the agent
- Persistent fields are loaded from the last checkpoint
- `on waking` runs with recovered state
- `on start` runs as normal
The agent resumes from its last stable checkpoint, not from scratch.
Explicit Checkpointing
Normally, .set() checkpoints automatically. For batched updates, you can checkpoint explicitly:
agent BatchUpdater {
@persistent items: List<String>
on start {
// Make many updates without individual checkpoints
let mut current = self.items.get();
for i in range(0, 100) {
current = push(current, "item_{i}");
}
// Checkpoint once at the end
self.items.set(current);
yield(0);
}
}
Error Handling
If a checkpoint fails (database unavailable, disk full, etc.), the agent continues running but logs a warning. The next successful checkpoint will include the latest state.
For critical applications, you can catch persistence errors in your on error handler — though typically the supervision tree handles this by restarting the agent.
Best Practices
- **Checkpoint only what you need.** Every `.set()` is a write operation. Don't persist fields that can be recomputed cheaply.
- **Keep persistent fields small.** Large lists or maps checkpoint slowly. Consider aggregating or summarising data.
- **Use `on waking` for validation.** If your agent depends on external resources (database connections, file handles), re-establish them in `on waking`.
- **Test recovery.** Write tests that simulate crashes and verify your agent recovers correctly.
- **Consider checkpoint frequency.** For high-frequency updates, batch changes and checkpoint periodically rather than on every update.
Related
- Supervision Trees — Automatic restart on failure
- Lifecycle Hooks — All agent lifecycle events
- The Steward Pattern — Building long-lived agents
Event Handlers
Agents respond to events through handlers. Each handler runs when its corresponding event occurs.
on start
Runs when the agent is summoned:
agent Worker {
on start {
print("Worker started!");
yield(42);
}
}
Every agent must have an on start handler — it’s where the agent’s main logic lives.
on error
Handles errors propagated by try:
agent Researcher {
topic: String
on start {
let result = try divine("Summarize: {self.topic}");
yield(result);
}
on error(e) {
print("Research failed: " ++ e);
yield("unavailable");
}
}
When a try expression fails, control jumps to on error. Without an on error handler, the agent will panic.
Message Handling
For agents that receive messages, use the receives clause with receive():
enum Command {
Ping,
Shutdown,
}
agent Worker receives Command {
on start {
loop {
let msg: Command = receive();
match msg {
Ping => print("Pong!"),
Shutdown => break,
}
}
yield(0);
}
}
See Messaging for details.
Handler Order
- `on start` runs first, exactly once
- `on error` runs if a `try` expression fails
- After `yield`, the agent terminates
yield
The yield expression signals that the agent has produced its result:
agent Calculator {
a: Int
b: Int
on start {
let result = self.a + self.b;
yield(result); // Agent is done
}
}
After yield:
- The agent’s result is available to whoever awaited it
- The agent proceeds to cleanup (`on stop`)
- No more messages are processed
Yield Type Consistency
All yield calls in an agent must have the same type:
agent Example {
on start {
if condition {
yield(42); // Int
} else {
yield("error"); // Error: expected Int, got String
}
}
}
Handler Scope
Each handler has its own scope. Variables don’t persist between handlers:
agent Example {
on start {
let x = 42;
// x is only visible here
yield(0);
}
on error(e) {
// x is not visible here
// Use agent fields for persistent state
yield(1);
}
}
Use agent fields (accessed via self) for state that needs to persist.
Summoning & Awaiting
Agents are created with summon and their results are retrieved with await.
summon
Creates a new agent and returns a handle:
let worker = summon Worker { task: "process data" };
The spawned agent starts running immediately and concurrently with the spawning agent.
Summon Syntax
summon AgentName { field1: value1, field2: value2 }
All fields must be provided:
agent Point {
x: Int
y: Int
on start {
yield(self.x + self.y);
}
}
// Correct
let p = summon Point { x: 10, y: 20 };
// Error: missing field `y`
let p = summon Point { x: 10 };
Agent Handle Type
summon returns an Agent<T> where T is the yield type:
agent Worker {
on start {
yield(42); // Emits Int
}
}
let w: Agent<Int> = summon Worker {};
await
Waits for an agent to yield its result. Since agents can fail, await is a fallible operation that requires try:
let worker = summon Worker {};
let result = try await worker; // Blocks until Worker emits
Await Type
await returns the type that the agent yields:
agent StringWorker {
on start {
yield("done");
}
}
agent Main {
on start {
let w = summon StringWorker {};
let result: String = try await w;
print(result);
yield(0);
}
on error(e) {
yield(1);
}
}
run Main;
Await Blocks
await suspends the current agent until the result is ready. Other agents continue running.
Concurrent Execution
Spawned agents run concurrently:
agent Sleeper {
ms: Int
on start {
sleep_ms(self.ms);
yield(self.ms);
}
}
agent Main {
on start {
// All three start immediately
let s1 = summon Sleeper { ms: 100 };
let s2 = summon Sleeper { ms: 200 };
let s3 = summon Sleeper { ms: 300 };
// Total time: ~300ms (not 600ms)
let r1 = try await s1;
let r2 = try await s2;
let r3 = try await s3;
yield(0);
}
on error(e) {
yield(1);
}
}
run Main;
Pattern: Fan-Out/Fan-In
Spawn multiple workers, await all results:
agent Researcher {
topic: String
on start {
let result = try divine(
"One sentence about: {self.topic}"
);
yield(result);
}
on error(e) {
yield("Research failed");
}
}
agent Coordinator {
on start {
// Fan out
let r1 = summon Researcher { topic: "AI" };
let r2 = summon Researcher { topic: "Robotics" };
let r3 = summon Researcher { topic: "Quantum" };
// Fan in
let s1 = try await r1;
let s2 = try await r2;
let s3 = try await r3;
print(s1);
print(s2);
print(s3);
yield(0);
}
on error(e) {
print("A researcher failed");
yield(1);
}
}
run Coordinator;
Pattern: Pipeline
Chain agents together:
agent Step1 {
input: String
on start {
let result = self.input ++ " -> step1";
yield(result);
}
}
agent Step2 {
input: String
on start {
let result = self.input ++ " -> step2";
yield(result);
}
}
agent Main {
on start {
let s1 = summon Step1 { input: "start" };
let r1 = try await s1;
let s2 = summon Step2 { input: r1 };
let r2 = try await s2;
print(r2); // "start -> step1 -> step2"
yield(0);
}
on error(e) {
yield(1);
}
}
run Main;
Messaging
Agents can receive typed messages from other agents using the actor model pattern.
The receives Clause
An agent declares what type of messages it accepts using the receives clause:
enum WorkerMsg {
Task,
Ping,
Shutdown,
}
agent Worker receives WorkerMsg {
id: Int
on start {
// This agent can now receive WorkerMsg messages
yield(0);
}
}
Agents without a receives clause are pure summon/await agents and cannot receive messages.
The receive() Expression
Inside an agent with a receives clause, use receive() to wait for a message:
agent Worker receives WorkerMsg {
id: Int
on start {
let msg: WorkerMsg = receive();
match msg {
Task => print("Got a task"),
Ping => print("Pinged"),
Shutdown => print("Shutting down"),
}
yield(0);
}
}
receive() blocks until a message arrives in the agent’s mailbox.
The send() Function
Send a message to a running agent using its handle. send is fallible (the agent might have terminated), so use try:
agent Main {
on start {
let w = summon Worker { id: 1 };
try send(w, Task);
try send(w, Shutdown);
try await w;
yield(0);
}
on error(e) {
yield(1);
}
}
run Main;
send queues the message and returns immediately.
Long-Running Agents with loop
Combine receive() with loop for agents that process multiple messages:
agent Worker receives WorkerMsg {
id: Int
on start {
loop {
let msg: WorkerMsg = receive();
match msg {
Task => {
let result = try divine("Process a task");
print("Worker {self.id}: {result}");
}
Ping => {
print("Worker {self.id} is alive");
}
Shutdown => {
break;
}
}
}
yield(0);
}
on error(e) {
print("Worker {self.id} failed: " ++ e);
yield(1);
}
}
Complete Example: Worker Pool
enum WorkerMsg {
Task,
Shutdown,
}
agent Worker receives WorkerMsg {
id: Int
on start {
loop {
let msg: WorkerMsg = receive();
match msg {
Task => {
let result = try divine("Summarise something interesting");
print("Worker {self.id}: {result}");
}
Shutdown => {
break;
}
}
}
yield(0);
}
on error(e) {
print("Worker {self.id} failed");
yield(1);
}
}
agent Coordinator {
on start {
let w1 = summon Worker { id: 1 };
let w2 = summon Worker { id: 2 };
// Distribute tasks
try send(w1, Task);
try send(w2, Task);
try send(w1, Task);
try send(w2, Task);
// Shut down workers
try send(w1, Shutdown);
try send(w2, Shutdown);
// Wait for completion
try await w1;
try await w2;
yield(0);
}
on error(e) {
print("Coordination failed");
yield(1);
}
}
run Coordinator;
Type Safety
The compiler ensures type safety:
agent Worker receives WorkerMsg {
on start {
let msg: WorkerMsg = receive();
yield(0);
}
}
agent Main {
on start {
let w = summon Worker {};
try send(w, Task); // OK - Task is a WorkerMsg variant
try send(w, "hello"); // Error: expected WorkerMsg, got String
yield(0);
}
on error(e) {
yield(1);
}
}
Messaging vs Awaiting
| | `await` | `send` / `receive` |
|---|---|---|
| Purpose | Get final result from agent | Ongoing communication |
| Blocking | Yes, waits for agent to complete | `send` returns immediately, `receive` blocks until message arrives |
| Use case | One-shot tasks | Long-running workers, event loops |
Mailbox Semantics
- Each agent has a bounded mailbox (128 messages by default)
- When the mailbox is full, `send` blocks until space opens (backpressure)
- Messages from a single sender arrive in order
- Messages from multiple senders are interleaved (no global ordering)
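The per-sender ordering guarantee can be sketched as follows: with a single sender, `First` is always received before `Second`. The `Note` enum and agent names are illustrative, not part of the standard library.

```
enum Note {
    First,
    Second,
}

agent Receiver receives Note {
    on start {
        // With one sender, messages arrive in send order
        let a: Note = receive();
        let b: Note = receive();
        match a {
            First => print("received First first, as guaranteed"),
            Second => print("unreachable with a single sender"),
        }
        yield(0);
    }
}

agent Main {
    on start {
        let r = summon Receiver {};
        try send(r, First);   // queued first
        try send(r, Second);  // arrives after First
        try await r;
        yield(0);
    }
    on error(e) {
        yield(1);
    }
}

run Main;
```

With multiple senders, only each sender's own messages keep their relative order; how the streams interleave is unspecified.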
Current Limitations
- No `receive_timeout` in the language yet (available in the runtime)
- No broadcast channels (one-to-many messaging)
- Error handling for closed channels needs its own RFC
The divine Expression
The divine expression is how Sage programs interact with large language models.
Basic Usage
Since LLM calls can fail (network errors, API errors), divine is a fallible operation that requires try:
agent Main {
on start {
let result = try divine("What is the capital of France?");
print(result); // "Paris" (or similar)
yield(0);
}
on error(e) {
print("LLM call failed: " ++ e);
yield(1);
}
}
run Main;
String Interpolation
Use {identifier} to include variables in prompts:
agent Researcher {
topic: String
on start {
let summary = try divine(
"Write a 2-sentence summary of: {self.topic}"
);
yield(summary);
}
on error(e) {
yield("Research unavailable");
}
}
Multiple interpolations:
let format = "JSON";
let topic = "climate change";
let result = try divine(
"Output a {format} object with key facts about {topic}"
);
The Oracle<T> Type
divine returns Oracle<T>, which wraps the LLM’s response.
Oracle<T> coerces to T automatically:
let response = try divine("Hello!");
print(response); // Works - Oracle<String> coerces to String
Structured Output
divine can return any type, including user-defined records:
record Summary {
title: String,
key_points: List<String>,
sentiment: String,
}
agent Analyzer {
topic: String
on start {
let result: Oracle<Summary> = try divine(
"Analyze this topic and provide a structured summary: {self.topic}"
);
print("Title: " ++ result.title);
print("Sentiment: " ++ result.sentiment);
yield(result);
}
on error(e) {
print("Analysis failed: " ++ e);
yield(Summary { title: "Error", key_points: [], sentiment: "unknown" });
}
}
The runtime automatically:
- Injects the expected schema into the prompt
- Parses the LLM’s response as JSON
- Retries with error feedback if parsing fails (configurable via `SAGE_INFER_RETRIES`)
This works with any OpenAI-compatible API, including Ollama.
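The retry count can be tuned like the other environment settings (the value here is just an example, not the default):

```shell
# Re-attempt parsing with error feedback when the model returns malformed JSON
export SAGE_INFER_RETRIES=5
```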
Error Handling
Use try to propagate errors to the agent’s on error handler:
let result = try divine("prompt");
Or use catch to handle errors inline with a fallback:
let result = catch divine("prompt") {
"fallback value"
};
Example: Multi-Step Reasoning
agent Reasoner {
question: String
on start {
let step1 = try divine(
"Break down this question into sub-questions: {self.question}"
);
let step2 = try divine(
"Given these sub-questions: {step1}\n\nAnswer each one briefly."
);
let step3 = try divine(
"Given the original question: {self.question}\n\n" ++
"And these answers: {step2}\n\n" ++
"Provide a final comprehensive answer."
);
yield(step3);
}
on error(e) {
yield("Reasoning failed: " ++ e);
}
}
agent Main {
on start {
let r = summon Reasoner {
question: "How do vaccines work and why are they important?"
};
let answer = try await r;
print(answer);
yield(0);
}
on error(e) {
yield(1);
}
}
run Main;
Concurrent Inference
Multiple divine calls can run concurrently via spawned agents:
agent Summarizer {
text: String
on start {
let summary = try divine(
"Summarize in one sentence: {self.text}"
);
yield(summary);
}
on error(e) {
yield("Summary unavailable");
}
}
agent Main {
on start {
let s1 = summon Summarizer { text: "Long article about AI..." };
let s2 = summon Summarizer { text: "Long article about robotics..." };
let s3 = summon Summarizer { text: "Long article about space..." };
// All three LLM calls happen concurrently
let r1 = try await s1;
let r2 = try await s2;
let r3 = try await s3;
print(r1);
print(r2);
print(r3);
yield(0);
}
on error(e) {
yield(1);
}
}
run Main;
Configuration
Configure LLM behavior through environment variables.
Required
SAGE_API_KEY
Your OpenAI API key (or compatible provider):
export SAGE_API_KEY="sk-..."
Or in a .env file in your project directory:
SAGE_API_KEY=sk-...
Optional
SAGE_LLM_URL
Base URL for the LLM API. Defaults to OpenAI:
export SAGE_LLM_URL="https://api.openai.com/v1"
For local models (Ollama):
export SAGE_LLM_URL="http://localhost:11434/v1"
For other providers (Azure, Anthropic-compatible, etc.):
export SAGE_LLM_URL="https://your-provider.com/v1"
SAGE_MODEL
Which model to use. Default: gpt-4o-mini
export SAGE_MODEL="gpt-4o"
For Ollama:
export SAGE_MODEL="llama2"
SAGE_MAX_TOKENS
Maximum tokens per response. Default: 1024
export SAGE_MAX_TOKENS="2048"
SAGE_TIMEOUT_MS
Request timeout in milliseconds. Default: 30000 (30 seconds)
export SAGE_TIMEOUT_MS="60000"
Using .env Files
Sage automatically loads .env files from the current directory:
# .env
SAGE_API_KEY=sk-...
SAGE_MODEL=gpt-4o
SAGE_MAX_TOKENS=2048
Provider Examples
OpenAI (default)
export SAGE_API_KEY="sk-..."
# SAGE_LLM_URL defaults to OpenAI
export SAGE_MODEL="gpt-4o"
Ollama (local)
export SAGE_LLM_URL="http://localhost:11434/v1"
export SAGE_MODEL="llama2"
# No API key needed for local Ollama
Azure OpenAI
export SAGE_LLM_URL="https://your-resource.openai.azure.com/openai/deployments/your-deployment"
export SAGE_API_KEY="your-azure-key"
export SAGE_MODEL="gpt-4"
Other OpenAI-Compatible Providers
Any provider with an OpenAI-compatible API should work:
export SAGE_LLM_URL="https://api.together.xyz/v1"
export SAGE_API_KEY="your-key"
export SAGE_MODEL="meta-llama/Llama-3-70b-chat-hf"
Troubleshooting
“API key not set”
Make sure SAGE_API_KEY is exported or in your .env file.
Timeout errors
Increase SAGE_TIMEOUT_MS for slow models or complex prompts.
Connection refused
Check SAGE_LLM_URL is correct and the service is running.
Patterns
Common patterns for building LLM-powered agents.
Parallel Research
Spawn multiple researchers, combine results:
agent Researcher {
topic: String
on start {
let result = try divine(
"Research and provide 3 key facts about: {self.topic}"
);
yield(result);
}
on error(e) {
yield("Research failed for topic");
}
}
agent Synthesizer {
findings: List<String>
on start {
let combined = join(self.findings, "\n\n");
let synthesis = try divine(
"Given these research findings:\n{combined}\n\n" ++
"Provide a unified summary highlighting connections."
);
yield(synthesis);
}
on error(e) {
yield("Synthesis failed");
}
}
agent Coordinator {
on start {
// Parallel research
let r1 = summon Researcher { topic: "quantum computing" };
let r2 = summon Researcher { topic: "machine learning" };
let r3 = summon Researcher { topic: "cryptography" };
let f1 = try await r1;
let f2 = try await r2;
let f3 = try await r3;
// Synthesis
let s = summon Synthesizer {
findings: [f1, f2, f3]
};
let result = try await s;
print(result);
yield(0);
}
on error(e) {
print("Pipeline failed");
yield(1);
}
}
run Coordinator;
Chain of Thought
Break complex reasoning into steps:
agent ChainOfThought {
question: String
on start {
let understand = try divine(
"Question: {self.question}\n\n" ++
"First, restate the question in your own words and identify what's being asked."
);
let analyze = try divine(
"Question: {self.question}\n\n" ++
"Understanding: {understand}\n\n" ++
"Now, list the key concepts and relationships involved."
);
let solve = try divine(
"Question: {self.question}\n\n" ++
"Understanding: {understand}\n\n" ++
"Analysis: {analyze}\n\n" ++
"Now, provide a step-by-step solution."
);
let answer = try divine(
"Question: {self.question}\n\n" ++
"Solution: {solve}\n\n" ++
"State the final answer concisely."
);
yield(answer);
}
on error(e) {
yield("Reasoning failed: " ++ e);
}
}
Validation Loop
Have agents check each other’s work:
agent Generator {
task: String
on start {
let result = try divine(
"Complete this task: {self.task}"
);
yield(result);
}
on error(e) {
yield("Generation failed");
}
}
agent Validator {
task: String
result: String
on start {
let valid = try divine(
"Task: {self.task}\n\n" ++
"Result: {self.result}\n\n" ++
"Is this result correct and complete? " ++
"Answer YES or NO, then explain briefly."
);
yield(valid);
}
on error(e) {
yield("Validation failed");
}
}
agent Main {
on start {
let task = "Write a haiku about programming";
let gen = summon Generator { task: task };
let result = try await gen;
let val = summon Validator { task: task, result: result };
let validation = try await val;
print("Result: " ++ result);
print("Validation: " ++ validation);
yield(0);
}
on error(e) {
yield(1);
}
}
run Main;
Map-Reduce
Process items in parallel, combine results:
agent Processor {
item: String
on start {
let result = try divine(
"Process this item and extract key information: {self.item}"
);
yield(result);
}
on error(e) {
yield("Processing failed");
}
}
agent Reducer {
items: List<String>
on start {
let combined = join(self.items, "\n---\n");
let result = try divine(
"Combine these processed items into a summary:\n{combined}"
);
yield(result);
}
on error(e) {
yield("Reduction failed");
}
}
agent MapReduce {
on start {
// Map phase - process in parallel
let p1 = summon Processor { item: "doc1 content" };
let p2 = summon Processor { item: "doc2 content" };
let p3 = summon Processor { item: "doc3 content" };
let r1 = try await p1;
let r2 = try await p2;
let r3 = try await p3;
// Reduce phase
let reducer = summon Reducer { items: [r1, r2, r3] };
let final_result = try await reducer;
print(final_result);
yield(0);
}
on error(e) {
yield(1);
}
}
run MapReduce;
Debate
Multiple agents argue different positions:
agent Debater {
position: String
topic: String
on start {
let argument = try divine(
"You are arguing {self.position} on the topic: {self.topic}\n\n" ++
"Make your best argument in 2-3 sentences."
);
yield(argument);
}
on error(e) {
yield("Argument unavailable");
}
}
agent Judge {
topic: String
arg_for: String
arg_against: String
on start {
let verdict = try divine(
"Topic: {self.topic}\n\n" ++
"Argument FOR:\n{self.arg_for}\n\n" ++
"Argument AGAINST:\n{self.arg_against}\n\n" ++
"Which argument is stronger and why? Be brief."
);
yield(verdict);
}
on error(e) {
yield("Verdict unavailable");
}
}
agent Main {
on start {
let topic = "AI will create more jobs than it destroys";
let d1 = summon Debater { position: "FOR", topic: topic };
let d2 = summon Debater { position: "AGAINST", topic: topic };
let arg_for = try await d1;
let arg_against = try await d2;
let judge = summon Judge {
topic: topic,
arg_for: arg_for,
arg_against: arg_against
};
let verdict = try await judge;
print("FOR: " ++ arg_for);
print("AGAINST: " ++ arg_against);
print("VERDICT: " ++ verdict);
yield(0);
}
on error(e) {
yield(1);
}
}
run Main;
Built-in Tools
Sage provides built-in tools that agents can use to interact with external services: databases, HTTP APIs, filesystems, and shell commands. Tools are capability declarations — an agent must explicitly declare which tools it uses, making its external interactions visible in its signature.
Declaring Tool Usage
Use the use keyword inside an agent to declare which tools it needs:
agent DataFetcher {
use Http
use Database
on start {
// Both Http and Database methods are now available
let response = try Http.get("https://api.example.com/status");
let rows = try Database.query("SELECT * FROM cache");
yield(0);
}
on error(e) {
yield(-1);
}
}
run DataFetcher;
Attempting to use a tool method without declaring it is a compile error:
agent Broken {
// No `use Http` declaration
on start {
let r = try Http.get("..."); // Error E038: undeclared tool use
yield(0);
}
}
This is intentional. The use clause is a capability declaration — it makes an agent’s external interactions explicit and auditable.
Available Tools
| Tool | Description | Methods |
|---|---|---|
| Http | HTTP client for web requests | get, post, put, delete |
| Database | SQL database client | query, execute |
| Fs | Filesystem operations | read, write, exists, list, delete |
| Shell | Execute shell commands | run |
Tool Calls Are Fallible
Every tool call can fail — network timeouts, database connection errors, file not found, command failures. Tool methods return Result<T, ToolError> implicitly, so you must handle errors.
Using try
The try keyword unwraps the result and propagates errors to the agent’s on error handler:
agent Fetcher {
use Http
on start {
// If this fails, control jumps to on error
let response = try Http.get("https://api.example.com/data");
print("Got: " ++ response.body);
yield(response.status);
}
on error(e) {
print("Request failed: " ++ e.message);
yield(-1);
}
}
Using catch
The catch expression provides a fallback value when the call fails:
let response = catch Http.get(url) {
HttpResponse { status: 0, body: "", headers: {} }
};
if response.status == 0 {
print("Request failed, using fallback");
}
Using match
For fine-grained control, call without try and match on the result:
let result = Http.get(url);
match result {
Ok(response) => {
print("Success: " ++ response.body);
}
Err(e) => {
print("Failed: " ++ e.message);
// Retry logic, logging, etc.
}
}
Configuration
Tools can be configured in two ways: environment variables (simple) or grove.toml (recommended for projects).
Environment Variables
Quick configuration for development:
# HTTP
export SAGE_HTTP_TIMEOUT=60
# Database
export SAGE_DATABASE_URL="postgres://localhost/myapp"
# Filesystem
export SAGE_FS_ROOT="/var/data"
grove.toml Configuration
For projects, configure tools in your grove.toml:
[project]
name = "my-steward"
[tools.database]
driver = "postgres"
url = "postgresql://user:pass@localhost/myapp"
pool_size = 10
[tools.http]
timeout_ms = 30000
[tools.filesystem]
root = "./data"
Database Configuration
[tools.database]
driver = "postgres" # postgres | sqlite | mysql
url = "postgresql://..." # Connection URL
pool_size = 5 # Connection pool size (default: 5)
HTTP Configuration
[tools.http]
timeout_ms = 30000 # Request timeout (default: 30000)
Filesystem Configuration
[tools.filesystem]
root = "./data" # All paths relative to this root
Multiple Tools
Declare multiple tools by listing them separately:
agent FullStack {
use Http
use Database
use Fs
use Shell
on start {
// Fetch data from API
let api_data = try Http.get("https://api.example.com/data");
// Store in database (note: interpolating untrusted data into SQL risks injection)
try Database.execute("INSERT INTO cache (data) VALUES ('{api_data.body}')");
// Write to file
try Fs.write("cache/latest.json", api_data.body);
// Run a post-processing script
let result = try Shell.run("./scripts/process.sh");
yield(result.exit_code);
}
on error(e) {
yield(-1);
}
}
Tool Result Types
Each tool has specific return types for its methods:
HttpResponse
record HttpResponse {
status: Int,
body: String,
headers: Map<String, String>,
}
DbRow
record DbRow {
columns: List<String>,
values: List<String>,
}
ShellResult
record ShellResult {
exit_code: Int,
stdout: String,
stderr: String,
}
Testing with Mock Tools
In test files (*_test.sg), you can mock tool responses:
test "handles API response" {
mock tool Http.get -> HttpResponse {
status: 200,
body: "{\"user\": \"alice\"}",
headers: {}
};
// Agent under test will receive the mocked response
let agent = summon DataFetcher {};
let result = await(agent);
assert_eq(result, 200);
}
test "handles API failure" {
mock tool Http.get -> fail("connection refused");
let agent = summon DataFetcher {};
let result = await(agent);
assert_eq(result, -1); // Error handler returns -1
}
See Testing > Mocking for details.
Best Practices
- Declare only what you need. Don’t add use Shell unless you actually run commands. The capability list should be minimal.
- Always handle errors. Tool calls fail in production. Use try with a robust on error handler, or catch with sensible defaults.
- Configure via grove.toml. Environment variables work, but grove.toml is versioned and explicit.
- Be careful with Shell. Arbitrary command execution is powerful but dangerous. Validate inputs, and avoid string interpolation with untrusted data.
- Test with mocks. Don’t hit real databases or APIs in tests. Mock tool responses for reliable, fast tests.
HTTP Client
The Http tool provides methods for making HTTP requests.
Usage
Declare the tool with use Http in your agent:
agent ApiClient {
use Http
on start {
let response = try Http.get("https://api.example.com/data");
print("Status: " ++ str(response.status));
print("Body: " ++ response.body);
yield(response.status);
}
on error(e) {
print("Request failed");
yield(-1);
}
}
run ApiClient;
Methods
Http.get(url: String) -> HttpResponse
Performs an HTTP GET request.
let response = try Http.get("https://httpbin.org/get");
Http.post(url: String, body: String) -> HttpResponse
Performs an HTTP POST request with a JSON body.
let response = try Http.post(
"https://httpbin.org/post",
"{\"key\": \"value\"}"
);
HttpResponse
Both methods return an HttpResponse with the following fields:
| Field | Type | Description |
|---|---|---|
status | Int | HTTP status code (e.g., 200, 404, 500) |
body | String | Response body as text |
headers | Map<String, String> | Response headers |
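The tools overview also lists put and delete methods. Assuming their signatures mirror post and get respectively (an assumption — check the reference for your Sage version), usage would look like:

```
// Hypothetical: assumes Http.put mirrors Http.post, Http.delete mirrors Http.get
let updated = try Http.put(
    "https://httpbin.org/put",
    "{\"key\": \"new value\"}"
);
let removed = try Http.delete("https://httpbin.org/delete");
print(str(updated.status) ++ " / " ++ str(removed.status));
```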
Examples
Fetching JSON Data
agent JsonFetcher {
use Http
url: String
on start {
let response = try Http.get(self.url);
if response.status == 200 {
yield(response.body);
} else {
yield("Error: " ++ str(response.status));
}
}
on error(e) {
yield("Request failed");
}
}
run JsonFetcher { url: "https://httpbin.org/json" };
Posting Data
agent DataPoster {
use Http
on start {
let payload = "{\"message\": \"Hello from Sage!\"}";
let response = try Http.post("https://httpbin.org/post", payload);
yield(response.status);
}
on error(e) {
yield(-1);
}
}
run DataPoster;
Error Recovery
agent ResilientFetcher {
use Http
urls: List<String>
on start {
for url in self.urls {
let response = catch Http.get(url) {
HttpResponse { status: 0, body: "", headers: {} }
};
if response.status == 200 {
yield(response.body);
return;
}
}
yield("All URLs failed");
}
}
run ResilientFetcher {
urls: ["https://primary.example.com", "https://backup.example.com"]
};
Configuration
| Variable | Description | Default |
|---|---|---|
SAGE_HTTP_TIMEOUT | Request timeout in seconds | 30 |
The HTTP client automatically sets a User-Agent header of sage-agent/{version}.
Database Client
The Database tool provides SQL query capabilities for agents. It supports SQLite, PostgreSQL, and MySQL via connection URLs.
Usage
Declare the tool with use Database in your agent:
agent DataAgent {
use Database
on start {
let rows = try Database.query("SELECT id, name FROM users");
for row in rows {
print(row.columns); // ["id", "name"]
print(row.values); // ["1", "Alice"]
}
yield(0);
}
on error(e) {
print("Database error");
yield(-1);
}
}
run DataAgent;
Methods
Database.query(sql: String) -> List<DbRow>
Executes a SELECT query and returns the results as a list of rows.
let rows = try Database.query("SELECT * FROM users WHERE active = true");
Database.execute(sql: String) -> Int
Executes an INSERT, UPDATE, or DELETE statement and returns the number of affected rows.
let affected = try Database.execute(
"INSERT INTO users (name, email) VALUES ('Alice', 'alice@example.com')"
);
print("Inserted: " ++ int_to_str(affected) ++ " rows");
DbRow
Query results are returned as DbRow records:
| Field | Type | Description |
|---|---|---|
columns | List<String> | Column names from the query |
values | List<String> | Values as strings |
Configuration
| Variable | Description | Required |
|---|---|---|
SAGE_DATABASE_URL | Database connection URL | Yes |
Connection URL Formats
SQLite:
SAGE_DATABASE_URL="sqlite:./data.db"
SAGE_DATABASE_URL="sqlite::memory:" # In-memory database
PostgreSQL:
SAGE_DATABASE_URL="postgres://user:password@localhost/dbname"
MySQL:
SAGE_DATABASE_URL="mysql://user:password@localhost/dbname"
Examples
CRUD Operations
agent UserManager {
use Database
on start {
// Create table
try Database.execute("
CREATE TABLE IF NOT EXISTS users (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
email TEXT
)
");
// Insert
let inserted = try Database.execute(
"INSERT INTO users (name, email) VALUES ('Alice', 'alice@example.com')"
);
print("Inserted " ++ int_to_str(inserted) ++ " user");
// Select
let users = try Database.query("SELECT id, name, email FROM users");
for user in users {
print("User: " ++ user.values.1 ++ " <" ++ user.values.2 ++ ">");
}
// Update
let updated = try Database.execute(
"UPDATE users SET email = 'alice@newdomain.com' WHERE name = 'Alice'"
);
print("Updated " ++ int_to_str(updated) ++ " user");
// Delete
let deleted = try Database.execute("DELETE FROM users WHERE id = 1");
print("Deleted " ++ int_to_str(deleted) ++ " user");
yield(0);
}
on error(e) {
yield(-1);
}
}
run UserManager;
Querying with Aggregates
agent StatsAgent {
use Database
on start {
let stats = try Database.query("
SELECT
COUNT(*) as total,
AVG(age) as avg_age
FROM users
");
if len(stats) > 0 {
print("Total users: " ++ stats.0.values.0);
print("Average age: " ++ stats.0.values.1);
}
yield(0);
}
on error(e) {
yield(-1);
}
}
run StatsAgent;
Notes
- SQL queries are executed directly; be careful with user input to prevent SQL injection
- Values are returned as strings; use parse_int() or similar to convert numeric values
- The database feature must be enabled at compile time (it is by default)
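Because row values arrive as strings, numeric columns need explicit conversion before arithmetic. A sketch using parse_int (mentioned in the notes above; if parse_int is fallible in your version, wrap the call in try):

```
agent AgeSummer {
    use Database
    on start {
        let rows = try Database.query("SELECT age FROM users");
        let total = 0;
        for row in rows {
            // Values are strings; convert before summing
            total = total + parse_int(row.values.0);
        }
        print("Total age: " ++ int_to_str(total));
        yield(total);
    }
    on error(e) {
        yield(-1);
    }
}
```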
Filesystem
The Fs tool provides file operations for agents, allowing them to read, write, and manage files.
Usage
Declare the tool with use Fs in your agent:
agent FileAgent {
use Fs
on start {
try Fs.write("hello.txt", "Hello, World!");
let content = try Fs.read("hello.txt");
print(content);
yield(0);
}
on error(e) {
print("File error");
yield(-1);
}
}
run FileAgent;
Methods
Fs.read(path: String) -> String
Reads the entire contents of a file as a string.
let content = try Fs.read("config.json");
Fs.write(path: String, content: String) -> Unit
Writes content to a file. Creates the file if it doesn’t exist, or overwrites if it does. Parent directories are created automatically.
try Fs.write("output/data.txt", "Hello, World!");
Fs.exists(path: String) -> Bool
Checks if a file or directory exists.
if try Fs.exists("config.json") {
print("Config found");
}
Fs.list(path: String) -> List<String>
Lists the contents of a directory, returning file and directory names.
let files = try Fs.list(".");
for file in files {
print(file);
}
Fs.delete(path: String) -> Unit
Deletes a file.
try Fs.delete("temp.txt");
Configuration
| Variable | Description | Default |
|---|---|---|
SAGE_FS_ROOT | Root directory for all file operations | . (current directory) |
All paths are relative to the configured root directory:
# All file operations will be relative to /data
SAGE_FS_ROOT="/data" sage run myprogram.sg
Examples
Reading and Processing Files
agent ConfigReader {
use Fs
on start {
if try Fs.exists("config.txt") {
let config = try Fs.read("config.txt");
print("Config loaded: " ++ config);
} else {
print("No config found, using defaults");
}
yield(0);
}
on error(e) {
yield(-1);
}
}
run ConfigReader;
Writing Log Files
agent Logger {
use Fs
message: String
on start {
let timestamp = "2024-01-15T10:30:00";
let entry = timestamp ++ " - " ++ self.message ++ "\n";
// Append to log file
let existing = catch Fs.read("app.log") { "" };
try Fs.write("app.log", existing ++ entry);
yield(0);
}
on error(e) {
yield(-1);
}
}
run Logger { message: "Application started" };
Processing Directory Contents
agent DirectoryProcessor {
use Fs
on start {
let files = try Fs.list("input");
let processed = 0;
for file in files {
if str_ends_with(file, ".txt") {
let content = try Fs.read("input/" ++ file);
let upper = str_upper(content);
try Fs.write("output/" ++ file, upper);
processed = processed + 1;
}
}
print("Processed " ++ int_to_str(processed) ++ " files");
yield(processed);
}
on error(e) {
yield(-1);
}
}
run DirectoryProcessor;
Creating Nested Directories
agent NestedWriter {
use Fs
on start {
// Parent directories are created automatically
try Fs.write("reports/2024/january/summary.txt", "Monthly summary...");
yield(0);
}
on error(e) {
yield(-1);
}
}
run NestedWriter;
Notes
- All file operations are async and non-blocking
- Paths are always relative to SAGE_FS_ROOT (default: current directory)
- write() automatically creates parent directories
- Binary files are not currently supported; use the Shell tool for binary operations
Shell
The Shell tool allows agents to execute shell commands and capture their output.
Usage
Declare the tool with use Shell in your agent:
agent ShellAgent {
use Shell
on start {
let result = try Shell.run("echo 'Hello from shell'");
print(result.stdout);
yield(result.exit_code);
}
on error(e) {
print("Command failed");
yield(-1);
}
}
run ShellAgent;
Methods
Shell.run(command: String) -> ShellResult
Executes a shell command using sh -c and returns the result.
let result = try Shell.run("ls -la");
print(result.stdout);
ShellResult
Command execution returns a ShellResult with the following fields:
| Field | Type | Description |
|---|---|---|
exit_code | Int | Exit code from the command (0 = success) |
stdout | String | Standard output |
stderr | String | Standard error |
Examples
Basic Command Execution
agent BasicShell {
use Shell
on start {
let result = try Shell.run("whoami");
print("Running as: " ++ str_trim(result.stdout));
yield(0);
}
on error(e) {
yield(-1);
}
}
run BasicShell;
Checking Exit Codes
agent ExitCodeChecker {
use Shell
on start {
let result = try Shell.run("test -f /etc/passwd");
if result.exit_code == 0 {
print("File exists");
} else {
print("File not found");
}
yield(result.exit_code);
}
on error(e) {
yield(-1);
}
}
run ExitCodeChecker;
Handling Errors
agent ErrorHandler {
use Shell
on start {
let result = try Shell.run("ls /nonexistent");
if result.exit_code != 0 {
print("Error: " ++ result.stderr);
} else {
print("Output: " ++ result.stdout);
}
yield(result.exit_code);
}
on error(e) {
yield(-1);
}
}
run ErrorHandler;
Complex Commands with Pipes
agent PipelineAgent {
use Shell
on start {
// Multiple commands with pipes
let result = try Shell.run("cat /etc/passwd | grep root | head -1");
print(result.stdout);
// Command with environment variables
let result2 = try Shell.run("echo $HOME");
print("Home: " ++ str_trim(result2.stdout));
yield(0);
}
on error(e) {
yield(-1);
}
}
run PipelineAgent;
Running Git Commands
agent GitAgent {
use Shell
on start {
let status = try Shell.run("git status --short");
if status.stdout == "" {
print("Working directory clean");
} else {
print("Changes detected:");
print(status.stdout);
}
let branch = try Shell.run("git branch --show-current");
print("Current branch: " ++ str_trim(branch.stdout));
yield(0);
}
on error(e) {
yield(-1);
}
}
run GitAgent;
Building and Testing
agent BuildAgent {
use Shell
on start {
print("Running tests...");
let test_result = try Shell.run("cargo test 2>&1");
if test_result.exit_code == 0 {
print("Tests passed!");
} else {
print("Tests failed:");
print(test_result.stdout);
}
yield(test_result.exit_code);
}
on error(e) {
yield(-1);
}
}
run BuildAgent;
Security Considerations
- Commands are executed via sh -c, so shell features like pipes, redirects, and variable expansion are available
- Be cautious when constructing commands from user input to avoid command injection
- Consider using the Fs tool for file operations instead of shell commands when possible
Notes
- Commands run in the current working directory
- Environment variables from the parent process are inherited
- Long-running commands will block the agent until completion
- Use timeouts in your agent logic if command execution time is a concern
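One way to bound a command's runtime from agent logic is the coreutils timeout wrapper, which kills the command and exits with status 124 when the limit expires (availability of timeout depends on the host system; the script path is illustrative):

```
agent BoundedRunner {
    use Shell
    on start {
        // Kill the script if it runs longer than 5 seconds
        let result = try Shell.run("timeout 5 ./scripts/slow_job.sh");
        if result.exit_code == 124 {
            print("Command timed out");
        }
        yield(result.exit_code);
    }
    on error(e) {
        yield(-1);
    }
}
```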
MCP Integration
New in v2.2.0 — RFC-0023
Sage supports the Model Context Protocol (MCP) for connecting agents to external tool servers. There are two complementary modes:
- Typed MCP Tools — compile-time checked tool interfaces backed by MCP servers
- Dynamic MCP — runtime tool discovery and invocation for orchestration scenarios
Typed MCP Tools
Declare MCP tools using the tool keyword, exactly like built-in tools:
tool Github {
fn search_repositories(query: String) -> String
fn list_issues(owner: String, repo: String) -> String
fn get_issue(owner: String, repo: String, issue_number: Int) -> String
fn create_issue(owner: String, repo: String, title: String, body: String) -> String
}
Tool functions are implicitly fallible — you must use try or catch when calling them. Parameter and return types map directly to JSON Schema for MCP serialization.
Using MCP Tools in Agents
Agents declare MCP tool usage with use statements, identical to built-in tools:
agent IssueScanner {
use Github
owner: String
repo: String
on start {
let raw = try Github.list_issues(self.owner, self.repo);
let summary = try divine(
"Summarise these issues: {raw}"
);
yield(summary);
}
on error(e) {
yield("Unavailable");
}
}
Tool Name Mapping
If the MCP server uses different naming conventions (e.g. kebab-case), use the #[mcp_name] attribute:
tool Github {
#[mcp_name = "create-issue"]
fn create_issue(repo: String, title: String, body: String) -> String
#[mcp_name = "list-issues"]
fn list_issues(repo: String, state: String) -> String
}
Type Mapping
Arguments are serialized to JSON objects. Return values are deserialized from tool results.
| Sage Type | JSON Schema | Example |
|---|---|---|
Int | integer | 42 |
Float | number | 3.14 |
Bool | boolean | true |
String | string | "hello" |
List<T> | array | [1, 2, 3] |
Map<String, V> | object | {"a": 1} |
Option<T> | nullable T | 42 or null |
record Foo { x: Int } | object | {"x": 42} |
enum Status { Active } | string | "Active" |
Result deserialization:
- If the MCP response has structuredContent matching the return type schema, it is deserialized directly
- If the response has a single text content item, JSON deserialization is attempted
- If the return type is String, the text value is used directly
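As an example of the record and enum rows in the table, a tool returning structured data could be declared like this (the Tracker tool and its types are hypothetical, for illustration only):

```
record Ticket {
    id: Int,
    title: String,
    tags: List<String>,
}

enum Priority { Low, High }

tool Tracker {
    // Ticket deserializes from {"id": ..., "title": ..., "tags": [...]}
    fn get_ticket(id: Int) -> Ticket
    // Priority serializes as the string "Low" or "High"
    fn set_priority(id: Int, priority: Priority) -> String
}
```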
Configuration
MCP servers are configured in grove.toml using [tools.X] sections.
Stdio Transport
Launch the server as a subprocess:
[tools.Github]
transport = "stdio"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-github"]
timeout_ms = 30000
connect_timeout_ms = 10000
[tools.Github.env]
GITHUB_PERSONAL_ACCESS_TOKEN = "$GITHUB_TOKEN"
| Field | Default | Description |
|---|---|---|
transport | — | "stdio" for subprocess servers |
command | — | Executable to launch |
args | [] | Command arguments |
timeout_ms | 30000 | Per-call timeout in milliseconds |
connect_timeout_ms | 10000 | Connection timeout in milliseconds |
Environment variables in the [tools.X.env] section starting with $ are resolved from the host environment.
HTTP Transport
Connect to a remote MCP server:
[tools.Slack]
transport = "http"
url = "https://mcp.slack.example.com/mcp"
timeout_ms = 30000
Bearer Token Auth
[tools.API]
transport = "http"
url = "https://api.example.com/mcp"
auth = "bearer"
token_env = "API_TOKEN"
OAuth 2.1 + PKCE
[tools.CloudAPI]
transport = "http"
url = "https://cloud.example.com/mcp"
auth = "oauth"
client_id_env = "CLOUD_CLIENT_ID"
authorization_url = "https://auth.cloud.example.com/authorize"
token_url = "https://auth.cloud.example.com/token"
scopes = ["tools:read", "tools:write"]
Dynamic MCP
For scenarios where tools aren’t known at compile time, use the dynamic MCP functions:
agent DynamicExplorer {
config_json: String
on start {
let handle = try mcp_connect(self.config_json);
let tools = try mcp_list_tools(handle);
let args = '{"repo": "sagelang/sage", "state": "open"}';
let result = try mcp_call(handle, "list-issues", args);
try mcp_disconnect(handle);
yield(result);
}
on error(e) {
yield("failed");
}
}
Dynamic MCP Functions
| Function | Signature | Description |
|---|---|---|
mcp_connect | (String) -> McpConnection fails | Connect using a JSON config string |
mcp_list_tools | (McpConnection) -> List<McpTool> fails | List available tools |
mcp_call | (McpConnection, String, String) -> String fails | Call a tool with JSON args |
mcp_call_json | (McpConnection, String, Map<String, String>) -> String fails | Call a tool with a Map |
mcp_disconnect | (McpConnection) -> Unit fails | Disconnect from the server |
mcp_server_info | (McpConnection) -> McpServerInfo fails | Get server metadata |
Dynamic MCP Types
record McpConnection { id: Int }
record McpTool { name: String, description: String, input_schema: String }
record McpServerInfo { name: String, version: String }
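mcp_call_json avoids hand-building JSON argument strings by taking a Map instead. A sketch combining it with mcp_server_info (field access follows the record definitions above; the tool name and arguments are illustrative):

```
agent MapCaller {
    config_json: String
    on start {
        let handle = try mcp_connect(self.config_json);
        let info = try mcp_server_info(handle);
        print("Connected to " ++ info.name ++ " v" ++ info.version);
        // Arguments as a Map<String, String> instead of a raw JSON string
        let args = {"repo": "sagelang/sage", "state": "open"};
        let result = try mcp_call_json(handle, "list-issues", args);
        try mcp_disconnect(handle);
        yield(result);
    }
    on error(e) {
        yield("failed");
    }
}
```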
Testing MCP Tools
MCP tools integrate with the existing mock system:
test "issue filing works" {
mock tool Github.create_issue -> '{"number": 42, "url": "..."}';
let agent = summon IssueFiler { title: "Test", body: "Body" };
let result = try await agent;
assert_eq(result, 42);
}
test "handles server failure" {
mock tool Github.create_issue -> fail("Server unavailable");
let agent = summon IssueFiler { title: "Test", body: "Body" };
let result = try await agent;
assert_eq(result, -1);
}
Mocks intercept before the MCP transport layer. Dynamic MCP calls can also be mocked with mock tool mcp.call -> "json".
CLI Commands
# List configured MCP tools
sage tools list
# Inspect a server's tool manifest
sage tools inspect --stdio "npx -y @modelcontextprotocol/server-github"
sage tools inspect --http "https://mcp.example.com/mcp"
# Generate Sage tool declarations from a server
sage tools generate --stdio "npx -y @modelcontextprotocol/server-github" -o src/tools/github.sg
# Verify declared signatures match the server
sage check --verify-tools
Error Codes
| Code | Condition |
|---|---|
| E080 | Agent uses a tool with no [tools.X] in grove.toml and it’s not built-in |
| E081 | [tools.X] section missing required fields |
| E082 | (with --verify-tools) Declared signature doesn’t match server manifest |
| E083 | #[mcp_name] attribute value isn’t a string literal |
Complete Example
grove.toml:
[project]
name = "mcp-devops"
entry = "src/main.sg"
[tools.Github]
transport = "stdio"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-github"]
timeout_ms = 30000
[tools.Github.env]
GITHUB_PERSONAL_ACCESS_TOKEN = "$GITHUB_TOKEN"
[tools.Filesystem]
transport = "stdio"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/tmp/sage-devops"]
[persistence]
backend = "sqlite"
path = ".sage/devops_state.db"
src/main.sg:
tool Github {
fn list_issues(owner: String, repo: String) -> String
fn create_issue(owner: String, repo: String, title: String, body: String) -> String
}
tool Filesystem {
fn write_file(path: String, content: String) -> String
fn read_file(path: String) -> String
}
agent IssueScanner {
use Github
owner: String
repo: String
on start {
let raw = try Github.list_issues(self.owner, self.repo);
let summary = try divine("Summarise these issues: {raw}");
yield(summary);
}
on error(e) {
yield("Unavailable");
}
}
agent ReportWriter {
use Filesystem
issues: String
on start {
let report = try divine(
"Write a markdown report from these issues:\n{self.issues}"
);
try Filesystem.write_file("/tmp/sage-devops/report.md", report);
yield(report);
}
on error(e) {
yield("Report generation failed");
}
}
agent Coordinator {
on start {
let scanner = summon IssueScanner {
owner: "sagelang",
repo: "sage"
};
let issues = try await scanner;
let writer = summon ReportWriter { issues: issues };
let report = try await writer;
print(report);
yield(0);
}
on error(e) {
print("Pipeline failed: " ++ str(e));
yield(1);
}
}
run Coordinator;
Testing Overview
Sage has a built-in testing framework that makes it easy to test your agents and functions. Tests are first-class citizens in the language, not bolted-on annotations.
Why Built-In Testing?
Agent-based systems are notoriously hard to test:
- LLM calls are non-deterministic
- Agent lifecycles involve async operations
- Message passing creates complex interaction patterns
Sage’s testing framework solves these problems with:
- First-class LLM mocking — deterministic tests without network calls
- Async-aware test bodies — summon and await work naturally in tests
- Concurrent execution — tests run in parallel by default for speed
Quick Start
Create a test file ending in _test.sg:
src/math_test.sg:
test "addition works" {
assert_eq(1 + 1, 2);
}
test "multiplication works" {
let result = 6 * 7;
assert_eq(result, 42);
}
Run your tests:
sage test .
Output:
🦉 Ward Running 2 tests from 1 file
PASS math_test.sg::addition works
PASS math_test.sg::multiplication works
🦉 Ward test result: ok. 2 passed, 0 failed, 0 skipped [0.82s]
Test File Convention
Test files must end in _test.sg. The test runner automatically discovers all test files in your project:
my_project/
├── grove.toml
└── src/
├── main.sg
├── utils.sg
├── utils_test.sg # Tests for utils.sg
└── agents_test.sg # Tests for agents
Next Steps
- Writing Tests — test syntax and best practices
- Assertions — available assertion functions
- Mocking LLMs — how to mock divine calls
Writing Tests
Test Syntax
Tests are declared with the test keyword followed by a description string and a block:
test "descriptive name for the test" {
// test body
}
The description appears in test output, so make it meaningful:
- ✓ "user can log in with valid credentials"
- ✓ "empty list returns None for find"
- ✗ "test1" (not descriptive)
Serial Tests
By default, tests run concurrently for speed. Use @serial when a test needs isolation:
@serial test "modifies global state" {
// This test runs alone, not concurrently with others
}
Use @serial when:
- Tests modify shared state
- Tests depend on specific timing
- Tests use resources that can’t be shared
Testing Functions
Test regular functions by calling them and asserting on results:
fn factorial(n: Int) -> Int {
if n <= 1 {
return 1;
}
return n * factorial(n - 1);
}
test "factorial of 5 is 120" {
assert_eq(factorial(5), 120);
}
test "factorial of 0 is 1" {
assert_eq(factorial(0), 1);
}
test "factorial of 1 is 1" {
assert_eq(factorial(1), 1);
}
Testing Agents
Test agents by spawning them with mocked LLM responses:
agent Summariser {
topic: String
on start {
let summary = try divine("Summarise: {self.topic}");
yield(summary);
}
on error(e) {
yield("Error occurred");
}
}
test "summariser returns LLM response" {
mock divine -> "This is a summary of quantum physics.";
let result = await summon Summariser { topic: "quantum physics" };
assert_eq(result, "This is a summary of quantum physics.");
}
Test Body Semantics
Test bodies are async by default — you can use await and summon without special syntax:
test "two agents can run concurrently" {
mock divine -> "Result A";
mock divine -> "Result B";
let a = summon Researcher { topic: "A" };
let b = summon Researcher { topic: "B" };
let result_a = await a;
let result_b = await b;
assert_eq(result_a, "Result A");
assert_eq(result_b, "Result B");
}
Organising Tests
Keep tests close to the code they test:
src/
├── auth.sg
├── auth_test.sg # Tests for auth.sg
├── payments.sg
└── payments_test.sg # Tests for payments.sg
Or use a dedicated test directory:
src/
├── main.sg
└── lib/
├── utils.sg
└── utils_test.sg
Assertions
Sage provides a rich set of assertion functions for testing. All assertions are only available in test files (*_test.sg).
Basic Assertions
assert
Assert that an expression is true:
test "basic assertion" {
assert(1 + 1 == 2);
assert(true);
}
assert_eq / assert_neq
Assert equality or inequality:
test "equality assertions" {
assert_eq(1 + 1, 2);
assert_neq(1 + 1, 3);
assert_eq("hello", "hello");
assert_neq("hello", "world");
}
assert_true / assert_false
Assert boolean values:
test "boolean assertions" {
assert_true(5 > 3);
assert_false(5 < 3);
}
Comparison Assertions
assert_gt / assert_lt
Assert greater than or less than:
test "comparison assertions" {
assert_gt(10, 5); // 10 > 5
assert_lt(5, 10); // 5 < 10
}
assert_gte / assert_lte
Assert greater than or equal / less than or equal:
test "inclusive comparison" {
assert_gte(10, 10); // 10 >= 10
assert_gte(10, 5); // 10 >= 5
assert_lte(5, 5); // 5 <= 5
assert_lte(5, 10); // 5 <= 10
}
String Assertions
assert_contains / assert_not_contains
Assert string containment:
test "string containment" {
assert_contains("hello world", "world");
assert_not_contains("hello world", "foo");
}
assert_starts_with / assert_ends_with
Assert string prefix or suffix:
test "string prefix and suffix" {
assert_starts_with("hello world", "hello");
assert_ends_with("hello world", "world");
}
Collection Assertions
assert_empty / assert_not_empty
Assert collection emptiness:
test "collection emptiness" {
assert_empty([]);
assert_not_empty([1, 2, 3]);
assert_empty("");
assert_not_empty("hello");
}
assert_len
Assert collection length:
test "collection length" {
assert_len([1, 2, 3], 3);
assert_len("hello", 5);
}
Error Assertions
assert_fails
Assert that an expression produces an error:
test "agent handles error correctly" {
mock divine -> fail("simulated failure");
let handle = summon Summariser { topic: "test" };
assert_fails(await handle);
}
This is useful for testing error handling paths in your agents.
Assertion Failures
When an assertion fails, the test stops immediately and reports the failure:
FAIL math_test.sg::addition works
Failures:
math_test.sg::addition works
thread 'addition_works' panicked at src/main.rs:7:5:
assertion failed: 1 + 1 == 3
The error message shows:
- Which test failed
- Where in the generated code the failure occurred
- The assertion that failed
Mocking
Sage’s testing framework provides first-class mocking for both LLM calls and tool calls. This makes your tests deterministic, fast, and independent of external services.
Mocking LLM Calls
You can specify exactly what divine calls should return using mock divine.
Basic Mocking
Use mock divine -> value; to specify what the next divine call should return:
test "divine returns mocked value" {
mock divine -> "This is a mocked response";
let result: String = try divine("Summarise something");
assert_eq(result, "This is a mocked response");
}
The mock is consumed by the divine call — each mock is used exactly once.
Multiple Mocks
When your test makes multiple divine calls, queue up multiple mocks in order:
test "multiple divine calls" {
mock divine -> "First response";
mock divine -> "Second response";
mock divine -> "Third response";
let r1 = try divine("Query 1");
let r2 = try divine("Query 2");
let r3 = try divine("Query 3");
assert_eq(r1, "First response");
assert_eq(r2, "Second response");
assert_eq(r3, "Third response");
}
Mocks are consumed in FIFO order (first in, first out).
Mocking Structured Output
For typed divine calls, mock with the appropriate record structure:
record Summary {
text: String,
confidence: Float,
}
test "structured divine returns typed mock" {
mock divine -> Summary {
text: "Quantum computing is fast.",
confidence: 0.88
};
let summary: Summary = try divine("Summarise quantum computing");
assert_eq(summary.text, "Quantum computing is fast.");
assert_gt(summary.confidence, 0.8);
}
Mocking Failures
Use fail("message") to mock a divine failure:
test "agent handles divine failure" {
mock divine -> fail("rate limit exceeded");
let handle = summon ResilientResearcher { topic: "test" };
let result = await handle;
// Agent's fallback behaviour
assert_eq(result, "unavailable");
}
This is essential for testing error handling paths.
Testing Agents with Mocks
When testing agents that use divine, mocks are consumed by the agent’s divine calls:
agent Researcher {
topic: String
on start {
let summary = try divine("Research: {self.topic}");
yield(summary);
}
on error(e) {
yield("Research failed");
}
}
test "researcher emits summary" {
mock divine -> "Quantum computing uses qubits.";
let result = await summon Researcher { topic: "quantum" };
assert_eq(result, "Quantum computing uses qubits.");
}
Testing Multi-Agent Systems
For agents that summon other agents, each agent’s divine calls consume mocks in execution order:
test "coordinator gets results from two researchers" {
mock divine -> "Summary about AI";
mock divine -> "Summary about robots";
let c = summon Coordinator {
topics: ["AI", "robots"]
};
let results = await c;
assert_contains(results, "AI");
assert_contains(results, "robots");
}
Mock Queue Exhaustion
If a divine call is made without an available mock, the test fails with error code E054:
Error: divine called with no mock available (E054)
Always provide enough mocks for all divine calls in your test.
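For example, a test like this would fail with E054, because the second divine call finds the mock queue empty:

```
test "exhausted mock queue" {
    mock divine -> "only response";
    let r1 = try divine("Query 1"); // consumes the single mock
    let r2 = try divine("Query 2"); // no mock left -> fails with E054
}
```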
Mocking Tool Calls
Just like LLM calls, you can mock tool calls (Http, Fs, etc.) to avoid real network or filesystem operations in tests.
Basic Tool Mocking
Use mock tool ToolName.method -> value; to specify what a tool call should return:
test "http get returns mocked response" {
mock tool Http.get -> HttpResponse {
status: 200,
body: "Hello, World!",
headers: {}
};
let response = try Http.get("https://example.com");
assert_eq(response.status, 200);
assert_eq(response.body, "Hello, World!");
}
Mocking Tool Failures
Use fail("message") to mock a tool failure:
test "handles network error gracefully" {
mock tool Http.get -> fail("connection timeout");
let response = catch Http.get("https://example.com") {
HttpResponse { status: 0, body: "", headers: {} }
};
assert_eq(response.status, 0);
}
Multiple Tool Mocks
Like mock divine, tool mocks are consumed in FIFO order:
test "multiple http calls" {
mock tool Http.get -> HttpResponse { status: 200, body: "first", headers: {} };
mock tool Http.get -> HttpResponse { status: 200, body: "second", headers: {} };
let r1 = try Http.get("https://api.example.com/1");
let r2 = try Http.get("https://api.example.com/2");
assert_eq(r1.body, "first");
assert_eq(r2.body, "second");
}
Mocking Different Tools
You can mock different tools in the same test:
test "agent uses multiple tools" {
mock tool Http.get -> HttpResponse { status: 200, body: "data", headers: {} };
mock tool Fs.read -> "config content";
mock divine -> "processed result";
let result = await summon DataProcessor {};
assert_eq(result, "processed result");
}
Testing Agents with Tool Mocks
When testing agents that use tools, mocks are consumed by the agent’s tool calls:
agent Fetcher {
url: String
use Http
on start {
let response = try Http.get(self.url);
yield(response.body);
}
}
test "fetcher returns body" {
mock tool Http.get -> HttpResponse {
status: 200,
body: "fetched content",
headers: {}
};
let result = await summon Fetcher { url: "https://example.com" };
assert_eq(result, "fetched content");
}
Best Practices
- One assertion per test — easier to identify failures
- Descriptive mock values — make it clear what’s being tested
- Test error paths — use fail() to test error handling
- Keep mocks simple — avoid complex JSON in mocks when possible
- Mock all external calls — both divine and tool calls should be mocked for deterministic tests
The Steward Pattern
Sage v1.x proved the thesis: agents, beliefs, and LLM inference as first-class language constructs produce programs that are simpler and safer than equivalent Python framework code.
Sage v2.0 pursues a deeper thesis: agents as stewards of long-lived systems.
What Is a Steward?
A steward is an agent that:
- Owns a domain
- Maintains it over time
- Reacts to change
- Coordinates with other stewards
- Survives crashes
The most valuable systems in software are not tasks — they are ongoing processes. A database doesn’t run once and exit. An API server doesn’t emit a result and terminate. These are stewards.
The Steward Anatomy
A complete steward agent has:
agent DatabaseSteward
uses Database // Tool capabilities
follows SchemaSync as DatabaseSteward // Protocol participation
receives SchemaCommand { // Message type
@persistent schema_version: Int // Durable state
@persistent migration_log: List<String>
active_connections: Int // Ephemeral state
on waking { // Recovery hook
trace("Recovered at schema v{self.schema_version.get()}");
reconnect_database();
}
on start { // Main logic
loop {
let cmd: SchemaCommand = receive();
handle_command(cmd);
}
yield(0);
}
on message(cmd: SchemaCommand) { // Message handler
// Protocol-aware handling
match cmd {
SchemaCommand.Migrate(spec) => {
apply_migration(spec);
reply(Acknowledged {});
}
}
}
on resting { // Cleanup hook
trace("Shutting down");
close_connections();
}
on error(e) { // Error handler
trace("Error: {e.message}");
yield(1);
}
}
The Three-Steward Architecture
The canonical steward application is a web application expressed as three coordinating stewards:
┌─────────────────────────────────────────────────┐
│ AppSupervisor │
│ strategy: RestForOne │
│ │
│ ┌───────────────────────────────────────────┐ │
│ │ DatabaseSteward restart: Permanent │ │
│ │ @persistent schema_version │ │
│ │ uses: Database │ │
│ │ follows: SchemaSync as DatabaseSteward │ │
│ └───────────────────┬───────────────────────┘ │
│ │ SchemaChanged │
│ ▼ │
│ ┌───────────────────────────────────────────┐ │
│ │ APISteward restart: Permanent │ │
│ │ @persistent route_version │ │
│ │ uses: Database, Http, Fs │ │
│ │ follows: SchemaSync as APISteward │ │
│ │ follows: ApiSync as APISteward │ │
│ └───────────────────┬───────────────────────┘ │
│ │ RouteChanged │
│ ▼ │
│ ┌───────────────────────────────────────────┐ │
│ │ FrontendSteward restart: Permanent │ │
│ │ @persistent build_version │ │
│ │ uses: Fs, Shell │ │
│ │ follows: ApiSync as FrontendSteward │ │
│ └───────────────────────────────────────────┘ │
└─────────────────────────────────────────────────┘
Why RestForOne?
The RestForOne strategy is deliberate:
- If DatabaseSteward crashes, both APISteward and FrontendSteward restart (they depend on the database)
- If APISteward crashes, only FrontendSteward restarts (it depends on the API)
- If FrontendSteward crashes, only it restarts (nothing depends on it)
The dependencies flow downward in declaration order.
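Under the assumptions of the diagram above, the three-steward tree could be declared like this (the belief initial values are illustrative):

```
supervisor AppSupervisor {
    strategy: RestForOne
    children {
        DatabaseSteward {
            restart: Permanent
            schema_version: 0
        }
        APISteward {
            restart: Permanent
            route_version: 0
        }
        FrontendSteward {
            restart: Permanent
            build_version: 0
        }
    }
}
run AppSupervisor;
```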
Change Propagation
The key pattern is declarative change propagation:
1. DatabaseSteward detects a schema change (via LLM reasoning or external command)
2. DatabaseSteward applies the migration via Database.execute
3. DatabaseSteward increments schema_version (checkpointed)
4. DatabaseSteward sends SchemaChanged(change) to APISteward
5. APISteward receives, regenerates affected routes via divine
6. APISteward increments route_version (checkpointed)
7. APISteward sends RouteChanged(change) to FrontendSteward
8. FrontendSteward regenerates affected components
9. FrontendSteward runs build, updates build_version (checkpointed)
At any point, a crash is safe: the checkpointed state tells the restarted steward exactly where it left off.
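The first few steps of this chain might look like the following inside DatabaseSteward (a sketch; the SchemaCommand shape, the api_steward handle, and the SchemaChanged field values are hypothetical):

```
on message(cmd: SchemaCommand) {
    match cmd {
        SchemaCommand.Migrate(spec) => {
            // Apply the migration, then advance the checkpointed version
            try Database.execute(spec);
            self.schema_version.set(self.schema_version.get() + 1);
            // Notify the next steward in the chain
            send(api_steward, SchemaChanged {
                table: "users",
                change_type: "add_column",
            });
        }
    }
}
```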
Why This Matters
Versus Frameworks
Every framework that attempts this today (LangChain, AutoGen, CrewAI) hits the same ceiling: coordination logic is in Python, so the framework can’t reason about it.
| Problem | Framework Approach | Sage Approach |
|---|---|---|
| Agent crashes | Restart blindly | Typed supervision with state recovery |
| Inter-agent communication | String messages | Session-typed protocols |
| Tool failures | Python exceptions | Typed Result<T, ToolError> |
| State persistence | Manual serialisation | @persistent annotation |
Versus Manual Code
You could write this in Rust or Python without Sage. But you’d be implementing:
- Your own checkpoint system
- Your own supervision tree
- Your own protocol verification
- Your own LLM integration
- Your own observability
Sage provides these as language features. The compiler is the primary safety mechanism, not the programmer’s vigilance.
Building a Steward
Step 1: Define Your Domain
What does this steward own? What state must survive restarts?
agent CacheManager {
@persistent cache_version: Int
@persistent eviction_count: Int
@persistent last_cleanup: String
}
Step 2: Declare Capabilities
What external resources does this steward need?
agent CacheManager uses Database, Http {
// Can access both Database and Http tools
}
Step 3: Define Protocols
How does this steward communicate with others?
protocol CacheInvalidation {
DataSteward -> CacheManager: InvalidateKey
CacheManager -> DataSteward: Acknowledged
}
agent CacheManager follows CacheInvalidation as CacheManager {
// Protocol-compliant communication
}
Step 4: Implement Handlers
Write the lifecycle hooks:
agent CacheManager {
on waking {
trace("Cache restored at version {self.cache_version.get()}");
warm_cache();
}
on start {
loop {
let cmd = receive();
process(cmd);
}
}
on resting {
trace("Flushing cache to disk");
flush_to_disk();
}
}
Step 5: Wrap in Supervisor
Declare the restart behaviour:
supervisor CacheSystem {
strategy: OneForOne
children {
CacheManager {
restart: Permanent
cache_version: 0
eviction_count: 0
last_cleanup: ""
}
}
}
Step 6: Configure
Set up persistence and observability:
# grove.toml
[project]
name = "cache-system"
[persistence]
backend = "sqlite"
path = ".sage/cache_state.db"
[supervision]
max_restarts = 5
within_seconds = 60
[observability]
backend = "otlp"
otlp_endpoint = "http://localhost:4318/v1/traces"
Example: Database Guardian
A complete steward application that monitors a PostgreSQL database:
// Types
record QueryStats {
slow_count: Int,
avg_ms: Float,
}
// Protocols
protocol AlertProtocol {
QueryMonitor -> AlertSender: Alert
AlertSender -> QueryMonitor: Acknowledged
}
record Alert {
severity: String,
message: String,
}
record Acknowledged {}
// Query Monitor Steward
agent QueryMonitor
uses Database
follows AlertProtocol as QueryMonitor {
@persistent check_count: Int
@persistent alert_count: Int
on waking {
trace("Resuming monitoring, {self.check_count.get()} checks done");
}
on start {
loop {
span "monitoring cycle" {
let stats = try Database.query(
"SELECT COUNT(*) as cnt, AVG(mean_exec_time) as avg " ++
"FROM pg_stat_statements WHERE mean_exec_time > 500"
);
self.check_count.set(self.check_count.get() + 1);
if has_problems(stats) {
send(alert_sender, Alert {
severity: "warning",
message: "Slow queries detected",
});
self.alert_count.set(self.alert_count.get() + 1);
}
}
sleep_ms(60000); // Check every minute
}
}
on error(e) {
trace("Monitor error: {e.message}");
yield(1);
}
}
// Alert Sender Steward
agent AlertSender
uses Http
follows AlertProtocol as AlertSender
receives Alert {
@persistent alerts_sent: Int
on message(alert: Alert) {
let payload = json_stringify(alert);
try Http.post("https://hooks.slack.com/...", payload);
self.alerts_sent.set(self.alerts_sent.get() + 1);
reply(Acknowledged {});
}
on start {
yield(0);
}
}
// Supervisor
supervisor DbGuardian {
strategy: OneForOne
children {
QueryMonitor {
restart: Permanent
check_count: 0
alert_count: 0
}
AlertSender {
restart: Permanent
alerts_sent: 0
}
}
}
run DbGuardian;
When to Use Stewards
Use the steward pattern when:
- Continuous operation: The system should run indefinitely
- State matters: Losing state on crash would be costly
- Coordination required: Multiple agents must work together
- Recovery is important: Crashes should be handled gracefully
Don’t use stewards for:
- One-shot tasks: Simple agents that run once and exit
- Stateless workers: Agents that don’t need to remember anything
- Single-agent programs: No coordination needed
Summary
The steward pattern combines:
| Feature | Purpose |
|---|---|
| @persistent beliefs | State that survives restarts |
| supervisor declarations | Automatic crash recovery |
| uses clauses | Typed tool capabilities |
| follows clauses | Protocol-verified communication |
| Lifecycle hooks | Resource management |
| Observability | Production visibility |
Together, these features enable infrastructure-as-intent: you declare what you want each steward to maintain, and the language runtime ensures it is maintained.
Related
- Persistent Beliefs — State management
- Supervision Trees — Crash recovery
- Session Types — Protocol verification
- Lifecycle Hooks — Resource management
- Tools — External capabilities
- Observability — Production visibility
Supervision Trees
When agents crash, what happens? In v1.x Sage, the answer is simple: the error propagates to whoever spawned the agent, and it’s your problem. This works for task agents — short-lived workers that do one thing and exit.
But steward agents — long-lived agents that maintain a domain — need something better. A DatabaseSteward that crashes because of a transient connection error should restart, not bring down the whole program.
Supervision trees provide declarative crash recovery. You declare how agents should be restarted when they fail, and the runtime handles it automatically.
The Supervisor Declaration
A supervisor is declared with the supervisor keyword:
supervisor AppSupervisor {
strategy: OneForOne
children {
DatabaseSteward {
restart: Permanent
connection_string: "postgres://localhost/myapp"
schema_version: 0
}
APISteward {
restart: Permanent
port: 8080
}
MetricsCollector {
restart: Transient
interval_ms: 5000
}
}
}
run AppSupervisor;
When you run a supervisor, it spawns its children in order and monitors them. When a child exits, the supervisor applies its restart strategy.
Restart Strategies
The strategy determines what happens when a child fails.
OneForOne
Restart only the failed child. Other children continue running.
supervisor WebApp {
strategy: OneForOne
children {
Worker1 { restart: Permanent }
Worker2 { restart: Permanent }
Worker3 { restart: Permanent }
}
}
If Worker2 crashes, only Worker2 restarts. Worker1 and Worker3 are unaffected.
Use when: Children are independent. A database connection agent doesn’t affect an API server agent.
OneForAll
When one child fails, restart all children.
supervisor TightlyCoupled {
strategy: OneForAll
children {
ConfigLoader { restart: Permanent }
Worker1 { restart: Permanent }
Worker2 { restart: Permanent }
}
}
If any child crashes, all children are stopped and restarted together.
Use when: Children share state and can’t function correctly if one fails. If your config loader crashes, the workers have stale config and should restart too.
RestForOne
Restart the failed child and all children declared after it.
supervisor Pipeline {
strategy: RestForOne
children {
DatabaseSteward { restart: Permanent } // Position 1
APISteward { restart: Permanent } // Position 2
FrontendSteward { restart: Permanent } // Position 3
}
}
If APISteward (position 2) crashes:
- DatabaseSteward (position 1) continues — it’s before the failure
- APISteward (position 2) restarts — it failed
- FrontendSteward (position 3) restarts — it’s after the failure
Use when: Children have dependencies in declaration order. The API steward depends on the database steward, and the frontend steward depends on the API steward. If the database fails, everything downstream should restart.
Restart Policies
Each child has a restart policy that determines when it should be restarted.
Permanent
Always restart, regardless of exit reason.
DatabaseSteward {
restart: Permanent
// ...
}
If the agent exits cleanly (calls yield), restart it. If it crashes (calls yield in on error), restart it. Permanent agents run forever — until the supervisor itself stops.
Use for: Core steward agents that must always be running.
Transient
Restart only if the agent exited with an error.
MigrationRunner {
restart: Transient
// ...
}
If the agent exits cleanly, don’t restart — it completed its work. If it crashes, restart it to retry.
Use for: Agents that do work and then should stop, but should retry on failure.
Temporary
Never restart.
OneTimeSetup {
restart: Temporary
// ...
}
Run once. If it succeeds, fine. If it fails, fine. Don’t restart either way.
Use for: Initialisation agents, cleanup agents, or agents that shouldn’t retry.
Restart Intensity Limiting
A crashing agent that keeps crashing creates a restart storm. To prevent this, supervisors have a circuit breaker:
# grove.toml
[supervision]
max_restarts = 5
within_seconds = 60
If a supervisor sees more than max_restarts within within_seconds, it gives up and terminates. If the supervisor has a parent supervisor, that parent’s strategy applies.
Default: 5 restarts within 60 seconds.
Integration with Persistence
Supervision and persistent beliefs work together to provide crash recovery with state.
When a Permanent agent with @persistent fields restarts:
- The supervisor spawns a fresh agent instance
- @persistent fields are loaded from the last checkpoint
- on waking runs (validate recovered state, reconnect)
- on start runs (normal operation)
agent DatabaseSteward {
@persistent schema_version: Int
@persistent migration_log: List<String>
on waking {
print("Recovered at schema v{self.schema_version.get()}");
reconnect_to_database();
}
on start {
// Resume normal operation
yield(0);
}
}
Without @persistent, a restarted agent starts fresh with zero-valued fields. This may be fine for stateless workers, but steward agents typically need persistence.
Belief Initialisation
When declaring children in a supervisor, you provide initial values for their beliefs:
supervisor AppSupervisor {
strategy: OneForOne
children {
QueryMonitor {
restart: Permanent
check_count: 0
slow_query_threshold_ms: 100
alert_count: 0
}
}
}
These are the initial values used on the first run. If the agent has @persistent fields and a checkpoint exists, the checkpoint values are used instead.
Practical Example
A database guardian with multiple monitoring agents:
// Query Monitor - tracks slow queries
agent QueryMonitor {
@persistent check_count: Int
@persistent alert_count: Int
on waking {
trace("Resuming with {self.check_count.get()} previous checks");
}
on start {
let count = self.check_count.get() + 1;
self.check_count.set(count);
trace("Check #{count}");
// Actual monitoring logic...
yield(count);
}
on error(e) {
let alerts = self.alert_count.get() + 1;
self.alert_count.set(alerts);
trace("Error (alert #{alerts})");
yield(-1);
}
}
// Pool Monitor - watches connection pool
agent PoolMonitor {
@persistent max_connections_seen: Int
on start {
let current = check_pool_connections();
if current > self.max_connections_seen.get() {
self.max_connections_seen.set(current);
}
yield(current);
}
on error(e) {
yield(-1);
}
}
// Supervisor
supervisor DbGuardian {
strategy: OneForOne
children {
QueryMonitor {
restart: Permanent
check_count: 0
alert_count: 0
}
PoolMonitor {
restart: Permanent
max_connections_seen: 0
}
}
}
run DbGuardian;
Configure in grove.toml:
[project]
name = "db-guardian"
[persistence]
backend = "sqlite"
path = ".sage/db_guardian.db"
[supervision]
max_restarts = 10
within_seconds = 30
Running a Supervisor
Use run SupervisorName; at the end of your file:
run DbGuardian;
The supervisor starts all children and monitors them. The program runs until:
- All children have exited (and none need restarting)
- The circuit breaker trips (too many restarts)
- The process is killed externally
Nested Supervisors
Supervisors can be children of other supervisors, creating a supervision tree:
supervisor DatabaseSection {
strategy: OneForAll
children {
QueryMonitor { restart: Permanent }
PoolMonitor { restart: Permanent }
}
}
supervisor ApiSection {
strategy: OneForOne
children {
RouterAgent { restart: Permanent }
HandlerPool { restart: Permanent }
}
}
supervisor AppRoot {
strategy: RestForOne
children {
DatabaseSection { restart: Permanent }
ApiSection { restart: Permanent }
}
}
run AppRoot;
If the DatabaseSection supervisor’s circuit breaker trips, AppRoot sees it as a child failure and applies RestForOne — restarting DatabaseSection and ApiSection.
Maximum nesting depth: 8 levels (to prevent pathological trees).
Best Practices
- Start with OneForOne. It’s the simplest and usually correct. Escalate to RestForOne or OneForAll only when you have clear dependencies.
- Use Permanent for core stewards. Your main agents should always be running.
- Use Transient for retry-on-failure workers. Agents that do work and exit should be Transient.
- Pair Permanent with @persistent. An always-restart agent without persistence restarts from scratch — probably not what you want.
- Tune restart intensity. The default (5 restarts in 60 seconds) may be too aggressive or too lenient for your use case.
- Keep supervisors shallow. Deep nesting is a code smell. If you need more than 2-3 levels, reconsider your architecture.
Related
- Persistent Beliefs — State that survives restarts
- The Steward Pattern — Building long-lived agents
- Lifecycle Hooks — on waking, on resting, and friends
Session Types
When agents communicate, they follow protocols. A coordinator sends a task to a worker, the worker sends a result back. A database steward notifies an API steward of a schema change, the API steward acknowledges.
In v1.x Sage, these protocols exist only in the programmer’s head. You can send any message at any time. Send a shutdown before a task. Send a result after the protocol has ended. The compiler doesn’t know and doesn’t care.
Session types make communication protocols explicit. You declare the protocol, and the compiler verifies that agents follow it. Wrong message order? Compile error. Missing reply? Compile error.
Protocol Declarations
A protocol defines the valid sequence of messages between roles:
protocol SchemaSync {
DatabaseSteward -> APISteward: SchemaChanged
APISteward -> DatabaseSteward: Acknowledged
}
This declares:
- DatabaseSteward sends a SchemaChanged message to APISteward
- APISteward replies with an Acknowledged message
The protocol has two roles (DatabaseSteward, APISteward) and two steps.
Multi-Step Protocols
Protocols can have multiple steps:
protocol DebateRound {
Coordinator -> Debater: Topic
Debater -> Coordinator: Argument
Coordinator -> Debater: Feedback
Debater -> Coordinator: Revision
}
Message Types
Protocol steps reference message types. Define these as records or enums:
record SchemaChanged {
table: String,
change_type: String,
}
record Acknowledged {}
protocol SchemaSync {
DatabaseSteward -> APISteward: SchemaChanged
APISteward -> DatabaseSteward: Acknowledged
}
Following Protocols
Agents declare which protocols they participate in using the follows clause:
agent APISteward
receives SchemaChanged
follows SchemaSync as APISteward {
on start {
// Wait for schema changes
yield(0);
}
on message(change: SchemaChanged) {
print("Schema changed: {change.table}");
// Protocol requires a reply
reply(Acknowledged {});
}
}
The follows SchemaSync as APISteward declaration says: “This agent plays the APISteward role in the SchemaSync protocol.”
The reply Expression
When a protocol step expects a reply, use the reply() expression:
on message(change: SchemaChanged) {
// Handle the change...
process_schema_change(change);
// Send the required reply
reply(Acknowledged {});
}
reply() sends a message back to the sender of the most recent message. It’s only valid inside on message handlers when the agent follows a protocol that expects a reply.
Compile-Time Verification
If you forget the reply:
agent APISteward follows SchemaSync as APISteward {
on message(change: SchemaChanged) {
process_schema_change(change);
// Missing reply(Acknowledged {})!
}
}
The compiler catches this:
Error E076: Protocol SchemaSync requires APISteward to send Acknowledged after receiving SchemaChanged
Protocol Errors
The checker catches protocol violations at compile time:
E070: Unknown Protocol
agent Worker follows NonexistentProtocol as Worker {
// Error E070: Unknown protocol 'NonexistentProtocol'
}
E071: Unknown Role
protocol SchemaSync {
DatabaseSteward -> APISteward: SchemaChanged
}
agent Worker follows SchemaSync as UnknownRole {
// Error E071: Role 'UnknownRole' not found in protocol 'SchemaSync'
}
E073: Reply Outside Handler
agent Worker follows SchemaSync as APISteward {
on start {
reply(Acknowledged {}); // Error E073: reply outside message handler
}
}
E074: Wrong Message Type
agent APISteward follows SchemaSync as APISteward {
on message(change: SchemaChanged) {
reply(WrongType {}); // Error E074: Protocol expects Acknowledged, got WrongType
}
}
E076: Missing Reply
agent APISteward follows SchemaSync as APISteward {
on message(change: SchemaChanged) {
print("Got change");
// Error E076: Missing required reply
}
}
Practical Example
A request-response protocol between a client and server:
// Message types
record Request {
data: String,
}
record Response {
result: String,
}
// Protocol declaration
protocol RequestResponse {
Client -> Server: Request
Server -> Client: Response
}
// Server agent
agent RequestWorker
receives Request
follows RequestResponse as Server {
on start {
yield(0);
}
on message(req: Request) {
let result = process(req.data);
reply(Response { result: result });
}
}
// Client agent
agent Requester
follows RequestResponse as Client {
target: Agent<Int>
on start {
send(self.target, Request { data: "hello" });
// Wait for response
let response: Response = receive();
print("Got: {response.result}");
yield(0);
}
}
Multiple Protocols
An agent can follow multiple protocols:
agent APISteward
follows SchemaSync as APISteward
follows ApiSync as APISteward {
// This agent participates in both protocols
}
Each follows clause is independent. The agent must satisfy all protocol obligations.
Protocol State and Supervision
When a supervised agent crashes and restarts, its protocol state is reset. The restarted agent begins fresh from the protocol’s initial state.
For protocols that span multiple message exchanges, consider:
- Idempotent operations: Design handlers so replaying a message is safe
- State in persistent beliefs: Store protocol progress in @persistent fields
- Acknowledgment patterns: Use explicit acknowledgments to confirm each step
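One way to store protocol progress in a persistent belief, so a restarted agent knows which steps were already confirmed, is the following sketch (the agent, belief, and counter logic are illustrative):

```
agent MigrationCoordinator
    receives Acknowledged
    follows SchemaSync as DatabaseSteward {
    @persistent acked_steps: Int
    on message(ack: Acknowledged) {
        // Record the confirmation so a restart can skip completed steps
        self.acked_steps.set(self.acked_steps.get() + 1);
    }
    on start {
        yield(0);
    }
}
```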
Design Guidelines
Keep Protocols Simple
Protocols with many steps are hard to reason about. Prefer short, focused protocols:
// Good: Simple two-step protocol
protocol SchemaSync {
DatabaseSteward -> APISteward: SchemaChanged
APISteward -> DatabaseSteward: Acknowledged
}
// Avoid: Complex multi-step protocol
protocol ComplexWorkflow {
A -> B: Step1
B -> A: Step2
A -> C: Step3
C -> A: Step4
A -> B: Step5
B -> C: Step6
// ...getting hard to follow
}
Break complex workflows into multiple simpler protocols.
Use Descriptive Role Names
Protocol roles should match the agent names that play them:
// Good: Role names match agent names
protocol SchemaSync {
DatabaseSteward -> APISteward: SchemaChanged
}
agent DatabaseSteward follows SchemaSync as DatabaseSteward { }
agent APISteward follows SchemaSync as APISteward { }
Document Protocol Semantics
The type system checks syntax, not semantics. Document what each message means:
// SchemaChanged: Sent when the database schema has been modified.
// The receiver should regenerate any cached schema information.
record SchemaChanged {
table: String,
change_type: String, // "add_column" | "drop_column" | "add_table"
}
Limitations
Session types in Sage v2.0 are structural, not behavioural. The compiler verifies:
- Protocols exist
- Roles exist in protocols
- Message types match protocol steps
- Required replies are present
The compiler does not verify:
- Messages are sent in the correct runtime order
- Protocol conversations terminate correctly
- State machines are followed exactly
Full behavioural session type verification is planned for v3.0.
Related
- Messaging — Basic agent communication
- Supervision Trees — Restart behaviour affects protocol state
- The Steward Pattern — Protocols in steward architectures
Lifecycle Hooks
Sage agents go through a lifecycle: they wake, they start, they may pause and resume, and eventually they rest. v2.0 provides hooks for each phase, letting you run code at the right moment.
The Full Lifecycle
Process start / Supervisor restart
│
▼
Load @persistent fields
│
▼
┌─────────────┐
│ on waking │ ← State loaded, reconnect resources
└──────┬──────┘
│
▼
┌─────────────┐
│ on start │ ← Main agent logic
└──────┬──────┘
│
┌────┴─────┐
│ │
▼ ▼
┌────────┐ ┌───────────┐
│on pause│ │on message │ ← Concurrent with on start
└────┬───┘ └───────────┘
│
▼
┌──────────┐
│on resume │
└────┬─────┘
│
▼
yield
│
▼
┌────────────┐
│ on resting │ ← Cleanup before exit
└────────────┘
Handler Reference
on waking
Runs after @persistent fields are loaded from checkpoint, before on start.
Use for:
- Validating recovered state
- Reconnecting to external resources (databases, APIs)
- Registering with service registries
- Logging recovery
agent DatabaseSteward {
@persistent schema_version: Int
@persistent connection_string: String
on waking {
trace("Recovered at schema v{self.schema_version.get()}");
if self.connection_string.get() != "" {
reconnect_database();
trace("Database connection re-established");
}
}
on start {
// Normal operation
yield(0);
}
}
Restrictions:
- Cannot call yield (the agent hasn’t started yet)
- Cannot call receive() (no messages before start)
Warning: Using on waking without any @persistent fields is pointless — the checker emits warning W006.
on start
The main entry point. Runs every time the agent starts or restarts.
Use for:
- Core agent logic
- Initial setup (if not recovered)
- Starting the main work loop
agent Worker {
on start {
trace("Worker starting");
do_work();
yield(0);
}
}
This is the only required handler. Every agent needs on start.
on message(msg: T)
Handles incoming messages. Can run concurrently with on start if the agent uses loop with receive().
Use for:
- Processing commands from other agents
- Handling protocol messages
- Event-driven behaviour
agent Coordinator receives Command {
on start {
loop {
let cmd: Command = receive();
// Delegate to on message
}
}
on message(cmd: Command) {
match cmd {
Command.Process(data) => process(data),
Command.Shutdown => break,
}
}
}
on pause
Runs when a supervisor signals a graceful pause.
Use for:
- Finishing in-flight work
- Flushing buffers
- Releasing locks
- Checkpointing current state
agent StreamProcessor {
@persistent processed_count: Int
buffer: List<Event>
on pause {
trace("Pausing, flushing {len(self.buffer)} buffered events");
flush_buffer();
trace("Pause complete");
}
}
Restrictions:
- Cannot call yield (pausing is temporary)
- Should complete quickly to avoid blocking the supervisor
on resume
Runs when the agent is unpaused by the supervisor.
Use for:
- Resuming work
- Re-acquiring resources released during pause
- Logging resume
agent StreamProcessor {
on resume {
trace("Resuming stream processing");
reacquire_stream_lock();
}
}
on resting
Runs after yield is called, before the agent process exits.
Use for:
- Closing connections
- Flushing final state
- Deregistering from service registries
- Cleanup
agent APISteward {
@persistent routes_generated: Bool
on resting {
trace("APISteward resting, cleaning up");
close_database_connections();
deregister_from_consul();
trace("Cleanup complete");
}
}
Restrictions:
- Cannot call yield (already yielded)
- Cannot call receive() (mailbox closed)
Note: on resting is the v2.0 name. on stop is still accepted as an alias for backward compatibility.
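Because on stop remains an alias, existing v1.x cleanup code like this sketch still compiles unchanged:

```
agent LegacyWorker {
    on start {
        yield(0);
    }
    on stop {
        // Equivalent to `on resting` in v2.0
        trace("legacy cleanup");
    }
}
```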
on error(e)
Handles errors that propagate to the agent.
Use for:
- Logging errors
- Cleanup on failure
- Deciding whether to retry or give up
agent Worker {
on start {
let data = try fetch_data(); // May fail
process(data);
yield(0);
}
on error(e) {
trace("Worker failed: {e.message}");
// Must yield to exit
yield(-1);
}
}
Important: on error must call yield. If you don’t, the error re-propagates.
Lifecycle with Supervision
When a supervised agent crashes and restarts:
- The supervisor detects the exit
- If restart policy permits, a fresh agent is spawned
- @persistent fields load from checkpoint
- on waking runs
- on start runs
- Normal operation resumes
The agent picks up where it left off, thanks to persistent beliefs.
agent Counter {
@persistent count: Int
on waking {
trace("Counter recovered at {self.count.get()}");
}
on start {
let current = self.count.get() + 1;
self.count.set(current);
trace("Count is now {current}");
yield(current);
}
}
supervisor CounterSupervisor {
strategy: OneForOne
children {
Counter {
restart: Permanent
count: 0
}
}
}
Lifecycle Without Supervision
For standalone agents (no supervisor), the lifecycle is simpler:
- Agent spawns
- @persistent fields load (if any)
- on waking runs (if defined)
- on start runs
- yield is called
- on resting runs (if defined)
- Agent exits
No restarts — a crash exits the whole program.
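A minimal standalone agent that exercises each of these steps might look like this (a sketch; persistence configuration as in earlier sections):

```
agent OneShot {
    @persistent runs: Int
    on waking {
        trace("Previously completed {self.runs.get()} runs");
    }
    on start {
        self.runs.set(self.runs.get() + 1);
        yield(0);
    }
    on resting {
        trace("Exiting cleanly");
    }
}
run OneShot;
```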
Common Patterns
First-Run vs Recovery
agent Initialiser {
@persistent setup_complete: Bool
on waking {
if self.setup_complete.get() {
trace("Recovering from previous run");
} else {
trace("First run, no state to recover");
}
}
on start {
if !self.setup_complete.get() {
run_setup();
self.setup_complete.set(true);
}
yield(0);
}
}
Graceful Shutdown
agent Server {
should_shutdown: Bool
on start {
loop {
if self.should_shutdown {
break;
}
handle_request();
}
yield(0);
}
on message(cmd: Command) {
if cmd == Command.Shutdown {
trace("Shutdown requested");
self.should_shutdown = true;
}
}
on resting {
trace("Server shutting down gracefully");
close_all_connections();
}
}
Resource Lifecycle
agent DatabaseAgent {
use Database
@persistent last_query_time: String
on waking {
trace("Reconnecting to database");
ensure_connection();
}
on start {
loop {
let query = receive();
try Database.execute(query);
self.last_query_time.set(now_iso());
}
}
on resting {
trace("Closing database connection");
close_connection();
}
}
Handler Restrictions Summary
| Handler | Can yield? | Can receive()? | Can use tools? |
|---|---|---|---|
| on waking | No | No | Yes |
| on start | Yes | Yes | Yes |
| on message | No (use break) | No (already receiving) | Yes |
| on pause | No | No | Yes (briefly) |
| on resume | No | No | Yes |
| on resting | No | No | Yes (briefly) |
| on error | Yes (required) | No | Yes |
Related
- Persistent Beliefs — State that survives restarts
- Supervision Trees — Restart behaviour
- Event Handlers — Handler basics
Observability
Production steward programs need visibility. When an agent crashes, you need to know what it was doing. When a divine call takes 8 seconds, you need to know which agent called it. When a migration fails, you need the full context.
Sage v2.0 provides structured observability as a first-class language feature.
The trace Statement
Add trace events at key points in your agent logic:
agent DataProcessor {
on start {
trace("Starting data processing");
let data = try load_data();
trace("Loaded {len(data)} items");
for item in data {
trace("Processing: {item.id}");
process(item);
}
trace("Processing complete");
yield(len(data));
}
}
Trace events include:
- Timestamp
- Agent name and ID
- Current handler
- Your message
The span Block
Group related work under a named span for timing and tracing:
agent MigrationRunner {
on start {
span "schema reconciliation" {
let current = get_current_version();
let target = determine_target_version();
apply_migrations(current, target);
}
// span ends here — duration is recorded automatically
span "index rebuild" {
rebuild_indexes();
}
yield(0);
}
}
Nested spans create a trace tree:
span "outer" {
trace("in outer");
span "inner" {
trace("in inner");
}
trace("back in outer");
}
Configuration
Environment Variables (Quick Start)
# Enable tracing to stderr
export SAGE_TRACE=1
# Or write to a file
export SAGE_TRACE_FILE=trace.ndjson
Command Line
# Trace to stderr
sage run program.sg --trace
# Trace to file
sage run program.sg --trace-file trace.ndjson
grove.toml (Recommended)
Configure the observability backend in your project manifest:
[project]
name = "my-steward"
[observability]
backend = "ndjson" # ndjson | otlp | none
NDJSON Backend (Default)
Newline-delimited JSON output. Good for local development and log aggregation.
[observability]
backend = "ndjson"
Output goes to stderr by default, or to a file if SAGE_TRACE_FILE is set.
OTLP Backend
OpenTelemetry Protocol HTTP/JSON export. Integrates with Grafana, Jaeger, Honeycomb, and any OTLP-compatible backend.
[observability]
backend = "otlp"
otlp_endpoint = "http://localhost:4318/v1/traces"
service_name = "my-steward"
Disabled
Turn off tracing entirely:
[observability]
backend = "none"
Automatic Events
The runtime emits automatic trace events for:
| Event | When |
|---|---|
| `agent.spawn` | Agent spawned |
| `agent.start` | `on start` handler begins |
| `agent.emit` | Agent emits result |
| `agent.error` | `on error` handler triggered |
| `agent.stop` | `on resting` handler runs |
| `infer.start` | LLM call begins |
| `infer.complete` | LLM call completes |
| `infer.error` | LLM call fails |
| `span.start` | `span` block begins |
| `span.end` | `span` block completes |
| `user` | Custom `trace()` event |
For supervised agents, additional events:
| Event | When |
|---|---|
| `supervisor.start` | Supervisor starts monitoring |
| `supervisor.child.restart` | Child agent restarted |
| `supervisor.circuit_breaker` | Restart limit exceeded |
NDJSON Format
Events are emitted as newline-delimited JSON:
{"t":1710000000001,"kind":"agent.spawn","agent":"Worker","id":"abc123"}
{"t":1710000000002,"kind":"agent.start","agent":"Worker","id":"abc123"}
{"t":1710000000003,"kind":"user","message":"Processing batch 1"}
{"t":1710000000015,"kind":"infer.start","agent":"Worker","id":"abc123","model":"gpt-4o","prompt_len":150}
{"t":1710000000842,"kind":"infer.complete","agent":"Worker","id":"abc123","model":"gpt-4o","response_len":320,"duration_ms":827}
{"t":1710000000843,"kind":"agent.emit","agent":"Worker","id":"abc123","value_type":"String"}
This format is compatible with jq, Elasticsearch, Datadog, and standard log aggregation tools.
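Because every line is a standalone JSON object, plain line-oriented tools work alongside jq. A small sketch against a hand-written trace (the file contents are illustrative, copied from the example above):

```shell
# Write a tiny illustrative trace file, then query it with grep.
cat > trace.ndjson <<'EOF'
{"t":1710000000015,"kind":"infer.start","agent":"Worker","id":"abc123","model":"gpt-4o","prompt_len":150}
{"t":1710000000842,"kind":"infer.complete","agent":"Worker","id":"abc123","model":"gpt-4o","duration_ms":827}
{"t":1710000000843,"kind":"agent.emit","agent":"Worker","id":"abc123","value_type":"String"}
EOF

# Count completed LLM calls (one per infer.complete line)
grep -c '"kind":"infer.complete"' trace.ndjson   # prints 1
```

The same pattern scales to any event kind, and `sage trace filter` (below) covers the common cases without hand-rolled greps.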
Analysing Traces
Pretty Print
sage trace pretty trace.ndjson
Output:
[0.000s] agent.spawn Worker
[0.001s] agent.start Worker
[0.002s] user "Processing batch 1"
[0.014s] infer.start Worker model=gpt-4o
[0.841s] infer.complete Worker 827ms
[0.842s] agent.emit Worker
Summary Statistics
sage trace summary trace.ndjson
Output:
Trace Summary
─────────────────────────────────
Duration: 1.204s
Agents spawned: 3
LLM calls: 5
Agent Timeline:
Coordinator 0.000s - 0.904s (904ms)
Worker 0.002s - 0.902s (900ms)
LLM Statistics:
Total calls: 5
Total time: 3.2s
Avg duration: 640ms
Success rate: 100%
Filter Events
# By agent
sage trace filter trace.ndjson --agent Worker
# By event kind
sage trace filter trace.ndjson --kind infer.complete
# By time range
sage trace filter trace.ndjson --after 0.5 --before 1.0
LLM Analysis
sage trace divine trace.ndjson
Output:
LLM Calls
───────────────────────────────────────────────────
Agent Model Duration Status
───────────────────────────────────────────────────
Worker gpt-4o 827ms OK
Worker gpt-4o 912ms OK
───────────────────────────────────────────────────
Total: 2 calls, 1739ms, 100% success
OTLP Integration
With OTLP configured, traces are exported to your OpenTelemetry collector:
[observability]
backend = "otlp"
otlp_endpoint = "http://localhost:4318/v1/traces"
service_name = "database-guardian"
Grafana Tempo
# docker-compose.yml
services:
tempo:
image: grafana/tempo:latest
ports:
- "4318:4318" # OTLP HTTP
[observability]
backend = "otlp"
otlp_endpoint = "http://localhost:4318/v1/traces"
Jaeger
services:
jaeger:
image: jaegertracing/all-in-one:latest
ports:
- "4318:4318" # OTLP HTTP
- "16686:16686" # UI
Honeycomb
[observability]
backend = "otlp"
otlp_endpoint = "https://api.honeycomb.io/v1/traces"
service_name = "my-steward"
Set the HONEYCOMB_API_KEY environment variable.
Best Practices
1. Trace at Boundaries
Add traces at the start and end of significant operations:
trace("Starting batch processing");
// ... work ...
trace("Batch complete: {count} items processed");
2. Use Spans for Timing
Wrap timed operations in spans:
span "database migration" {
apply_migration(migration);
}
// Duration automatically recorded
3. Include Context
Add relevant data to trace messages:
trace("Processing user {user.id}: {user.email}");
trace("Query returned {len(rows)} rows");
4. Monitor in Production
Use OTLP export for production observability:
[observability]
backend = "otlp"
otlp_endpoint = "https://your-collector.example.com/v1/traces"
service_name = "production-steward"
5. Analyse LLM Costs
Use trace analysis to understand LLM usage:
sage trace divine production-trace.ndjson
# Identify slow calls, high token counts, failure patterns
Related
- Error Handling — Error events in traces
- Supervision Trees — Supervisor events
- LLM Configuration — Model settings affecting traces
Editor Support
Sage includes first-class editor support with syntax highlighting and real-time diagnostics via the Language Server Protocol (LSP).
Supported Editors
- Zed — native extension with Tree-sitter highlighting and LSP diagnostics
- VS Code — extension with TextMate highlighting and LSP diagnostics
Features
All Sage editor extensions provide:
- Syntax Highlighting — Keywords, strings, comments, types, and more
- Real-time Diagnostics — Errors and warnings as you type
- Auto-indentation — Smart indentation for blocks and expressions
Language Server
The Sage language server (sage-sense) provides:
- Parse error reporting
- Type checking errors
- Undefined variable detection
- All compiler diagnostics in real-time
The language server is built into the sage CLI and starts automatically when you open a .sg file in a supported editor.
Manual LSP Setup
If you’re using an editor that supports LSP but doesn’t have a Sage extension, you can configure it to use:
sage sense
This starts the language server on stdin/stdout using the standard LSP protocol.
Zed
Zed is a high-performance code editor with native Sage support.
Installation
- Open Zed
- Press `Cmd+Shift+X` to open Extensions
- Search for “Sage”
- Click Install
Alternatively, use the command palette (Cmd+Shift+P) and run “zed: install extension”.
Features
The Sage extension for Zed provides:
- Tree-sitter Highlighting — Fast, accurate syntax highlighting
- LSP Diagnostics — Real-time error reporting from the Sage compiler
- Auto-indentation — Smart indentation for agents, functions, and blocks
Requirements
The language server requires sage to be on your PATH. Install via:
# Homebrew (macOS)
brew install sagelang/sage/sage
# Cargo
cargo install sage-lang
# Quick install
curl -fsSL https://raw.githubusercontent.com/sagelang/sage/main/scripts/install.sh | bash
Troubleshooting
No syntax highlighting
If syntax highlighting isn’t working:
- Ensure the file has a `.sg` extension
- Check that the Sage extension is installed (Extensions → Installed)
- Try restarting Zed
No diagnostics
If you’re not seeing error diagnostics:
- Verify `sage` is on your PATH: `which sage`
- Check Zed logs: `Cmd+Shift+P` → “zed: open log”
- Look for “sage-sense” or “language server” errors
Extension not loading
If the extension fails to load:
- Uninstall the extension
- Restart Zed
- Reinstall the extension
VS Code
Visual Studio Code is supported via the Sage extension.
Installation
- Open VS Code
- Press `Cmd+Shift+X` (Mac) or `Ctrl+Shift+X` (Windows/Linux) to open Extensions
- Search for “Sage”
- Click Install
Features
The Sage extension for VS Code provides:
- TextMate Highlighting — Syntax highlighting for all Sage constructs
- LSP Diagnostics — Real-time error reporting from the Sage compiler
- File Icons — Custom icon for `.sg` files
Requirements
The language server requires sage to be on your PATH. Install via:
# Homebrew (macOS)
brew install sagelang/sage/sage
# Cargo
cargo install sage-lang
# Quick install
curl -fsSL https://raw.githubusercontent.com/sagelang/sage/main/scripts/install.sh | bash
Configuration
The extension can be configured in VS Code settings:
{
"sage.path": "/usr/local/bin/sage"
}
| Setting | Description | Default |
|---|---|---|
| `sage.path` | Path to the `sage` binary | Auto-detected from PATH |
Troubleshooting
No syntax highlighting
If syntax highlighting isn’t working:
- Ensure the file has a `.sg` extension
- Check that the Sage extension is installed
- Try reloading the window: `Cmd+Shift+P` → “Developer: Reload Window”
No diagnostics
If you’re not seeing error diagnostics:
- Verify `sage` is on your PATH: `which sage`
- Check the Output panel: View → Output → select “Sage Language Server”
- Look for connection or startup errors
Extension not activating
If the extension isn’t activating:
- Check the Extensions panel for errors
- Disable and re-enable the extension
- Check VS Code’s developer console for errors
WASM Target
Sage can compile agents to WebAssembly for browser execution.
Building for WASM
sage build hello.sg --target web
This compiles your Sage program through the full pipeline (parse, type-check, codegen) but targets wasm32-unknown-unknown instead of your native platform.
Output
The build produces a pkg/ directory containing:
pkg/
hello.js # JavaScript glue (wasm-bindgen)
hello_bg.wasm # WebAssembly binary
Prerequisites
The WASM target requires:
- `wasm32-unknown-unknown` Rust target: `rustup target add wasm32-unknown-unknown`
- `wasm-bindgen-cli`: `cargo install wasm-bindgen-cli`
- (Optional) `wasm-opt` for size optimisation: install via binaryen
Target Values
The --target flag accepts:
| Value | Description |
|---|---|
| `native` | Default. Compile to a native binary. |
| `web` or `wasm` | Compile to WebAssembly for browser use. |
How It Works
When targeting WASM, the codegen layer:
- Generates a `#[wasm_bindgen(start)]` entry point instead of `#[tokio::main]`
- Uses `sage-runtime-web` — a browser-compatible runtime that replaces `tokio`, `reqwest`, and native I/O with Web APIs
- Produces a `cdylib` crate compiled with `wasm-bindgen`
- Optionally runs `wasm-opt -Oz` for size optimisation
Using in a Web Page
<script type="module">
import init from './pkg/hello.js';
await init();
</script>
The WASM module initialises automatically via the #[wasm_bindgen(start)] entry point.
Limitations
- `divine` (LLM calls) requires a browser-accessible OpenAI-compatible endpoint
- `Database` and `Shell` tools are not available in WASM
- `Fs` operations use browser storage APIs instead of the filesystem
- Agent concurrency uses `wasm_bindgen_futures::spawn_local` (single-threaded)
Online Playground
Try Sage instantly in your browser — no installation required.
sagelang.github.io/sage-playground
Features
- Editable code — write any Sage code in the browser
- Live output — see `print()` output and yield values immediately
- Syntax highlighting — keywords, types, strings, numbers, builtins, and comments
- Example programs — Hello World, Counter Loop, String Operations, Fibonacci
- Keyboard shortcuts — Ctrl/Cmd+Enter to run, Tab to indent
How It Works
The playground uses a tree-walking interpreter (sage-playground-engine) compiled to WebAssembly. It shares the same parser as the full Sage compiler but interprets the AST directly instead of generating Rust code.
The interpreter supports:
- Variables, assignments, and scoping
- Functions (user-defined and standard library)
- Control flow: `if`/`else`, `while`, `for`, `loop`, `break`, `return`
- Records, enums, tuples, and pattern matching
- String operations and interpolation
- `print()`, `trace()`, and `yield()`
- Infinite loop protection (1M step limit)
What’s Not Supported
The playground interpreter does not support features that require external services:
- `divine` / `infer` (LLM calls)
- Tool calls (`Http`, `Database`, `Fs`, `Shell`)
- Agent spawning (`summon`, `await`)
- Supervisors and protocols
- Persistence (`@persistent`)
These features work in the full compiled Sage — the playground focuses on exploring the core language.
Oswyn — Your Sage Companion
Oswyn is a browser-based AI assistant that knows everything about the Sage programming language. Ask questions about syntax, agents, LLM integration, tools, testing, supervision trees, and more — and get working code examples in return.
Features
- Ask questions about any part of Sage and get working code examples
- Conversation memory across sessions (stored in your browser’s localStorage)
- Supports OpenAI, Anthropic, Ollama, and any OpenAI-compatible API
- Runs entirely in your browser — no backend, no data collection
- Your API key never leaves your browser
Setup
- Open sagelang.github.io/oswyn
- Click the settings icon and enter your API key
- Start chatting
Example Questions
- “How do I create my first Sage agent?”
- “Explain the `divine` expression and LLM integration”
- “How do supervision trees work in Sage?”
- “Show me how to use the Http tool”
- “How do I mock LLM calls in tests?”
Privacy
Oswyn runs entirely in your browser. Your API key and conversation history are stored in localStorage and are never sent to any server other than your chosen LLM provider.
Oswyn in the CLI
You’ll also encounter Oswyn in the Sage CLI. When you compile and run programs, Oswyn provides warm, encouraging feedback:
👻 Oswyn is pleased. Your program compiled successfully.
→ sage run examples/research.sg
👻 Oswyn is consulting the ancient texts...
→ divine() awaiting LLM response
Ward handles the stern compiler warnings. Oswyn handles the encouragement. Between them, you’re in good hands.
CLI Commands
The sage command-line tool compiles and runs Sage programs.
sage new
Create a new Sage project with scaffolding:
sage new my_project
This creates:
my_project/
├── grove.toml # Project manifest
└── src/
└── main.sg # Entry point with example code
Examples
# Create a new project
sage new my_agent
# Enter the project and run it
cd my_agent
sage run .
sage run
Compile and execute a Sage program:
sage run program.sg
Options
| Option | Description |
|---|---|
| `--release` | Build with optimizations |
| `-q, --quiet` | Minimal output |
| `--trace` | Enable tracing (emit trace events to stderr) |
| `--trace-file <path>` | Write trace events to a file (NDJSON format) |
Examples
# Run a program
sage run hello.sg
# Run with optimizations
sage run hello.sg --release
# Run quietly (only program output)
sage run hello.sg -q
# Run with tracing to stderr
sage run hello.sg --trace
# Run with tracing to a file
sage run hello.sg --trace-file trace.ndjson
sage build
Compile a Sage program to a native binary without running it:
sage build program.sg
Options
| Option | Description |
|---|---|
| `--release` | Build with optimizations |
| `-o, --output <dir>` | Output directory (default: hearth) |
| `--emit-rust` | Only generate Rust code, don’t compile |
| `--target <target>` | Compilation target: `native` (default), `web`, or `wasm` |
Examples
# Build a native binary
sage build hello.sg
# Build with optimizations
sage build hello.sg --release
# Custom output directory
sage build hello.sg -o ./out
# Generate Rust code only (for inspection)
sage build hello.sg --emit-rust
# Build for WebAssembly
sage build hello.sg --target web
Output Structure
After building a native target, you’ll find:
hearth/
hello/
main.rs # Generated Rust code
hello # Native binary (if not --emit-rust)
After building a WASM target (--target web):
pkg/
hello.js # JavaScript glue (wasm-bindgen)
hello_bg.wasm # WebAssembly binary
sage check
Type-check a Sage program without compiling or running:
sage check program.sg
This is useful for quick validation during development.
Examples
# Check for errors
sage check hello.sg
# Output on success:
# ✨ No errors in hello.sg
sage test
Run tests in a Sage project:
sage test .
This discovers all *_test.sg files, compiles them, and runs the tests.
Options
| Option | Description |
|---|---|
| `--filter <pattern>` | Only run tests matching the pattern |
| `--file <path>` | Run only tests in the specified file |
| `--serial` | Run all tests sequentially (not in parallel) |
| `-v, --verbose` | Show detailed failure output |
| `--no-colour` | Disable coloured output |
Examples
# Run all tests in the project
sage test .
# Run tests matching "auth"
sage test . --filter auth
# Run tests in a specific file
sage test . --file src/utils_test.sg
# Run tests sequentially (useful for debugging)
sage test . --serial
# Verbose output with failure details
sage test . --verbose
Output
🦉 Ward Running 3 tests from 2 files
PASS auth_test.sg::login succeeds with valid credentials
PASS auth_test.sg::login fails with invalid password
FAIL utils_test.sg::parse handles empty input
🦉 Ward test result: FAILED. 2 passed, 1 failed, 0 skipped [1.23s]
Exit Codes
| Code | Meaning |
|---|---|
| 0 | All tests passed |
| 1 | One or more tests failed |
sage add
Add a dependency to your project:
sage add package-name --git https://github.com/user/package
Options
| Option | Description |
|---|---|
| `--git <url>` | Git repository URL |
| `--path <path>` | Local path to the package (relative or absolute) |
| `--tag <tag>` | Git tag to use |
| `--branch <branch>` | Git branch to use |
| `--rev <rev>` | Git revision (commit SHA) to use |
Examples
# Add a git dependency
sage add mylib --git https://github.com/user/mylib
# Add a specific version
sage add mylib --git https://github.com/user/mylib --tag v1.0.0
# Add a local path dependency (for development)
sage add mylib --path ./path/to/mylib
# Add a sibling directory
sage add shared --path ../shared-lib
Path Dependencies
Path dependencies are useful for:
- Monorepo setups where packages are in the same repository
- Local development of dependencies
- Testing changes before publishing
Path dependencies are resolved relative to the project root (where grove.toml is located).
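After `sage add`, the dependency presumably lands in grove.toml. The exact schema is not documented here; the following is only a sketch, assuming a Cargo-style dependencies table:

```toml
[project]
name = "my-steward"

# Hypothetical layout — field names follow Cargo conventions, not a
# documented Sage schema.
[dependencies]
mylib = { git = "https://github.com/user/mylib", tag = "v1.0.0" }
shared = { path = "../shared-lib" }
```

Whatever the concrete shape, the resolved versions are pinned in grove.lock (see `sage update` below).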
sage update
Update dependencies to their latest versions:
sage update
This fetches the latest commits for git dependencies and updates grove.lock.
sage trace
Analyse trace files generated by sage run --trace-file:
sage trace <subcommand> <file>
Subcommands
| Subcommand | Description |
|---|---|
| `pretty` | Pretty-print trace events in human-readable format |
| `summary` | Show summary with agent timeline, totals, and durations |
| `filter --agent <name>` | Filter trace events by agent name |
| `divine` | Show all LLM inference calls with durations |
| `cost` | Estimate token costs from inference calls (experimental) |
Examples
# Pretty-print all trace events
sage trace pretty trace.ndjson
# Get a summary of what happened
sage trace summary trace.ndjson
# Filter events for a specific agent
sage trace filter trace.ndjson --agent Researcher
# See all LLM calls
sage trace divine trace.ndjson
# Estimate costs (experimental)
sage trace cost trace.ndjson
Trace File Format
Trace files use NDJSON (newline-delimited JSON) format. Each line is a JSON object representing an event:
{"t":0,"kind":"agent_spawn","agent":"Main","id":"a1"}
{"t":1,"kind":"infer_start","agent":"Main","model":"gpt-4"}
{"t":150,"kind":"infer_end","agent":"Main","duration_ms":149}
{"t":151,"kind":"agent_emit","agent":"Main","value":"Hello"}
sage tools
Manage and inspect MCP tool servers configured in grove.toml:
sage tools <subcommand>
Subcommands
| Subcommand | Description |
|---|---|
| `list` | List configured MCP tools from grove.toml |
| `inspect --stdio <cmd>` | Inspect a stdio server’s tool manifest |
| `inspect --http <url>` | Inspect an HTTP server’s tool manifest |
| `generate --stdio <cmd> -o <file>` | Generate Sage tool declarations from a server |
| `generate --http <url> -o <file>` | Generate from an HTTP server |
Examples
# List tools configured in grove.toml
sage tools list
# Inspect a server to see available tools
sage tools inspect --stdio "npx -y @modelcontextprotocol/server-github"
sage tools inspect --http "https://mcp.example.com/mcp"
# Generate Sage declarations from a server manifest
sage tools generate --stdio "npx -y @modelcontextprotocol/server-github" -o src/tools/github.sg
Verification
Use sage check --verify-tools to verify that declared tool signatures match the actual MCP server manifest:
sage check --verify-tools
sage sense
Start the Language Server Protocol (LSP) server for editor integration:
sage sense
This command starts the Sage language server on stdin/stdout. It’s typically invoked automatically by editor extensions (Zed, VS Code) rather than manually.
Features
The language server provides:
- Real-time parse error reporting
- Type checking diagnostics
- Undefined variable detection
- All compiler error codes
Manual Usage
For editors without a Sage extension, configure the LSP client to run sage sense as the language server command.
Example for generic LSP configuration:
{
"languageId": "sage",
"command": "sage",
"args": ["sense"],
"fileExtensions": [".sg"]
}
Global Options
| Option | Description |
|---|---|
| `-h, --help` | Show help information |
| `-V, --version` | Show version |
Exit Codes
| Code | Meaning |
|---|---|
| 0 | Success |
| 1 | Compilation error (parse, type, or codegen) |
| Other | Program exit code (when using sage run) |
Compilation Modes
Sage automatically selects the fastest compilation mode:
Pre-compiled Toolchain (Default)
When installed via the install script or release binaries, Sage includes a pre-compiled Rust toolchain. This provides fast compilation without requiring Rust to be installed.
Cargo Fallback
If no pre-compiled toolchain is found, Sage falls back to using cargo. This requires Rust to be installed but allows compilation on any platform.
The output will indicate which mode was used:
✨ Done Compiled hello.sg in 0.42s # Pre-compiled toolchain
✨ Done Compiled hello.sg (cargo) in 2.31s # Cargo fallback
Environment Variables
Sage uses environment variables to configure LLM integration and the compiler.
LLM Configuration
These variables configure the divine expression.
SAGE_API_KEY
Required for LLM features. Your API key for the LLM provider.
export SAGE_API_KEY="sk-..."
SAGE_LLM_URL
Base URL for the LLM API. Defaults to OpenAI.
# OpenAI (default)
export SAGE_LLM_URL="https://api.openai.com/v1"
# Ollama (local)
export SAGE_LLM_URL="http://localhost:11434/v1"
# Azure OpenAI
export SAGE_LLM_URL="https://your-resource.openai.azure.com/openai/deployments/your-deployment"
# Other OpenAI-compatible providers
export SAGE_LLM_URL="https://api.together.xyz/v1"
SAGE_MODEL
Which model to use. Default: gpt-4o-mini
export SAGE_MODEL="gpt-4o"
SAGE_MAX_TOKENS
Maximum tokens per response. Default: 1024
export SAGE_MAX_TOKENS="2048"
SAGE_TIMEOUT_MS
Request timeout in milliseconds. Default: 30000 (30 seconds)
export SAGE_TIMEOUT_MS="60000"
SAGE_INFER_RETRIES
Maximum retries for structured output parsing. When divine returns a type other than String, the runtime parses the LLM’s response as JSON. If parsing fails, it retries with error feedback. Default: 3
export SAGE_INFER_RETRIES="5"
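To illustrate when these retries kick in, here is a hedged sketch of a typed divine call. The `record` declaration and the typed-binding form are assumptions extrapolated from the description above, not verbatim from this reference:

```sage
// Assumed record syntax — shown only to illustrate structured output.
record Sentiment {
    label: String,
    score: Float
}

agent Classifier {
    text: String
    on start {
        // The result type is not String, so the runtime parses the LLM's
        // response as JSON, retrying up to SAGE_INFER_RETRIES times with
        // error feedback if parsing fails.
        let s: Sentiment = try divine("Classify the sentiment of: {self.text}");
        yield(s.label);
    }
    on error(e) {
        yield("unknown");
    }
}
```

Raising the retry limit trades latency for robustness against models that emit malformed JSON.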
Tool Configuration
These variables configure built-in tools used by agents.
SAGE_HTTP_TIMEOUT
HTTP request timeout in seconds. Default: 30
export SAGE_HTTP_TIMEOUT="60"
SAGE_DATABASE_URL
Database connection URL for the Database tool. Required when using database features.
# SQLite
export SAGE_DATABASE_URL="sqlite:./data.db"
export SAGE_DATABASE_URL="sqlite::memory:"
# PostgreSQL
export SAGE_DATABASE_URL="postgres://user:password@localhost/dbname"
# MySQL
export SAGE_DATABASE_URL="mysql://user:password@localhost/dbname"
SAGE_FS_ROOT
Root directory for Fs tool operations. All file paths are relative to this directory. Default: . (current directory)
export SAGE_FS_ROOT="/var/data/myapp"
Observability
SAGE_TRACE
Enable trace output to stderr. Set to 1 to enable.
export SAGE_TRACE=1
SAGE_TRACE_FILE
Write trace output to a file instead of stderr.
export SAGE_TRACE_FILE="trace.log"
Compiler Configuration
SAGE_TOOLCHAIN
Override the path to the pre-compiled toolchain. Normally this is detected automatically.
export SAGE_TOOLCHAIN="/path/to/toolchain"
The toolchain directory should contain:
- `bin/rustc` — the Rust compiler
- `libs/` — pre-compiled runtime libraries
Using .env Files
Sage automatically loads .env files from the current directory:
# .env
SAGE_API_KEY=sk-...
SAGE_MODEL=gpt-4o
SAGE_MAX_TOKENS=2048
This is useful for per-project configuration and keeping secrets out of your shell history.
Provider Quick Reference
OpenAI
export SAGE_API_KEY="sk-..."
export SAGE_MODEL="gpt-4o"
Ollama (Local)
export SAGE_LLM_URL="http://localhost:11434/v1"
export SAGE_MODEL="llama2"
# No API key needed
Azure OpenAI
export SAGE_LLM_URL="https://your-resource.openai.azure.com/openai/deployments/your-deployment"
export SAGE_API_KEY="your-azure-key"
export SAGE_MODEL="gpt-4"
Together AI
export SAGE_LLM_URL="https://api.together.xyz/v1"
export SAGE_API_KEY="your-key"
export SAGE_MODEL="meta-llama/Llama-3-70b-chat-hf"
Error Messages
Sage provides helpful error messages with source locations and suggestions.
Parse Errors
Unexpected token
error: unexpected token
--> hello.sg:5:10
|
5 | let x =
| ^ expected expression
Fix: Complete the expression or remove the incomplete statement.
Missing semicolon
error: expected ';'
--> hello.sg:3:15
|
3 | let x = 42
| ^ expected ';' after statement
Fix: Add a semicolon at the end of the statement.
Unclosed brace
error: unclosed '{'
--> hello.sg:2:12
|
2 | on start {
| ^ this '{' was never closed
Fix: Add the matching closing brace }.
Type Errors
Type mismatch
error: type mismatch
--> hello.sg:7:20
|
7 | let x: Int = "hello";
| ^^^^^^^ expected Int, found String
Fix: Use a value of the correct type or change the type annotation.
Undefined variable
error: undefined variable 'foo'
--> hello.sg:5:10
|
5 | print(foo);
| ^^^ not found in this scope
Fix: Define the variable before using it, or check for typos.
Unknown agent
error: unknown agent 'Worker'
--> hello.sg:10:22
|
10 | let w = summon Worker {};
| ^^^^^^ agent not defined
Fix: Define the agent or check the spelling.
Missing field
error: missing field 'name'
--> hello.sg:15:22
|
15 | let g = summon Greeter {};
| ^^^^^^^^^ field 'name' not provided
Fix: Provide all required fields when spawning:
let g = summon Greeter { name: "World" };
Unhandled fallible operation (E013)
error[E013]: fallible operation must be handled
--> hello.sg:5:15
|
5 | let x = divine("prompt");
| ^^^^^^^^^^^^^^^ this can fail
|
= help: use 'try' to propagate or 'catch' to handle inline
Fix: Handle the error with try or catch:
// Propagate to on error handler
let x = try divine("prompt");
// Or handle inline
let x = catch divine("prompt") {
"fallback"
};
Wrong message type
error: type mismatch in send
--> hello.sg:8:10
|
8 | try send(worker, "hello");
| ^^^^^^^^^^^^^^^^ worker expects WorkerMsg, got String
Fix: Send a value of the type the agent accepts (defined by its receives clause).
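For illustration, a sketch of an agent with a declared message type — the placement of the `receives` clause and the `enum` declaration syntax are assumptions here, inferred from this error's description:

```sage
// Assumed enum syntax; variants are accessed with dot notation elsewhere
// in these docs (e.g. Command.Shutdown).
enum WorkerMsg {
    Job(String),
    Stop
}

agent Worker receives WorkerMsg {
    on start {
        yield(0);
    }
    on message(m: WorkerMsg) {
        trace("received a WorkerMsg");
    }
}
```

With this declaration, `try send(worker, WorkerMsg.Job("build"))` type-checks, while sending a bare `String` triggers the error above.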
Runtime Errors
API key not set
error: SAGE_API_KEY environment variable not set
Fix: Set your API key:
export SAGE_API_KEY="sk-..."
LLM timeout
error: LLM request timed out after 30000ms
Fix: Increase the timeout or use a faster model:
export SAGE_TIMEOUT_MS="60000"
Connection refused
error: failed to connect to LLM API
Fix: Check that SAGE_LLM_URL is correct and the service is running.
Compilation Errors
Rust not found (cargo mode)
error: Failed to run cargo build. Is Rust installed?
This happens when using the cargo fallback without Rust installed.
Fix: Either:
- Install Sage using the install script (includes pre-compiled toolchain)
- Install Rust from https://rustup.rs
Linker not found
error: linker 'cc' not found
Fix: Install a C compiler:
# Ubuntu/Debian
sudo apt install gcc
# macOS
xcode-select --install
Getting Help
If you encounter an error not listed here:
- Check the GitHub issues
- Open a new issue with:
- The error message
- Your Sage code (minimal example)
- Your environment (OS, Sage version)
Standard Library Reference
All standard library functions are available in the prelude without import.
String Functions
Construction
str(value) -> String
Convert any value to its string representation.
str(42) // "42"
str(true) // "true"
str([1, 2, 3]) // "[1, 2, 3]"
repeat(s, n) -> String
Repeat a string n times.
repeat("ab", 3) // "ababab"
repeat("-", 10) // "----------"
Inspection
len(s) -> Int
Get the length of a string in characters (Unicode-aware).
len("hello") // 5
len("héllo") // 5 (not bytes!)
len("") // 0
is_empty(s) -> Bool
Check if a string is empty.
is_empty("") // true
is_empty("hello") // false
contains(s, sub) -> Bool
Check if a string contains a substring.
contains("hello world", "world") // true
contains("hello", "xyz") // false
starts_with(s, prefix) -> Bool
Check if a string starts with a prefix.
starts_with("hello", "hel") // true
starts_with("hello", "world") // false
ends_with(s, suffix) -> Bool
Check if a string ends with a suffix.
ends_with("hello.txt", ".txt") // true
ends_with("hello", "world") // false
index_of(s, sub) -> Option<Int>
Find the index of a substring. Returns None if not found.
index_of("hello", "ll") // Some(2)
index_of("hello", "xyz") // None
Transformation
trim(s) -> String
Remove whitespace from both ends.
trim(" hello ") // "hello"
trim("\n\thi\n") // "hi"
trim_start(s) -> String
Remove whitespace from the start.
trim_start(" hello") // "hello"
trim_end(s) -> String
Remove whitespace from the end.
trim_end("hello ") // "hello"
to_upper(s) -> String
Convert to uppercase.
to_upper("hello") // "HELLO"
to_lower(s) -> String
Convert to lowercase.
to_lower("HELLO") // "hello"
replace(s, from, to) -> String
Replace all occurrences of a substring.
replace("hello world", "world", "sage") // "hello sage"
replace("aaa", "a", "b") // "bbb"
replace_first(s, from, to) -> String
Replace the first occurrence of a substring.
replace_first("aaa", "a", "b") // "baa"
Splitting and Joining
split(s, delim) -> List<String>
Split a string by a delimiter.
split("a,b,c", ",") // ["a", "b", "c"]
split("hello", "") // ["h", "e", "l", "l", "o"]
lines(s) -> List<String>
Split a string into lines.
lines("a\nb\nc") // ["a", "b", "c"]
join(parts, sep) -> String
Join strings with a separator.
join(["a", "b", "c"], ", ") // "a, b, c"
join(["hello"], "-") // "hello"
Slicing
slice(s, start, end) -> String
Extract a substring by character indices (Unicode-aware).
slice("hello", 1, 4) // "ell"
slice("héllo", 0, 3) // "hél"
chars(s) -> List<String>
Split a string into individual characters.
chars("hello") // ["h", "e", "l", "l", "o"]
Parsing
parse_int(s) -> Int fails
Parse a string as an integer.
let n = try parse_int("42"); // 42
let n = try parse_int("-10"); // -10
let n = try parse_int("abc"); // Error!
parse_float(s) -> Float fails
Parse a string as a float.
let f = try parse_float("3.14"); // 3.14
let f = try parse_float("42"); // 42.0
parse_bool(s) -> Bool fails
Parse a string as a boolean.
let b = try parse_bool("true"); // true
let b = try parse_bool("false"); // false
List Functions
Construction
range(start, end) -> List<Int>
Create a list of integers from start (inclusive) to end (exclusive).
range(0, 5) // [0, 1, 2, 3, 4]
range(1, 4) // [1, 2, 3]
range_step(start, end, step) -> List<Int>
Create a list with a custom step.
range_step(0, 10, 2) // [0, 2, 4, 6, 8]
range_step(10, 0, -2) // [10, 8, 6, 4, 2]
Inspection
len(list) -> Int
Get the length of a list.
len([1, 2, 3]) // 3
len([]) // 0
is_empty(list) -> Bool
Check if a list is empty.
is_empty([]) // true
is_empty([1, 2]) // false
contains(list, value) -> Bool
Check if a list contains a value.
contains([1, 2, 3], 2) // true
contains([1, 2, 3], 5) // false
first(list) -> Option<T>
Get the first element.
first([1, 2, 3]) // Some(1)
first([]) // None
last(list) -> Option<T>
Get the last element.
last([1, 2, 3]) // Some(3)
last([]) // None
get(list, index) -> Option<T>
Get an element by index.
get([1, 2, 3], 1) // Some(2)
get([1, 2, 3], 10) // None
Transformation
map(list, f) -> List<U>
Transform each element.
map([1, 2, 3], |x: Int| x * 2) // [2, 4, 6]
filter(list, f) -> List<T>
Keep elements that satisfy a predicate.
filter([1, 2, 3, 4], |x: Int| x > 2) // [3, 4]
reduce(list, init, f) -> U
Reduce a list to a single value.
reduce([1, 2, 3], 0, |acc: Int, x: Int| acc + x) // 6
flat_map(list, f) -> List<U>
Map and flatten.
flat_map([1, 2], |x: Int| [x, x * 10]) // [1, 10, 2, 20]
flatten(list) -> List<T>
Flatten a list of lists.
flatten([[1, 2], [3, 4]]) // [1, 2, 3, 4]
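These transformations compose naturally. As a sketch (variable names are illustrative), a small pipeline that doubles a range, keeps the larger values, and sums them:

```
let values = range(1, 6);                                  // [1, 2, 3, 4, 5]
let doubled = map(values, |x: Int| x * 2);                 // [2, 4, 6, 8, 10]
let large = filter(doubled, |x: Int| x > 5);               // [6, 8, 10]
let total = reduce(large, 0, |acc: Int, x: Int| acc + x);  // 24
```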
Ordering
sort(list) -> List<T>
Sort a list in ascending order.
sort([3, 1, 2]) // [1, 2, 3]
reverse(list) -> List<T>
Reverse a list.
reverse([1, 2, 3]) // [3, 2, 1]
Slicing
slice(list, start, end) -> List<T>
Extract a sublist.
slice([1, 2, 3, 4, 5], 1, 4) // [2, 3, 4]
take(list, n) -> List<T>
Take the first n elements.
take([1, 2, 3, 4], 2) // [1, 2]
drop(list, n) -> List<T>
Drop the first n elements.
drop([1, 2, 3, 4], 2) // [3, 4]
Aggregation
any(list, f) -> Bool
Check if any element satisfies a predicate.
any([1, 2, 3], |x: Int| x > 2) // true
all(list, f) -> Bool
Check if all elements satisfy a predicate.
all([1, 2, 3], |x: Int| x > 0) // true
count(list, f) -> Int
Count elements satisfying a predicate.
count([1, 2, 3, 4], |x: Int| x > 2) // 2
sum(list) -> Int
Sum integers.
sum([1, 2, 3]) // 6
sum_float(list) -> Float
Sum floats.
sum_float([1.5, 2.5]) // 4.0
Mutation Helpers
push(list, value) -> List<T>
Add an element to the end (returns new list).
push([1, 2], 3) // [1, 2, 3]
concat(a, b) -> List<T>
Concatenate two lists.
concat([1, 2], [3, 4]) // [1, 2, 3, 4]
unique(list) -> List<T>
Remove duplicates.
unique([1, 2, 2, 3, 1]) // [1, 2, 3]
zip(a, b) -> List<(T, U)>
Combine two lists into pairs.
zip([1, 2], ["a", "b"]) // [(1, "a"), (2, "b")]
enumerate(list) -> List<(Int, T)>
Pair each element with its index.
enumerate(["a", "b"]) // [(0, "a"), (1, "b")]
Math Functions
Basic
abs(n) -> Int
Absolute value of an integer.
abs(-5) // 5
abs(5) // 5
abs_float(n) -> Float
Absolute value of a float.
abs_float(-3.14) // 3.14
min(a, b) -> Int
Minimum of two integers.
min(3, 7) // 3
max(a, b) -> Int
Maximum of two integers.
max(3, 7) // 7
min_float(a, b) -> Float
Minimum of two floats.
min_float(1.5, 2.5) // 1.5
max_float(a, b) -> Float
Maximum of two floats.
max_float(1.5, 2.5) // 2.5
clamp(value, low, high) -> Int
Clamp a value to a range.
clamp(5, 0, 10) // 5
clamp(-5, 0, 10) // 0
clamp(15, 0, 10) // 10
Rounding
floor(n) -> Int
Round down to nearest integer.
floor(3.7) // 3
floor(-3.7) // -4
ceil(n) -> Int
Round up to nearest integer.
ceil(3.2) // 4
ceil(-3.2) // -3
round(n) -> Int
Round to nearest integer.
round(3.5) // 4
round(3.4) // 3
Powers and Roots
pow(base, exp) -> Int
Integer power.
pow(2, 10) // 1024
pow(3, 3) // 27
pow_float(base, exp) -> Float
Float power.
pow_float(2.0, 0.5) // 1.414...
sqrt(n) -> Float
Square root.
sqrt(16.0) // 4.0
sqrt(2.0) // 1.414...
log(n) -> Float
Natural logarithm.
log(E) // 1.0
log2(n) -> Float
Base-2 logarithm.
log2(8.0) // 3.0
log10(n) -> Float
Base-10 logarithm.
log10(100.0) // 2.0
Conversion
int_to_float(n) -> Float
Convert integer to float.
int_to_float(42) // 42.0
float_to_int(n) -> Int
Convert float to integer (truncates).
float_to_int(3.9) // 3
Constants
const PI: Float = 3.141592653589793
const E: Float = 2.718281828459045
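As a short sketch combining these constants with the functions above (the area computation is our own example):

```
let area = PI * pow_float(2.0, 2.0); // area of a circle with radius 2.0
let one = log(E);                    // 1.0
```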
I/O Functions
File Operations
read_file(path) -> String fails
Read entire file contents.
let contents = try read_file("data.txt");
write_file(path, content) fails
Write string to file (creates or truncates).
try write_file("output.txt", "Hello, world!");
append_file(path, content) fails
Append string to file.
try append_file("log.txt", "New entry\n");
file_exists(path) -> Bool
Check if a file or directory exists.
if file_exists("config.json") {
// ...
}
delete_file(path) fails
Delete a file.
try delete_file("temp.txt");
list_dir(path) -> List<String> fails
List directory contents.
let files = try list_dir(".");
make_dir(path) fails
Create a directory (and parents).
try make_dir("output/data");
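A sketch combining these operations (file names are illustrative; assumes make_dir succeeds if the directory can be created):

```
// Copy a file into a fresh directory, then record the action in a log.
let data = try read_file("data.txt");
try make_dir("output");
try write_file("output/copy.txt", data);
try append_file("log.txt", "copied data.txt\n");
```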
Standard Streams
read_line() -> String fails
Read a line from stdin.
print("Enter your name: ");
let name = try read_line();
read_all() -> String fails
Read all input from stdin until EOF.
let input = try read_all();
Time Functions
now_ms() -> Int
Current time in milliseconds since Unix epoch.
let timestamp = now_ms();
now_s() -> Int
Current time in seconds since Unix epoch.
let timestamp = now_s();
format_timestamp(ms, fmt) -> String
Format a timestamp.
format_timestamp(now_ms(), "%Y-%m-%d") // "2024-01-15"
format_timestamp(now_ms(), "%H:%M:%S") // "10:30:45"
Format codes:
- %Y — year (4 digits)
- %m — month (01-12)
- %d — day (01-31)
- %H — hour (00-23)
- %M — minute (00-59)
- %S — second (00-59)
- %F — ISO date (YYYY-MM-DD)
- %T — ISO time (HH:MM:SS)
parse_timestamp(s, fmt) -> Int fails
Parse a timestamp string.
let ms = try parse_timestamp("2024-01-15 10:30:00 +0000", "%Y-%m-%d %H:%M:%S %z");
Constants
const MS_PER_SECOND: Int = 1000
const MS_PER_MINUTE: Int = 60000
const MS_PER_HOUR: Int = 3600000
const MS_PER_DAY: Int = 86400000
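A sketch measuring elapsed time with now_ms and the constants above (assumes integer division for the conversion):

```
let start = now_ms();
// ... do some work ...
let elapsed_ms = now_ms() - start;
let elapsed_s = elapsed_ms / MS_PER_SECOND;
print("Took " ++ str(elapsed_s) ++ "s");
```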
Option Functions
is_some(opt) -> Bool
Check if option has a value.
is_some(Some(42)) // true
is_some(None) // false
is_none(opt) -> Bool
Check if option is empty.
is_none(None) // true
is_none(Some(42)) // false
unwrap(opt) -> T fails
Extract value or fail.
let x = try unwrap(Some(42)); // 42
let y = try unwrap(None); // Error!
unwrap_or(opt, default) -> T
Extract value or return default.
unwrap_or(Some(42), 0) // 42
unwrap_or(None, 0) // 0
unwrap_or_else(opt, f) -> T
Extract value or compute default.
unwrap_or_else(None, || expensive_default())
map_option(opt, f) -> Option<U>
Transform the value if present.
map_option(Some(2), |x: Int| x * 2) // Some(4)
map_option(None, |x: Int| x * 2) // None
String Utilities
str_truncate(s, max_len) -> String
Truncate a string to at most max_len characters, appending "..." if truncated; the result, including the "..." suffix, fits within max_len. Unicode-aware.
str_truncate("hello", 10) // "hello" (no truncation)
str_truncate("hello world", 8) // "hello..." (5 chars + "...")
str_truncate("hello", 5) // "hello" (exact length, no truncation)
str_truncate("héllo wörld", 8) // "héllo..." (Unicode-aware)
Environment Functions
env(key) -> Option<String>
Get an environment variable. Returns None if not set.
let home = env("HOME"); // Some("/Users/alice")
let missing = env("NONEXISTENT"); // None
env_or(key, default) -> String
Get an environment variable, returning a default if not set.
let port = env_or("PORT", "8080"); // "8080" if PORT not set
let home = env_or("HOME", "/home/user"); // actual HOME value
JSON Functions
json_parse(s) -> String fails
Validate a JSON string, returning it unchanged if valid; fails on malformed JSON.
let json = try json_parse("{\"name\": \"Alice\"}");
json_get(json, key) -> Option<String>
Get a field as a string.
json_get("{\"name\": \"Alice\"}", "name") // Some("Alice")
json_get("{\"age\": 30}", "name") // None
json_get_int(json, key) -> Option<Int>
Get a field as an integer.
json_get_int("{\"age\": 30}", "age") // Some(30)
json_get_float(json, key) -> Option<Float>
Get a field as a float.
json_get_float("{\"price\": 9.99}", "price") // Some(9.99)
json_get_bool(json, key) -> Option<Bool>
Get a field as a boolean.
json_get_bool("{\"active\": true}", "active") // Some(true)
json_get_list(json, key) -> Option<List<String>>
Get a field as a list of strings.
json_get_list("{\"tags\": [\"a\", \"b\"]}", "tags") // Some(["a", "b"])
json_stringify(value) -> String
Convert a value to JSON string.
json_stringify("hello") // "\"hello\""
json_escape(s) -> String
Escape JSON special characters in a string without wrapping in quotes.
json_escape("hello") // "hello"
json_escape("say \"hi\"") // "say \\\"hi\\\""
json_escape("line\nbreak") // "line\\nbreak"
json_escape("tab\there") // "tab\\there"
Generic Deserialization
Note: Generic from_json<T> deserialization is not currently available in Sage. This would require runtime type information, which isn’t supported by the current architecture where Sage compiles to Rust with monomorphised generics.
Workaround: Use the json_get_* functions to extract typed fields from JSON strings:
// Instead of: let user: User = from_json(json);
// Do this:
let name = unwrap_or(json_get(json, "name"), "");
let age = unwrap_or(json_get_int(json, "age"), 0);
let active = unwrap_or(json_get_bool(json, "active"), false);
// Build your record manually
let user = User { name: name, age: age, active: active };
For complex nested structures, extract fields level by level or use Oracle<T> with LLM parsing if appropriate.
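As a sketch of level-by-level extraction (the field names are illustrative, and this assumes json_get returns an object-valued field as its raw JSON text):

```
let json = "{\"user\": {\"name\": \"Alice\", \"age\": 30}}";
// First pull the inner object out as a JSON string...
let user_json = unwrap_or(json_get(json, "user"), "{}");
// ...then extract its typed fields.
let name = unwrap_or(json_get(user_json, "name"), "");
let age = unwrap_or(json_get_int(user_json, "age"), 0);
```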
Map Functions
map_get(map, key) -> Option<V>
Get a value by key.
let ages = {"alice": 30, "bob": 25};
map_get(ages, "alice") // Some(30)
map_get(ages, "charlie") // None
map_set(map, key, value)
Set a key-value pair (mutates map).
let ages = {"alice": 30};
map_set(ages, "bob", 25);
map_has(map, key) -> Bool
Check if key exists.
map_has({"a": 1}, "a") // true
map_has({"a": 1}, "b") // false
map_delete(map, key)
Remove a key (mutates map).
let m = {"a": 1, "b": 2};
map_delete(m, "a");
map_keys(map) -> List<K>
Get all keys.
map_keys({"a": 1, "b": 2}) // ["a", "b"]
map_values(map) -> List<V>
Get all values.
map_values({"a": 1, "b": 2}) // [1, 2]
Output
print(message)
Print to stdout with newline.
print("Hello, world!");
print("Value: " ++ str(42));
grove.toml Reference
The grove.toml file is the project manifest for Sage projects. It configures the project name, entry point, dependencies, persistence, supervision, and extern functions.
[project]
Basic project metadata.
[project]
name = "my_project"
entry = "src/main.sg"
| Field | Required | Description |
|---|---|---|
| name | Yes | Project name (used for the generated binary) |
| entry | Yes | Path to the entry point .sg file |
[dependencies]
Git-based or local path dependencies for multi-package projects.
[dependencies]
mylib = { git = "https://github.com/user/mylib" }
utils = { git = "https://github.com/user/utils", tag = "v1.0.0" }
local-lib = { path = "../shared-lib" }
| Field | Description |
|---|---|
| git | Git repository URL |
| path | Local path (relative to project root) |
| tag | Git tag |
| branch | Git branch |
| rev | Git commit SHA |
Manage dependencies with sage add and sage update.
[persistence]
Configure automatic checkpointing for @persistent agent fields.
[persistence]
backend = "sqlite"
path = ".sage/checkpoints.db"
| Field | Default | Description |
|---|---|---|
| backend | "sqlite" | Storage backend: "sqlite", "postgres", "file" |
| path | ".sage/checkpoints.db" | Path for SQLite/file backends |
| url | — | Connection URL for PostgreSQL backend |
Backend examples
# SQLite (default)
[persistence]
backend = "sqlite"
path = ".sage/checkpoints.db"
# PostgreSQL
[persistence]
backend = "postgres"
url = "postgres://user:password@localhost/mydb"
# File-based (JSON files)
[persistence]
backend = "file"
path = ".sage/state"
[supervision]
Configure supervision tree parameters.
[supervision]
max_restarts = 5
restart_window_s = 60
| Field | Default | Description |
|---|---|---|
| max_restarts | 3 | Maximum restarts before circuit breaker trips |
| restart_window_s | 5 | Time window (seconds) for counting restarts |
When max_restarts is exceeded within restart_window_s, the supervisor stops all children and shuts down.
[extern]
Configure Rust FFI for extern function declarations.
[extern]
modules = ["src/sage_extern.rs"]
[extern.dependencies]
chrono = "0.4"
reqwest = { version = "0.12", features = ["blocking"] }
| Field | Description |
|---|---|
| modules | List of Rust source files to compile and link |
[extern.dependencies]
Additional Cargo dependencies needed by your extern Rust code. Uses standard Cargo dependency syntax:
[extern.dependencies]
# Simple version
serde = "1.0"
# With features
tokio = { version = "1", features = ["full"] }
# Git dependency
my-crate = { git = "https://github.com/user/crate" }
These are added to the generated Cargo.toml alongside sage-runtime.
[tools.X]
Configure MCP (Model Context Protocol) tool servers. Each tool gets its own [tools.X] section where X matches the tool declaration name in your Sage code.
Stdio Transport
[tools.Github]
transport = "stdio"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-github"]
timeout_ms = 30000
connect_timeout_ms = 10000
[tools.Github.env]
GITHUB_PERSONAL_ACCESS_TOKEN = "$GITHUB_TOKEN"
| Field | Default | Description |
|---|---|---|
| transport | — | "stdio" for subprocess servers |
| command | — | Executable to launch |
| args | [] | Command arguments |
| timeout_ms | 30000 | Per-call timeout in milliseconds |
| connect_timeout_ms | 10000 | Connection timeout in milliseconds |
Environment variables in [tools.X.env] starting with $ are resolved from the host environment.
HTTP Transport
[tools.Slack]
transport = "http"
url = "https://mcp.slack.example.com/mcp"
timeout_ms = 30000
auth = "bearer"
token_env = "SLACK_MCP_TOKEN"
| Field | Default | Description |
|---|---|---|
| transport | — | "http" for remote servers |
| url | — | Server endpoint URL |
| auth | — | "bearer" or "oauth" |
| token_env | — | Environment variable name for bearer token |
| client_id_env | — | Environment variable for OAuth client ID |
| authorization_url | — | OAuth authorization endpoint |
| token_url | — | OAuth token endpoint |
| scopes | [] | OAuth scopes |
See MCP Integration for full documentation.
Complete Example
[project]
name = "webapp_steward"
entry = "src/main.sg"
[dependencies]
shared = { path = "../shared-lib" }
[persistence]
backend = "sqlite"
path = ".sage/checkpoints.db"
[supervision]
max_restarts = 5
restart_window_s = 60
[tools.Github]
transport = "stdio"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-github"]
[tools.Github.env]
GITHUB_PERSONAL_ACCESS_TOKEN = "$GITHUB_TOKEN"
[extern]
modules = ["src/sage_extern.rs"]
[extern.dependencies]
chrono = "0.4"