AI agents are too powerful for a prompt.
Give them a mission.

Jetty is managed infrastructure for agentic AI workflows. Ship workflows that execute, evaluate, and iterate until they’re right. Isolated sandboxes, full persistence, and an OpenAI-compatible API.

Get a demo
runbook.md — ETL Pipeline Agent
---
version: "1.0.0"
evaluation: rubric
agent: claude-code
snapshot: python312-uv
---

# ETL Pipeline Agent

## Objective
Fetch new events, enrich with
summaries, persist to output.

## Parameters
| source | {{source_table}} | raw.events |

## Step 1: Fetch Records
Query source_table for new rows...

## Step 2: Enrich & Transform
Apply Summarizer skill to each...

## Evaluation
| Completeness | 5: all | 1: gaps |
Pass ≥ 4.0. Max 3 retries.
Jetty Agent Execution Trace
etl-pipeline-agent v1.0.0
📋
Runbook loaded
Parsed 3 steps, 2 dependencies
✓ done
🗄️
db:query_skill
Fetching 2,847 new events
✓ done
🧠
summarizer_skill
Processing batch 12 of 29...
running
💾
write_results_skill
Awaiting enriched records
waiting
⚖️
rubric_eval_skill
Quality evaluation pending
waiting

How it works

From instruction to results in one API call.

Write a runbook. Call the API. The agent handles the entire pipeline autonomously — the runbook is your contract, the agent figures out the implementation.

1. Write a runbook

A markdown document that becomes the agent’s mission. Use the Jetty skill in your editor to help you write it — or start from a template. Not a prompt — a full spec with steps, tools, and evaluation criteria.

Jetty Skill · Markdown · Version control

2. Call the API

Send your runbook through the OpenAI-compatible completions endpoint. Jetty provisions an isolated sandbox, installs the agent, and it executes freely — shell, network, Python, browser automation. Full autonomy.

Jetty API · OpenAI-compatible · Sandboxed · Full autonomy
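Because the endpoint is OpenAI-compatible, the request has the familiar chat completions shape, with the runbook as the message body. The base URL, model name, and field values below are illustrative assumptions, not Jetty's documented values; any OpenAI-compatible client would work the same way.

```python
import json

# Hypothetical endpoint; check the Jetty docs for the real base URL.
JETTY_BASE_URL = "https://api.jetty.example/v1"

# A minimal runbook, abbreviated for illustration.
runbook = """\
---
version: "1.0.0"
evaluation: rubric
agent: claude-code
snapshot: python312-uv
---
# ETL Pipeline Agent
...
"""

# The runbook travels as the user message of a standard chat completions request.
payload = {
    "model": "claude-code",  # the agent named in the runbook frontmatter
    "messages": [{"role": "user", "content": runbook}],
    "stream": True,          # request real-time progress events
}

# POST json.dumps(payload) to f"{JETTY_BASE_URL}/chat/completions"
# with your API key; the streamed response carries the run's progress.
print(json.dumps(payload)[:40])
```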

3. Get results back

Output files, execution trajectory, and real-time progress via streaming or webhook. Every artifact persisted to cloud storage.

Jetty Platform · Streaming · Webhooks · Cloud storage
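A webhook receiver typically gates downstream work on the run's outcome before fetching artifacts. The payload shape below is a hypothetical sketch; the field names are illustrative, not Jetty's real schema.

```python
import json

# Hypothetical completion payload a webhook might deliver (illustrative only).
event = json.loads("""
{
  "run_id": "run_0042",
  "status": "passed",
  "score": 4.3,
  "artifacts": [
    "results/validation_report.json",
    "results/summary.md",
    "results/enriched_events.csv"
  ]
}
""")

# Only touch artifacts once the run has passed evaluation.
if event["status"] == "passed":
    for path in event["artifacts"]:
        print("download from cloud storage:", path)
```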

Runbooks

A runbook is a recipe for your AI agent.

Write once, run anywhere. Runbooks encode your agent’s objective, parameters, dependencies, and step-by-step logic in a single, portable Markdown file.

YAML frontmatter
Version, evaluation strategy (programmatic or rubric), agent, model, and snapshot environment.
Objective and output manifest
What the agent is doing and the exact files it must produce. The task isn’t complete until every file exists.
Parameters and dependencies
Template variables injected at runtime, plus tools and skills the agent needs. The runtime checks availability before execution.
Steps
Sequential plain-language instructions. Each step can run code, call tools, or invoke skills.
Evaluation and iteration
Rubric-based scoring or programmatic validation. If the agent fails, it retries with bounded iteration (typically 3 rounds).
Learn how to write runbooks →
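The pass rule used in the example runbook below (mean score of at least 4.0, no single criterion below 3, at most 3 retry rounds) reduces to a small check. This is a sketch of that logic under those assumptions, not Jetty's actual implementation.

```python
def passes(scores, threshold=4.0, floor=3):
    """scores maps each rubric criterion to a 1-5 rating."""
    mean = sum(scores.values()) / len(scores)
    return mean >= threshold and min(scores.values()) >= floor

# Bounded iteration: the agent retries until it passes or rounds run out.
MAX_ROUNDS = 3
print(passes({"completeness": 5, "quality": 4, "schema": 4}))  # mean 4.33 -> True
print(passes({"completeness": 5, "quality": 5, "schema": 2}))  # schema below floor -> False
```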
RUNBOOK-etl-pipeline.md
---
version: "1.0.0"
evaluation: rubric
agent: claude-code
model: claude-sonnet-4-6
snapshot: python312-uv
---
 
# ETL Pipeline Agent
 
## Objective
Fetch new events from the source database,
enrich each record with AI-generated summaries,
and persist the results to the output table.
Produce a validation report and summary.
 
## REQUIRED OUTPUT FILES
You MUST write all of the following files.
The task is NOT complete until every file
exists and is non-empty.
 
| File |
| --- |
| {{results_dir}}/validation_report.json |
| {{results_dir}}/summary.md |
| {{results_dir}}/enriched_events.csv |
 
## Parameters
| Parameter | Variable | Default | Role |
| --- | --- | --- | --- |
| Source table | {{source_table}} | raw.events | Input |
| Results dir | {{results_dir}} | /app/results | Output |
| Batch size | {{batch_size}} | 100 | Records per batch |
 
## Dependencies
| Dependency | Type | Required |
| --- | --- | --- |
| Database | Tool | Yes |
| Summarizer | Skill | Yes |
| CSV Writer | Tool | Yes |
 
## Step 1: Environment Setup
Create {{results_dir}} if it doesn't exist.
Verify database connectivity and
Summarizer skill availability.
 
## Step 2: Fetch Records
Query {{source_table}} for new rows since
last checkpoint. Process in batches of
{{batch_size}}.
 
## Step 3: Enrich & Transform
For each batch, invoke the Summarizer
skill to generate a summary for each
event record. Append to enriched dataset.
 
## Step 4: Write Results
Persist enriched records to
{{results_dir}}/enriched_events.csv.
 
## Evaluation
| # | Criterion | 5 (Pass) | 1 (Fail) |
| --- | --- | --- | --- |
| 1 | Completeness | All rows enriched | Missing rows |
| 2 | Quality | Summaries coherent | Gibberish |
| 3 | Schema | Valid CSV output | Malformed |
 
Pass if score ≥ 4.0, no criterion below 3.
 
## Iteration
If evaluation fails, review the Common
Fixes table and retry. Max 3 rounds.
 
| Failure | Fix |
| --- | --- |
| Missing rows | Re-fetch with offset |
| Bad summaries | Increase context window |
| Schema errors | Validate before write |
 
## Final Checklist
```bash
#!/bin/bash
for f in validation_report.json summary.md \
         enriched_events.csv; do
  [ -s "{{results_dir}}/$f" ] || exit 1
done
```

Agent Skill

One command. Any agent.

Give your AI agent access to the Jetty platform. Write runbooks, run workflows, and monitor results.

Claude Code · Cursor · VS Code · Windsurf · Zed · Gemini CLI · Codex CLI

Install for Claude Code

Instructions for other agents
$ claude plugin marketplace add jettyio/jettyio-skills
$ claude plugin install jetty@jetty

You can’t improve what you can’t see.

You can’t trust what you can’t trace.

Every Jetty run ships with a full execution trace, rubric-based evaluation scores, and output artifacts. Agents don’t just run — they self-evaluate against your quality criteria and iterate until they pass.

Ready to get started?

See Jetty in action with your team today.

Join the engineering teams already using Jetty to build the next generation of agentic AI workflows.

Request a demo

Free to start · No credit card required

Jetty — Managed infrastructure for agentic AI workflows