v1.0 — YAML-native LLM workflow orchestration

One action tips.
The rest cascade.

A framework for orchestrating AI agents into reliable, composable action chains. Define agents, wire them together, ship.

$pip install agent-actions
Why agent-actions

Everything agents need to act.

A complete toolkit for building reliable AI agent chains that actually work in production.

Action Composition

Define actions in YAML with explicit dependencies. The framework resolves execution order — each domino knows when to fall.
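A minimal sketch of that wiring, reusing the `actions` and `dependencies` keys from the Quick Start example below; the action names here are hypothetical:

```yaml
actions:
  - name: fetch_page            # hypothetical action: no dependencies, runs first
    prompt: $prompts.Fetch_Page

  - name: summarize_page        # falls only after fetch_page has completed
    dependencies: fetch_page
    prompt: $prompts.Summarize_Page
```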

Schema Validation

Every LLM output validated against declared schemas. Failed validations trigger auto-retry with error context.
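An action declares the contract its output must satisfy with the `schema` key shown in the Quick Start; the action and schema names below are hypothetical:

```yaml
actions:
  - name: extract_entities          # hypothetical action
    prompt: $prompts.Extract_Entities
    schema: entities_schema         # output validated; a failure triggers a reprompt
```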

Context Scoping

Control what each action sees: observe sends selected fields to the LLM, while passthrough carries data along the chain without spending tokens on it.
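A sketch of what scoping could look like in config. The `observe` and `passthrough` keys are assumptions based on the feature names above, not confirmed syntax, and the action and field names are hypothetical:

```yaml
actions:
  - name: draft_reply                    # hypothetical action
    dependencies: classify_ticket
    observe:                             # assumed key: fields the LLM actually sees
      - classify_ticket.category
    passthrough:                         # assumed key: carried forward, costs no tokens
      - classify_ticket.ticket_id
```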

Built-in Retry

Automatic reprompting with configurable max attempts. A failed domino doesn't break the chain — it tries again.
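The retry knob appears in the Quick Start as a `reprompt` block; a focused sketch with a hypothetical action name:

```yaml
actions:
  - name: generate_report       # hypothetical action
    schema: report_schema
    reprompt:
      max_attempts: 3           # reprompt with error context up to 3 times
```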

Any LLM Provider

OpenAI, Anthropic, Ollama, or your own. Swap providers per-agent without touching chain logic.
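A per-agent swap could look like overriding the `model_vendor`/`model_name` defaults from the Quick Start; placing those keys inside an individual action is an assumption, as is the action name:

```yaml
defaults:
  model_vendor: openai
  model_name: gpt-4o-mini

actions:
  - name: local_triage             # hypothetical action overriding the defaults
    model_vendor: ollama           # assumed per-action override
    model_name: llama3
    prompt: $prompts.Triage
```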

Batch Execution

One flag enables provider batch APIs for 50% cost savings. Retry chains track failures across attempts automatically.
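The flag's exact name isn't given here; as a hedged sketch, a top-level toggle might look like the following, where `batch` and the workflow contents are assumptions:

```yaml
name: bulk-classification          # hypothetical workflow
batch: true                        # assumed flag enabling the provider batch API

actions:
  - name: classify_doc
    prompt: $prompts.Classify_Type
    schema: classification_schema
```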

Quick Start

Three actions.
One YAML file.

Define actions with dependencies, validate outputs against schemas, and auto-retry failures. No glue code — just config.

Read the docs →
workflow.yaml
```yaml
name: document-analysis
defaults:
  model_vendor: openai
  model_name: gpt-4o-mini

actions:
  - name: extract_facts
    prompt: $prompts.Fact_Extraction
    schema: facts_schema           # validated

  - name: classify_type
    dependencies: extract_facts    # wired
    prompt: $prompts.Classify_Type

  - name: generate_summary
    dependencies: classify_type
    schema: summary_schema
    reprompt:
      max_attempts: 3              # auto-retry
```

Start the cascade.

Build reliable AI agent pipelines with YAML-native workflows, schema validation, and multi-provider orchestration.