
Commands

launchgate run <suite>

Run an eval suite and display results.

Arguments

Argument    Description
suite       The eval suite slug (e.g., rag-faithfulness)

Options

Flag                  Description
--output <text>       The AI-generated output to evaluate
--output-file <path>  Read output from a file instead
--input <json>        Input payload as inline JSON
--input-file <path>   Read input from a JSON file
--api-key <key>       LaunchGate API key (overrides env var)
--api-url <url>       LaunchGate API URL (overrides env var)
--async               Return immediately with a run ID (don't wait for results)
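If you invoke the CLI from another program, the flags above can be assembled into an argv rather than a shell string, which avoids quoting bugs around the inline JSON. A minimal sketch in Python; the `build_run_command` helper is hypothetical, not part of LaunchGate:

```python
import json
import subprocess

def build_run_command(suite, output=None, output_file=None,
                      input_payload=None, input_file=None,
                      api_key=None, run_async=False):
    """Assemble a `launchgate run` argv from the documented flags.

    `input_payload` is a Python object that is serialized to the inline
    JSON expected by --input.
    """
    cmd = ["launchgate", "run", suite]
    if output is not None:
        cmd += ["--output", output]
    if output_file is not None:
        cmd += ["--output-file", output_file]
    if input_payload is not None:
        cmd += ["--input", json.dumps(input_payload)]
    if input_file is not None:
        cmd += ["--input-file", input_file]
    if api_key is not None:
        cmd += ["--api-key", api_key]
    if run_async:
        cmd.append("--async")
    return cmd

cmd = build_run_command(
    "my-suite",
    output="Paris is the capital of France.",
    input_payload={"query": "What is the capital of France?"},
)
# subprocess.run(cmd, check=True)  # uncomment to actually invoke the CLI
```

Passing the list directly to `subprocess.run` (no `shell=True`) means the output text and JSON payload need no shell escaping.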

Examples

Inline output and input

launchgate run my-suite \
  --output "Paris is the capital of France." \
  --input '{"query": "What is the capital of France?"}'

From files

launchgate run my-suite \
  --output-file ./generated-output.txt \
  --input-file ./test-input.json

Async mode

launchgate run large-suite \
  --output "..." \
  --async  # Prints run ID immediately, exits 0
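A script driving async mode usually needs to capture that run ID for later lookup. A minimal sketch, assuming the async output contains a `Run ID: run_...` line matching the format shown under "Output format"; the `extract_run_id` helper is ad hoc, not part of the CLI:

```python
import re

RUN_ID_PATTERN = re.compile(r"Run ID:\s*(run_\w+)")

def extract_run_id(cli_output):
    """Pull the run ID out of `launchgate run ... --async` output.

    Assumes the output includes a line of the form `Run ID: run_...`,
    as shown in the sample outputs on this page.
    """
    match = RUN_ID_PATTERN.search(cli_output)
    if match is None:
        raise ValueError("no run ID found in output")
    return match.group(1)

print(extract_run_id("Run ID: run_abc123"))  # run_abc123
```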

With explicit API key

launchgate run my-suite \
  --api-key lg_live_abc123 \
  --output "..."

Output format

On success (cleared), each case line showing its score and pass threshold:

✓ Cleared for launch
Pass rate: 100% (5/5)
Duration: 1234ms
Run ID: run_abc123

✓ Case name 1    1.0 / 1.0
✓ Case name 2    0.9 / 0.7
✓ Case name 3    1.0 / 0.5

On failure (aborted):

✗ Launch aborted
Pass rate: 40% (2/5)
Duration: 2341ms
Run ID: run_def456

✓ Case name 1    1.0 / 1.0
✗ Case name 2    0.3 / 0.7 — Output does not match expected pattern
✓ Case name 3    0.9 / 0.5
✗ Case name 4    0.1 / 0.8 — Hallucinated information detected
✗ Case name 5    0.0 / 1.0 — Expected value not found in output

On skip (API unreachable):

⚠ Eval skipped (API unreachable)
Run ID: run_ghi789
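A script consuming these results can parse the human-readable summary. A minimal sketch; the `parse_run_summary` helper and its field names are ad hoc (not part of LaunchGate), and the parsing assumes the exact layout shown above:

```python
import re

def parse_run_summary(text):
    """Parse a `launchgate run` result summary into a small dict."""
    cleared = text.lstrip().startswith("✓ Cleared for launch")
    rate = re.search(r"Pass rate:\s*(\d+)%\s*\((\d+)/(\d+)\)", text)
    run_id = re.search(r"Run ID:\s*(\S+)", text)
    return {
        "cleared": cleared,
        "pass_rate": int(rate.group(1)) if rate else None,
        "passed": int(rate.group(2)) if rate else None,
        "total": int(rate.group(3)) if rate else None,
        "run_id": run_id.group(1) if run_id else None,
    }

sample = """✗ Launch aborted
Pass rate: 40% (2/5)
Duration: 2341ms
Run ID: run_def456"""
print(parse_run_summary(sample))
```

On a skipped run there is no pass rate, so `pass_rate`, `passed`, and `total` come back as `None` while `run_id` is still populated.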