# LaunchGate Documentation
Automated quality gating for AI applications.
LaunchGate stands between your prompt changes, model swaps, or pipeline updates and your users — running evaluations and blocking deployments when quality drops below defined thresholds.
## How it works
- Define what good looks like — Create eval suites with test cases and scoring criteria (see the sketch after this list)
- Run evals on every change — Trigger evaluations from your SDK, CLI, or CI/CD pipeline
- Gate deployment on results — Automatically pass or fail based on your quality thresholds
- Ship with confidence — Deploy knowing your AI outputs meet your standards
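As a minimal sketch of the first step, a suite pairs a set of cases with a pass threshold. The field names below (`passThreshold`, `cases`, `scorer`, `expected`) are illustrative assumptions based on the concepts on this page, not LaunchGate's confirmed schema:

```ts
// A minimal sketch of a suite as plain data. Field names are illustrative
// assumptions drawn from the concepts on this page, not a confirmed schema.
const ragFaithfulnessSuite = {
  name: "rag-faithfulness",
  passThreshold: 0.9, // the gate fails if fewer than 90% of cases pass
  cases: [
    {
      input: {
        context: "The Eiffel Tower was built in 1889.",
        query: "When was the Eiffel Tower built?",
      },
      scorer: "contains", // one of the five built-in scorers
      expected: "1889",
    },
  ],
};
```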
## Core loop
```
Code change → LaunchGate eval → Cleared for launch ✓ → Deploy
                              → Launch aborted ✗ → Fix & retry
```

## Choose your integration
| Integration | Best for |
|---|---|
| SDK | Programmatic eval runs from your application code |
| CLI | Running evals from the terminal or shell scripts |
| GitHub Action | Automated PR gating in GitHub CI/CD |
| REST API | Direct HTTP integration with any platform |
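Whichever integration you pick, the gating pattern is the same. Here is a minimal CI sketch using the SDK surface from the quick example below (`run`, `status`, `passRate`); the non-zero exit convention is our own, not something LaunchGate prescribes:

```ts
// ci-gate.ts — a minimal sketch: abort the pipeline when a run is not
// cleared. Uses the SDK surface shown in the quick example below; the
// exit-code convention is ours, not mandated by LaunchGate.
import { LaunchGate } from "@launchgate/sdk";

const lg = new LaunchGate({ apiKey: process.env.LAUNCHGATE_API_KEY });

const result = await lg.run("rag-faithfulness", {
  input: {
    context: "The Eiffel Tower was built in 1889.",
    query: "When was the Eiffel Tower built?",
  },
  output: "The Eiffel Tower was built in 1889.",
});

if (result.status !== "cleared") {
  console.error(`Launch aborted: pass rate ${result.passRate}`);
  process.exit(1); // a non-zero exit fails the CI job and blocks the deploy
}
```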
## Quick example
```ts
import { LaunchGate } from "@launchgate/sdk";

const lg = new LaunchGate({
  apiKey: process.env.LAUNCHGATE_API_KEY,
});

const result = await lg.run("rag-faithfulness", {
  input: {
    context: "The Eiffel Tower was built in 1889.",
    query: "When was the Eiffel Tower built?",
  },
  output: "The Eiffel Tower was built in 1889.",
});

console.log(result.status); // "cleared"
console.log(result.passRate); // 1.0
```
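Here the run comes back cleared because every case passed (a pass rate of 1.0); had the pass rate fallen below the suite's threshold, the launch would have been aborted and the deployment blocked.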
## Key concepts

- Projects — Top-level grouping for your eval suites
- Eval Suites — A collection of test cases with a pass threshold
- Eval Cases — Individual checks with inputs, scorers, and thresholds
- Scorers — Five evaluation functions: `exact_match`, `regex`, `json_schema`, `contains`, and `llm_judge` (illustrated in the sketch after this list)
- Runs — Each execution of a suite, with immutable results
- BYOK Keys — Bring your own LLM provider keys for AI-powered scoring
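To make the scorer list concrete, here is a hypothetical sketch of one case per scorer. The field names (`scorer`, `expected`, `pattern`, `schema`, `criteria`, `threshold`) are illustrative assumptions layered on the concepts above, not a confirmed case schema:

```ts
// Illustrative only: these case shapes assume a `scorer` field naming one
// of the five built-in scorers; the exact schema may differ.
const cases = [
  { scorer: "exact_match", input: { query: "2 + 2?" }, expected: "4" },
  { scorer: "regex", input: { query: "Order ID?" }, pattern: "^ORD-\\d{6}$" },
  { scorer: "contains", input: { query: "When was the Eiffel Tower built?" }, expected: "1889" },
  {
    scorer: "json_schema", // output must parse as JSON matching this schema
    input: { query: "Return the user as JSON" },
    schema: { type: "object", required: ["name"], properties: { name: { type: "string" } } },
  },
  {
    scorer: "llm_judge", // AI-powered scoring
    input: { query: "Summarize the context" },
    criteria: "Summary is faithful to the provided context",
    threshold: 0.8, // per-case threshold
  },
];
```

The `llm_judge` case is where BYOK keys come in: AI-powered scoring runs against your own LLM provider key.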