Your Prompts Are Your API
Write prompts with models, tools & variables. We instantly turn them into production-ready APIs.
No backend. No auth. No infrastructure headaches.
// Before: Complex backend setup for AI features with o3
// 1. Backend API Route (api/generate-advanced-text.ts)
import OpenAI from "openai"; // Standard OpenAI SDK
// Assume some auth util: import { verifyUserAccess } from "./auth-utils";
// Assume some logging util: import { logAIUsage } from "./logging-utils";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  defaultHeaders: {
    "X-Organization-ID": process.env.OPENAI_ORG_ID,
    "User-Agent": "MyApp/1.0.0",
  },
  timeout: 30000, // 30 seconds
  maxRetries: 2,
});

export async function POST(req: Request) {
  try {
    const { userQuery, contextDetails, outputStyle } = await req.json();
    const authorization = req.headers.get("Authorization");
    // Example: Manual authorization check
    // const { userId, error: authError } = await verifyUserAccess(authorization);
    // if (authError) {
    //   return Response.json({ error: authError.message }, { status: 401 });
    // }
    const userId = "example-user-123"; // Placeholder

    // Messages array with system guidance and the user's query
    const completionParams: OpenAI.Chat.ChatCompletionCreateParams = {
      model: "o3", // OpenAI's frontier reasoning model
      messages: [
        {
          role: "system",
          content: `You are an expert AI assistant. Your responses should be tailored to a ${outputStyle} style.`,
        },
        {
          role: "user",
          content: `Context: ${contextDetails}\n\nUser Query: ${userQuery}\n\nPlease provide a detailed answer.`,
        },
      ],
      // o-series reasoning models don't accept `temperature` and use
      // `max_completion_tokens` in place of the deprecated `max_tokens`.
      max_completion_tokens: 1800,
    };

    // o3-specific parameters (typed in recent SDK versions; assert otherwise)
    if (completionParams.model === "o3") {
      (completionParams as any).reasoning_effort = "medium"; // 'low', 'medium', or 'high'
    }

    // Additional headers for this specific request (beyond defaults)
    const requestOptions = {
      headers: {
        "X-Request-ID": crypto.randomUUID(),
      },
    };

    const response = await openai.chat.completions.create(
      completionParams,
      requestOptions
    );
    const resultText = response.choices[0]?.message?.content;
    const usageData = response.usage;

    // Example: Manual logging of usage
    // await logAIUsage(userId, completionParams.model, {
    //   prompt_tokens: usageData?.prompt_tokens,
    //   completion_tokens: usageData?.completion_tokens,
    //   total_tokens: usageData?.total_tokens,
    //   reasoning_tokens: (usageData as any)?.completion_tokens_details?.reasoning_tokens,
    // });

    return Response.json({ resultText });
  } catch (error: any) {
    console.error("AI generation failed:", error);
    // More detailed error handling and logging would be needed in a real app
    return Response.json({ error: error.message || "Generation failed" }, { status: 500 });
  }
}
// 2. Frontend call (conceptual)
async function getAdvancedTextFromBackend(userQuery: string, userToken: string) {
  const apiResponse = await fetch("/api/generate-advanced-text", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${userToken}`, // Assuming JWT-based auth
    },
    // Must match the fields the backend route destructures:
    body: JSON.stringify({
      userQuery, // e.g., "What are the main concepts of quantum mechanics?"
      contextDetails: "For a university freshman physics course",
      outputStyle: "academic",
    }),
  });
  if (!apiResponse.ok) {
    const errorData = await apiResponse.json();
    // Handle the error appropriately in the UI
    throw new Error(errorData.error || "API request failed");
  }
  const data = await apiResponse.json();
  return data.resultText;
}
// Plus:
// - Managing different API keys for dev/staging/prod.
// - Complex retry logic and rate-limit handling (sketched below).
// - Versioning and A/B testing prompts.
// - Implementing caching strategies.
// - Setting up and maintaining the backend infrastructure (server, scaling).
// - Ensuring robust security and compliance.
// - Building observability (detailed logging, monitoring, alerting).
// - Keeping SDKs and model knowledge up-to-date.
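To make the retry bullet concrete: here is a minimal sketch of the kind of backoff wrapper you would also end up writing and maintaining yourself. The retryable-status check and backoff constants are illustrative assumptions, not a definitive implementation.

// Minimal retry-with-backoff wrapper (illustrative; tune for your workload)
async function withRetries<T>(fn: () => Promise<T>, maxAttempts = 3): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (error: any) {
      // Retry only rate limits (429) and transient server errors (5xx)
      const status = error?.status;
      const retryable = status === 429 || (status >= 500 && status < 600);
      if (!retryable || attempt >= maxAttempts) throw error;
      // Exponential backoff with jitter: ~1s, ~2s, ~4s, ...
      const delayMs = 2 ** (attempt - 1) * 1000 + Math.random() * 250;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Usage: await withRetries(() => openai.chat.completions.create(completionParams));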

Visually Engineer & Test Your Prompts
Our interactive Prompt Editor is your command center for designing, testing, and managing powerful multi-modal AI configurations. Experiment with different models, define input variables, attach tools, specify output schemas, and see live previews for text, image, and audio generation—all in one place.
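For example, a configuration that attaches a tool and declares an output schema might look like the sketch below. The tools: key and the web_search tool name are hypothetical here, shown only to illustrate the idea:

---
model: gpt-4o
temperature: 0.5
tools:
  - web_search # hypothetical tool reference
input:
  question: string
output:
  answer: string
  sources: string[]
---
Answer {{question}} and cite your sources.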
From Prompt to Production
Three steps to turn your AI prompt configurations into production-ready intelligent APIs.
Craft & Deploy
Design multi-modal prompt configurations (text, image, audio) with models, tools, and variables. Save them locally or via the dashboard, and we instantly generate callable API routes.
Commit & Iterate
Push your prompt configurations to your repo. We automatically deploy them as versioned, type-safe, intelligent API endpoints to our edge network.
Integrate & Scale
Call your AI features via Relayr's secure gateway using JWT auth or API keys. We handle model routing, rate limits, user identity, and all infrastructure.
Write Once, Call Everywhere
Turn your prompts into type-safe functions that work across your entire stack.
content-writer.prompt.md
Versioned prompt file
---
model: claude-3-sonnet
temperature: 0.7
input:
  topic: string
  tone: "professional" | "casual"
  keywords: string[]
output:
  title: string
  content: string
  meta: { description: string }
---
You are a skilled content writer.
Write a blog post about {{topic}}.
Use a {{tone}} tone.
Include these keywords: {{keywords}}.
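Conceptually, the frontmatter above compiles down to typings along these lines (an illustration of the generated shape, not the exact emitted code):

// Illustrative shape of the types generated from content-writer.prompt.md
interface ContentWriterInput {
  topic: string;
  tone: "professional" | "casual";
  keywords: string[];
}
interface ContentWriterOutput {
  title: string;
  content: string;
  meta: { description: string };
}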
Call from Anywhere
Type-safe, auto-complete enabled
// Frontend (React, Vue, Svelte)
const { response } = await relayr.go({
  id: "content-writer",
  input: {
    topic: "AI Deployment Best Practices",
    tone: "professional",
    keywords: ["security", "scalability"]
  },
  auth: { jwt: userToken }
});
// response.title ✓
// response.content ✓
// response.meta.description ✓

// Backend (Node, Python, Go)
const { response } = await relayr.go({
  id: "content-writer",
  input: { ... },
  auth: { apiKey: process.env.RELAYR_KEY }
});
Why Use Relayr?
Stop fighting infrastructure. Start shipping all your AI features, faster.
The Old Way
Complex, time-consuming, error-prone
Authentication
Manual JWT verification + API key juggling
Prompt Management
Hardcoded strings scattered in backend code
Security
API keys exposed in environment variables
Deployment
Custom backend routes + infrastructure headaches
Monitoring
Manual logging setup and analytics implementation
Slow Iteration & Updates
Lengthy dev cycles for prompt changes or trying new models.
⏱️ Weeks of development time + ongoing maintenance burden
The Relayr Way
Simple, fast, production-ready
Authentication
Automatic JWT verification via your auth provider
Advanced Prompt Engineering
Assisted editing, Git-versioning, evals & A/B testing for all modalities (text, image, audio).
Secure AI Gateway
Centralized API key management, unified access to diverse models, and robust security for all your AI needs.
Instant Deployment
Push to deploy: instant global edge availability for all your AI configurations.
Comprehensive Observability
Built-in logging, cost tracking, performance metrics, and file management for generated outputs.
⚡ Minutes to setup + zero maintenance required
Integrate with your existing auth provider
Relayr offers flexible and secure authentication methods to protect your AI integrations. Choose between JWT-based authentication for standard use cases or session-based authentication with custom domains for seamless user experiences under our Business Plan.
Authenticate with JWTs
Integrate with any JWT provider (Clerk, Auth0, Firebase, Supabase, custom). Pass the user's JWT with each request to Relayr. We verify the token against your configured JWKS URL, ensuring that only authenticated users can access your AI prompts and that all usage is correctly attributed. Your AI provider keys remain secure within Relayr's infrastructure.
- Supports any standard JWT issuer.
- Securely pass JWTs from client or server.
- Automatic user attribution for analytics.
// 1. Configure Relayr with your JWT issuer
// (e.g., https://your-app.clerk.com/.well-known/jwks.json)

// 2. In your application (frontend or backend):
const jwt = await getYourUserJWT(); // From Clerk, Auth0, Firebase, etc.

// 3. Call Relayr with the user's JWT
const { response } = await relayr.go({
  id: "your-prompt-id",
  input: { /* ...your data... */ },
  auth: { jwt } // Relayr verifies JWT & attributes usage
});
// ✅ Secure: No API keys exposed client-side.
// ✅ Attributed: Usage tracked per user.
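For intuition, JWKS-based verification boils down to something like this sketch, using the open-source jose library; the issuer URL is a placeholder, and Relayr's actual internals aren't shown here:

import { createRemoteJWKSet, jwtVerify } from "jose";

// Fetches and caches your provider's signing keys from the JWKS URL
const jwks = createRemoteJWKSet(
  new URL("https://your-app.clerk.com/.well-known/jwks.json")
);

async function verifyUserJWT(token: string) {
  // Throws if the signature, expiry, or issuer don't check out
  const { payload } = await jwtVerify(token, jwks, {
    issuer: "https://your-app.clerk.com", // placeholder issuer
  });
  return payload.sub; // user ID, used for usage attribution
}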
Flexible Pricing for Every Team
Choose the plan that's right for you. Start for free, pay as you go, or scale with our enterprise-grade features. No hidden fees.
Free
For individuals & hobby projects. Always free.
- Up to 3 prompt routes
- 1,000 executions/month
- Basic logging (7-day retention)
- Bring your own API keys
- Community support
Starter
For small teams and early-stage startups.
+ Pay-as-you-go for API usage
- Up to 10 prompt routes
- 10,000 executions/month
- Standard logging (30-day retention)
- 1GB log storage
- Bring your own API keys
- Email support
Growth
For scaling applications and growing businesses.
Includes $10 API credits (Relayr-provided keys)
- Up to 50 prompt routes
- 100,000 executions/month
- Advanced logging (90-day retention)
- 10GB log storage
- Option for Relayr-provided keys (no markup)
- Priority email support
- Up to 5 team members
Business
For established businesses and larger teams.
Includes $25 API credits (Relayr-provided keys)
- Unlimited prompt routes
- 500,000 executions/month
- Audit logging (1-year retention)
- 50GB log storage
- Option for Relayr-provided keys (no markup)
- Dedicated support channel
- Up to 10 team members
All plans include access to our core features: prompt versioning, Git integration, type-safe SDKs, and multi-provider support.
Developers Love Relayr
Join thousands of developers shipping AI features faster.
"Relayr completely changed how we manage AI prompts. What used to take days of backend work now takes minutes."
Sarah Chen
CTO at TechStartup
"The Git-based prompt management is genius. Our entire team can now contribute to prompt improvements."
Michael Torres
Lead Engineer at SaaS Co
"Finally, a solution that lets me call AI from the frontend without exposing API keys. Security done right!"
Emily Davis
Indie Developer
Frequently Asked Questions
How do I set up authentication with my existing auth provider?
In your Relayr dashboard, add your JWT issuer (like Clerk, Auth0, Firebase) by providing your JWKS URL. Then call your prompt route with either a JWT token from the frontend or your secret API key from the backend. Relayr handles verification and attributes usage to specific users.
Do I need to expose my OpenAI/Anthropic API keys?
Never! Your AI provider API keys are securely stored in Relayr. Frontend calls use your public API key + user JWT, while backend calls use your secret API key. We handle the authentication proxy to AI services.
Can I use Relayr with my existing backend?
Absolutely! Call your prompt routes from anywhere. Use JWT auth for frontend calls (we'll validate against your auth provider) or use your secret API key for backend calls with user attribution.
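For example, from an existing Express route (the @relayr/sdk import path is an assumption for illustration; the relayr.go call shape matches the examples above):

import express from "express";
import { relayr } from "@relayr/sdk"; // illustrative import path

const app = express();
app.use(express.json());

app.post("/blog-post", async (req, res) => {
  // Secret API key stays server-side; user attribution handled by Relayr
  const { response } = await relayr.go({
    id: "content-writer",
    input: req.body,
    auth: { apiKey: process.env.RELAYR_KEY },
  });
  res.json(response);
});

app.listen(3000);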
What if I want to switch AI providers?
Just update the model in your prompt file's configuration block. Your API routes stay the same: Relayr handles the provider routing.
How do I manage and version my prompts?
Your prompts are Markdown files with a configuration block and support for the Model Context Protocol (MCP). Add {{variables}} for dynamic content, configure tools, and set output formats. Edit locally or in our dashboard, then call your API routes directly.
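For intuition, {{variable}} interpolation amounts to a substitution like the sketch below (illustrative only, not Relayr's actual renderer):

// Minimal {{variable}} substitution, for intuition only
function renderTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_match, key) => vars[key] ?? "");
}

// renderTemplate("Write a blog post about {{topic}}.", { topic: "AI deployment" })
// => "Write a blog post about AI deployment."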
Ready to ship AI features faster?
Join innovative teams using Relayr to build the next generation of AI-powered applications. Get started in minutes, not days.
No credit card required • 1,000 free requests/month • 5-minute setup