Building Mockator: An AI-Powered Mock Data Generator

The Problem
Every developer faces the same challenge: generating realistic test data. Libraries like Faker.js exist, but they're static and limited. Ask ChatGPT to "generate 50 e-commerce orders" and you'll get nice JSON with zero consistency: a customer named "John Doe" might have an email like "alice@example.com".
I needed a tool that combines AI's flexibility with type safety, offers polyglot output (JSON → SQL → CSV), and follows a privacy-first approach. That's why I built Mockator.
🎯 Why Mockator?
Mockator = Mock + Generator
Three main goals:
- Natural Language or TypeScript Schema: Describe what you need, AI generates smart data.
- Polyglot Output: Generate JSON once, transform to SQL or CSV client-side (no re-fetch).
- BYOK (Bring Your Own Key): Use your own OpenAI/Anthropic/Google/Groq key, nothing stored on our servers.
🧰 Tech Stack
Frontend
- Next.js 16.0.10 (App Router) – Modern React SSR framework
- React 19 – With new concurrent rendering features
- Tailwind CSS 4 – Utility-first styling
- Monaco Editor – VS Code's editor for syntax highlighting JSON/SQL/CSV
- Shadcn UI + Radix – Accessible UI components (Dialog, Tabs, Select)
- Lucide Icons – Minimal and clean icons
Backend & AI
- Vercel AI SDK – Streaming AI responses
- Multi-Provider Support:
- OpenAI (gpt-4o-mini)
- Anthropic (Claude Haiku)
- Google (Gemini Flash)
- Groq (Llama 3)
- Edge Runtime – Serverless, fast, global deployment
State Management
- React Context – Global state via MockatorProvider
- Session-only keys – API keys stored in memory instead of localStorage
🏗️ Architecture: How It Works
1. Stateless Server
The API route (/api/generate) acts as a simple proxy:

```ts
// app/api/generate/route.ts
// The user sends their key in the request headers
const provider = request.headers.get("x-provider"); // "openai", "anthropic", etc.
const apiKey = request.headers.get("x-api-key");

// Stream with the AI SDK; `client` is the provider factory created from apiKey
const result = await streamText({
  model: client(modelId),
  system: systemPrompt, // "You are a Mock Data Generator..."
  prompt: userPrompt,
  temperature: 0.5,
});

return result.toTextStreamResponse();
```
Key point: No keys stored on the server. Direct pass-through to provider.
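On the client side, that pass-through could be exercised like this. This is a sketch: `buildGenerateRequest` is a hypothetical helper, not Mockator's actual code; only the header names mirror the route handler above.

```typescript
// Hypothetical helper that builds the request the proxy expects.
// The x-provider / x-api-key header names match the route handler;
// everything else here is illustrative.
export function buildGenerateRequest(provider: string, apiKey: string, prompt: string) {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-provider": provider, // e.g. "openai"
      "x-api-key": apiKey,    // read from in-memory state, never persisted
    },
    body: JSON.stringify({ prompt }),
  } as const;
}

// Usage: fetch("/api/generate", buildGenerateRequest("openai", key, "10 users as JSON"))
```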
2. Client-Side Transformers
We always receive JSON from the AI. When users click the SQL or CSV tabs, we transform it instantly in the browser:

```ts
// lib/transformers.ts (bodies sketched for illustration; escaping kept minimal)
const quote = (v: unknown) => (typeof v === "number" ? `${v}` : `'${String(v).replace(/'/g, "''")}'`);

// e.g. INSERT INTO mock_data (id, name) VALUES (1, 'Alice');
export function jsonToSql(rows: Record<string, unknown>[], tableName = "mock_data"): string {
  const cols = Object.keys(rows[0] ?? {});
  return rows.map((r) => `INSERT INTO ${tableName} (${cols.join(", ")}) VALUES (${cols.map((c) => quote(r[c])).join(", ")});`).join("\n");
}

// e.g. id,name\n1,Alice
export function jsonToCsv(rows: Record<string, unknown>[]): string {
  const cols = Object.keys(rows[0] ?? {});
  return [cols.join(","), ...rows.map((r) => cols.map((c) => String(r[c])).join(","))].join("\n");
}
```
Advantage: No extra network requests, instant transformation.
3. Privacy-First Design
```ts
// MockatorProvider
const [providerKeys, setProviderKeys] = useState<ProviderKeys>({});

// No useEffect, no localStorage!
// Keys live only in memory and are gone when the tab closes
```
API keys are sensitive; we shouldn't persist them even in the browser. The approach:
- Keys live only in useState (memory)
- They disappear on tab close or refresh
- The UI shows a "Local & Private" badge for trust
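Framework aside, the session-only idea reduces to keeping keys in a closure that never touches any storage API. A minimal sketch, with the caveat that `createKeyStore` is illustrative and not the actual provider code:

```typescript
// Keys live only inside this closure; nothing is written to localStorage,
// so a refresh or tab close wipes them. (Illustrative sketch.)
type ProviderKeys = Record<string, string>;

export function createKeyStore() {
  let keys: ProviderKeys = {};
  return {
    set(provider: string, key: string) { keys = { ...keys, [provider]: key }; },
    get(provider: string): string | undefined { return keys[provider]; },
    clear() { keys = {}; },
  };
}
```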
📱 Mobile-First Responsive
Side-by-side split panels on desktop, vertical stack on mobile:
```tsx
// Workbench.tsx
<div className="flex-1 overflow-hidden flex flex-col md:flex-row">
  {/* Desktop: horizontal resizable panels */}
  <div className="hidden md:flex flex-1">
    <ResizablePanel>Input</ResizablePanel>
    <ResizableHandle />
    <ResizablePanel>Output</ResizablePanel>
  </div>

  {/* Mobile: vertical stack */}
  <div className="flex md:hidden flex-col">
    <InputPanel />
    <OutputPanel />
  </div>
</div>
```
Tailwind's responsive prefixes (md:, sm:) let the layout adapt automatically.
✨ Key Features
1. Schema Mode (TypeScript)
Paste a TypeScript interface:
```ts
interface User {
  id: number;
  name: string;
  email: string;
  role: "admin" | "user";
}
```
AI understands it and generates type-safe data.
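Under the hood this amounts to folding the schema text into the prompt. A plausible sketch, assuming a hypothetical `buildPrompt` helper; the real system prompt may differ:

```typescript
// Hypothetical prompt construction: the interface is passed through verbatim
// so the model can honor field names and union types like "admin" | "user".
export function buildPrompt(schema: string, count: number): string {
  return [
    `Generate ${count} rows of realistic mock data as a JSON array.`,
    "Every object must match this TypeScript interface exactly:",
    schema.trim(),
    "Return only valid JSON, with no commentary.",
  ].join("\n\n");
}
```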
2. Natural Language Mode
"Generate 10 e-commerce orders with customer name, total price, status, and date"
Boom. 10 rows of realistic data.
3. Streaming UX
Thanks to Vercel AI SDK, data arrives chunk by chunk. Monaco Editor displays it live—like watching console.log in a terminal.
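Consuming that stream on the client boils down to a plain reader loop. A sketch, where `readTextStream` is an illustrative helper (the AI SDK's React hooks can also handle this for you):

```typescript
// Reads a streamed text body chunk by chunk, invoking onChunk for each piece,
// e.g. to append text to the Monaco editor as it arrives. (Illustrative sketch.)
export async function readTextStream(
  body: ReadableStream<Uint8Array>,
  onChunk: (text: string) => void
): Promise<string> {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  let full = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    const text = decoder.decode(value, { stream: true });
    full += text;
    onChunk(text);
  }
  return full;
}
```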
4. Multi-Provider BYOK
If one provider is expensive, switch to another:
- OpenAI → $0.15 per 1M input tokens (gpt-4o-mini)
- Groq → Blazing fast, cheap
- Anthropic → Better reasoning
Select from dropdown, enter key, ready to go.
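Switching providers can be as simple as a lookup from provider name to a default model id. The map below mirrors the models listed earlier, but the exact ids are assumptions and `resolveModel` is an illustrative helper:

```typescript
// Assumed default model ids per provider; not necessarily Mockator's actual ids.
const DEFAULT_MODELS: Record<string, string> = {
  openai: "gpt-4o-mini",
  anthropic: "claude-3-haiku-20240307",
  google: "gemini-1.5-flash",
  groq: "llama-3.1-8b-instant",
};

export function resolveModel(provider: string, override?: string): string {
  const model = override ?? DEFAULT_MODELS[provider];
  if (!model) throw new Error(`Unsupported provider: ${provider}`);
  return model;
}
```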
🗺️ Roadmap: Next Steps
- 👤 User Profiles & Cloud Sync: Auth with Supabase, store schemas in PostgreSQL.
- 💳 SaaS Mode: Managed service for users who don't want to bring their own keys.
- 🌐 Public Schema Sharing: Share schemas via links like mockator.com/s/e-commerce-v1.
🎓 Lessons Learned
1. Vercel AI SDK is Really Good
With streamText, you can use OpenAI, Anthropic, and Groq with the same API. Provider switching becomes trivial.
2. Client-Side Transformers > Backend Format Conversion
Initially, I thought "let's generate SQL on the backend." Then I realized:
- Adds network overhead
- Complex caching
- Client-side is instant and zero cost
3. Monaco Editor = VS Code in Browser
Syntax highlighting, line numbers, copy/paste—all out-of-the-box. 5-minute setup with @monaco-editor/react.
📦 Deploy: 30 Seconds to Vercel
```bash
git push
# Vercel auto-deploys
```
Edge runtime + ISR mean fast responses globally, with no cold starts.
🤝 Open Source & Contact
Project on GitHub: github.com/berkinduz/mockator
Questions or feedback: berkinduz.com/en/about
Conclusion
Mockator is a practical example of building useful AI tools. It's not fancy, but it solves real problems in daily workflows:
- Describe in natural language or TypeScript
- Get polyglot output (JSON, SQL, CSV)
- Privacy-first—your key stays yours
Next time you need test data, try Mockator instead of Faker.js. 🚀