# AI Features
Zap.ts integrates AI capabilities using the Vercel AI SDK and Zustand for client-side state management. Users can select an AI provider (OpenAI or Mistral by default) and input an API key via the frontend, which the backend uses to stream responses or generate completions. This guide explains how AI is implemented in Zap.ts and how to use it in your project.
## Why Add AI?
Zap.ts enables AI-driven features:
- Provider Choice: Switch between OpenAI, Mistral AI, and more. You can add other providers by following the same logic detailed below.
- Streaming Responses: Deliver real-time text output for completions or chats.
- Ease of Use: Pre-configured utilities handle model selection and streaming.
This setup is ideal for adding chatbots, text generation, or other AI-powered tools with minimal effort.
## How AI Works in Zap.ts
Zap.ts combines a client-side store with backend processing:

- Client-Side (Zustand): The `useAIProviderStore` in `stores/ai.store.ts` lets users pick a provider (`openai` or `mistral`) and set API keys, persisted in local storage.
- Backend (Vercel AI SDK): API routes (`/api/ai/completion` and `/api/ai/chat`) use `getModel` from `lib/ai.ts` to select the model based on the provider and API key, streaming responses via the `ai` package.
- Data: `AI_PROVIDERS_OBJECT` in `data/ai.ts` defines the supported providers.
## Using AI Features
No extra setup is required—Zap.ts provides everything out of the box. Here’s how to use it.
### 1. Managing Provider and API Keys

The `useAIProviderStore` hook lets users set the provider and API keys:
```ts
// stores/ai.store.ts
"use client";

import { AI_PROVIDERS_OBJECT } from "@/data/ai";
import { AIProviderEnum } from "@/schemas/ai.schema";
import { create } from "zustand";
import { persist } from "zustand/middleware";

interface AIProviderStore {
  aiProvider: AIProviderEnum;
  apiKeys: Record<AIProviderEnum, string>;
  setAIProvider: (provider: AIProviderEnum) => void;
  setApiKey: (provider: AIProviderEnum, apiKey: string) => void;
  getProviderName: (provider: AIProviderEnum) => string;
}

export const useAIProviderStore = create<AIProviderStore>()(
  persist(
    (set) => ({
      aiProvider: "openai",
      apiKeys: {
        openai: "",
        mistral: "",
      },
      setAIProvider: (provider) => set({ aiProvider: provider }),
      setApiKey: (provider, apiKey) => {
        set((state) => ({
          apiKeys: {
            ...state.apiKeys,
            [provider]: apiKey,
          },
        }));
      },
      getProviderName: (provider) =>
        AI_PROVIDERS_OBJECT.find((p) => p.provider === provider)?.name ??
        "Select AI Provider",
    }),
    {
      name: "ai-provider-store",
    }
  )
);
```
- Usage: In your frontend, use `setAIProvider` to choose `"openai"` or `"mistral"`, and `setApiKey` to store the API key for the selected provider. Access `aiProvider` and `apiKeys` to retrieve the current values.
### 2. Streaming Completions

The `/api/ai/completion` endpoint streams text responses for a single prompt:
```ts
// app/api/ai/completion/route.ts
import { SYSTEM_PROMPT } from "@/data/ai";
import { getModel } from "@/lib/ai";
import { AIProviderEnumSchema } from "@/schemas/ai.schema";
import { streamText } from "ai";
import { z } from "zod";

export const maxDuration = 60;

const BodySchema = z.object({
  prompt: z.string(),
  provider: AIProviderEnumSchema,
  apiKey: z.string(),
});

export async function POST(req: Request) {
  const unvalidatedBody = await req.json();
  const body = BodySchema.parse(unvalidatedBody);

  const result = streamText({
    model: getModel(body.provider, body.apiKey),
    prompt: body.prompt,
    system: SYSTEM_PROMPT,
  });

  return result.toDataStreamResponse();
}
```
- Request: Send a POST request with `{ prompt: "Your text", provider: "openai", apiKey: "your-key" }`.
- Response: Streams text generated by the selected model.
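A minimal client-side sketch of calling this endpoint (the helper names here are illustrative; in a real component you would more likely reach for the AI SDK's `useCompletion` hook):

```ts
// Hypothetical helper: builds the POST request the completion route expects.
function buildCompletionRequest(prompt: string, provider: string, apiKey: string) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt, provider, apiKey }),
  };
}

// Reads the streamed response chunk by chunk; getReader() is one way to consume it.
async function streamCompletion(prompt: string, provider: string, apiKey: string) {
  const res = await fetch(
    "/api/ai/completion",
    buildCompletionRequest(prompt, provider, apiKey)
  );
  if (!res.ok || !res.body) throw new Error(`Request failed: ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let text = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    text += decoder.decode(value, { stream: true });
  }
  return text;
}
```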
### 3. Streaming Chat Responses

The `/api/ai/chat` endpoint streams conversational responses:
```ts
// app/api/ai/chat/route.ts
import { SYSTEM_PROMPT } from "@/data/ai";
import { getModel } from "@/lib/ai";
import { AIProviderEnumSchema } from "@/schemas/ai.schema";
import { streamText } from "ai";
import { z } from "zod";

export const maxDuration = 60;

const BodySchema = z.object({
  messages: z.any(),
  provider: AIProviderEnumSchema,
  apiKey: z.string(),
});

export async function POST(req: Request) {
  const unvalidatedBody = await req.json();
  const body = BodySchema.parse(unvalidatedBody);

  const result = streamText({
    model: getModel(body.provider, body.apiKey),
    messages: body.messages,
    system: SYSTEM_PROMPT,
  });

  return result.toDataStreamResponse();
}
```
- Request: Send a POST request with `{ messages: [{ role: "user", content: "Hi" }], provider: "mistral", apiKey: "your-key" }`.
- Response: Streams the chat response from the model.
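Unlike the completion endpoint, the chat endpoint expects the full message history on every request. A small sketch (the `Message` type and helper names are assumptions, not part of Zap.ts) of how a client might accumulate turns and build the body:

```ts
// Hypothetical message shape, matching what streamText accepts for `messages`.
type Message = { role: "system" | "user" | "assistant"; content: string };

// Append the user's turn; the whole history is resent on each request.
function withUserMessage(messages: Message[], content: string): Message[] {
  return [...messages, { role: "user", content }];
}

function buildChatBody(messages: Message[], provider: string, apiKey: string): string {
  return JSON.stringify({ messages, provider, apiKey });
}

let history: Message[] = [];
history = withUserMessage(history, "Hi");
const chatBody = buildChatBody(history, "mistral", "sk-test");
```

After the model answers, the assistant's reply would be appended to `history` the same way before the next user turn is sent.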
### 4. Backend Model Selection

The `getModel` function in `lib/ai.ts` handles provider-specific models:
```ts
// lib/ai.ts
import { createOpenAI } from "@ai-sdk/openai";
import { createMistral } from "@ai-sdk/mistral";
import { AIProviderEnum } from "@/schemas/ai.schema";

export const getModel = (provider: AIProviderEnum, apiKey: string) => {
  const openAI = createOpenAI({ apiKey });
  const mistral = createMistral({ apiKey });

  switch (provider) {
    case "openai":
      return openAI("gpt-4o-mini");
    case "mistral":
      return mistral("mistral-large-latest");
    default:
      throw new Error("Invalid provider");
  }
};
```
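Adding a provider means extending the `switch` above with a new `case` backed by the matching `@ai-sdk/*` factory. If the list grows, a registry keyed by provider name keeps each addition to one line. Here is a dependency-free sketch of that pattern, with strings standing in for the model objects the real factories return:

```ts
// Sketch only: strings stand in for the LanguageModel objects returned by
// createOpenAI(...)(...) / createMistral(...)(...) in the real code.
type ModelFactory = (apiKey: string) => string;

const MODEL_REGISTRY: Record<string, ModelFactory> = {
  openai: () => "gpt-4o-mini",
  mistral: () => "mistral-large-latest",
  // Adding a provider becomes a single entry, e.g. (hypothetical):
  // anthropic: (apiKey) => createAnthropic({ apiKey })("claude-3-5-sonnet-latest"),
};

function getModelFromRegistry(provider: string, apiKey: string): string {
  const factory = MODEL_REGISTRY[provider];
  if (!factory) throw new Error("Invalid provider");
  return factory(apiKey);
}
```

Either shape works; the registry just trades the explicit `switch` for a lookup, which also makes the valid provider names discoverable as `Object.keys(MODEL_REGISTRY)`.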
## Frontend Integration
To use these features:

- Build a UI to set the provider and API key using `useAIProviderStore`.
- Send requests to `/api/ai/completion` or `/api/ai/chat` with the stored `aiProvider` and `apiKeys[aiProvider]`.
- Use the Vercel AI SDK's hooks (such as `useCompletion` and `useChat`) to handle streaming responses in your frontend.
## Learning More
Zap.ts provides a foundation for AI. For advanced features (e.g., RAG, tool calling), see the Vercel AI SDK documentation.
## Why AI in Zap.ts?
- User-Driven: Users control the provider and key from the frontend.
- Streamlined: Pre-built utilities simplify AI integration.
- Fast: Start building AI features in a zap.
Zap into AI-powered development now!