# @surebob/ollama-local-provider
A client-side Ollama provider for the Vercel AI SDK that makes API calls directly from the browser to your local Ollama instance.
## Features
- 🌐 Browser-side API calls to local Ollama
- 🔄 Full streaming support
- 🛠 Compatible with Vercel AI SDK
- 🎯 Zero server-side Ollama calls
- 📦 Easy integration with Next.js and other frameworks
## Installation

```bash
npm install @surebob/ollama-local-provider
```
## Usage

### Basic Usage
```tsx
// In your Next.js page or component
'use client';

import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: '/api/chat',
    initialMessages: [],
  });

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>{m.content}</div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}
```
### API Route Setup
```ts
// app/api/chat/route.ts
import { StreamingTextResponse } from 'ai';
import { ollama } from '@surebob/ollama-local-provider';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const model = ollama('deepscaler:1.5b', {
    temperature: 0.7,
    top_p: 0.9,
    num_ctx: 4096,
    repeat_penalty: 1.1,
  });

  const response = await model.doStream({
    inputFormat: 'messages',
    mode: { type: 'regular' },
    prompt: messages,
  });

  return new StreamingTextResponse(response.stream);
}
```
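
If the model objects returned by `ollama(...)` implement the AI SDK's `LanguageModelV1` interface (which the `doStream` call above suggests, but which this package does not state explicitly), you can also drive them through the SDK's higher-level `streamText` helper instead of calling `doStream` yourself. The sketch below is an illustration under that assumption; `streamText` and `convertToCoreMessages` are Vercel AI SDK exports whose exact names and signatures vary between SDK versions.

```ts
// app/api/chat/route.ts — alternative route using the AI SDK's streamText helper
import { StreamingTextResponse, streamText, convertToCoreMessages } from 'ai';
import { ollama } from '@surebob/ollama-local-provider';

export async function POST(req: Request) {
  const { messages } = await req.json();

  // streamText handles the doStream plumbing and exposes a plain text stream
  const result = await streamText({
    model: ollama('deepscaler:1.5b', { temperature: 0.7 }),
    messages: convertToCoreMessages(messages),
  });

  // result.textStream is a ReadableStream of the generated text
  return new StreamingTextResponse(result.textStream);
}
```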
## Requirements

- Ollama running locally on port 11434 (see the quick connectivity check below)
- Vercel AI SDK
- Next.js (or other framework with streaming support)
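
Before wiring the provider into your app, you can verify that Ollama is actually reachable on port 11434. The snippet below is a quick, optional check that hits Ollama's standard `/api/tags` endpoint (which lists installed models); the endpoint and response shape belong to Ollama's REST API, not to this package.

```ts
// Quick connectivity check against a local Ollama instance (default port 11434)
async function checkOllama(baseUrl = 'http://localhost:11434'): Promise<void> {
  const res = await fetch(`${baseUrl}/api/tags`); // lists installed models
  if (!res.ok) {
    throw new Error(`Ollama responded with ${res.status} — is it running?`);
  }
  const { models } = await res.json();
  console.log('Ollama is up. Installed models:', models.map((m: { name: string }) => m.name));
}

checkOllama().catch(console.error);
```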
## How It Works

This provider uses the `ollama/browser` client to make API calls directly from the browser to your local Ollama instance (a rough sketch of that call follows the list below). This means:

- When running locally: Browser → Local Ollama (`localhost:11434`)
- When deployed: Browser → the user's local Ollama (`localhost:11434`)
- The server never handles Ollama calls
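
For reference, the sketch below shows roughly what such a browser-side call looks like when made with the `ollama/browser` client directly. It is illustrative only; the model name and call shape are examples, not the provider's actual internals.

```ts
// Illustrative only: a direct browser-side streaming chat call via ollama/browser
import ollama from 'ollama/browser';

async function streamChat(prompt: string): Promise<string> {
  let text = '';

  // stream: true yields an async iterable of partial responses
  const response = await ollama.chat({
    model: 'deepscaler:1.5b',
    messages: [{ role: 'user', content: prompt }],
    stream: true,
  });

  for await (const part of response) {
    text += part.message.content; // append each streamed chunk
  }
  return text;
}
```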
## License
MIT