Gathering detailed insights and metrics for ollama-api-facade-js
OllamaApiFacadeJS is an open-source library for running an ExpressJS backend as an Ollama API using LangChainJS. It supports local language model services such as LM Studio and allows seamless message conversion and streaming between LangChainJS and Ollama clients like Open WebUI. Contributions welcome!
npm install ollama-api-facade-js
Languages: TypeScript (98.09%), JavaScript (1.91%)
MIT License · 7 Stars · 26 Commits · 1 Watcher · 1 Branch · 1 Contributor · Updated on Feb 28, 2025
| Field | Value |
|---|---|
| Latest Version | 1.0.6 |
| Package Id | ollama-api-facade-js@1.0.6 |
| Unpacked Size | 57.55 kB |
| Size | 16.90 kB |
| File Count | 43 |
| NPM Version | 10.8.2 |
| Node Version | 18.20.7 |
| Published on | Feb 28, 2025 |
Cumulative downloads

| Period | Downloads | Change vs. previous period |
|---|---|---|
| Last Day | 17 | +466.7% |
| Last Week | 42 | -58% |
| Last Month | 547 | 0% |
| Last Year | 547 | 0% |
| Total | 547 | |
OllamaApiFacadeJS is an open-source Node.js library designed to seamlessly integrate an Express.js backend with the Ollama API using LangChainJS. This allows clients that expect an Ollama-compatible backend, such as Open WebUI, to interact with your Express.js API effortlessly.
It serves as a Node.js counterpart to the .NET-based OllamaApiFacade, providing a similar level of integration but optimized for the JavaScript/TypeScript ecosystem.
✅ Ollama-Compatible API for Express.js - Easily expose your Express backend as an Ollama API.
✅ Supports Local AI Models (e.g., LM Studio) - Works with local inference engines like LM Studio.
✅ Seamless Integration with LangChainJS - Enables natural language processing with LangChainJS.
✅ Automatic Function Calling Support - New: Automatically executes tools (function calling) with ToolCallService.
✅ Streaming Support - Stream AI-generated responses directly to clients.
✅ Custom Model Names - Configure custom model names for full flexibility.
✅ Optimized for TypeScript - Includes full TypeScript support (.d.ts files) for better IntelliSense.
You can install OllamaApiFacadeJS via NPM or PNPM:
```bash
pnpm add ollama-api-facade-js
```

or

```bash
npm install ollama-api-facade-js
```
Here's how to integrate OllamaApiFacadeJS into an Express.js application:
```typescript
import express from 'express';
import { ChatOpenAI } from '@langchain/openai';
import { createOllamaApiFacade, createLMStudioConfig } from 'ollama-api-facade-js';

const chatOpenAI = new ChatOpenAI(createLMStudioConfig());

const app = express();
const ollamaApi = createOllamaApiFacade(app, chatOpenAI);

ollamaApi.postApiChat(async (chatRequest, chatModel, chatResponse) => {
  chatRequest.addSystemMessage(
    `You are a fun, slightly drunk coding buddy.
    You joke around but still give correct and helpful programming advice.
    Your tone is informal, chaotic, and enthusiastic, like a tipsy friend debugging at 2 AM. Cheers!`
  );

  const result = await chatModel.invoke(chatRequest.messages);
  chatResponse.asStream(result);
});

ollamaApi.listen();
```
What does this setup do? It registers the Ollama-compatible chat endpoint on your Express app and serves it at http://localhost:11434 (the default Ollama port). The handler above adds a system message, invokes the model through LangChainJS, and streams the result back to the client in the Ollama response format.
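To see the facade in action without a UI, you could send a request the way an Ollama client would. The snippet below is a hypothetical smoke test, not part of the library: it assumes the facade accepts the standard Ollama /api/chat request shape and the default "nodeapi" model name mentioned later in this README; adjust it to your setup.

```typescript
// Hypothetical smoke test against the running facade (assumption: standard Ollama request shape).
// Requires the server from the example above to be listening on the default Ollama port.
const response = await fetch('http://localhost:11434/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'nodeapi', // default model name used by the facade
    messages: [{ role: 'user', content: 'Why is my promise always pending?' }],
    stream: false, // ask for a single JSON response instead of a stream
  }),
});

console.log(await response.json());
```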
Automatic Function Calling with ToolCallService

Normally, when using LangChainJS function calling, you need to:
- Manually bind your tools to the model (bindTools([...])).
- Check the model's response for tool_calls, execute the matching tools yourself, and feed the results back (see the sketch after this list).
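For comparison, a rough sketch of that manual flow in plain LangChainJS might look like the following. This is illustrative only; the function name and message handling are assumptions, not part of OllamaApiFacadeJS.

```typescript
import type { BaseChatModel } from '@langchain/core/language_models/chat_models';
import { AIMessage, BaseMessage, ToolMessage } from '@langchain/core/messages';
import type { StructuredToolInterface } from '@langchain/core/tools';

// Sketch of the manual flow that ToolCallService is meant to replace:
// bind the tools, inspect tool_calls, execute the matching tools, then ask the model again.
async function invokeWithManualToolHandling(
  chatModel: BaseChatModel,
  tools: StructuredToolInterface[],
  messages: BaseMessage[]
) {
  // Assumes the chat model supports tool binding (e.g. ChatOpenAI).
  const modelWithTools = chatModel.bindTools!(tools);
  const aiMessage = (await modelWithTools.invoke(messages)) as AIMessage;

  // No tool requested: the first answer is already the final one.
  if (!aiMessage.tool_calls?.length) return aiMessage;

  const followUp: BaseMessage[] = [...messages, aiMessage];
  for (const toolCall of aiMessage.tool_calls) {
    const matchingTool = tools.find((t) => t.name === toolCall.name);
    if (!matchingTool) continue;
    const result = await matchingTool.invoke(toolCall.args);
    followUp.push(new ToolMessage({ content: String(result), tool_call_id: toolCall.id ?? '' }));
  }

  // Second round trip so the model can turn the tool results into a final answer.
  return modelWithTools.invoke(followUp);
}
```

With ToolCallService none of this plumbing is needed, as the next example shows.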
OllamaApiFacadeJS simplifies this with ToolCallService, which handles all of that for you!

Example with ToolCallService:
```typescript
import express from 'express';
import { ChatOpenAI } from '@langchain/openai';
import { createOllamaApiFacade, createLMStudioConfig } from 'ollama-api-facade-js';
import { dateTimeTool } from './tools/dateTimeTool';

const chatOpenAI = new ChatOpenAI(createLMStudioConfig());
const tools = [dateTimeTool];

const app = express();
const ollamaApi = createOllamaApiFacade(app, chatOpenAI);

ollamaApi.postApiChat(async (chatRequest, chatModel, chatResponse, toolCallService) => {
  chatRequest.addSystemMessage(`You are a helpful Devbot.
  You have a dateTimeTool registered, execute it when asked about the time / date / day.
  `);

  const response = await toolCallService.with(tools).invoke(chatRequest.messages);

  chatResponse.asStream(response);
});

ollamaApi.listen();
```
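The dateTimeTool imported above lives in your own project and is not shipped with the library. A minimal sketch of what such a tool could look like, using LangChainJS's tool helper (the implementation details here are assumptions, not taken from this README):

```typescript
// tools/dateTimeTool.ts - hypothetical implementation of the tool imported above.
import { tool } from '@langchain/core/tools';
import { z } from 'zod';

export const dateTimeTool = tool(
  async () => new Date().toString(), // return the current date and time
  {
    name: 'dateTimeTool',
    description: 'Returns the current date, time, and day of the week.',
    schema: z.object({}), // the tool takes no input parameters
  }
);
```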
What happens under the hood? ToolCallService automatically binds the registered tools to the model, detects tool_calls in the model's response, and executes the requested tools for you. You no longer have to handle tool_calls manually!

After setting up your Express.js backend, you can integrate it with Open WebUI by running:
```bash
docker run -d -p 8181:8080 --add-host=host.docker.internal:host-gateway --name open-webui ghcr.io/open-webui/open-webui:main
```
Open WebUI will now be accessible at http://localhost:8181.
For advanced configurations (e.g., GPU support), refer to the official Open WebUI GitHub repo.
By default, the API uses the model name "nodeapi". To specify a custom model name, pass it as an argument:
```typescript
const ollamaApi = createOllamaApiFacade(app, chatOpenAI, 'my-custom-model');
```
OllamaApiFacadeJS supports streaming responses to improve response times and user experience:
```typescript
ollamaApi.postApiChat(async (chatRequest, chatModel, chatResponse) => {
  const result = await chatModel.stream(chatRequest.messages);
  chatResponse.asStream(result); // Handles both streams & single responses
});
```
Automatically detects whether streaming is supported and adapts accordingly.
To analyze the HTTP communication between LangChainJS and the language model APIs, you can route traffic through a proxy tool such as Burp Suite Community Edition or OWASP ZAP via https-proxy-agent. This lets you inspect the exchanged data in detail.

Install https-proxy-agent in your project:
```bash
npm install https-proxy-agent
```
Configure HttpsProxyAgent in your code:
```typescript
import { ChatOpenAI } from '@langchain/openai';
import { HttpsProxyAgent } from 'https-proxy-agent';
import { createLMStudioConfig } from 'ollama-api-facade-js';

const chatOpenAI = new ChatOpenAI(
  createLMStudioConfig({
    httpAgent: new HttpsProxyAgent('http://localhost:8080'),
  })
);
```
Or for cloud API usage:
```typescript
import { ChatOpenAI } from '@langchain/openai';
import { HttpsProxyAgent } from 'https-proxy-agent';

// Disable certificate verification (development/debugging only!)
process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0';

const openAiApiKey = process.env.OPENAI_API_KEY; // your OpenAI API key

const chatOpenAI = new ChatOpenAI({
  model: 'gpt-4o-mini',
  configuration: {
    apiKey: openAiApiKey,
    httpAgent: new HttpsProxyAgent('http://localhost:8080'),
  },
});
```
Start Burp Suite Community Edition or OWASP ZAP and ensure the proxy is listening on http://localhost:8080.
If another service is already using port 8080, update the proxy port accordingly and adjust the HttpsProxyAgent URL in the code, e.g.:

```typescript
httpAgent: new HttpsProxyAgent('http://127.0.0.1:8888'),
```
This method is for development and debugging purposes only. It should not be used in a production environment as it bypasses SSL validation.
With this setup, you can monitor all HTTP requests and responses exchanged between LangChainJS and the API endpoints, making it easier to debug and analyze the communication.
We welcome contributions from the community! To contribute, fork the repository, create a feature branch (e.g. feature/new-feature), commit your changes, and open a pull request.

This project is licensed under the MIT License.
Created by Gregor Biswanger - Microsoft MVP for Azure AI & Web App Development.
If you have questions, feel free to open an issue on GitHub.
No security vulnerabilities found.