Detailed insights and metrics for @ai-d/aid
A.I. :D | Aid provides a structured and type-safe way to interact with LLMs.
npm install @ai-d/aid
Version history:
- @ai-d/aid@0.1.5 (Dec 01, 2023)
- @ai-d/aid@0.1.4 (Dec 01, 2023)
- @ai-d/aid@0.1.3 (Nov 27, 2023)
- @ai-d/aid@0.1.2 (Nov 27, 2023)
- @ai-d/aid@0.1.1 (Nov 27, 2023)
- @ai-d/aid@0.1.0 (Nov 24, 2023)
Languages: TypeScript (99.3%), Shell (0.7%)
Repository: MIT License, 29 commits, 2 watchers, 11 branches, 1 contributor. Updated on Apr 30, 2025.
Latest Version: 0.1.5 (published Dec 01, 2023)
Package Id: @ai-d/aid@0.1.5
Unpacked Size: 31.25 kB
Size: 6.89 kB
File Count: 7
NPM Version: 9.8.1
Node Version: 18.18.2
Downloads:
- Total (cumulative): 4,084
- Last Day: 6 (200% compared to the previous day)
- Last Week: 39 (62.5% compared to the previous week)
- Last Month: 165 (-29.2% compared to the previous month)
- Last Year: 2,077 (3.5% compared to the previous year)
A.I. :D
Aid is a TypeScript library for developers working with Large Language Models (LLMs) such as OpenAI's GPT-4 (including Vision) and GPT-3.5. It provides a structured, type-safe way to interact with LLMs, focusing on consistent, typed outputs that improve the reliability and usability of LLM responses. Advanced users can leverage few-shot examples for more sophisticated use cases.
Example

```sh
pnpm install @ai-d/aid
```
First, import the necessary modules and set up your OpenAI instance:
```ts
import { OpenAI } from "openai";
import { Aid } from "@ai-d/aid";

const openai = new OpenAI({ apiKey: "your-api-key" });
const aid = Aid.from(openai, { model: "gpt-4-1106-preview" });
```
For vision models, create the instance with Aid.vision and an OpenAIQuery engine:

```ts
import { OpenAI } from "openai";
import { Aid, OpenAIQuery } from "@ai-d/aid";

const openai = new OpenAI({ apiKey: "your-api-key" });
const aid = Aid.vision(
  OpenAIQuery(openai, { model: "gpt-4-vision-preview", max_tokens: 2048 }),
);
```
Aid is not limited to OpenAI. For example, you can use Cohere's Command model:
```ts
import { Aid, CohereQuery } from "@ai-d/aid";

const aid = Aid.chat(
  CohereQuery(COHERE_TOKEN, { model: "command" }),
);
```
You can also implement your own QueryEngine function.
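This README does not spell out the QueryEngine contract, so the following is only a minimal sketch. It assumes a QueryEngine is roughly an async function that takes the composed prompt text and returns the model's raw reply; the localhost endpoint and the (prompt) => Promise<string> shape are assumptions for illustration, so check the QueryEngine type exported by @ai-d/aid before relying on this.

```ts
import { Aid } from "@ai-d/aid";

// Hypothetical engine: the (prompt) => Promise<string> shape and the
// localhost URL below are assumptions for illustration only.
async function myLocalQuery(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:8080/v1/complete", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  const { text } = (await res.json()) as { text: string };
  return text; // Aid validates this raw reply against the task schema
}

// Assumed to be accepted anywhere a built-in engine (OpenAIQuery,
// CohereQuery, ...) would be passed.
const aid = Aid.chat(myLocalQuery);
```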
Define a custom task with expected output types:
```ts
import { z } from "zod";

const analyze = aid.task(
  "Summarize and extract keywords",
  z.object({
    summary: z.string().max(300),
    keywords: z.array(z.string().max(30)).max(10),
  }),
);
```
For a vision task, define the expected fields the same way:

```ts
const analyze = aid.task(
  "Analyze the person in the image",
  z.object({
    gender: z.enum(["boy", "girl", "other"]),
    age: z.enum(["child", "teen", "adult", "elderly"]),
    emotion: z.enum(["happy", "sad", "angry", "surprised", "neutral"]),
    clothing: z.string().max(100),
    background: z.string().max(100),
  }),
);
```
Execute the task and handle the output:
```ts
const { result } = await analyze("Your input here, e.g. a news article");
console.log(result); // { summary: "...", keywords: ["...", "..."] }
```
For the vision task, pass images instead of plain text:

```ts
import fs from "node:fs";

const datauri = `data:image/png;base64,${fs.readFileSync("path/to/image.png", "base64")}`;

const { result } = await analyze({ images: [{ url: datauri }] });
console.log(result); // { "gender": "boy", "age": "teen", ... }
```
For more complex scenarios, you can use few-shot examples:
```ts
const run_advanced_task = aid.task(
  "Some Advanced Task",
  z.object({
    // Define your output schema here
  }),
  {
    examples: [
      // Provide few-shot examples here
    ],
  },
);
```
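As a purely illustrative sketch (reusing the aid instance and z import from the setup above), here is the same skeleton with a concrete schema filled in. The task name and fields are hypothetical, and the entries of examples are left as a placeholder because their exact shape is not documented in this README.

```ts
// Hypothetical task; only the schema is filled in.
const extract_contacts = aid.task(
  "Extract contact details from a free-form email signature",
  z.object({
    name: z.string().max(100),
    email: z.string().max(100),
    phone: z.string().max(30).optional(),
  }),
  {
    examples: [
      // Few-shot examples pairing sample inputs with outputs matching the
      // schema above; see the library's type definitions for the entry format.
    ],
  },
);
```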
How it works:

Case Parameter -> (join) Task Definition -> (join) Format Constraint -> (perform) Query

- Query and Format Constraint are defined and implemented by the QueryEngine and FormatEngine.
- Task Definition is defined by the user with the task method: task goal, expected schema, examples, etc.
- Case Parameter is supplied by the user on each individual call: text, image, etc.
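To make the pipeline concrete, here is a short sketch that maps each stage onto the calls shown earlier (the summarize task and its schema are illustrative, not part of the library):

```ts
import { OpenAI } from "openai";
import { Aid } from "@ai-d/aid";
import { z } from "zod";

const openai = new OpenAI({ apiKey: "your-api-key" });

// Query + Format Constraint: handled by the QueryEngine / FormatEngine
// selected when the Aid instance is created.
const aid = Aid.from(openai, { model: "gpt-4-1106-preview" });

// Task Definition: the goal and expected schema (plus optional examples),
// fixed once with the task method.
const summarize = aid.task(
  "Summarize the text in one sentence",
  z.object({ summary: z.string().max(200) }),
);

// Case Parameter: supplied on each individual call and joined with the
// task definition before the query is performed.
const { result } = await summarize("Some long article text ...");
console.log(result.summary);
```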
Contributions are welcome! Please submit pull requests with any bug fixes or feature enhancements.
No vulnerabilities found.