Gathering detailed insights and metrics for @samchon/openapi
OpenAPI definitions, converters and LLM function calling schema composer.
npm install @samchon/openapi
Package health scores:

| Metric | Score |
|--------|-------|
| Supply Chain | 99.7 |
| Quality | 100 |
| Maintenance | 94 |
| Vulnerability | 100 |
| License | 100 |
TypeScript (99.67%)
JavaScript (0.21%)
HTML (0.12%)
MIT License
99 Stars
373 Commits
6 Forks
3 Watchers
3 Branches
3 Contributors
Updated on Apr 29, 2025
| Attribute | Value |
|-----------|-------|
| Latest Version | 4.2.0 |
| Package Id | @samchon/openapi@4.2.0 |
| Unpacked Size | 2.29 MB |
| Size | 413.06 kB |
| File Count | 651 |
| NPM Version | 10.8.2 |
| Node Version | 20.19.0 |
| Published on | Apr 21, 2025 |
Cumulative downloads:

| Period | Downloads | Change vs. previous period |
|--------|-----------|----------------------------|
| Last Day | 38,508 | +11.9% |
| Last Week | 214,459 | +1.2% |
| Last Month | 808,762 | -25% |
| Last Year | 4,342,588 | +49,717.5% |

Total Downloads: 4,351,305
@samchon/openapi
```mermaid
flowchart
  subgraph "OpenAPI Specification"
    v20("Swagger v2.0") --upgrades--> emended[["OpenAPI v3.1 (emended)"]]
    v30("OpenAPI v3.0") --upgrades--> emended
    v31("OpenAPI v3.1") --emends--> emended
  end
  subgraph "OpenAPI Generator"
    emended --normalizes--> migration[["Migration Schema"]]
    migration --"Artificial Intelligence"--> lfc{{"LLM Function Calling"}}
    lfc --"OpenAI"--> chatgpt("ChatGPT")
    lfc --"Google"--> gemini("Gemini")
    lfc --"Anthropic"--> claude("Claude")
    lfc --"High-Flyer"--> deepseek("DeepSeek")
    lfc --"Meta"--> llama("Llama")
    chatgpt --"3.1"--> custom(["Custom JSON Schema"])
    gemini --"3.0"--> custom(["Custom JSON Schema"])
    claude --"3.1"--> standard(["Standard JSON Schema"])
    deepseek --"3.1"--> standard
    llama --"3.1"--> standard
  end
```
OpenAPI definitions, converters and LLM function calling application composer.
`@samchon/openapi` is a collection of OpenAPI types for every version, with converters between them. Among these types is an "emended" OpenAPI v3.1 specification, which removes ambiguous and duplicated expressions for clarity. Every conversion is based on this emended OpenAPI v3.1 specification.

`@samchon/openapi` also provides an LLM (Large Language Model) function calling application composer built from the OpenAPI document, with many strategies. With the `HttpLlm` module, you can perform LLM function calling simply by delivering the OpenAPI (Swagger) document.
- `HttpLlm.application()`
- `IHttpLlmApplication<Model>`
- `IHttpLlmFunction<Model>`
- Supported schema models:
  - `IChatGptSchema`: OpenAI ChatGPT
  - `IClaudeSchema`: Anthropic Claude
  - `IDeepSeekSchema`: High-Flyer DeepSeek
  - `IGeminiSchema`: Google Gemini
  - `ILlamaSchema`: Meta Llama
  - `ILlmSchemaV3`: middle layer based on OpenAPI v3.0 specification
  - `ILlmSchemaV3_1`: middle layer based on OpenAPI v3.1 specification

Additionally, `@samchon/openapi` supports MCP (Model Context Protocol) function calling. For reasons of model specification, validation feedback, and the selector agent, function calling to an MCP server is much better than directly using the `mcp_servers` property of an LLM API.
https://github.com/user-attachments/assets/e1faf30b-c703-4451-b68b-2e7a8170bce5
Demonstration video composing an A.I. chatbot with `@samchon/openapi` and `@agentica`:

- Shopping A.I. Chatbot Application: https://nestia.io/chat/shopping
- Shopping Backend Repository: https://github.com/samchon/shopping-backend
- Shopping Swagger Document (`@nestia/editor`): https://nestia.io/editor/?url=...
```bash
npm install @samchon/openapi
```

Just install it with the `npm i @samchon/openapi` command.
Here is example code utilizing `@samchon/openapi` for LLM function calling.
```typescript
import {
  HttpLlm,
  IHttpLlmApplication,
  IHttpLlmFunction,
  OpenApi,
  OpenApiV3,
  OpenApiV3_1,
  SwaggerV2,
} from "@samchon/openapi";
import fs from "fs";
import typia from "typia";

const main = async (): Promise<void> => {
  // read swagger document and validate it
  const swagger:
    | SwaggerV2.IDocument
    | OpenApiV3.IDocument
    | OpenApiV3_1.IDocument = JSON.parse(
    await fs.promises.readFile("swagger.json", "utf8"),
  );
  typia.assert(swagger); // recommended

  // convert to emended OpenAPI document,
  // and compose LLM function calling application
  const document: OpenApi.IDocument = OpenApi.convert(swagger);
  const application: IHttpLlmApplication<"chatgpt"> = HttpLlm.application({
    model: "chatgpt",
    document,
  });

  // Let's imagine that LLM has selected a function to call
  const func: IHttpLlmFunction<"chatgpt"> | undefined =
    application.functions.find(
      // (f) => f.name === "llm_selected_function_name"
      (f) => f.path === "/bbs/articles" && f.method === "post",
    );
  if (func === undefined) throw new Error("No matched function exists.");

  // actual execution is by yourself
  const article = await HttpLlm.execute({
    connection: {
      host: "http://localhost:3000",
    },
    application,
    function: func,
    input: {
      // arguments composed by LLM
      body: {
        title: "Hello, world!",
        body: "Let's imagine that this argument is composed by LLM.",
        thumbnail: null,
      },
    },
  });
  console.log("article", article);
};
main().catch(console.error);
```
```mermaid
flowchart
  v20(Swagger v2.0) --upgrades--> emended[["<b><u>OpenAPI v3.1 (emended)</u></b>"]]
  v30(OpenAPI v3.0) --upgrades--> emended
  v31(OpenAPI v3.1) --emends--> emended
  emended --downgrades--> v20d(Swagger v2.0)
  emended --downgrades--> v30d(OpenAPI v3.0)
```
`@samchon/openapi` supports every version of the OpenAPI specification with detailed TypeScript types.

Also, `@samchon/openapi` provides an "emended OpenAPI v3.1 definition," which removes ambiguous and duplicated expressions for clarity, emending the original OpenAPI v3.1 specification as shown above. You can compose the emended OpenAPI v3.1 document by calling the `OpenApi.convert()` function. The main emendations are as follows:
- Operation
  - Merge `OpenApiV3_1.IPathItem.parameters` to `OpenApi.IOperation.parameters`
  - Resolve references of `OpenApiV3_1.IOperation` members
  - Remove `OpenApiV3_1.IComponents.examples`
- JSON Schema
  - Decompose mixed type: `OpenApiV3_1.IJsonSchema.IMixed`
  - Resolve nullable property: `OpenApiV3_1.IJsonSchema.__ISignificant.nullable` (see the sketch below)
  - Array type utilizes only a single `OpenApi.IJsonSchema.IArray.items`
  - Tuple type utilizes only `OpenApi.IJsonSchema.ITuple.prefixItems`
  - Merge `OpenApiV3_1.IJsonSchema.IAnyOf` to `OpenApi.IJsonSchema.IOneOf`
  - Merge `OpenApiV3_1.IJsonSchema.IRecursiveReference` to `OpenApi.IJsonSchema.IReference`
  - Merge `OpenApiV3_1.IJsonSchema.IAllOf` to `OpenApi.IJsonSchema.IObject`
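For instance, here is a minimal sketch (under the type shapes above, not an exhaustive rule set) of the nullable emendation: the `nullable` keyword is decomposed into a `oneOf` union containing the explicit `"null"` type.

```typescript
import { OpenApi, OpenApiV3 } from "@samchon/openapi";

// OpenAPI v3.0 expresses nullability through the `nullable` keyword
const legacy: OpenApiV3.IJsonSchema = {
  type: "string",
  nullable: true,
};

// the emended OpenAPI v3.1 document represents the same schema as a
// oneOf union that contains the explicit "null" type
const emended: OpenApi.IJsonSchema = {
  oneOf: [{ type: "string" }, { type: "null" }],
};
```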
Conversion to another version's OpenAPI document is also based on the emended OpenAPI v3.1 specification, as shown in the diagram above. You can do it through the `OpenApi.downgrade()` function. Therefore, if you want to convert a Swagger v2.0 document to an OpenAPI v3.0 document, you have to call two functions: `OpenApi.convert()` and then `OpenApi.downgrade()`.
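For example, here is a minimal sketch of that two-step conversion, assuming a local `swagger.json` file in Swagger v2.0 format:

```typescript
import { OpenApi, OpenApiV3, SwaggerV2 } from "@samchon/openapi";
import fs from "fs";

const main = async (): Promise<void> => {
  // read a Swagger v2.0 document
  const v20: SwaggerV2.IDocument = JSON.parse(
    await fs.promises.readFile("swagger.json", "utf8"),
  );

  // upgrade to the emended OpenAPI v3.1 document first,
  // then downgrade to the target OpenAPI v3.0 version
  const emended: OpenApi.IDocument = OpenApi.convert(v20);
  const v30: OpenApiV3.IDocument = OpenApi.downgrade(emended, "3.0");
  console.info(v30);
};
main().catch(console.error);
```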
At last, if you utilize the `typia` library with `@samchon/openapi` types, you can validate whether your OpenAPI document follows the standard specification or not. Just visit one of the playground links below and paste your OpenAPI document's URL address. This validation strategy is superior to any other OpenAPI validator library.
```typescript
import { OpenApi, OpenApiV3, OpenApiV3_1, SwaggerV2 } from "@samchon/openapi";
import typia from "typia";

const main = async (): Promise<void> => {
  // GET YOUR OPENAPI DOCUMENT
  const response: Response = await fetch(
    "https://raw.githubusercontent.com/samchon/openapi/master/examples/v3.0/openai.json",
  );
  const document: any = await response.json();

  // TYPE VALIDATION
  const result = typia.validate<
    | OpenApiV3_1.IDocument
    | OpenApiV3.IDocument
    | SwaggerV2.IDocument
  >(document);
  if (result.success === false) {
    console.error(result.errors);
    return;
  }

  // CONVERT TO EMENDED
  const emended: OpenApi.IDocument = OpenApi.convert(document);
  console.info(emended);
};
main().catch(console.error);
```
```mermaid
flowchart
  subgraph "OpenAPI Specification"
    v20("Swagger v2.0") --upgrades--> emended[["OpenAPI v3.1 (emended)"]]
    v30("OpenAPI v3.0") --upgrades--> emended
    v31("OpenAPI v3.1") --emends--> emended
  end
  subgraph "OpenAPI Generator"
    emended --normalizes--> migration[["Migration Schema"]]
    migration --"Artificial Intelligence"--> lfc{{"<b><u>LLM Function Calling</u></b>"}}
    lfc --"OpenAI"--> chatgpt("ChatGPT")
    lfc --"Google"--> gemini("Gemini")
    lfc --"Anthropic"--> claude("Claude")
    lfc --"High-Flyer"--> deepseek("DeepSeek")
    lfc --"Meta"--> llama("Llama")
    chatgpt --"3.1"--> custom(["Custom JSON Schema"])
    gemini --"3.0"--> custom(["Custom JSON Schema"])
    claude --"3.1"--> standard(["Standard JSON Schema"])
    deepseek --"3.1"--> standard
    llama --"3.1"--> standard
  end
```
LLM function calling application from an OpenAPI document.

`@samchon/openapi` provides an LLM (Large Language Model) function calling application from the emended OpenAPI v3.1 document. Therefore, if you have any HTTP backend server and have succeeded in building an OpenAPI document, you can easily make an A.I. chatbot application.

In the A.I. chatbot, the LLM selects a proper function to call remotely from its conversations with the user and fills in the function's arguments automatically. Actually executing the function call through the `HttpLlm.execute()` function is the "LLM function call."

Let's enjoy the fantastic LLM function calling feature very easily with `@samchon/openapi`.
Supported schema models:

- `IChatGptSchema`: OpenAI ChatGPT
- `IClaudeSchema`: Anthropic Claude
- `IDeepSeekSchema`: High-Flyer DeepSeek
- `IGeminiSchema`: Google Gemini
- `ILlamaSchema`: Meta Llama
- `ILlmSchemaV3`: middle layer based on OpenAPI v3.0 specification
- `ILlmSchemaV3_1`: middle layer based on OpenAPI v3.1 specification

> [!NOTE]
> You can also compose an `ILlmApplication` from a class type with `typia`.
>
> https://typia.io/docs/llm/application
```typescript
import { ILlmApplication } from "@samchon/openapi";
import typia from "typia";

const app: ILlmApplication<"chatgpt"> =
  typia.llm.application<YourClassType, "chatgpt">();
```
> [!TIP]
> **LLM selects the proper function and fills its arguments.**
>
> Nowadays, most LLMs (Large Language Models), such as OpenAI's, support the "function calling" feature. "LLM function calling" means that the LLM automatically selects a proper function and fills its parameter values from the conversation with the user (typically chat text).
>
> **Actual function call execution is by yourself.**
>
> LLM providers like OpenAI select a proper function to call from the conversations with users and fill its arguments. However, the function calling feature supported by LLM providers does not perform the function call execution itself. The actual execution responsibility is on you.
In `@samchon/openapi`, you can execute the LLM function calling with the `HttpLlm.execute()` (or `HttpLlm.propagate()`) function. Below is example code executing the LLM function call through `HttpLlm.execute()`. As you can see, to execute the LLM function call, you have to deliver this information: the composed application, the target function, the HTTP connection information, and the arguments composed by the LLM.
- Test function: `test/examples/chatgpt-function-call-to-sale-create.ts`
- Example prompt: Microsoft Surface Pro 9
- Arguments composed by the LLM: `examples/arguments/chatgpt.microsoft-surface-pro-9.input.json`
```typescript
import {
  HttpLlm,
  IHttpLlmApplication,
  IHttpLlmFunction,
  OpenApi,
  OpenApiV3,
  OpenApiV3_1,
  SwaggerV2,
} from "@samchon/openapi";
import OpenAI from "openai";
import typia from "typia";

const main = async (): Promise<void> => {
  // Read swagger document and validate it
  // (fetch the raw JSON, not the GitHub HTML page)
  const swagger:
    | SwaggerV2.IDocument
    | OpenApiV3.IDocument
    | OpenApiV3_1.IDocument = await fetch(
    "https://raw.githubusercontent.com/samchon/shopping-backend/master/packages/api/swagger.json",
  ).then((r) => r.json());
  typia.assert(swagger); // recommended

  // convert to emended OpenAPI document,
  // and compose LLM function calling application
  const document: OpenApi.IDocument = OpenApi.convert(swagger);
  const application: IHttpLlmApplication<"chatgpt"> = HttpLlm.application({
    model: "chatgpt",
    document,
  });

  // Let's imagine that LLM has selected a function to call
  const func: IHttpLlmFunction<"chatgpt"> | undefined =
    application.functions.find(
      // (f) => f.name === "llm_selected_function_name"
      (f) => f.path === "/shoppings/sellers/sale" && f.method === "post",
    );
  if (func === undefined) throw new Error("No matched function exists.");

  // Get arguments by ChatGPT function calling
  const client: OpenAI = new OpenAI({
    apiKey: "<YOUR_OPENAI_API_KEY>",
  });
  const completion: OpenAI.ChatCompletion =
    await client.chat.completions.create({
      model: "gpt-4o",
      messages: [
        {
          role: "system",
          content:
            "You are a helpful customer support assistant. Use the supplied tools to assist the user.",
        },
        {
          role: "user",
          content: "<DESCRIPTION ABOUT THE SALE>",
          // https://github.com/samchon/openapi/blob/master/examples/function-calling/prompts/microsoft-surface-pro-9.md
        },
      ],
      tools: [
        {
          type: "function",
          function: {
            name: func.name,
            description: func.description,
            parameters: func.parameters as Record<string, any>,
          },
        },
      ],
    });
  const toolCall: OpenAI.ChatCompletionMessageToolCall =
    completion.choices[0].message.tool_calls![0];

  // Actual execution by yourself
  const article = await HttpLlm.execute({
    connection: {
      host: "http://localhost:37001",
    },
    application,
    function: func,
    input: JSON.parse(toolCall.function.arguments),
  });
  console.log("article", article);
};
main().catch(console.error);
```
```typescript
import { IHttpLlmFunction, IValidation } from "@samchon/openapi";
import { FunctionCall } from "pseudo"; // pseudo import: shape of an LLM tool call

export const correctFunctionCall = (p: {
  call: FunctionCall;
  functions: Array<IHttpLlmFunction<"chatgpt">>;
  retry: (reason: string, errors?: IValidation.IError[]) => Promise<unknown>;
}): Promise<unknown> => {
  // FIND FUNCTION
  const func: IHttpLlmFunction<"chatgpt"> | undefined = p.functions.find(
    (f) => f.name === p.call.name,
  );
  if (func === undefined) {
    // never happened in my experience
    return p.retry("Unable to find the matched function name. Try it again.");
  }

  // VALIDATE
  const result: IValidation<unknown> = func.validate(p.call.arguments);
  if (result.success === false) {
    // 1st trial: 70% (gpt-4o-mini in shopping mall chatbot)
    // 2nd trial with validation feedback: 98%
    // 3rd trial with validation feedback again: never have failed
    return p.retry(
      "Type errors are detected. Correct it through validation errors",
      result.errors,
    );
  }
  return Promise.resolve(result.data);
};
```
Is LLM function calling perfect? No, absolutely not.

LLM (Large Language Model) service vendors like OpenAI make a lot of type-level mistakes when composing the arguments for function calling or structured output. Even when the target schema is super simple, like an `Array<string>` type, the LLM often fills it with just a `string` typed value.

In my experience, OpenAI's `gpt-4o-mini` (8b parameters) makes type-level mistakes in about 70% of cases when filling the arguments of function calling to a shopping mall service. To overcome the imperfection of such LLM function calling, `@samchon/openapi` supports the validation feedback strategy.

The key concept of the validation feedback strategy is to let the LLM function calling construct invalidly typed arguments first, then inform the LLM of the detailed type errors so that it emends the wrongly typed arguments on the next turn, using the `IHttpLlmFunction<Model>.validate()` function.

The embedded validator function in `IHttpLlmFunction<Model>.validate()` is exactly the same as `typia.validate<T>()`, and it is more detailed and accurate than other validators. By using this validation feedback strategy, the 70% success rate of the first function calling trial increased to 98% on the second trial, and it has never failed from the third trial onward.
| Components | `typia` | `TypeBox` | `ajv` | `io-ts` | `zod` | `C.V.` |
|---|---|---|---|---|---|---|
| Easy to use | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Object (simple) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| Object (hierarchical) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| Object (recursive) | ✔ | ❌ | ✔ | ✔ | ✔ | ✔ |
| Object (union, implicit) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Object (union, explicit) | ✔ | ✔ | ✔ | ✔ | ✔ | ❌ |
| Object (additional tags) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| Object (template literal types) | ✔ | ✔ | ✔ | ❌ | ❌ | ❌ |
| Object (dynamic properties) | ✔ | ✔ | ✔ | ❌ | ❌ | ❌ |
| Array (rest tuple) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Array (hierarchical) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| Array (recursive) | ✔ | ✔ | ✔ | ✔ | ✔ | ❌ |
| Array (recursive, union) | ✔ | ✔ | ❌ | ✔ | ✔ | ❌ |
| Array (R+U, implicit) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Array (repeated) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Array (repeated, union) | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Ultimate Union Type | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |

`C.V.` means `class-validator`.
Arguments from both the human and LLM sides.

When composing parameter arguments through LLM (Large Language Model) function calling, there can be cases where some parameters (or nested properties) must be composed not by the LLM but by a human. File uploading features, or sensitive information like secret keys (passwords), are representative examples.

In such cases, you can configure the LLM function calling schemas to exclude those human-side parameters (or nested properties) through the `IHttpLlmApplication.options.separate` property. You then have to merge the human-composed and LLM-composed parameters into one by calling `HttpLlm.mergeParameters()` before executing the LLM function call with `HttpLlm.execute()`.
Here is example code that separates the file uploading feature from the LLM function calling schema and combines the human-composed and LLM-composed parameters into one before executing the LLM function call.
- Test function: `test/examples/claude-function-call-separate-to-sale-create.ts`
- Example prompt: Microsoft Surface Pro 9
- Arguments composed by the LLM: `examples/arguments/claude.microsoft-surface-pro-9.input.json`
```typescript
import Anthropic from "@anthropic-ai/sdk";
import {
  ClaudeTypeChecker,
  HttpLlm,
  IHttpLlmApplication,
  IHttpLlmFunction,
  OpenApi,
  OpenApiV3,
  OpenApiV3_1,
  SwaggerV2,
} from "@samchon/openapi";
import typia from "typia";

const main = async (): Promise<void> => {
  // Read swagger document and validate it
  // (fetch the raw JSON, not the GitHub HTML page)
  const swagger:
    | SwaggerV2.IDocument
    | OpenApiV3.IDocument
    | OpenApiV3_1.IDocument = await fetch(
    "https://raw.githubusercontent.com/samchon/shopping-backend/master/packages/api/swagger.json",
  ).then((r) => r.json());
  typia.assert(swagger); // recommended

  // convert to emended OpenAPI document,
  // and compose LLM function calling application
  const document: OpenApi.IDocument = OpenApi.convert(swagger);
  const application: IHttpLlmApplication<"claude"> = HttpLlm.application({
    model: "claude",
    document,
    options: {
      reference: true,
      separate: (schema) =>
        ClaudeTypeChecker.isString(schema) &&
        !!schema.contentMediaType?.startsWith("image"),
    },
  });

  // Let's imagine that LLM has selected a function to call
  const func: IHttpLlmFunction<"claude"> | undefined =
    application.functions.find(
      // (f) => f.name === "llm_selected_function_name"
      (f) => f.path === "/shoppings/sellers/sale" && f.method === "post",
    );
  if (func === undefined) throw new Error("No matched function exists.");

  // Get arguments by Claude function calling
  const client: Anthropic = new Anthropic({
    apiKey: "<YOUR_ANTHROPIC_API_KEY>",
  });
  const completion: Anthropic.Message = await client.messages.create({
    model: "claude-3-5-sonnet-latest",
    max_tokens: 8_192,
    system:
      "You are a helpful customer support assistant. Use the supplied tools to assist the user.",
    messages: [
      {
        role: "user",
        content: "<DESCRIPTION ABOUT THE SALE>",
        // https://github.com/samchon/openapi/blob/master/examples/function-calling/prompts/microsoft-surface-pro-9.md
      },
    ],
    tools: [
      {
        name: func.name,
        description: func.description,
        input_schema: func.separated!.llm as any,
      },
    ],
  });
  const toolCall: Anthropic.ToolUseBlock = completion.content.filter(
    (c) => c.type === "tool_use",
  )[0]!;

  // Actual execution by yourself
  const article = await HttpLlm.execute({
    connection: {
      host: "http://localhost:37001",
    },
    application,
    function: func,
    input: HttpLlm.mergeParameters({
      function: func,
      llm: toolCall.input as any,
      human: {
        // Human composed parameter values
        content: {
          files: [],
          thumbnails: [
            {
              name: "thumbnail",
              extension: "jpeg",
              url: "https://serpapi.com/searches/673d3a37e45f3316ecd8ab3e/images/1be25e6e2b1fb7509f1af89c326cb41749301b94375eb5680b9bddcdf88fabcb.jpeg",
            },
            // ...
          ],
        },
      },
    }),
  });
  console.log("article", article);
};
main().catch(console.error);
```
1flowchart 2 subgraph "JSON Schema Specification" 3 schemav4("JSON Schema v4") --upgrades--> emended[["OpenAPI v3.1 (emended)"]] 4 schemav7("JSON Schema v7") --upgrades--> emended 5 schema2020("JSON Schema 2020-12") --emends--> emended 6 end 7 subgraph "Model Context Protocol" 8 emended --"Artificial Intelligence"--> lfc{{"LLM Function Calling"}} 9 lfc --"OpenAI"--> chatgpt("ChatGPT") 10 lfc --"Google"--> gemini("Gemini") 11 lfc --"Anthropic"--> claude("Claude") 12 lfc --"High-Flyer"--> deepseek("DeepSeek") 13 lfc --"Meta"--> llama("Llama") 14 chatgpt --"3.1"--> custom(["Custom JSON Schema"]) 15 gemini --"3.0"--> custom(["Custom JSON Schema"]) 16 claude --"3.1"--> standard(["Standard JSON Schema"]) 17 deepseek --"3.1"--> standard 18 llama --"3.1"--> standard 19 end
LLM function calling schema from an MCP document.

As MCP (Model Context Protocol) contains a function caller itself, it is possible to execute an MCP server's functions without any extra work, just by using the `mcp_servers` property of an LLM API. However, due to JSON schema model specifications, validation feedback, and the selector agent's filtering for context reduction, `@samchon/openapi` recommends using function calling instead of `mcp_servers`.

For example, if you bring a GitHub MCP server into Claude Desktop and ask it to do something, you will often see the AI agent crash immediately. This is because there are 30 functions in the GitHub MCP server, and if you feed them all in through `mcp_servers`, the context becomes huge and hallucination occurs.
https://github.com/user-attachments/assets/72390cb4-d9b1-4d31-a6dd-d866da5a433b
A GitHub MCP server connected through `mcp_servers` often breaks down the AI agent. However, if you call the GitHub MCP server's functions by function calling with `@agentica`, it works properly without any problem.

- Function calling to GitHub MCP: https://www.youtube.com/watch?v=rLlHkc24cJs

To make function calling schemas, call the `McpLlm.application()` function. An `IMcpLlmApplication` typed application instance will be returned, containing the `IMcpLlmFunction.validate()` function used for the validation feedback strategy.
Don't worry about the JSON schema specification. As MCP (Model Context Protocol) does not restrict the JSON schema specification, the `McpLlm.application()` function has been designed to support every JSON schema specification.
```typescript
import {
  IMcpLlmApplication,
  IMcpLlmFunction,
  IValidation,
  McpLlm,
} from "@samchon/openapi";

const application: IMcpLlmApplication<"chatgpt"> = McpLlm.application({
  model: "chatgpt",
  tools: [...], // tools listed from the MCP server
});
const func: IMcpLlmFunction<"chatgpt"> = application.functions.find(
  (f) => f.name === "create",
)!;
const result: IValidation<unknown> = func.validate({
  title: "Hello World",
  body: "Nice to meet you AI developers",
  thumbnail: "https://wrtnlabs.io/agentica/thumbnail.jpg",
});
console.log(result);
```
https://github.com/wrtnlabs/agentica
`agentica` is the simplest Agentic AI library, specialized in LLM function calling with `@samchon/openapi`.

With it, you don't need to compose a complex agent graph or workflow. Instead, just deliver Swagger/OpenAPI/MCP documents or TypeScript class types to `agentica`. Then `agentica` will do everything with function calling.

Look at the demonstration below and feel how easy and powerful `agentica` is when combined with `@samchon/openapi`.
```typescript
import { Agentica } from "@agentica/core";
import typia from "typia";

const agent = new Agentica({
  controllers: [
    await fetch(
      "https://shopping-be.wrtn.ai/editor/swagger.json",
    ).then((r) => r.json()),
    typia.llm.application<ShoppingCounselor>(),
    typia.llm.application<ShoppingPolicy>(),
    typia.llm.application<ShoppingSearchRag>(),
  ],
});
await agent.conversate("I wanna buy MacBook Pro");
```
No security vulnerabilities found.