Installation

```shell
npm install @huggingface/jinja
```
Developer Guide

TypeScript: Yes
Module System: ESM
Min. Node Version: >=18
Node Version: 20.18.0
NPM Version: 10.8.2
Score

Overall: 99.6
Supply Chain: 93.5
Quality: 84.9
Maintenance: 100
Vulnerability: 100
Languages
TypeScript (71.24%)
Svelte (16.39%)
JavaScript (11.59%)
CSS (0.38%)
Python (0.25%)
Shell (0.12%)
HTML (0.03%)
Download Statistics

Total Downloads: 3,315,222
Last Day: 12,256
Last Week: 78,664
Last Month: 459,349
Last Year: 3,305,259
GitHub Statistics

Stars: 1,456
Commits: 1,112
Forks: 254
Watching: 48
Branches: 54
Contributors: 272
Package Meta Information

Latest Version: 0.3.2
Package Id: @huggingface/jinja@0.3.2
Unpacked Size: 224.95 kB
Size: 47.36 kB
File Count: 24
NPM Version: 10.8.2
Node Version: 20.18.0
Published On: 30 Oct 2024
Download Trends

Total Downloads (cumulative): 3,315,222
Last Day: 12,256 (6% vs. previous day)
Last Week: 78,664 (-24.9% vs. previous week)
Last Month: 459,349 (14.3% vs. previous month)
Last Year: 3,305,259 (33,075.3% vs. previous year)
Dev Dependencies: 3
Jinja
A minimalistic JavaScript implementation of the Jinja templating engine, specifically designed for parsing and rendering ML chat templates.
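To give a rough sense of what "rendering" a template means, here is a toy sketch of Jinja-style `{{ variable }}` substitution. This is purely illustrative and not the library's implementation, which parses full Jinja syntax (loops, conditionals, filters) rather than doing a regex replacement:

```javascript
// Toy sketch of Jinja-style {{ variable }} substitution.
// Illustrative only: @huggingface/jinja implements a real parser and
// interpreter for Jinja syntax, not a regex replacement.
function renderToy(template, context) {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (match, name) =>
    name in context ? String(context[name]) : ""
  );
}

// renderToy("Hello, {{ name }}!", { name: "world" }) gives "Hello, world!"
```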
Usage
Load template from a model on the Hugging Face Hub
First, install the jinja and hub packages:
```shell
npm i @huggingface/jinja
npm i @huggingface/hub
```
You can then load a tokenizer from the Hugging Face Hub and render a list of chat messages, as follows:
```javascript
import { Template } from "@huggingface/jinja";
import { downloadFile } from "@huggingface/hub";

const config = await (
  await downloadFile({
    repo: "mistralai/Mistral-7B-Instruct-v0.1",
    path: "tokenizer_config.json",
  })
).json();

const chat = [
  { role: "user", content: "Hello, how are you?" },
  { role: "assistant", content: "I'm doing great. How can I help you today?" },
  { role: "user", content: "I'd like to show off how chat templating works!" },
];

const template = new Template(config.chat_template);
const result = template.render({
  messages: chat,
  bos_token: config.bos_token,
  eos_token: config.eos_token,
});
// "<s>[INST] Hello, how are you? [/INST]I'm doing great. How can I help you today?</s> [INST] I'd like to show off how chat templating works! [/INST]"
```
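For intuition, the rendered output above corresponds roughly to the following hand-rolled logic: wrap user turns in `[INST] ... [/INST]` and append the EOS token (plus a separating space) after assistant turns. This is an illustrative sketch of what this particular template computes, not how the library works; real chat templates are arbitrary Jinja programs shipped in `tokenizer_config.json`:

```javascript
// Hand-rolled sketch of Mistral-7B-Instruct's chat-template output.
// Illustrative only; the actual template is a Jinja string evaluated
// by @huggingface/jinja.
function renderMistralChat(messages, bosToken = "<s>", eosToken = "</s>") {
  let out = bosToken;
  for (const message of messages) {
    if (message.role === "user") {
      out += `[INST] ${message.content} [/INST]`;
    } else {
      // Assistant turns end with the EOS token and a separating space.
      out += `${message.content}${eosToken} `;
    }
  }
  return out;
}
```

Applied to the `chat` array above, this produces the same string as the `template.render(...)` call.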
Transformers.js
First, install @huggingface/transformers:
```shell
npm i @huggingface/transformers
```
You can then render a list of chat messages using a tokenizer's apply_chat_template method.
```javascript
import { AutoTokenizer } from "@huggingface/transformers";

// Load tokenizer from the Hugging Face Hub
const tokenizer = await AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1");

// Define chat messages
const chat = [
  { role: "user", content: "Hello, how are you?" },
  { role: "assistant", content: "I'm doing great. How can I help you today?" },
  { role: "user", content: "I'd like to show off how chat templating works!" },
];

const text = tokenizer.apply_chat_template(chat, { tokenize: false });
// "<s>[INST] Hello, how are you? [/INST]I'm doing great. How can I help you today?</s> [INST] I'd like to show off how chat templating works! [/INST]"
```
Notice how the entire chat is condensed into a single string. If you would instead like to return the tokenized version (i.e., a list of token IDs), you can use the following:
```javascript
const input_ids = tokenizer.apply_chat_template(chat, { tokenize: true, return_tensor: false });
// [1, 733, 16289, 28793, 22557, 28725, 910, 460, 368, 28804, 733, 28748, 16289, 28793, 28737, 28742, 28719, 2548, 1598, 28723, 1602, 541, 315, 1316, 368, 3154, 28804, 2, 28705, 733, 16289, 28793, 315, 28742, 28715, 737, 298, 1347, 805, 910, 10706, 5752, 1077, 3791, 28808, 733, 28748, 16289, 28793]
```
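For intuition about what tokenize: true produces: a tokenizer maps the rendered string onto integer IDs from its vocabulary. The toy word-level tokenizer below uses a made-up vocabulary (Mistral's real tokenizer is byte-pair encoding over a learned subword vocabulary, which is where the IDs above come from):

```javascript
// Toy word-level tokenizer with a hypothetical vocabulary.
// Illustrative only; real tokenizers use subword (BPE) vocabularies.
function toyTokenize(text, vocab, unkId = 0) {
  return text
    .split(/\s+/)
    .filter((word) => word.length > 0)
    .map((word) => (vocab.has(word) ? vocab.get(word) : unkId));
}

// Hypothetical vocabulary for demonstration:
const vocab = new Map([
  ["Hello,", 1],
  ["how", 2],
  ["are", 3],
  ["you?", 4],
]);
// toyTokenize("Hello, how are you?", vocab) gives [1, 2, 3, 4]
```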
For more information about chat templates, check out the transformers documentation.
No vulnerabilities found.