Gathering detailed insights and metrics for @mrlol/inference
npm install @mrlol/inference
| Metric | Score |
| --- | --- |
| Supply Chain | 58.8 |
| Quality | 98.1 |
| Maintenance | 75 |
| Vulnerability | 100 |
| License | 100 |
Languages: TypeScript (99.47%), JavaScript (0.53%)
| Downloads | Count |
| --- | --- |
| Last Day | 2 |
| Last Week | 5 |
| Last Month | 11 |
| Last Year | 311 |
| Total | 451 |
198 Commits · 1 Branch · 1 Contributor
| Field | Value |
| --- | --- |
| Latest Version | 2.0.1 |
| Package Id | @mrlol/inference@2.0.1 |
| Unpacked Size | 119.58 kB |
| Size | 24.53 kB |
| File Count | 42 |
| NPM Version | 9.5.1 |
| Node Version | 19.8.1 |
| Published On | 22 Apr 2023 |
Download trends:

| Period | Downloads | Change vs. previous period |
| --- | --- | --- |
| Last day | 2 | +100% |
| Last week | 5 | +150% |
| Last month | 11 | -93.2% |
| Last year | 311 | +122.1% |
A TypeScript-powered wrapper for the Hugging Face Inference API. Learn more about the Inference API at Hugging Face. It also works with Inference Endpoints.
Check out the full documentation.
You can also try out a live interactive notebook or see some demos on hf.co/huggingfacejs.
```
npm install @huggingface/inference

yarn add @huggingface/inference

pnpm add @huggingface/inference
```
❗Important note: Using an access token is optional to get started; however, you will eventually be rate-limited. Join Hugging Face and then visit access tokens to generate your access token for free.
Your access token should be kept private. If you need to protect it in front-end applications, we suggest setting up a proxy server that stores the access token.
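One way to set up such a proxy is a small pass-through server that injects the token server-side, so the browser never sees it. This is a minimal sketch using Node's built-in `http` module; the port, the `buildForwardHeaders` and `createProxy` helpers, and the path handling are illustrative assumptions, not part of `@huggingface/inference`:

```typescript
import http from "node:http";

const HF_API = "https://api-inference.huggingface.co";

// Build the headers forwarded upstream. The token comes from the server's
// environment, so it is never shipped to the front end.
function buildForwardHeaders(token: string): Record<string, string> {
  return {
    Authorization: `Bearer ${token}`,
    "Content-Type": "application/json",
  };
}

// Create (but do not yet start) a proxy that forwards request bodies to the
// Inference API under the same path, e.g. POST /models/gpt2.
function createProxy(token: string): http.Server {
  return http.createServer(async (req, res) => {
    const chunks: Buffer[] = [];
    for await (const chunk of req) chunks.push(chunk as Buffer);
    const upstream = await fetch(`${HF_API}${req.url}`, {
      method: req.method,
      headers: buildForwardHeaders(token),
      body: chunks.length > 0 ? Buffer.concat(chunks) : undefined,
    });
    res.writeHead(upstream.status, { "Content-Type": "application/json" });
    res.end(await upstream.text());
  });
}
```

You would then start it with something like `createProxy(process.env.HF_ACCESS_TOKEN ?? "").listen(3001)` and point your front-end calls at the proxy's URL instead of the Inference API directly.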
```ts
import { HfInference } from '@huggingface/inference'
import { readFileSync } from 'node:fs'

const hf = new HfInference('your access token')

// Natural Language

await hf.fillMask({
  model: 'bert-base-uncased',
  inputs: '[MASK] world!'
})

await hf.summarization({
  model: 'facebook/bart-large-cnn',
  inputs:
    'The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930.',
  parameters: {
    max_length: 100
  }
})

await hf.questionAnswering({
  model: 'deepset/roberta-base-squad2',
  inputs: {
    question: 'What is the capital of France?',
    context: 'The capital of France is Paris.'
  }
})

await hf.tableQuestionAnswering({
  model: 'google/tapas-base-finetuned-wtq',
  inputs: {
    query: 'How many stars does the transformers repository have?',
    table: {
      Repository: ['Transformers', 'Datasets', 'Tokenizers'],
      Stars: ['36542', '4512', '3934'],
      Contributors: ['651', '77', '34'],
      'Programming language': ['Python', 'Python', 'Rust, Python and NodeJS']
    }
  }
})

await hf.textClassification({
  model: 'distilbert-base-uncased-finetuned-sst-2-english',
  inputs: 'I like you. I love you.'
})

await hf.textGeneration({
  model: 'gpt2',
  inputs: 'The answer to the universe is'
})

for await (const output of hf.textGenerationStream({
  model: "google/flan-t5-xxl",
  inputs: 'repeat "one two three four"',
  parameters: { max_new_tokens: 250 }
})) {
  console.log(output.token.text, output.generated_text);
}

await hf.tokenClassification({
  model: 'dbmdz/bert-large-cased-finetuned-conll03-english',
  inputs: 'My name is Sarah Jessica Parker but you can call me Jessica'
})

await hf.translation({
  model: 't5-base',
  inputs: 'My name is Wolfgang and I live in Berlin'
})

await hf.zeroShotClassification({
  model: 'facebook/bart-large-mnli',
  inputs: [
    'Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!'
  ],
  parameters: { candidate_labels: ['refund', 'legal', 'faq'] }
})

await hf.conversational({
  model: 'microsoft/DialoGPT-large',
  inputs: {
    past_user_inputs: ['Which movie is the best ?'],
    generated_responses: ['It is Die Hard for sure.'],
    text: 'Can you explain why ?'
  }
})

await hf.sentenceSimilarity({
  model: 'sentence-transformers/paraphrase-xlm-r-multilingual-v1',
  inputs: {
    source_sentence: 'That is a happy person',
    sentences: [
      'That is a happy dog',
      'That is a very happy person',
      'Today is a sunny day'
    ]
  }
})

await hf.featureExtraction({
  model: "sentence-transformers/distilbert-base-nli-mean-tokens",
  inputs: "That is a happy person",
});

// Audio

await hf.automaticSpeechRecognition({
  model: 'facebook/wav2vec2-large-960h-lv60-self',
  data: readFileSync('test/sample1.flac')
})

await hf.audioClassification({
  model: 'superb/hubert-large-superb-er',
  data: readFileSync('test/sample1.flac')
})

// Computer Vision

await hf.imageClassification({
  data: readFileSync('test/cheetah.png'),
  model: 'google/vit-base-patch16-224'
})

await hf.objectDetection({
  data: readFileSync('test/cats.png'),
  model: 'facebook/detr-resnet-50'
})

await hf.imageSegmentation({
  data: readFileSync('test/cats.png'),
  model: 'facebook/detr-resnet-50-panoptic'
})

await hf.textToImage({
  inputs: 'award winning high resolution photo of a giant tortoise/((ladybird)) hybrid, [trending on artstation]',
  model: 'stabilityai/stable-diffusion-2',
  parameters: {
    negative_prompt: 'blurry',
  }
})

await hf.imageToText({
  data: readFileSync('test/cats.png'),
  model: 'nlpconnect/vit-gpt2-image-captioning'
})

// Multimodal

await hf.visualQuestionAnswering({
  model: 'dandelin/vilt-b32-finetuned-vqa',
  inputs: {
    question: 'How many cats are lying down?',
    image: await (await fetch('https://placekitten.com/300/300')).blob()
  }
})

await hf.documentQuestionAnswering({
  model: 'impira/layoutlm-document-qa',
  inputs: {
    question: 'Invoice number?',
    image: await (await fetch('https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png')).blob(),
  }
})

// Custom call, for models with custom parameters / outputs
await hf.request({
  model: 'my-custom-model',
  inputs: 'hello world',
  parameters: {
    custom_param: 'some magic',
  }
})

// Custom streaming call, for models with custom parameters / outputs
for await (const output of hf.streamingRequest({
  model: 'my-custom-model',
  inputs: 'hello world',
  parameters: {
    custom_param: 'some magic',
  }
})) {
  ...
}

// Using your own inference endpoint: https://hf.co/docs/inference-endpoints/
const gpt2 = hf.endpoint('https://xyz.eu-west-1.aws.endpoints.huggingface.cloud/gpt2');
const { generated_text } = await gpt2.textGeneration({ inputs: 'The answer to the universe is' });
```
You can import the functions you need directly from the module, rather than using the `HfInference` class:
```ts
import { textGeneration } from "@huggingface/inference";

await textGeneration({
  accessToken: "hf_...",
  model: "model_or_endpoint",
  inputs: ...,
  parameters: ...
})
```
This will enable tree-shaking by your bundler.
Run the tests with your access token set:

```
HF_ACCESS_TOKEN="your access token" npm run test
```
We have an informative documentation project called Tasks that lists available models for each task and explains how each task works in detail.
It also contains demos, example outputs, and other resources should you want to dig deeper into the ML side of things.
No security vulnerabilities found.