@google-ai/generativelanguage: package insights and metrics
npm install @google-ai/generativelanguage
Supply Chain: 65
Quality: 96.3
Maintenance: 81.2
Vulnerability: 100
License: 99.3
admanager v0.1.0 (published 18 Dec 2024)
products v0.1.1 (published 18 Dec 2024)
managedkafka v0.2.0 (published 18 Dec 2024)
chat v0.11.0 (published 18 Dec 2024)
parallelstore v0.7.0 (published 18 Dec 2024)
servicehealth v0.5.0 (published 18 Dec 2024)
TypeScript (89.19%)
JavaScript (10.77%)
Shell (0.02%)
Python (0.02%)
Total Downloads: 1,481,777
Last Day: 3,408
Last Week: 23,710
Last Month: 123,910
Last Year: 1,291,316
2,942 Stars
24,687 Commits
598 Forks
187 Watching
225 Branches
332 Contributors
Latest Version: 2.8.0
Package ID: @google-ai/generativelanguage@2.8.0
Unpacked Size: 9.80 MB
Size: 631.26 kB
File Count: 133
NPM Version: 6.14.18
Node Version: 14.21.3
Published On: 21 Nov 2024
Downloads compared to the previous period:

Period | Downloads | Change vs. previous period |
---|---|---|
Last day | 3,408 | -10.7% |
Last week | 23,710 | -19.2% |
Last month | 123,910 | +0.9% |
Last year | 1,291,316 | +578% |
Generative Language API client for Node.js
A comprehensive list of changes in each version may be found in the CHANGELOG.
Read more about the client libraries for Cloud APIs, including the older Google APIs Client Libraries, in Client Libraries Explained.
```sh
npm install @google-ai/generativelanguage
```
```js
/**
 * This snippet has been automatically generated and should be regarded as a
 * code template only. It will require modifications to work.
 * It may require correct/in-range values for request initialization.
 * TODO(developer): Uncomment these variables before running the sample.
 */
/**
 * Required. The model name to use with the format name=models/{model}.
 */
// const model = 'abc123'
/**
 * Required. The free-form input text given to the model as a prompt.
 * Given a prompt, the model will generate a TextCompletion response it
 * predicts as the completion of the input text.
 */
// const prompt = {
//   text: 'abc123'
// }
/**
 * Controls the randomness of the output.
 * Note: The default value varies by model; see the `Model.temperature`
 * attribute of the `Model` returned from the `getModel` function.
 * Values can range from 0.0 to 1.0, inclusive. A value closer to 1.0 will
 * produce responses that are more varied and creative, while a value closer
 * to 0.0 will typically result in more straightforward responses from the
 * model.
 */
// const temperature = 1234
/**
 * Number of generated responses to return.
 * This value must be between 1 and 8, inclusive. If unset, this will
 * default to 1.
 */
// const candidateCount = 1234
/**
 * The maximum number of tokens to include in a candidate.
 * If unset, this will default to 64.
 */
// const maxOutputTokens = 1234
/**
 * The maximum cumulative probability of tokens to consider when sampling.
 * The model uses combined Top-k and nucleus sampling.
 * Tokens are sorted based on their assigned probabilities so that only the
 * most likely tokens are considered. Top-k sampling directly limits the
 * maximum number of tokens to consider, while nucleus sampling limits the
 * number of tokens based on the cumulative probability.
 * Note: The default value varies by model; see the `Model.top_p`
 * attribute of the `Model` returned from the `getModel` function.
 */
// const topP = 1234
/**
 * The maximum number of tokens to consider when sampling.
 * The model uses combined Top-k and nucleus sampling.
 * Top-k sampling considers the set of `top_k` most probable tokens.
 * Defaults to 40.
 * Note: The default value varies by model; see the `Model.top_k`
 * attribute of the `Model` returned from the `getModel` function.
 */
// const topK = 1234
/**
 * The set of character sequences (up to 5) that will stop output generation.
 * If specified, the API will stop at the first appearance of a stop
 * sequence. The stop sequence will not be included as part of the response.
 */
// const stopSequences = 'abc123'

// Imports the Generativelanguage library
const {TextServiceClient} = require('@google-ai/generativelanguage').v1beta2;

// Instantiates a client
const generativelanguageClient = new TextServiceClient();

async function callGenerateText() {
  // Construct request
  const request = {
    model,
    prompt,
  };

  // Run request
  const response = await generativelanguageClient.generateText(request);
  console.log(response);
}

callGenerateText();
```
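The `TextServiceClient` above is the legacy v1beta2 text surface. Newer releases of the package also expose a `GenerativeServiceClient` (see the `generate_content` samples below), whose request nests the prompt in `contents` and the sampling knobs in `generationConfig`. The sketch below only builds and inspects such a request object, so it needs no credentials or network; the model name and field values are illustrative assumptions, so verify field names against the current API reference before use:

```javascript
// Sketch: shape of a GenerativeService.generateContent request.
// The client itself would come from:
//   const {GenerativeServiceClient} = require('@google-ai/generativelanguage').v1beta;
// but here we only construct the request object.
const request = {
  model: 'models/gemini-pro', // assumed model name for illustration
  contents: [
    // Each content entry has a role and one or more parts.
    {role: 'user', parts: [{text: 'Write a two-line poem about npm.'}]},
  ],
  generationConfig: {
    temperature: 0.7,     // same meaning as `temperature` in the text sample
    candidateCount: 1,    // number of candidates to return
    maxOutputTokens: 64,  // cap on tokens per candidate
    stopSequences: ['\n\n'],
  },
};

console.log(request.contents[0].parts[0].text);
console.log(request.generationConfig.maxOutputTokens);
```

The same sampling parameters documented in the legacy snippet (`temperature`, `topP`, `topK`, `stopSequences`) carry over to `generationConfig` on this surface.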
Samples are in the samples/ directory. Each sample's README.md has instructions for running its sample.
Sample | Source Code | Try it |
---|---|---|
Generative_service.batch_embed_contents | source code | |
Generative_service.count_tokens | source code | |
Generative_service.embed_content | source code | |
Generative_service.generate_content | source code | |
Generative_service.stream_generate_content | source code | |
Model_service.get_model | source code | |
Model_service.list_models | source code | |
Cache_service.create_cached_content | source code | |
Cache_service.delete_cached_content | source code | |
Cache_service.get_cached_content | source code | |
Cache_service.list_cached_contents | source code | |
Cache_service.update_cached_content | source code | |
Discuss_service.count_message_tokens | source code | |
Discuss_service.generate_message | source code | |
File_service.create_file | source code | |
File_service.delete_file | source code | |
File_service.get_file | source code | |
File_service.list_files | source code | |
Generative_service.batch_embed_contents | source code | |
Generative_service.count_tokens | source code | |
Generative_service.embed_content | source code | |
Generative_service.generate_answer | source code | |
Generative_service.generate_content | source code | |
Generative_service.stream_generate_content | source code | |
Model_service.create_tuned_model | source code | |
Model_service.delete_tuned_model | source code | |
Model_service.get_model | source code | |
Model_service.get_tuned_model | source code | |
Model_service.list_models | source code | |
Model_service.list_tuned_models | source code | |
Model_service.update_tuned_model | source code | |
Permission_service.create_permission | source code | |
Permission_service.delete_permission | source code | |
Permission_service.get_permission | source code | |
Permission_service.list_permissions | source code | |
Permission_service.transfer_ownership | source code | |
Permission_service.update_permission | source code | |
Prediction_service.predict | source code | |
Retriever_service.batch_create_chunks | source code | |
Retriever_service.batch_delete_chunks | source code | |
Retriever_service.batch_update_chunks | source code | |
Retriever_service.create_chunk | source code | |
Retriever_service.create_corpus | source code | |
Retriever_service.create_document | source code | |
Retriever_service.delete_chunk | source code | |
Retriever_service.delete_corpus | source code | |
Retriever_service.delete_document | source code | |
Retriever_service.get_chunk | source code | |
Retriever_service.get_corpus | source code | |
Retriever_service.get_document | source code | |
Retriever_service.list_chunks | source code | |
Retriever_service.list_corpora | source code | |
Retriever_service.list_documents | source code | |
Retriever_service.query_corpus | source code | |
Retriever_service.query_document | source code | |
Retriever_service.update_chunk | source code | |
Retriever_service.update_corpus | source code | |
Retriever_service.update_document | source code | |
Text_service.batch_embed_text | source code | |
Text_service.count_text_tokens | source code | |
Text_service.embed_text | source code | |
Text_service.generate_text | source code | |
Discuss_service.count_message_tokens | source code | |
Discuss_service.generate_message | source code | |
Model_service.get_model | source code | |
Model_service.list_models | source code | |
Text_service.embed_text | source code | |
Text_service.generate_text | source code | |
Discuss_service.count_message_tokens | source code | |
Discuss_service.generate_message | source code | |
Model_service.create_tuned_model | source code | |
Model_service.delete_tuned_model | source code | |
Model_service.get_model | source code | |
Model_service.get_tuned_model | source code | |
Model_service.list_models | source code | |
Model_service.list_tuned_models | source code | |
Model_service.update_tuned_model | source code | |
Permission_service.create_permission | source code | |
Permission_service.delete_permission | source code | |
Permission_service.get_permission | source code | |
Permission_service.list_permissions | source code | |
Permission_service.transfer_ownership | source code | |
Permission_service.update_permission | source code | |
Text_service.batch_embed_text | source code | |
Text_service.count_text_tokens | source code | |
Text_service.embed_text | source code | |
Text_service.generate_text | source code | |
Quickstart | source code |
The Generative Language API Node.js Client API Reference documentation also contains samples.
Our client libraries follow the Node.js release schedule. Libraries are compatible with all current active and maintenance versions of Node.js. If you are using an end-of-life version of Node.js, we recommend that you update as soon as possible to an actively supported LTS version.
Google's client libraries support legacy versions of Node.js runtimes on a best-efforts basis with the following warnings:
Client libraries targeting some end-of-life versions of Node.js are available, and can be installed through npm dist-tags. The dist-tags follow the naming convention legacy-(version). For example, npm install @google-ai/generativelanguage@legacy-8 installs client libraries for versions compatible with Node.js 8.
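A consuming project can make its own Node.js support window explicit via the standard engines field in package.json. The sketch below is the consumer's configuration, not something this library enforces, and the ">=14" bound is an assumption based on the Node version this release was published with:

```json
{
  "dependencies": {
    "@google-ai/generativelanguage": "^2.8.0"
  },
  "engines": {
    "node": ">=14"
  }
}
```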
This library follows Semantic Versioning.
This library is considered to be in preview. This means it is still a work-in-progress and under active development. Any release is subject to backwards-incompatible changes at any time.
More Information: Google Cloud Platform Launch Stages
Contributions welcome! See the Contributing Guide.
Please note that this README.md, the samples/README.md, and a variety of configuration files in this repository (including .nycrc and tsconfig.json) are generated from a central template. To edit one of these files, make an edit to its templates in the corresponding directory.
Apache Version 2.0
See LICENSE
No vulnerabilities found.
OpenSSF Scorecard findings:
- All changesets reviewed.
- 30 commit(s) and 0 issue activity found in the last 90 days (score normalized to 10).
- No dangerous workflow patterns detected.
- Security policy file detected.
- License file detected.
- 0 existing vulnerabilities detected.
- No binaries found in the repo.
- GitHub workflow tokens with excessive permissions detected.
- No effort to earn an OpenSSF best practices badge detected.
- SAST tool is not run on all commits (score normalized to 0).
- Dependency not pinned by hash detected (score normalized to 0).
- Project is not fuzzed.
Last Scanned on 2024-12-30
The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.
Learn More