Gathering detailed insights and metrics for @google-ai/generativelanguage
Google Cloud Client Library for Node.js
npm install @google-ai/generativelanguage
Supply Chain: 76.5
Quality: 97.1
Maintenance: 79.9
Vulnerability: 100
License: 99.3
financialservices v0.3.0 (updated Jun 04, 2025)
areainsights v0.4.0 (updated Jun 04, 2025)
parametermanager v0.4.0 (updated Jun 04, 2025)
reviews v0.3.0 (updated Jun 04, 2025)
gdchardwaremanagement v0.7.0 (updated Jun 04, 2025)
accounts v2.2.0 (updated Jun 04, 2025)
TypeScript (88.99%)
JavaScript (10.99%)
Shell (0.01%)
Python (0.01%)
Total Downloads: 0
Last Day: 0
Last Week: 0
Last Month: 0
Last Year: 0
Apache-2.0 License
3,034 Stars
25,184 Commits
625 Forks
189 Watchers
252 Branches
347 Contributors
Updated on Jul 05, 2025
Latest Version: 3.2.0
Package Id: @google-ai/generativelanguage@3.2.0
Unpacked Size: 16.41 MB
Size: 1.01 MB
File Count: 222
NPM Version: 6.14.18
Node Version: 14.21.3
Published on: May 13, 2025
Generative Language API client for Node.js
A comprehensive list of changes in each version may be found in the CHANGELOG.
Read more about the client libraries for Cloud APIs, including the older Google APIs Client Libraries, in Client Libraries Explained.
npm install @google-ai/generativelanguage
/**
 * This snippet has been automatically generated and should be regarded as a
 * code template only. It will require modifications to work.
 * It may require correct/in-range values for request initialization.
 * TODO(developer): Uncomment these variables before running the sample.
 */
/**
 * Required. The model name to use with the format name=models/{model}.
 */
// const model = 'abc123'
/**
 * Required. The free-form input text given to the model as a prompt.
 * Given a prompt, the model will generate a TextCompletion response it
 * predicts as the completion of the input text.
 */
// const prompt = {
//   text: 'abc123'
// }
/**
 * Controls the randomness of the output.
 * Note: The default value varies by model; see the `Model.temperature`
 * attribute of the `Model` returned by the `getModel` function.
 * Values can range from 0.0 to 1.0, inclusive. A value closer to 1.0 will
 * produce responses that are more varied and creative, while a value closer
 * to 0.0 will typically result in more straightforward responses from the
 * model.
 */
// const temperature = 1234
/**
 * Number of generated responses to return.
 * This value must be between 1 and 8, inclusive. If unset, this will
 * default to 1.
 */
// const candidateCount = 1234
/**
 * The maximum number of tokens to include in a candidate.
 * If unset, this will default to 64.
 */
// const maxOutputTokens = 1234
/**
 * The maximum cumulative probability of tokens to consider when sampling.
 * The model uses combined Top-k and nucleus sampling.
 * Tokens are sorted based on their assigned probabilities so that only the
 * most likely tokens are considered. Top-k sampling directly limits the
 * maximum number of tokens to consider, while nucleus sampling limits the
 * number of tokens based on the cumulative probability.
 * Note: The default value varies by model; see the `Model.top_p`
 * attribute of the `Model` returned by the `getModel` function.
 */
// const topP = 1234
/**
 * The maximum number of tokens to consider when sampling.
 * The model uses combined Top-k and nucleus sampling.
 * Top-k sampling considers the set of `top_k` most probable tokens.
 * Defaults to 40.
 * Note: The default value varies by model; see the `Model.top_k`
 * attribute of the `Model` returned by the `getModel` function.
 */
// const topK = 1234
/**
 * The set of character sequences (up to 5) that will stop output generation.
 * If specified, the API will stop at the first appearance of a stop
 * sequence. The stop sequence will not be included as part of the response.
 */
// const stopSequences = 'abc123'

// Imports the Generativelanguage library
const {TextServiceClient} = require('@google-ai/generativelanguage').v1beta2;

// Instantiates a client
const generativelanguageClient = new TextServiceClient();

async function callGenerateText() {
  // Construct request
  const request = {
    model,
    prompt,
  };

  // Run request
  const response = await generativelanguageClient.generateText(request);
  console.log(response);
}

callGenerateText();
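As a sketch of how the optional generation parameters above fit together, here is a hypothetical fully populated request body for `generateText`. The model name and prompt text are illustrative placeholders, not values from this package's documentation; check the API's model list for real model names.

```javascript
// Hypothetical request body mirroring the commented-out parameters in the
// sample above. All values here are example assumptions.
const request = {
  model: 'models/text-bison-001',  // assumed model name; verify against the API
  prompt: { text: 'Write a haiku about the ocean.' },
  temperature: 0.7,          // range 0.0-1.0; higher values give more varied output
  candidateCount: 2,         // between 1 and 8 responses per call
  maxOutputTokens: 128,      // cap on tokens per candidate
  topP: 0.95,                // nucleus sampling: cumulative probability cutoff
  topK: 40,                  // consider only the 40 most probable tokens
  stopSequences: ['\n\n'],   // up to 5 sequences that end generation
};

console.log(JSON.stringify(request.prompt));
```

This object would be passed to `generativelanguageClient.generateText(request)` in place of the minimal `{model, prompt}` request shown in the sample.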
Samples are in the samples/ directory. Each sample's README.md has instructions for running it.
No vulnerabilities found.
Reason: all changesets reviewed
Reason: 30 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 10
Reason: security policy file detected
Reason: no dangerous workflow patterns detected
Reason: license file detected
Reason: no binaries found in the repo
Reason: 1 existing vulnerability detected
Reason: no effort to earn an OpenSSF best practices badge detected
Reason: detected GitHub workflow tokens with excessive permissions
Reason: SAST tool is not run on all commits -- score normalized to 0
Reason: dependency not pinned by hash detected -- score normalized to 0
Reason: project is not fuzzed
Last Scanned on 2025-06-30
The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.
Learn More