Protocol Buffers for JavaScript & TypeScript.
```sh
npm install protobufjs
```
Score | Category |
---|---|
63.5 | Supply Chain |
100 | Quality |
81.1 | Maintenance |
100 | Vulnerability |
100 | License |
Release | Published |
---|---|
protobufjs v7.4.0 | 22 Aug 2024 |
protobufjs-cli v1.1.4 | 22 Aug 2024 |
protobufjs v7.3.3 | 16 Aug 2024 |
protobufjs-cli v1.1.3 | 16 Aug 2024 |
protobufjs v7.3.2 | 12 Jun 2024 |
protobufjs v7.3.1 | 07 Jun 2024 |
GitHub: 9,955 stars, 1,414 forks, 917 commits, 170 watching, 35 branches, 110 contributors. Updated on 26 Nov 2024.
Languages: JavaScript (99.78%), TypeScript (0.22%)
Downloads | Count | Compared to previous period |
---|---|---|
Last day | 5,053,923 | -3% |
Last week | 31,171,577 | +7.2% |
Last month | 120,841,167 | +27.4% |
Last year | 933,538,229 | +44% |
protobuf.js
Protocol Buffers are a language-neutral, platform-neutral, extensible way of serializing structured data for use in communications protocols, data storage, and more, originally designed at Google.

protobuf.js is a pure JavaScript implementation with TypeScript support for Node.js and the browser. It's easy to use, doesn't sacrifice performance, has good conformance, and works out of the box with .proto files!
Installation
How to include protobuf.js in your project.
Usage
A brief introduction to using the toolset.
Examples
A few examples to get you started.
Additional documentation
A list of available documentation resources.
Performance
A few internals and a benchmark on performance.
Compatibility
Notes on compatibility regarding browsers and optional libraries.
Building
How to build the library and its components yourself.
```sh
npm install protobufjs --save
```
```js
// Static code + Reflection + .proto parser
var protobuf = require("protobufjs");

// Static code + Reflection
var protobuf = require("protobufjs/light");

// Static code only
var protobuf = require("protobufjs/minimal");
```
The optional command line utility to generate static code and reflection bundles lives in the protobufjs-cli package and can be installed separately:

```sh
npm install protobufjs-cli --save-dev
```
Pick the variant matching your needs and replace the version tag with the exact release your project depends upon. For example, to use the minified full variant:
```html
<script src="//cdn.jsdelivr.net/npm/protobufjs@7.X.X/dist/protobuf.min.js"></script>
```
Distribution | Location |
---|---|
Full | https://cdn.jsdelivr.net/npm/protobufjs/dist/ |
Light | https://cdn.jsdelivr.net/npm/protobufjs/dist/light/ |
Minimal | https://cdn.jsdelivr.net/npm/protobufjs/dist/minimal/ |
All variants support CommonJS and AMD loaders and export globally as `window.protobuf`.
Because JavaScript is a dynamically typed language, protobuf.js utilizes the concept of a valid message in order to provide the best possible performance (and, as a side product, proper typings):
A valid message is an object (1) not missing any required fields and (2) exclusively composed of JS types understood by the wire format writer.
There are two possible types of valid messages, and the encoder is able to work with both of these for convenience:

- Message instances (explicit instances of message classes with default values on their prototype) always (have to) satisfy the requirements of a valid message by design, and
- plain JavaScript objects that just so happen to be composed in a way satisfying the requirements of a valid message as well.
In a nutshell, the wire format writer understands the following types:
Field type | Expected JS type (create, encode) | Conversion (fromObject) |
---|---|---|
s-/u-/int32, s-/fixed32 | number (32 bit integer) | `value \| 0` if signed, `value >>> 0` if unsigned |
s-/u-/int64, s-/fixed64 | Long-like (optimal), number (53 bit integer) | `Long.fromValue(value)` with long.js, `parseInt(value, 10)` otherwise |
float, double | number | `Number(value)` |
bool | boolean | `Boolean(value)` |
string | string | `String(value)` |
bytes | Uint8Array (optimal), Buffer (optimal under node), Array.<number> (8 bit integers) | `base64.decode(value)` if a string; an object with non-zero `.length` is assumed to be buffer-like |
enum | number (32 bit integer) | Looks up the numeric id if a string |
message | Valid message | `Message.fromObject(value)` |
repeated T | Array<T> | Copy |
map<K, V> | Object<K,V> | Copy |
- `undefined` and `null` are considered as not set if the field is optional.
- Map fields use the string representation of the key, or an 8 characters long binary hash string for `Long`-likes.

With that in mind, and again for performance reasons, each message class provides a distinct set of methods, with each method doing just one thing. This avoids unnecessary assertions and redundant operations where performance is a concern, but it also forces a user to explicitly perform verification of plain JavaScript objects (that might just so happen to be valid messages) where necessary - for example when dealing with user input.
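The number coercions from the conversion table can be illustrated in plain JavaScript; this is a sketch for illustration only, and it also shows why 64 bit fields prefer a `Long`-like over a plain number:

```javascript
// 32 bit coercion, as used for (s/u)int32 conversion:
console.log(-1 | 0);    // -1 (signed 32 bit)
console.log(-1 >>> 0);  // 4294967295 (unsigned 32 bit)

// Why 64 bit fields need Long-likes: plain JS numbers hold
// integers exactly only up to 2^53 - 1.
console.log(Number.MAX_SAFE_INTEGER);                // 9007199254740991
console.log(9007199254740992 === 9007199254740993);  // true - precision lost
```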
Note that Message
below refers to any message class.
`Message.verify(message: Object): null|string`

verifies that a plain JavaScript object satisfies the requirements of a valid message and thus can be encoded without issues. Instead of throwing, it returns the error message as a string, if any.
```js
var payload = "invalid (not an object)";
var err = AwesomeMessage.verify(payload);
if (err)
    throw Error(err);
```
`Message.encode(message: Message|Object[, writer: Writer]): Writer`

encodes a message instance or valid plain JavaScript object. This method does not implicitly verify the message, and it's up to the user to make sure that the payload is a valid message.
```js
var buffer = AwesomeMessage.encode(message).finish();
```
`Message.encodeDelimited(message: Message|Object[, writer: Writer]): Writer`

works like `Message.encode` but additionally prepends the length of the message as a varint.
`Message.decode(reader: Reader|Uint8Array): Message`

decodes a buffer to a message instance. If required fields are missing, it throws a `util.ProtocolError` with an `instance` property set to the so far decoded message. If the wire format is invalid, it throws an `Error`.
```js
try {
    var decodedMessage = AwesomeMessage.decode(buffer);
} catch (e) {
    if (e instanceof protobuf.util.ProtocolError) {
        // e.instance holds the so far decoded message with missing required fields
    } else {
        // wire format is invalid
    }
}
```
`Message.decodeDelimited(reader: Reader|Uint8Array): Message`

works like `Message.decode` but additionally reads the length of the message prepended as a varint.
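To illustrate what the delimited variants do, here is a minimal, self-contained sketch of the varint length prefix (an illustration of the wire format concept only, not the library's internal implementation):

```javascript
// Encode a non-negative integer as a varint (7 bits per byte, MSB = continuation).
function writeVarint(value) {
  var bytes = [];
  while (value > 127) {
    bytes.push((value & 127) | 128);
    value >>>= 7;
  }
  bytes.push(value);
  return bytes;
}

// Decode a varint from a byte array starting at offset; returns [value, bytesRead].
function readVarint(bytes, offset) {
  var value = 0, shift = 0, pos = offset;
  while (bytes[pos] & 128) {
    value |= (bytes[pos++] & 127) << shift;
    shift += 7;
  }
  value |= (bytes[pos++] & 127) << shift;
  return [value, pos - offset];
}

var prefix = writeVarint(300);        // [0xAC, 0x02]
var decoded = readVarint(prefix, 0);  // [300, 2]
```

A delimited message is simply this length prefix followed by the message bytes, which is what allows several messages to be written back to back into one stream.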
`Message.create(properties: Object): Message`

creates a new message instance from a set of properties that satisfy the requirements of a valid message. Where applicable, it is recommended to prefer `Message.create` over `Message.fromObject` because it doesn't perform possibly redundant conversion.
```js
var message = AwesomeMessage.create({ awesomeField: "AwesomeString" });
```
`Message.fromObject(object: Object): Message`

converts any non-valid plain JavaScript object to a message instance using the conversion steps outlined within the table above.
```js
var message = AwesomeMessage.fromObject({ awesomeField: 42 });
// converts awesomeField to a string
```
`Message.toObject(message: Message[, options: ConversionOptions]): Object`

converts a message instance to an arbitrary plain JavaScript object for interoperability with other libraries or storage. The resulting plain JavaScript object might still satisfy the requirements of a valid message depending on the actual conversion options specified, but most of the time it does not.
```js
var object = AwesomeMessage.toObject(message, {
    enums: String,  // enums as string names
    longs: String,  // longs as strings (requires long.js)
    bytes: String,  // bytes as base64 encoded strings
    defaults: true, // includes default values
    arrays: true,   // populates empty arrays (repeated fields) even if defaults=false
    objects: true,  // populates empty objects (map fields) even if defaults=false
    oneofs: true    // includes virtual oneof fields set to the present field's name
});
```
For reference, the relationships between the different methods and the concept of a valid message can be summarized as follows:

> `verify` indicates that calling `create` or `encode` directly on the plain object will [result in a valid message respectively] succeed. `fromObject`, on the other hand, does conversion from a broader range of plain objects to create valid messages. (ref)
It is possible to load existing .proto files using the full library, which parses and compiles the definitions to ready-to-use (reflection-based) message classes:

```protobuf
// awesome.proto
syntax = "proto3";
package awesomepackage;

message AwesomeMessage {
    string awesome_field = 1; // becomes awesomeField
}
```
```js
protobuf.load("awesome.proto", function(err, root) {
    if (err)
        throw err;

    // Obtain a message type
    var AwesomeMessage = root.lookupType("awesomepackage.AwesomeMessage");

    // Exemplary payload
    var payload = { awesomeField: "AwesomeString" };

    // Verify the payload if necessary (i.e. when possibly incomplete or invalid)
    var errMsg = AwesomeMessage.verify(payload);
    if (errMsg)
        throw Error(errMsg);

    // Create a new message
    var message = AwesomeMessage.create(payload); // or use .fromObject if conversion is necessary

    // Encode a message to a Uint8Array (browser) or Buffer (node)
    var buffer = AwesomeMessage.encode(message).finish();
    // ... do something with buffer

    // Decode a Uint8Array (browser) or Buffer (node) to a message
    var message = AwesomeMessage.decode(buffer);
    // ... do something with message

    // If the application uses length-delimited buffers, there is also encodeDelimited and decodeDelimited.

    // Maybe convert the message back to a plain object
    var object = AwesomeMessage.toObject(message, {
        longs: String,
        enums: String,
        bytes: String,
        // see ConversionOptions
    });
});
```
Additionally, promise syntax can be used by omitting the callback, if preferred:

```js
protobuf.load("awesome.proto")
    .then(function(root) {
        ...
    });
```
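Promisifying a node-style callback API is straightforward; the following is a minimal sketch of the idea (the library ships its own implementation in @protobufjs/aspromise, so this is for illustration only, with a hypothetical loader function):

```javascript
// Wrap a node-style function (error-first callback as last argument)
// into one that returns a promise instead.
function asPromise(fn) {
  return function() {
    var args = Array.prototype.slice.call(arguments);
    return new Promise(function(resolve, reject) {
      args.push(function(err, result) {
        if (err) reject(err);
        else resolve(result);
      });
      fn.apply(null, args);
    });
  };
}

// Hypothetical usage with a node-style loader:
var loadAsync = asPromise(function(path, cb) { cb(null, "loaded " + path); });
loadAsync("awesome.proto").then(function(result) {
  console.log(result); // "loaded awesome.proto"
});
```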
The library utilizes JSON descriptors that are equivalent to a .proto definition. For example, the following is identical to the .proto definition seen above:
```json
// awesome.json
{
  "nested": {
    "awesomepackage": {
      "nested": {
        "AwesomeMessage": {
          "fields": {
            "awesomeField": {
              "type": "string",
              "id": 1
            }
          }
        }
      }
    }
  }
}
```
JSON descriptors closely resemble the internal reflection structure:
Type (T) | Extends | Type-specific properties |
---|---|---|
ReflectionObject | | options |
Namespace | ReflectionObject | nested |
Root | Namespace | nested |
Type | Namespace | fields |
Enum | ReflectionObject | values |
Field | ReflectionObject | rule, type, id |
MapField | Field | keyType |
OneOf | ReflectionObject | oneof (array of field names) |
Service | Namespace | methods |
Method | ReflectionObject | type, requestType, responseType, requestStream, responseStream |
- `T.fromJSON(name, json)` creates the respective reflection object from a JSON descriptor
- `T#toJSON()` creates a JSON descriptor from the respective reflection object (its name is used as the key within the parent)

Exclusively using JSON descriptors instead of .proto files enables the use of just the light library (the parser isn't required in this case).
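Because a JSON descriptor is plain data, it can be inspected or assembled without any parser. This sketch navigates the descriptor shown above using nothing but plain JavaScript (no protobuf.js involved):

```javascript
// The JSON descriptor for awesome.proto, as plain data:
var descriptor = {
  nested: {
    awesomepackage: {
      nested: {
        AwesomeMessage: {
          fields: {
            awesomeField: { type: "string", id: 1 }
          }
        }
      }
    }
  }
};

// Field definitions are directly addressable:
var fields = descriptor.nested.awesomepackage.nested.AwesomeMessage.fields;
console.log(fields.awesomeField.type); // "string"
console.log(fields.awesomeField.id);   // 1
```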
A JSON descriptor can either be loaded the usual way:
```js
protobuf.load("awesome.json", function(err, root) {
    if (err) throw err;

    // Continue at "Obtain a message type" above
});
```
Or it can be loaded inline:
```js
var jsonDescriptor = require("./awesome.json"); // exemplary for node

var root = protobuf.Root.fromJSON(jsonDescriptor);

// Continue at "Obtain a message type" above
```
Both the full and the light library include full reflection support. One could, for example, define the .proto definitions seen in the examples above using just reflection:
```js
...
var Root  = protobuf.Root,
    Type  = protobuf.Type,
    Field = protobuf.Field;

var AwesomeMessage = new Type("AwesomeMessage").add(new Field("awesomeField", 1, "string"));

var root = new Root().define("awesomepackage").add(AwesomeMessage);

// Continue at "Create a new message" above
...
```
Detailed information on the reflection structure is available within the API documentation.
Message classes can also be extended with custom functionality, and it is possible to register a custom constructor with a reflected message type:
```js
...

// Define a custom constructor
function AwesomeMessage(properties) {
    // custom initialization code
    ...
}

// Register the custom constructor with its reflected type (*)
root.lookupType("awesomepackage.AwesomeMessage").ctor = AwesomeMessage;

// Define custom functionality
AwesomeMessage.customStaticMethod = function() { ... };
AwesomeMessage.prototype.customInstanceMethod = function() { ... };

// Continue at "Create a new message" above
```
(*) Besides referencing its reflected type through `AwesomeMessage.$type` and `AwesomeMessage#$type`, the respective custom class is automatically populated with:

- `AwesomeMessage.create`
- `AwesomeMessage.encode` and `AwesomeMessage.encodeDelimited`
- `AwesomeMessage.decode` and `AwesomeMessage.decodeDelimited`
- `AwesomeMessage.verify`
- `AwesomeMessage.fromObject`, `AwesomeMessage.toObject` and `AwesomeMessage#toJSON`

Afterwards, decoded messages of this type are `instanceof AwesomeMessage`.
Alternatively, it is also possible to reuse and extend the internal constructor if custom initialization code is not required:
```js
...

// Reuse the internal constructor
var AwesomeMessage = root.lookupType("awesomepackage.AwesomeMessage").ctor;

// Define custom functionality
AwesomeMessage.customStaticMethod = function() { ... };
AwesomeMessage.prototype.customInstanceMethod = function() { ... };

// Continue at "Create a new message" above
```
The library also supports consuming services but it doesn't make any assumptions about the actual transport channel. Instead, a user must provide a suitable RPC implementation, which is an asynchronous function that takes the reflected service method, the binary request and a node-style callback as its parameters:
```js
function rpcImpl(method, requestData, callback) {
    // perform the request using an HTTP request or a WebSocket for example
    var responseData = ...;
    // and call the callback with the binary response afterwards:
    callback(null, responseData);
}
```
Below is a working example of such an rpcImpl using the grpc npm package:
```js
const grpc = require('grpc')

const Client = grpc.makeGenericClientConstructor({})
const client = new Client(
  grpcServerUrl,
  grpc.credentials.createInsecure()
)

const rpcImpl = function(method, requestData, callback) {
  client.makeUnaryRequest(
    method.name,
    arg => arg,
    arg => arg,
    requestData,
    callback
  )
}
```
Example:
```protobuf
// greeter.proto
syntax = "proto3";

service Greeter {
    rpc SayHello (HelloRequest) returns (HelloReply) {}
}

message HelloRequest {
    string name = 1;
}

message HelloReply {
    string message = 1;
}
```
```js
...
var Greeter = root.lookup("Greeter");
var greeter = Greeter.create(/* see above */ rpcImpl, /* request delimited? */ false, /* response delimited? */ false);

greeter.sayHello({ name: 'you' }, function(err, response) {
    console.log('Greeting:', response.message);
});
```
Services also support promises:

```js
greeter.sayHello({ name: 'you' })
    .then(function(response) {
        console.log('Greeting:', response.message);
    });
```
There is also an example for streaming RPC.
Note that the service API is meant for clients. Implementing a server-side endpoint pretty much always requires transport channel (e.g. HTTP, WebSocket) specific code, with the only common denominator being that it decodes and encodes messages.
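That common denominator can be factored out into a transport-agnostic skeleton. The sketch below is an illustration only: `decodeRequest`, `encodeResponse` and `handler` are hypothetical stand-ins for something like `HelloRequest.decode`, `HelloReply.encode(...).finish()` and your application logic, with the actual transport left to the caller:

```javascript
// Build a server-side endpoint from a decoder, an encoder and a handler.
// The only protobuf-specific steps are decoding the request bytes and
// encoding the response message back to bytes.
function makeEndpoint(decodeRequest, encodeResponse, handler) {
  return function endpoint(requestData, callback) {
    var request;
    try {
      request = decodeRequest(requestData); // e.g. HelloRequest.decode(buffer)
    } catch (err) {
      return callback(err); // invalid wire data
    }
    handler(request, function(err, response) {
      if (err) return callback(err);
      callback(null, encodeResponse(response)); // e.g. HelloReply.encode(msg).finish()
    });
  };
}
```

The returned `endpoint(requestData, callback)` function can then be wired to whatever transport the server uses, mirroring the client-side `rpcImpl` signature.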
The library ships with its own type definitions and modern editors like Visual Studio Code will automatically detect and use them for code completion.
The npm package depends on @types/node because of `Buffer` and @types/long because of `Long`. If you are not building for node and/or not using long.js, it should be safe to exclude them manually.
The API shown above works pretty much the same with TypeScript. However, because everything is typed, accessing fields on instances of dynamically generated message classes requires either using bracket-notation (i.e. `message["awesomeField"]`) or explicit casts. Alternatively, it is possible to use a typings file generated for its static counterpart.
```ts
import { load } from "protobufjs"; // respectively "./node_modules/protobufjs"

load("awesome.proto", function(err, root) {
    if (err)
        throw err;

    // example code
    const AwesomeMessage = root.lookupType("awesomepackage.AwesomeMessage");

    let message = AwesomeMessage.create({ awesomeField: "hello" });
    console.log(`message = ${JSON.stringify(message)}`);

    let buffer = AwesomeMessage.encode(message).finish();
    console.log(`buffer = ${Array.prototype.toString.call(buffer)}`);

    let decoded = AwesomeMessage.decode(buffer);
    console.log(`decoded = ${JSON.stringify(decoded)}`);
});
```
If you generated static code to `bundle.js` using the CLI and its type definitions to `bundle.d.ts`, then you can just do:

```ts
import { AwesomeMessage } from "./bundle.js";

// example code
let message = AwesomeMessage.create({ awesomeField: "hello" });
let buffer  = AwesomeMessage.encode(message).finish();
let decoded = AwesomeMessage.decode(buffer);
```
The library also includes an early implementation of decorators.
Note that decorators are an experimental feature in TypeScript and that declaration order is important depending on the JS target. For example, `@Field.d(2, AwesomeArrayMessage)` requires that `AwesomeArrayMessage` has been defined earlier when targeting ES5.

```ts
import { Message, Type, Field, OneOf } from "protobufjs/light"; // respectively "./node_modules/protobufjs/light.js"

export class AwesomeSubMessage extends Message<AwesomeSubMessage> {

  @Field.d(1, "string")
  public awesomeString: string;

}

export enum AwesomeEnum {
  ONE = 1,
  TWO = 2
}

@Type.d("SuperAwesomeMessage")
export class AwesomeMessage extends Message<AwesomeMessage> {

  @Field.d(1, "string", "optional", "awesome default string")
  public awesomeField: string;

  @Field.d(2, AwesomeSubMessage)
  public awesomeSubMessage: AwesomeSubMessage;

  @Field.d(3, AwesomeEnum, "optional", AwesomeEnum.ONE)
  public awesomeEnum: AwesomeEnum;

  @OneOf.d("awesomeSubMessage", "awesomeEnum")
  public which: string;

}

// example code
let message = new AwesomeMessage({ awesomeField: "hello" });
let buffer  = AwesomeMessage.encode(message).finish();
let decoded = AwesomeMessage.decode(buffer);
```
Supported decorators are:
`Type.d(typeName?: string)` (optional)

annotates a class as a protobuf message type. If `typeName` is not specified, the constructor's runtime function name is used for the reflected type.
`Field.d<T>(fieldId: number, fieldType: string | Constructor<T>, fieldRule?: "optional" | "required" | "repeated", defaultValue?: T)`

annotates a property as a protobuf field with the specified id and protobuf type.
`MapField.d<T extends { [key: string]: any }>(fieldId: number, fieldKeyType: string, fieldValueType: string | Constructor<{}>)`

annotates a property as a protobuf map field with the specified id, protobuf key and value type.
`OneOf.d<T extends string>(...fieldNames: string[])`

annotates a property as a protobuf oneof covering the specified fields.
Other notes:

- Decorated types reside in `protobuf.roots["decorated"]` using a flat structure, so no duplicate names.

ProTip! Not as pretty, but you can use decorators in plain JavaScript as well.
The package includes a benchmark that compares protobuf.js performance to native JSON (as far as this is possible) and Google's JS implementation. On an i7-2600K running node 6.9.1 it yields:
```
benchmarking encoding performance ...

protobuf.js (reflect) x 541,707 ops/sec ±1.13% (87 runs sampled)
protobuf.js (static) x 548,134 ops/sec ±1.38% (89 runs sampled)
JSON (string) x 318,076 ops/sec ±0.63% (93 runs sampled)
JSON (buffer) x 179,165 ops/sec ±2.26% (91 runs sampled)
google-protobuf x 74,406 ops/sec ±0.85% (86 runs sampled)

protobuf.js (static) was fastest
protobuf.js (reflect) was 0.9% ops/sec slower (factor 1.0)
JSON (string) was 41.5% ops/sec slower (factor 1.7)
JSON (buffer) was 67.6% ops/sec slower (factor 3.1)
google-protobuf was 86.4% ops/sec slower (factor 7.3)

benchmarking decoding performance ...

protobuf.js (reflect) x 1,383,981 ops/sec ±0.88% (93 runs sampled)
protobuf.js (static) x 1,378,925 ops/sec ±0.81% (93 runs sampled)
JSON (string) x 302,444 ops/sec ±0.81% (93 runs sampled)
JSON (buffer) x 264,882 ops/sec ±0.81% (93 runs sampled)
google-protobuf x 179,180 ops/sec ±0.64% (94 runs sampled)

protobuf.js (reflect) was fastest
protobuf.js (static) was 0.3% ops/sec slower (factor 1.0)
JSON (string) was 78.1% ops/sec slower (factor 4.6)
JSON (buffer) was 80.8% ops/sec slower (factor 5.2)
google-protobuf was 87.0% ops/sec slower (factor 7.7)

benchmarking combined performance ...

protobuf.js (reflect) x 275,900 ops/sec ±0.78% (90 runs sampled)
protobuf.js (static) x 290,096 ops/sec ±0.96% (90 runs sampled)
JSON (string) x 129,381 ops/sec ±0.77% (90 runs sampled)
JSON (buffer) x 91,051 ops/sec ±0.94% (90 runs sampled)
google-protobuf x 42,050 ops/sec ±0.85% (91 runs sampled)

protobuf.js (static) was fastest
protobuf.js (reflect) was 4.7% ops/sec slower (factor 1.0)
JSON (string) was 55.3% ops/sec slower (factor 2.2)
JSON (buffer) was 68.6% ops/sec slower (factor 3.2)
google-protobuf was 85.5% ops/sec slower (factor 6.9)
```
These results are achieved by

- generating type-specific encoders, decoders, verifiers and converters at runtime,
- configuring the reader/writer interface according to the environment,
- using node-specific functionality where beneficial and, of course,
- avoiding unnecessary operations through splitting up the use case into distinct functionality.
You can also run the benchmark ...
$> npm run bench
and the profiler yourself (the latter requires a recent version of node):
$> npm run prof <encode|decode|encode-browser|decode-browser> [iterations=10000000]
Note that as of this writing, the benchmark suite performs significantly slower on node 7.2.0 compared to 6.9.1 because moths.
- Because the internals of this package do not rely on `google/protobuf/descriptor.proto`, options are parsed and presented literally.
- If you need a proper way to work with 64 bit values (uint64, int64, etc.), you can install long.js alongside this library; all 64 bit numbers will then be returned as a `Long` instance instead of a possibly unsafe JavaScript number (see).

To build the library or its components yourself, clone it from GitHub and install the development dependencies:
$> git clone https://github.com/protobufjs/protobuf.js.git
$> cd protobuf.js
$> npm install
Building the respective development and production versions with their respective source maps to `dist/`:
$> npm run build
Building the documentation to `docs/`:
$> npm run docs
Building the TypeScript definition to `index.d.ts`:
$> npm run build:types
By default, protobuf.js integrates into any browserify build-process without requiring any optional modules. Hence:
If int64 support is required, explicitly require the `long` module somewhere in your project, as it will be excluded otherwise. This assumes that a global `require` function is present that protobuf.js can call to obtain the long module. If there is no global `require` function present after bundling, it's also possible to assign the long module programmatically:
```js
var Long = ...;

protobuf.util.Long = Long;
protobuf.configure();
```
If you have any special requirements, there is the bundler for reference.
License: BSD 3-Clause License
Severity | Summary | Affected Versions | Patched Versions |
---|---|---|---|
9.8/10 | protobufjs Prototype Pollution vulnerability | >= 7.0.0, < 7.2.5 | 7.2.5 |
9.8/10 | protobufjs Prototype Pollution vulnerability | >= 6.10.0, < 6.11.4 | 6.11.4 |
7.5/10 | Prototype Pollution in protobufjs | >= 6.10.0, < 6.10.3 | 6.10.3 |
7.5/10 | Prototype Pollution in protobufjs | >= 6.11.0, < 6.11.3 | 6.11.3 |
0/10 | Denial of Service in protobufjs | < 5.0.3 | 5.0.3 |
0/10 | Denial of Service in protobufjs | >= 6.0.0, < 6.8.6 | 6.8.6 |
5.5/10 | Denial of Service in protobufjs | < 5.0.3 | 5.0.3 |
5.5/10 | Denial of Service in protobufjs | >= 6.0.0, < 6.8.6 | 6.8.6 |
OpenSSF Scorecard checks:

- all changesets reviewed
- no dangerous workflow patterns detected
- 30 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 10
- no binaries found in the repo
- project is fuzzed
- license file detected
- detected GitHub workflow tokens with excessive permissions
- no effort to earn an OpenSSF best practices badge detected
- dependency not pinned by hash detected -- score normalized to 0
- security policy file not detected
- SAST tool is not run on all commits -- score normalized to 0
- 16 existing vulnerabilities detected

Last Scanned on 2024-11-18
The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.