Gathering detailed insights and metrics for onnxjs
Related packages:

- onnxjs-node: ONNXRuntime backend for ONNX.js
- onnxjs-voka: A JavaScript library for running ONNX models on browsers and on Node.js
- onnxjs-tensor-checks-disabled: A JavaScript library for running ONNX models on browsers and on Node.js
- @s9y/onnxjs-dev: A JavaScript library for running ONNX models on browsers and on Node.js
```sh
npm install onnxjs
```
Package scores:

- Supply Chain: 58.5
- Quality: 99.3
- Maintenance: 74
- Vulnerability: 100
- License: 98.2
Languages:

- TypeScript (93%)
- C++ (4.22%)
- JavaScript (2.34%)
- C (0.44%)
- PureBasic (0.01%)
Total Downloads: 234,506
1,774 Stars · 148 Commits · 127 Forks · 39 Watching · 31 Branches · 10,000 Contributors
Package metadata:

- Latest Version: 0.1.8
- Package Id: onnxjs@0.1.8
- Unpacked Size: 3.48 MB
- Size: 745.94 kB
- File Count: 484
- NPM Version: 6.14.4
- Node Version: 12.18.0
Cumulative downloads:

Period | Downloads | Change vs previous period |
---|---|---|
Last day | 41 | -77.6% |
Last week | 876 | -17.7% |
Last month | 4,589 | -10.4% |
Last year | 67,208 | +43.2% |
ONNX.js is a JavaScript library for running ONNX models on browsers and on Node.js.
ONNX.js has adopted WebAssembly and WebGL technologies for providing an optimized ONNX model inference runtime for both CPUs and GPUs.
The Open Neural Network Exchange (ONNX) is an open standard for representing machine learning models. The biggest advantage of ONNX is that it allows interoperability across different open source AI frameworks, which itself offers more flexibility for AI frameworks adoption. See Getting ONNX Models.
With ONNX.js, web developers can score pre-trained ONNX models directly on browsers with various benefits of reducing server-client communication and protecting user privacy, as well as offering install-free and cross-platform in-browser ML experience.
ONNX.js can run on both CPU and GPU. For running on CPU, WebAssembly is adopted to execute the model at near-native speed. Furthermore, ONNX.js utilizes Web Workers to provide a "multi-threaded" environment to parallelize data processing. Empirical evaluation shows very promising performance gains on CPU by taking full advantage of WebAssembly and Web Workers. For running on GPU, ONNX.js adopts WebGL, a popular standard for accessing GPU capabilities. On top of that, ONNX.js applies several novel optimization techniques to reduce data transfer between CPU and GPU, as well as techniques to reduce GPU processing cycles, pushing performance to the maximum.
See Compatibility and Operators Supported for a list of platforms and operators ONNX.js currently supports.
Benchmarks have been run against the most prominent open source solutions in the same market. Below are the results collected for Chrome and Edge browsers on one sample machine (computations run on both CPU and GPU):
NOTE:
- Keras.js doesn't support WebGL usage on Edge
- Keras.js and TensorFlow.js don't support WebAssembly usage on any browser
The specs of the machine that was used to perform the benchmarking are listed below:
- OS: Microsoft Windows 10 Enterprise Insider Preview
- Model: HP Z240 Tower Workstation
- Processor: Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz, 3401 Mhz, 4 Core(s), 8 Logical Processor(s)
- Installed Physical Memory (RAM): 32.0 GB
- GPU make / Chip type: AMD FirePro W2100 / AMD FirePro SDI (0x6608)
- GPU Memory (approx.): 18.0 GB
The ONNX.js demo website shows the capabilities of ONNX.js. Check out the code.
There are multiple ways to use ONNX.js in a project:
Using `<script>` tag

This is the most straightforward way to use ONNX.js. The following HTML example shows how to use it:
```html
<html>
  <head> </head>

  <body>
    <!-- Load ONNX.js -->
    <script src="https://cdn.jsdelivr.net/npm/onnxjs/dist/onnx.min.js"></script>
    <!-- Code that consumes ONNX.js -->
    <script>
      // create a session
      const myOnnxSession = new onnx.InferenceSession();
      // load the ONNX model file
      myOnnxSession.loadModel("./my-model.onnx").then(() => {
        // generate model input
        const inferenceInputs = getInputs();
        // execute the model
        myOnnxSession.run(inferenceInputs).then((output) => {
          // consume the output
          const outputTensor = output.values().next().value;
          console.log(`model output tensor: ${outputTensor.data}.`);
        });
      });
    </script>
  </body>
</html>
```
Refer to browser/Add for an example.
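If you want to pick the execution backend explicitly, the `InferenceSession` constructor accepts a `backendHint` session option (see the API documentation). A minimal sketch, assuming ONNX.js is loaded globally as `onnx` as in the HTML above; `./my-model.onnx` is a placeholder path:

```javascript
// Sketch: request a specific backend via the backendHint session option,
// falling back to the next one if session creation or model loading fails.
// Assumes the global `onnx` object from the <script> tag above;
// "./my-model.onnx" is a placeholder path.
async function createSession(modelUrl) {
  for (const backendHint of ["webgl", "wasm", "cpu"]) {
    try {
      const session = new onnx.InferenceSession({ backendHint });
      await session.loadModel(modelUrl);
      console.log(`using backend: ${backendHint}`);
      return session;
    } catch (e) {
      console.warn(`backend "${backendHint}" failed: ${e}`);
    }
  }
  throw new Error("no usable ONNX.js backend");
}

createSession("./my-model.onnx");
```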
Modern browser-based applications are usually built with frameworks like Angular, React, Vue.js and so on. This solution usually builds the source code into one or more bundle files. The following TypeScript example shows how to use ONNX.js in an async context:
Import `Tensor` and `InferenceSession`:

```typescript
import { Tensor, InferenceSession } from "onnxjs";
```
Create an instance of `InferenceSession`:

```typescript
const session = new InferenceSession();
```
```typescript
// use the following in an async method
const url = "./data/models/resnet/model.onnx";
await session.loadModel(url);
```
```typescript
// creating an array of input Tensors is the easiest way. For other options see the API documentation
const inputs = [
  new Tensor(new Float32Array([1.0, 2.0, 3.0, 4.0]), "float32", [2, 2]),
];
```
```typescript
// run this in an async method:
const outputMap = await session.run(inputs);
const outputTensor = outputMap.values().next().value;
```
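Putting the steps above together, here is a minimal self-contained sketch; the model path and the [2, 2] float32 input are the placeholders used above, so substitute inputs shaped for your own model:

```typescript
import { Tensor, InferenceSession } from "onnxjs";

// Minimal end-to-end sketch combining the steps above.
// The model path and the [2, 2] float32 input are placeholders.
async function runModel(): Promise<Tensor> {
  const session = new InferenceSession();
  await session.loadModel("./data/models/resnet/model.onnx");
  const inputs = [
    new Tensor(new Float32Array([1.0, 2.0, 3.0, 4.0]), "float32", [2, 2]),
  ];
  const outputMap = await session.run(inputs);
  // take the first (often the only) output tensor
  return outputMap.values().next().value;
}

runModel().then((output) => console.log(output.data));
```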
More verbose examples on how to use ONNX.js are located under the `examples` folder. For further info see Examples.
ONNX.js can run in Node.js as well. This is usually for testing purposes. Use the `require()` function to load ONNX.js:
1require("onnxjs");
You can also use NPM package `onnxjs-node`, which offers a Node.js binding of ONNXRuntime:
1require("onnxjs-node");
See usage of onnxjs-node.
Refer to node/Add for a detailed example.
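As a sketch, the browser code above carries over to Node.js nearly unchanged. This assumes `require("onnxjs-node")` returns the ONNX.js API object (backed by ONNXRuntime, as its usage docs suggest) and uses a placeholder model path:

```javascript
// Sketch: inference in Node.js. Assumes require("onnxjs-node") returns
// the ONNX.js API backed by ONNXRuntime; "./my-model.onnx" is a placeholder.
const onnx = require("onnxjs-node");

async function main() {
  const session = new onnx.InferenceSession();
  await session.loadModel("./my-model.onnx");
  const input = new onnx.Tensor(
    new Float32Array([1.0, 2.0, 3.0, 4.0]),
    "float32",
    [2, 2]
  );
  const outputMap = await session.run([input]);
  console.log(outputMap.values().next().value.data);
}

main();
```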
For information on ONNX.js development, please check Development.
For API reference, please check API.
You can get ONNX models easily in multiple ways. Learn more about ONNX.
Desktop Platforms

OS/Browser | Chrome | Edge | FireFox | Safari | Opera | Electron | Node.js |
---|---|---|---|---|---|---|---|
Windows 10 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | - | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
macOS | :heavy_check_mark: | - | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
Ubuntu LTS 18.04 | :heavy_check_mark: | - | :heavy_check_mark: | - | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
Mobile Platforms

OS/Browser | Chrome | Edge | FireFox | Safari | Opera |
---|---|---|---|---|---|
iOS | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
Android | :heavy_check_mark: | :heavy_check_mark: | Coming soon | - | :heavy_check_mark: |
ONNX.js currently supports most operators in ai.onnx operator set v7 (opset v7). See operators.md for a complete, detailed list of which ONNX operators are supported by the 3 available builtin backends (cpu, wasm, and webgl).
Support for ai.onnx.ml operators is coming soon. operators-ml.md has the most recent status of ai.onnx.ml operators.
We’d love your contribution to ONNX.js. Please refer to CONTRIBUTING.md.
Thanks to BrowserStack for providing cross browser testing support.
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
No vulnerabilities found.
OpenSSF Scorecard findings:

- no binaries found in the repo
- security policy file detected
- license file detected
- Found 22/26 approved changesets -- score normalized to 8
- project is archived
- no effort to earn an OpenSSF best practices badge detected
- project is not fuzzed
- SAST tool is not run on all commits -- score normalized to 0
- 68 existing vulnerabilities detected
Last Scanned on 2024-12-23
The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.