@paddlejs/paddlejs-core
```sh
npm install @paddlejs/paddlejs-core
```
As the core part of the Paddle.js ecosystem, @paddlejs/paddlejs-core is responsible for running the inference process of the entire engine, and it provides interfaces for backend registration and environment variable registration.

When creating the engine you must configure modelPath and feedShape; the full set of configuration items is listed below.
```ts
// model structure
enum GraphType {
    SingleOutput = 'single',
    MultipleOutput = 'multiple',
    MultipleInput = 'multipleInput'
}

interface RunnerConfig {
    modelPath: string;       // model path (local or web address)
    modelName?: string;      // model name
    feedShape: {             // input feed shape
        fc?: number;         // feed channel, default is 3
        fw: number;          // feed width
        fh: number;          // feed height
    };
    fill?: Color;            // the color used for padding
    mean?: number[];         // mean value
    std?: number[];          // standard deviation
    bgr?: boolean;           // whether the image channel order is BGR, default is false (RGB)
    type?: GraphType;        // model structure, default is single input and single output
    needPreheat?: boolean;   // whether to warm up the engine during initialization, default is true
    plugins?: {              // register model topology transform plugins
        preTransforms?: Transformer[];  // transforms applied before creating the network topology
        transforms?: Transformer[];     // transforms applied while traversing model layers
        postTransforms?: Transformer[]; // transforms applied after the model topology has been created
    };
}
```
You can install this package via npm and use @paddlejs/paddlejs-core as follows:
```js
// Import @paddlejs/paddlejs-core
import { Runner } from '@paddlejs/paddlejs-core';
// Import the registered WebGL backend
import '@paddlejs/paddlejs-backend-webgl';

const runner = new Runner({
    modelPath: '/model/mobilenetv2', // model path, e.g. http://xx.cc/path, http://xx.cc/path/model.json, /localModelDir/model.json, /localModelDir
    feedShape: {                     // input shape
        fw: 256,
        fh: 256
    },
    fill: '#fff',                    // fill color used when resizing the image, default is #fff
    webglFeedProcess: true           // convert all pre-processing parts of the model to shader processing and keep the original image texture
});

// init runner
await runner.init();
// predict and get the result (an optional callback can be passed as the second argument)
const res = await runner.predict(mediadata);
```
Note: If you are importing the Core package, you also need to import a backend (e.g., paddlejs-backend-webgl, paddlejs-backend-webgpu).
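If the model needs input normalization or a different graph type, the remaining RunnerConfig options go in the same constructor call. A minimal sketch, assuming ImageNet-style normalization: the mean/std values are illustrative rather than library defaults, and the graph type is passed as the enum's string value 'single' because this document does not show whether GraphType itself is exported.

```js
import { Runner } from '@paddlejs/paddlejs-core';
import '@paddlejs/paddlejs-backend-webgl';

const runner = new Runner({
    modelPath: '/model/mobilenetv2',
    feedShape: { fw: 224, fh: 224 },
    mean: [0.485, 0.456, 0.406],  // illustrative ImageNet-style mean, not a library default
    std: [0.229, 0.224, 0.225],   // illustrative ImageNet-style std, not a library default
    bgr: false,                   // keep RGB channel order
    type: 'single',               // GraphType.SingleOutput (string value from the enum above)
    needPreheat: false            // skip the warm-up pass during init
});

await runner.init();
```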
@paddlejs/paddlejs-core provides the interface registerOp, through which developers can register custom operators.
@paddlejs/paddlejs-core provides the global env module, through which developers can register environment variables:

```js
// set env key/flag and value
env.set(key, value);

// get value by key/flag
env.get(key);
```
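For example, one of the performance flags described below can be set and read back. A minimal sketch, assuming env is a named export of @paddlejs/paddlejs-core:

```js
import { env } from '@paddlejs/paddlejs-core'; // assumption: env is a named export of the core package

// enable a performance flag (see the acceleration section below)
env.set('webgl_pack_channel', true);

// read the flag back
console.log(env.get('webgl_pack_channel')); // true
```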
Transform model structure
By registering model transformers through runnerConfig.plugins, developers can change the model structure (add, delete, or modify layers), for example pruning unnecessary layers to speed up inference, or appending custom layers to the end of the model so that post-processing becomes part of the model and runs faster, as in the sketch below.
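A wiring sketch of where transformers are attached. The two plugin objects are hypothetical and only illustrate the plugins configuration; the assumption that a transformer exposes a name and a transform hook is not confirmed here, so check the Transformer class in the core package for the real interface.

```js
import { Runner } from '@paddlejs/paddlejs-core';
import '@paddlejs/paddlejs-backend-webgl';

// Hypothetical transformers, for illustration only; the real Transformer
// interface is defined by @paddlejs/paddlejs-core.
const prunePlugin = {
    name: 'pruneUnusedLayers',
    transform() { /* drop layers that are not needed for inference */ }
};
const appendPostProcessPlugin = {
    name: 'appendPostProcessLayer',
    transform() { /* append a custom layer that performs post-processing */ }
};

const runner = new Runner({
    modelPath: '/model/mobilenetv2',
    feedShape: { fw: 256, fh: 256 },
    plugins: {
        preTransforms: [prunePlugin],              // applied before the network topology is created
        postTransforms: [appendPostProcessPlugin]  // applied after the topology diagram has been created
    }
});
```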
Turn on performance flags for acceleration
Paddle.js currently provides five performance flags, which can be set to true to enable inference acceleration.
```js
env.set('webgl_pack_channel', true);
```
Turn on webgl_pack_channel and eligible conv2d operators will use a packing shader to perform packing transformations, improving performance through vectorized calculation.
```js
env.set('webgl_force_half_float_texture', true);
```
Enable webgl_force_half_float_texture and feature maps will use half-float (HALF_FLOAT) textures.
```js
env.set('webgl_gpu_pipeline', true);
```
Turn on webgl_gpu_pipeline to convert all model pre-processing parts to shader processing and render the model result to the WebGL2RenderingContext of the webgl backend on screen. Developers can then perform model post-processing on the output texture and the original image texture, achieving the GPU_PIPELINE of pre-processing + inference + post-processing (rendering) for high performance. See the humanseg model case for reference.
```js
env.set('webgl_pack_output', true);
```
Enable webgl_pack_output to migrate the NHWC to NCHW layout transformation of the model output to the GPU and pack the result into a four-channel layout, which reduces loop processing when reading the results back from the GPU.
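These flags are separate env entries, so several of them can be enabled together. A minimal sketch, assuming the flags are set before the Runner is created so that they take effect during initialization; whether each flag actually helps depends on the model and backend:

```js
import { env } from '@paddlejs/paddlejs-core'; // assumption: env is a named export of the core package

// enable several acceleration flags before constructing and initializing the Runner
env.set('webgl_pack_channel', true);
env.set('webgl_force_half_float_texture', true);
env.set('webgl_pack_output', true);
```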