Gathering detailed insights and metrics for @camera-processor/virtual-background
A Simple to Use Webcam Filter Framework.
npm install @camera-processor/virtual-background
Language: TypeScript (100%)
Total Downloads: 6,832
Last Day: 2
Last Week: 12
Last Month: 54
Last Year: 609
License: MIT
Stars: 6
Commits: 17
Forks: 1
Watchers: 1
Branches: 1
Contributors: 1
Updated on Dec 01, 2024
Latest Version: 0.9.11
Package Id: @camera-processor/virtual-background@0.9.11
Unpacked Size: 66.79 kB
Size: 12.94 kB
File Count: 37
NPM Version: 7.19.0
Node Version: 14.12.0
Downloads compared to the previous period:
Last Day: 2 (+100% vs previous day)
Last Week: 12 (-45.5% vs previous week)
Last Month: 54 (-33.3% vs previous month)
Last Year: 609 (-42.1% vs previous year)
Simple, Easy-to-use Background Masking Using Camera-Processor.
This package uses the camera-processor framework.
You need to host the MLKit Selfie Segmentation Model's .tflite file (here) somewhere on your server.
Since this package uses tflite-helper, you'll also need to do the steps described in that package's Preparation section.
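How you host those files depends on your build setup. As one hedged example, with webpack and copy-webpack-plugin you could copy the model and tflite-helper's module files into your public output directory; the source paths below are placeholders, so check tflite-helper's Preparation section for the actual files to copy.

```js
// webpack.config.js -- a minimal sketch, not this package's official setup.
// The "from" paths are hypothetical; adjust them to where the .tflite model
// and tflite-helper's module files actually live in your project.
const CopyPlugin = require('copy-webpack-plugin');

module.exports = {
  // ...the rest of your existing webpack config
  plugins: [
    new CopyPlugin({
      patterns: [
        { from: 'assets/selfie_segmentation.tflite', to: 'tflite/models/' }, // hypothetical location
        { from: 'node_modules/tflite-helper/dist/', to: 'tflite/' }          // hypothetical location
      ]
    })
  ]
};
```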
```js
import CameraProcessor from 'camera-processor';
import {
  SegmentationAnalyzer,
  VirtualBackgroundRenderer,
  SEGMENTATION_BACKEND,
  RENDER_PIPELINE,
  VIRTUAL_BACKGROUND_TYPE
} from '@camera-processor/virtual-background';

const camera_processor = new CameraProcessor(); // Instantiate framework object
camera_processor.setCameraStream(camera_stream); // Set the camera stream from somewhere
camera_processor.start(); // Start the camera processor

// Add the segmentation analyzer
const segmentation_analyzer = camera_processor.addAnalyzer(
  'segmentation',
  new SegmentationAnalyzer(SEGMENTATION_BACKEND.MLKit)
);

// Add the virtual background renderer
const background_renderer = camera_processor.addRenderer(new VirtualBackgroundRenderer(RENDER_PIPELINE._2D));

// Set the virtual background settings
const image = new Image();
image.src = '...some image source'; // Stream will freeze if this image is CORS protected
background_renderer.setBackground(VIRTUAL_BACKGROUND_TYPE.Image, image);

// Load the model
// modelPath is the path where you hosted the model's .tflite file
// modulePath is the path where you hosted tflite-helper's module files
segmentation_analyzer.loadModel({
  modelPath: './tflite/models/selfie_segmentation.tflite',
  modulePath: './tflite/'
});

const output_stream = camera_processor.getOutputStream(); // Get the output stream and use it
```
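The block above takes camera_stream as given and leaves the output stream unused. A minimal sketch of obtaining the camera with getUserMedia and previewing the result might look like this; it assumes setCameraStream() accepts a MediaStream and getOutputStream() returns one, and peer_connection is a hypothetical RTCPeerConnection you already manage.

```js
// Sketch only: one way to obtain camera_stream and consume the output stream.
async function startVirtualBackground(videoElement) {
  // Grab the raw webcam feed.
  const camera_stream = await navigator.mediaDevices.getUserMedia({ video: true });

  camera_processor.setCameraStream(camera_stream);
  camera_processor.start();

  // Preview the processed stream locally...
  const output_stream = camera_processor.getOutputStream();
  videoElement.srcObject = output_stream;
  await videoElement.play();

  // ...or send it over WebRTC instead of the raw camera track
  // (peer_connection is a hypothetical RTCPeerConnection):
  // peer_connection.addTrack(output_stream.getVideoTracks()[0], output_stream);
}
```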
```js
// There are two Segmentation Backends:
// BodyPix
// MLKit
const segmentation_analyzer = new SegmentationAnalyzer(SEGMENTATION_BACKEND.MLKit);

// Depending on the Segmentation Backend you're using, the settings here are different.
// The settings for the BodyPix Backend are here: https://www.npmjs.com/package/@tensorflow-models/body-pix#config-params-in-bodypixload
// And these are the settings for the MLKit Backend:
// modelPath is the path where you hosted the model's .tflite file
// modulePath is the path where you hosted tflite-helper's module files
segmentation_analyzer.loadModel({
  modelPath: './tflite/models/selfie_segmentation.tflite',
  modulePath: './tflite/'
});

// Depending on the Segmentation Backend you're using, the settings here are different.
// The settings for the BodyPix Backend are here (under config): https://www.npmjs.com/package/@tensorflow-models/body-pix#params-in-segmentperson
// There are no settings for the MLKit Backend.
segmentation_analyzer.setSegmentationSettings({});

// You can also dynamically change the Segmentation Backend
segmentation_analyzer.setBackend(SEGMENTATION_BACKEND.BodyPix); // You might have to load the model again
```
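For the BodyPix backend the README only links to the upstream docs, so as an illustration the configuration might look like the sketch below. The assumption (not confirmed by this package) is that loadModel() forwards its options to bodyPix.load() and setSegmentationSettings() forwards to segmentPerson(); the option names come from @tensorflow-models/body-pix.

```js
// Hedged sketch of BodyPix-backend configuration.
segmentation_analyzer.setBackend(SEGMENTATION_BACKEND.BodyPix);

// Assumed to be passed through to bodyPix.load()
segmentation_analyzer.loadModel({
  architecture: 'MobileNetV1',
  outputStride: 16,
  multiplier: 0.75,
  quantBytes: 2
});

// Assumed to be passed through to segmentPerson()
segmentation_analyzer.setSegmentationSettings({
  internalResolution: 'medium',
  segmentationThreshold: 0.7
});
```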
```js
// There are two Render Pipelines:
// _2D
// WebGL (unfinished)
const background_renderer = new VirtualBackgroundRenderer(RENDER_PIPELINE._2D);

// The first argument is the type and the second is some data for that type.
// There are five Virtual Background Types:
// None - no data - leave the camera alone
// Transparent - no data
// Color - string data (canvas fillStyle)
// Filter - string data (canvas filter)
// Image - Image data (image object)
background_renderer.setBackground(VIRTUAL_BACKGROUND_TYPE.Filter, 'blur(20px)');

// You can also dynamically change the Render Pipeline
background_renderer.setPipeline(RENDER_PIPELINE.WebGL); // WebGL doesn't work right now though

// The contourFilter is the canvas filter applied to the segmentation mask.
// It's usually a blur to smooth the mask edge a bit.
// By default it's 'blur(4px)', which works well for images, but
// 'blur(8px)' works best for a blurred background.
background_renderer.setRenderSettings({ contourFilter: 'blur(8px)' });
```
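Putting the five background types together, switching at runtime is just repeated calls to setBackground(); the values below are illustrative examples, not defaults.

```js
// Cycle through a few of the background types documented above.
background_renderer.setBackground(VIRTUAL_BACKGROUND_TYPE.None);                 // leave the camera untouched
background_renderer.setBackground(VIRTUAL_BACKGROUND_TYPE.Transparent);          // transparent background
background_renderer.setBackground(VIRTUAL_BACKGROUND_TYPE.Color, '#00ff00');     // solid color (canvas fillStyle)
background_renderer.setBackground(VIRTUAL_BACKGROUND_TYPE.Filter, 'blur(20px)'); // blur the real background
```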
No vulnerabilities found.
OpenSSF Scorecard (last scanned on 2025-04-28):
- No binaries found in the repo
- License file detected
- 2 existing vulnerabilities detected
- Found 0/17 approved changesets (score normalized to 0)
- No SAST tool detected
- 0 commit(s) and 0 issue activity found in the last 90 days (score normalized to 0)
- No effort to earn an OpenSSF best practices badge detected
- Security policy file not detected
- Project is not fuzzed
- Branch protection not enabled on development/release branches
The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.