npm install capacitor-plugin-camera-forked
Java (46.28%)
Swift (33.16%)
TypeScript (10.05%)
JavaScript (6.81%)
HTML (1.64%)
Ruby (1.04%)
Objective-C (0.98%)
SCSS (0.04%)
Total Downloads: 0
Last Day: 0
Last Week: 0
Last Month: 0
Last Year: 0
MIT License
151 Commits
2 Branches
Updated on Jul 07, 2025
Latest Version: 3.0.113
Package Id: capacitor-plugin-camera-forked@3.0.113
Unpacked Size: 20.14 MB
Size: 18.33 MB
File Count: 37
NPM Version: 10.7.0
Node Version: 20.15.1
Published on: Jul 07, 2025
A Capacitor camera plugin.
For Capacitor 5, use versions 1.x.
For Capacitor 6, use versions 2.x.
For Capacitor 7, use versions 3.x.
```bash
npm install capacitor-plugin-camera
npx cap sync
```
If you are developing a plugin, you can use reflection to get the camera frames as Bitmap or UIImage on the native side.
Java:
```java
import java.lang.reflect.Method;
import android.graphics.Bitmap;

Class<?> cls = Class.forName("com.tonyxlh.capacitor.camera.CameraPreviewPlugin");
Method m = cls.getMethod("getBitmap");
Bitmap bitmap = (Bitmap) m.invoke(null); // static method, so no instance is needed
```
Objective-C:
```objectivec
- (UIImage*)getUIImage {
    UIImage *image = ((UIImage* (*)(id, SEL))objc_msgSend)(objc_getClass("CameraPreviewPlugin"), sel_registerName("getBitmap"));
    return image;
}
```
You have to call saveFrame beforehand.
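From the web side, a minimal sketch of the order of calls (it assumes the plugin is imported as CameraPreview, as in the JavaScript examples later in this document; MyNativePlugin is a hypothetical plugin of your own that performs the reflection shown above):

```typescript
import { CameraPreview } from "capacitor-plugin-camera";

async function processCurrentFrame() {
  // Ask the plugin to store the current frame on the native side first.
  const { success } = await CameraPreview.saveFrame();
  if (success) {
    // Then call your own native plugin, whose implementation uses the
    // reflection snippets above to read the saved frame as Bitmap/UIImage.
    // MyNativePlugin is hypothetical and not part of this package.
    // await MyNativePlugin.processSavedFrame();
  }
}
```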
To use the camera and microphone, we need to declare the relevant permissions.
Add the following to Android's AndroidManifest.xml:

```xml
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
```
Add the following to iOS's Info.plist:

```xml
<key>NSCameraUsageDescription</key>
<string>For camera usage</string>
<key>NSMicrophoneUsageDescription</key>
<string>For video recording</string>
```
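Besides declaring the permissions, they also need to be requested at runtime before starting the camera. A minimal sketch using the plugin's requestCameraPermission and requestMicroPhonePermission methods documented below:

```typescript
import { CameraPreview } from "capacitor-plugin-camera";

async function ensurePermissions() {
  // Camera access is required for the preview.
  await CameraPreview.requestCameraPermission();
  // Microphone access is only needed if you plan to record video.
  await CameraPreview.requestMicroPhonePermission();
}
```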
Why can't I see the camera?
For native platforms, the plugin puts the native camera view behind the webview and sets the webview as transparent so that we can display HTML elements above the camera.
You may need to add the following style to your app's HTML or body element to avoid blocking the camera view:
```css
ion-content {
  --background: transparent;
}
```
In dark mode, it is necessary to set the --ion-background-color property. You can do this with the following code:
```javascript
document.documentElement.style.setProperty('--ion-background-color', 'transparent');
```
- initialize(...)
- getResolution()
- setResolution(...)
- getAllCameras()
- getSelectedCamera()
- selectCamera(...)
- setScanRegion(...)
- setZoom(...)
- setFocus(...)
- setDefaultUIElementURL(...)
- setElement(...)
- startCamera()
- stopCamera()
- takeSnapshot(...)
- detectBlur(...)
- saveFrame()
- takeSnapshot2(...)
- takePhoto(...)
- toggleTorch(...)
- getOrientation()
- startRecording()
- stopRecording(...)
- setLayout(...)
- requestCameraPermission()
- requestMicroPhonePermission()
- isOpen()
- addListener('onPlayed', ...)
- addListener('onOrientationChanged', ...)
- removeAllListeners()
```typescript
initialize(options?: { quality?: number | undefined; } | undefined) => Promise<void>
```
Param | Type |
---|---|
options | { quality?: number; } |
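A minimal start-up sketch combining initialize with the permission helper, the onPlayed listener, and startCamera documented below (the import path is assumed from the install command above):

```typescript
import { CameraPreview } from "capacitor-plugin-camera";

async function openCamera() {
  // Create the camera view; quality is optional, as shown in the signature above.
  await CameraPreview.initialize();
  await CameraPreview.requestCameraPermission();
  // onPlayed fires once the preview is running and reports its resolution.
  await CameraPreview.addListener("onPlayed", (result) => {
    console.log("Preview resolution:", result.resolution);
  });
  await CameraPreview.startCamera();
}
```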
```typescript
getResolution() => Promise<{ resolution: string; }>
```
Returns: Promise<{ resolution: string; }>
```typescript
setResolution(options: { resolution: number; }) => Promise<void>
```
Param | Type |
---|---|
options | { resolution: number; } |
```typescript
getAllCameras() => Promise<{ cameras: string[]; }>
```
Returns: Promise<{ cameras: string[]; }>
```typescript
getSelectedCamera() => Promise<{ selectedCamera: string; }>
```
Returns: Promise<{ selectedCamera: string; }>
```typescript
selectCamera(options: { cameraID: string; }) => Promise<void>
```
Param | Type |
---|---|
options | { cameraID: string; } |
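For example, combining getAllCameras, getSelectedCamera and selectCamera to switch cameras (a sketch based on the signatures in this section):

```typescript
import { CameraPreview } from "capacitor-plugin-camera";

async function switchCamera() {
  const { cameras } = await CameraPreview.getAllCameras();
  const { selectedCamera } = await CameraPreview.getSelectedCamera();
  console.log("Available cameras:", cameras, "Currently selected:", selectedCamera);
  // As an example, switch to the last camera in the list.
  if (cameras.length > 0) {
    await CameraPreview.selectCamera({ cameraID: cameras[cameras.length - 1] });
  }
}
```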
```typescript
setScanRegion(options: { region: ScanRegion; }) => Promise<void>
```
Param | Type |
---|---|
options | { region: ScanRegion; } |
```typescript
setZoom(options: { factor: number; }) => Promise<void>
```
Param | Type |
---|---|
options | { factor: number; } |
```typescript
setFocus(options: { x: number; y: number; }) => Promise<void>
```
Param | Type |
---|---|
options | { x: number; y: number; } |
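A sketch combining setZoom and setFocus; the focus coordinate convention (assumed here to be normalized 0 to 1 values, with 0.5/0.5 the centre of the preview) is an assumption, not something this document specifies:

```typescript
import { CameraPreview } from "capacitor-plugin-camera";

async function zoomAndFocus() {
  // Zoom in by a factor of 2.
  await CameraPreview.setZoom({ factor: 2 });
  // Focus near the centre; 0.5/0.5 assumes normalized coordinates.
  await CameraPreview.setFocus({ x: 0.5, y: 0.5 });
}
```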
```typescript
setDefaultUIElementURL(url: string) => Promise<void>
```
Web Only
Param | Type |
---|---|
url | string |
```typescript
setElement(ele: any) => Promise<void>
```
Web Only
Param | Type |
---|---|
ele | any |
```typescript
startCamera() => Promise<void>
```
```typescript
stopCamera() => Promise<void>
```
```typescript
takeSnapshot(options: { quality?: number; checkBlur?: boolean; }) => Promise<{ base64: string; isBlur?: boolean; }>
```
Take a snapshot as base64.
Param | Type |
---|---|
options | { quality?: number; checkBlur?: boolean; } |
Returns: Promise<{ base64: string; isBlur?: boolean; }>
```typescript
detectBlur(options: { image: string; }) => Promise<{ isBlur: boolean; blurConfidence: number; sharpConfidence: number; }>
```
Analyze an image for blur detection with detailed confidence scores.
Param | Type |
---|---|
options | { image: string; } |
Returns: Promise<{ isBlur: boolean; blurConfidence: number; sharpConfidence: number; }>
```typescript
saveFrame() => Promise<{ success: boolean; }>
```
Save a frame internally. Android and iOS only.
Returns: Promise<{ success: boolean; }>
```typescript
takeSnapshot2(options: { canvas: HTMLCanvasElement; maxLength?: number; }) => Promise<{ scaleRatio?: number; }>
```
Take a snapshot onto a canvas. Web Only.
Param | Type |
---|---|
options | { canvas: any; maxLength?: number; } |
Returns: Promise<{ scaleRatio?: number; }>
```typescript
takePhoto(options: { pathToSave?: string; includeBase64?: boolean; }) => Promise<{ path?: string; base64?: string; blob?: Blob; isBlur?: boolean; }>
```
Param | Type |
---|---|
options | { pathToSave?: string; includeBase64?: boolean; } |
Returns: Promise<{ path?: string; base64?: string; blob?: any; isBlur?: boolean; }>
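A usage sketch for takePhoto that requests a base64 copy of the captured image; the JPEG data-URL prefix and the photo element ID are assumptions for illustration:

```typescript
import { CameraPreview } from "capacitor-plugin-camera";

async function capturePhoto() {
  const result = await CameraPreview.takePhoto({ includeBase64: true });
  console.log("Photo saved to:", result.path);
  if (result.base64) {
    // Display the captured image; assumes a JPEG and an <img id="photo"> element.
    const img = document.getElementById("photo") as HTMLImageElement;
    img.src = "data:image/jpeg;base64," + result.base64;
  }
}
```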
```typescript
toggleTorch(options: { on: boolean; }) => Promise<void>
```
Param | Type |
---|---|
options | { on: boolean; } |
```typescript
getOrientation() => Promise<{ "orientation": "PORTRAIT" | "LANDSCAPE"; }>
```
Get the orientation of the device.
Returns: Promise<{ orientation: 'PORTRAIT' | 'LANDSCAPE'; }>
```typescript
startRecording() => Promise<void>
```
```typescript
stopRecording(options: { includeBase64?: boolean; }) => Promise<{ path?: string; base64?: string; blob?: Blob; }>
```
Param | Type |
---|---|
options | { includeBase64?: boolean; } |
Returns: Promise<{ path?: string; base64?: string; blob?: any; }>
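A recording sketch that starts a recording, waits for a fixed duration, and then stops it (the fixed-duration timer is just for illustration):

```typescript
import { CameraPreview } from "capacitor-plugin-camera";

async function recordClip(durationMs: number) {
  await CameraPreview.startRecording();
  // Wait, then stop and collect the result.
  await new Promise((resolve) => setTimeout(resolve, durationMs));
  const { path, base64 } = await CameraPreview.stopRecording({ includeBase64: true });
  console.log("Video saved to:", path, "base64 length:", base64?.length ?? 0);
}
```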
```typescript
setLayout(options: { top: string; left: string; width: string; height: string; }) => Promise<void>
```
Param | Type |
---|---|
options | { top: string; left: string; width: string; height: string; } |
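A sketch for setLayout; the API only says all four values are strings, so the percentage values used here are an assumption:

```typescript
import { CameraPreview } from "capacitor-plugin-camera";

async function layoutCamera() {
  // Put the preview in the top half of the screen.
  // Percentage strings are an assumption; plain numeric strings may be required instead.
  await CameraPreview.setLayout({
    top: "0",
    left: "0",
    width: "100%",
    height: "50%",
  });
}
```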
```typescript
requestCameraPermission() => Promise<void>
```
```typescript
requestMicroPhonePermission() => Promise<void>
```
```typescript
isOpen() => Promise<{ isOpen: boolean; }>
```
Returns: Promise<{ isOpen: boolean; }>
```typescript
addListener(eventName: 'onPlayed', listenerFunc: onPlayedListener) => Promise<PluginListenerHandle>
```
Param | Type |
---|---|
eventName | 'onPlayed' |
listenerFunc | onPlayedListener |
Returns: Promise<PluginListenerHandle>
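For example, keeping the returned PluginListenerHandle so the listener can be removed later:

```typescript
import type { PluginListenerHandle } from "@capacitor/core";
import { CameraPreview } from "capacitor-plugin-camera";

let playedHandle: PluginListenerHandle | undefined;

async function watchPreview() {
  playedHandle = await CameraPreview.addListener("onPlayed", (result) => {
    console.log("Camera started with resolution:", result.resolution);
  });
}

async function unwatchPreview() {
  await playedHandle?.remove();
}
```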
```typescript
addListener(eventName: 'onOrientationChanged', listenerFunc: onOrientationChangedListener) => Promise<PluginListenerHandle>
```
Param | Type |
---|---|
eventName | 'onOrientationChanged' |
listenerFunc | onOrientationChangedListener |
Returns: Promise<PluginListenerHandle>
```typescript
removeAllListeners() => Promise<void>
```
measuredByPercentage: 0 means the values are measured in pixels, 1 means they are measured as percentages.
Prop | Type |
---|---|
left | number |
top | number |
right | number |
bottom | number |
measuredByPercentage | number |
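A sketch showing how a ScanRegion is passed to setScanRegion, using percentage measurement:

```typescript
import { CameraPreview } from "capacitor-plugin-camera";

async function restrictScanArea() {
  await CameraPreview.setScanRegion({
    region: {
      left: 10,
      top: 20,
      right: 90,
      bottom: 60,
      measuredByPercentage: 1, // 1: values are percentages, 0: values are pixels
    },
  });
}
```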
Prop | Type |
---|---|
remove | () => Promise<void> |
onPlayedListener: `(result: { resolution: string; }): void`

onOrientationChangedListener: `(): void`
The plugin includes blur detection capabilities using TensorFlow Lite models with Laplacian variance fallback, providing consistent results across all platforms.
Use the detectBlur method to analyze any base64 image with detailed confidence scores:
```javascript
// Analyze an existing image (base64 string or data URL)
const result = await CameraPreview.detectBlur({
  image: "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQEASABIAAD..."
  // or just the base64 string: "/9j/4AAQSkZJRgABAQEASABIAAD..."
});

console.log('Is Blurry:', result.isBlur);                 // boolean: true/false
console.log('Blur Confidence:', result.blurConfidence);   // 0.0-1.0 (higher = more blurry)
console.log('Sharp Confidence:', result.sharpConfidence); // 0.0-1.0 (higher = more sharp)

// Use confidence scores for advanced logic
if (result.blurConfidence > 0.7) {
  console.log('High confidence this image is blurry');
} else if (result.sharpConfidence > 0.8) {
  console.log('High confidence this image is sharp');
} else {
  console.log('Uncertain blur status - manual review needed');
}
```
```javascript
// Take a snapshot with blur detection
const result = await CameraPreview.takeSnapshot({
  quality: 85,
  checkBlur: true // Optional, defaults to false for performance
});

console.log('Base64:', result.base64);
if (result.blurScore !== undefined) {
  console.log('Blur Score:', result.blurScore);

  // Implement your own blur threshold logic
  const threshold = 50.0; // Adjust based on your quality requirements
  const isBlurry = result.blurScore < threshold;

  if (isBlurry) {
    console.log('Image appears to be blurry');
  } else {
    console.log('Image appears to be sharp');
  }
}
```
Blur detection is disabled by default for optimal performance. Enable it only when needed:
```javascript
// Blur detection OFF (default) - faster performance
const result = await CameraPreview.takeSnapshot({ quality: 85 });

// Blur detection ON - includes blur analysis
const resultWithBlur = await CameraPreview.takeSnapshot({
  quality: 85,
  checkBlur: true
});
```
New detectBlur Method (Recommended):
- blurConfidence: Higher values indicate more blur (>0.7 = likely blurry)
- sharpConfidence: Higher values indicate more sharpness (>0.8 = likely sharp)
- isBlur: Simple boolean result based on confidence thresholds

Legacy takeSnapshot Method:
- 0.001 as blurry
- 50-100 as blurry

Use detectBlur for:

Use takeSnapshot with checkBlur: true for:
Platform | Without Blur Detection | With Blur Detection | Overhead |
---|---|---|---|
iOS | 100-120ms | 120-145ms | ~20% |
Android | 80-120ms | 100-145ms | ~21% |
Web | 60-100ms | 85-140ms | ~40% |
Use takeSnapshot for capture + detection, and detectBlur for analyzing existing images.

No security vulnerabilities found.