Gathering detailed insights and metrics for react-s3-uploader
React component that renders an <input type="file"/> and automatically uploads to an S3 bucket
npm install react-s3-uploader
License: MIT
Latest Version: 5.0.0 (react-s3-uploader@5.0.0), published Nov 04, 2020
Size: 10.29 kB
Languages: JavaScript (100%)
NPM Version: 6.14.6 · Node Version: 12.18.3
829 Stars · 276 Commits · 239 Forks · 10 Watchers · 10 Branches · 58 Contributors
Updated on May 26, 2025
Provides a React component that automatically uploads to an S3 bucket.
```shell
$ npm install --save react-s3-uploader
```
```jsx
var ReactS3Uploader = require('react-s3-uploader');

...

<ReactS3Uploader
    signingUrl="/s3/sign"
    signingUrlMethod="GET"
    accept="image/*"
    s3path="/uploads/"
    preprocess={this.onUploadStart}
    onSignedUrl={this.onSignedUrl}
    onProgress={this.onUploadProgress}
    onError={this.onUploadError}
    onFinish={this.onUploadFinish}
    signingUrlHeaders={{ additional: headers }}
    signingUrlQueryParams={{ additional: queryParams }}
    signingUrlWithCredentials={ true }    // needed when authentication credentials must be passed via CORS
    uploadRequestHeaders={{ 'x-amz-acl': 'public-read' }}  // this is the default
    contentDisposition="auto"
    scrubFilename={(filename) => filename.replace(/[^\w\d_\-.]+/ig, '')}
    server="http://cross-origin-server.com"
    inputRef={cmp => this.uploadInput = cmp}
    autoUpload={true}
    />
```
The above example shows all supported props.
This expects a request to /s3/sign to return JSON with a signedUrl property that can be used to PUT the file in S3.
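For reference, the minimal response body the component can consume looks like the following sketch; the URL value is a placeholder, not a real signed URL.

```javascript
// Minimal shape of the signing endpoint's JSON response.
// Only the signedUrl key is required; the URL here is illustrative.
const exampleSigningResponse = {
  signedUrl: 'https://my-bucket.s3.amazonaws.com/uploads/photo.png?X-Amz-Signature=abc123'
};

console.log(typeof exampleSigningResponse.signedUrl); // the component PUTs the file to this URL
```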
contentDisposition is optional and can be one of inline, attachment, or auto. If given, the Content-Disposition header will be set accordingly, along with the file's original filename. If it is auto, the disposition type will be set to inline for images and attachment for all other files.
server is optional and can be used to specify the location of the server running the ReactS3Uploader server component, if it is not the same server from which the client is served.
Use scrubFilename to provide custom filename scrubbing before uploading. Prior to version 4.0, this library used unorm and latinize to filter out characters. Since 4.0, it simply removes all characters that are not alphanumeric, underscores, dashes, or periods.
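The default scrub since 4.0 can be sketched as a one-liner; the sample filename below is illustrative.

```javascript
// Default filename scrub (4.0+): drop every run of characters that is not
// alphanumeric, an underscore, a dash, or a period.
const scrubFilename = (filename) => filename.replace(/[^\w\d_\-.]+/ig, '');

console.log(scrubFilename('my résumé (final).pdf')); // → 'myrsumfinal.pdf'
```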
The resulting DOM is essentially:

```jsx
<input type="file" onChange={this.uploadFile} />
```
The preprocess(file, next) prop provides an opportunity to do something before the file upload begins, to modify the file (scaling the image, for example), or to abort the upload by not calling next(file).
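As a sketch, a preprocess handler that enforces a size limit might look like this; the 5 MB limit and the handler name are assumptions for illustration.

```javascript
// Hypothetical preprocess handler: abort uploads over 5 MB by never calling
// next(file); otherwise pass the (possibly modified) file along.
const MAX_BYTES = 5 * 1024 * 1024;

function onUploadStart(file, next) {
  if (file.size > MAX_BYTES) {
    console.warn('Upload aborted: file exceeds 5 MB');
    return; // not calling next(file) aborts the upload
  }
  next(file);
}
```

Pass it via `preprocess={this.onUploadStart}` as in the props example above.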
When a file is chosen, it will immediately be uploaded to S3 (unless autoUpload is false). You can listen for progress (and create a status bar, for example) by providing an onProgress function to the component.
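A minimal progress listener might look like the following; the component reports an upload percentage and a status message during the upload, and the label format here is an assumption.

```javascript
// Hypothetical onProgress handler: format a status label from the
// percentage and message reported during upload.
function onUploadProgress(percent, message) {
  return `${message}: ${percent}%`;
}

console.log(onUploadProgress(42, 'Uploading')); // → 'Uploading: 42%'
```

Wire it up via `onProgress={onUploadProgress}` as in the examples above.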
You can pass any extra props to <ReactS3Uploader /> and these will be passed down to the final <input />, which means that if you give the ReactS3Uploader a className or a name prop, the input will have those as well.
It is possible to use a custom function to provide signedUrl directly to s3uploader by adding a getSignedUrl prop. The function you provide should take file and callback arguments. The callback should be called with an object containing a signedUrl key.
```jsx
import ApiClient from './ApiClient';

function getSignedUrl(file, callback) {
  const client = new ApiClient();
  const params = {
    objectName: file.name,
    contentType: file.type
  };

  client.get('/my/signing/server', { params })
    .then(data => {
      callback(data);
    })
    .catch(error => {
      console.error(error);
    });
}

<ReactS3Uploader
  className={uploaderClassName}
  getSignedUrl={getSignedUrl}
  accept="image/*"
  onProgress={onProgress}
  onError={onError}
  onFinish={onFinish}
  uploadRequestHeaders={{
    'x-amz-acl': 'public-read'
  }}
  contentDisposition="auto"
/>
```
You can use the Express router that is bundled with this module to answer calls to /s3/sign:
```js
app.use('/s3', require('react-s3-uploader/s3router')({
    bucket: "MyS3Bucket",
    region: 'us-east-1',      // optional
    signatureVersion: 'v4',   // optional (use for some amazon regions: frankfurt and others)
    signatureExpires: 60,     // optional, number of seconds the upload signed URL should be valid for (defaults to 60)
    headers: {'Access-Control-Allow-Origin': '*'},  // optional
    ACL: 'private',           // this is default
    uniquePrefix: true        // (4.0.2 and above) default is true; setting it to false preserves the original filename in S3
}));
```
This also provides two more endpoints: GET /s3/img/(.*) and GET /s3/uploads/(.*). These create a temporary URL that provides access to the uploaded file (files are uploaded privately by default); the request is then redirected to that URL, so that the image is served to the client.
If you need to pass more than region and signatureVersion to S3, instead use the getS3 param. getS3 accepts a function that returns a new AWS.S3 instance. This is also useful if you want to mock S3 for testing purposes.
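A minimal sketch of such a mock, assuming the router only needs a getSignedUrl method shaped like aws-sdk v2's (a real AWS.S3 instance exposes far more); the factory name and fake URL are assumptions.

```javascript
// Hypothetical mock S3 factory for tests: returns an object whose
// getSignedUrl(operation, params, callback) yields a deterministic fake URL
// instead of contacting AWS.
function makeMockS3() {
  return {
    getSignedUrl(operation, params, callback) {
      callback(null, `https://mock-s3.local/${params.Bucket}/${params.Key}?sig=test`);
    }
  };
}
```

In production, the function you pass as getS3 would instead return `new AWS.S3({...})` with your real configuration.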
To use this you will need to include the express module in your package.json dependencies.
The aws-sdk must be configured with your account's Access Key and Secret Access Key. There are a number of ways to provide these, but setting up environment variables is the quickest. You just have to configure the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, and AWS automatically picks them up.
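For example, in the shell that starts your server (the values below are placeholders, not real credentials):

```shell
# Placeholder credentials; substitute your own before starting the server.
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
```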
For example, signing in Python with boto (this view is written for Django, as the HttpResponse import shows; that import was added here for completeness):

```python
import boto
import mimetypes
import json

from django.http import HttpResponse

...

conn = boto.connect_s3('AWS_KEY', 'AWS_SECRET')

def sign_s3_upload(request):
    object_name = request.GET['objectName']
    content_type = mimetypes.guess_type(object_name)[0]

    signed_url = conn.generate_url(
        300,
        "PUT",
        'BUCKET_NAME',
        'FOLDER_NAME' + object_name,
        headers = {'Content-Type': content_type, 'x-amz-acl':'public-read'})

    return HttpResponse(json.dumps({'signedUrl': signed_url}))
```
Or in Ruby on Rails with Fog:

```ruby
# Usual fog config, set as an initializer
storage = Fog::Storage.new(
  provider: 'AWS',
  aws_access_key_id: ENV['AWS_ACCESS_KEY_ID'],
  aws_secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
)

# In the controller
options = {path_style: true}
headers = {"Content-Type" => params[:contentType], "x-amz-acl" => "public-read"}

url = storage.put_object_url(ENV['S3_BUCKET_NAME'], "user_uploads/#{params[:objectName]}", 15.minutes.from_now.to_time.to_i, headers, options)

respond_to do |format|
  format.json { render json: {signedUrl: url} }
end
```
Or as a Node handler with micro:

```js
const aws = require('aws-sdk')
const uuidv4 = require('uuid/v4')
const { createError } = require('micro')

const options = {
  bucket: 'S3_BUCKET_NAME',
  region: 'S3_REGION',
  signatureVersion: 'v4',
  ACL: 'public-read'
}

const s3 = new aws.S3(options)

module.exports = (req, res) => {
  const originalFilename = req.query.objectName

  // custom filename using random uuid + file extension
  const fileExtension = originalFilename.split('.').pop()
  const filename = `${uuidv4()}.${fileExtension}`

  const params = {
    Bucket: options.bucket,
    Key: filename,
    Expires: 60,
    ContentType: req.query.contentType,
    ACL: options.ACL
  }

  const signedUrl = s3.getSignedUrl('putObject', params)

  if (signedUrl) {
    // you may also simply return the signed url, i.e. `return { signedUrl }`
    return {
      signedUrl,
      filename,
      originalFilename,
      publicUrl: signedUrl.split('?').shift()
    }
  } else {
    throw createError(500, 'Cannot create S3 signed URL')
  }
}
```
If you do some work on another server, and would love to contribute documentation, please send us a PR!
No vulnerabilities found.
OpenSSF Scorecard findings:

- no binaries found in the repo
- license file detected
- Found 9/19 approved changesets -- score normalized to 4
- 8 existing vulnerabilities detected
- 0 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 0
- no effort to earn an OpenSSF best practices badge detected
- security policy file not detected
- project is not fuzzed
- branch protection not enabled on development/release branches
- SAST tool is not run on all commits -- score normalized to 0
Last Scanned on 2025-07-07
The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.