Gathering detailed insights and metrics for s3-client
Related packages:

- @aws-sdk/client-s3: AWS SDK for JavaScript S3 Client for Node.js, Browser and React Native
- minio: S3 Compatible Cloud Storage client
- s3: high level amazon s3 client using knox as a backend
- s3-sync-client: AWS CLI s3 sync for Node.js; provides a modern client to perform S3 sync operations between file systems and S3 buckets, in the spirit of the official AWS CLI command
npm install s3-client
Scores:

- 56.8
- Supply Chain: 96.3
- Quality: 72.6
- Maintenance: 100
- Vulnerability: 97.6
- License

Languages: JavaScript (100%)
Downloads:

- Total: 2,474,861
- Last Day: 420
- Last Week: 3,980
- Last Month: 19,944
- Last Year: 272,285
GitHub: 17 stars, 226 commits, 10 forks, 5 watching, 4 branches, 1 contributor
Package metadata:

- Latest Version: 4.4.2
- Package Id: s3-client@4.4.2
- Unpacked Size: 83.82 kB
- Size: 19.68 kB
- File Count: 11
- NPM Version: 5.10.0
- Node Version: 8.11.3
Download trends (each compared to the previous period):

- Last day: 420 (-62.1%)
- Last week: 3,980 (-23.3%)
- Last month: 19,944 (-4.6%)
- Last year: 272,285 (-38.3%)
npm install s3-client --save
See also the companion CLI tool which is meant to be a drop-in replacement for s3cmd: s3-cli.
```js
var s3 = require('s3-client');

var client = s3.createClient({
  maxAsyncS3: 20,     // this is the default
  s3RetryCount: 3,    // this is the default
  s3RetryDelay: 1000, // this is the default
  multipartUploadThreshold: 20971520, // this is the default (20 MB)
  multipartUploadSize: 15728640,      // this is the default (15 MB)
  s3Options: {
    accessKeyId: "your s3 key",
    secretAccessKey: "your s3 secret",
    region: "your region",
    // endpoint: 's3.yourdomain.com',
    // sslEnabled: false
    // any other options are passed to new AWS.S3()
    // See: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Config.html#constructor-property
  },
});
```
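The `multipartUploadSize` option interacts with S3's documented limits: parts must be at least 5 MB, and a multipart upload may have at most 10,000 parts, so a too-small part size is ignored in favor of the minimum necessary value. A sketch of that arithmetic (illustrative only; `effectivePartSize` is our own helper, not part of s3-client):

```js
// S3's documented multipart limits.
var MIN_PART_SIZE = 5 * 1024 * 1024; // 5 MB minimum part size
var MAX_PARTS = 10000;               // maximum parts per multipart upload

// Derive the part size actually usable for a file of the given size.
function effectivePartSize(fileSize, requestedPartSize) {
  // Smallest part size that still fits the whole file in 10,000 parts.
  var floor = Math.ceil(fileSize / MAX_PARTS);
  // The requested size is ignored when it is below either limit.
  return Math.max(requestedPartSize, floor, MIN_PART_SIZE);
}

// A 200 GiB file cannot use 15 MB parts (that would need ~13,654 parts),
// so the part size is bumped up to fit within 10,000 parts.
console.log(effectivePartSize(200 * 1024 ** 3, 15 * 1024 * 1024)); // → 21474837
```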
You can also create a client from an existing AWS.S3 object:

```js
var AWS = require('aws-sdk');
var s3 = require('s3-client');

var awsS3Client = new AWS.S3(s3Options);
var options = {
  s3Client: awsS3Client,
  // more options available. See API docs below.
};
var client = s3.createClient(options);
```
Upload a file to S3:

```js
var params = {
  localFile: "some/local/file",

  s3Params: {
    Bucket: "s3 bucket name",
    Key: "some/remote/file",
    // other options supported by putObject, except Body and ContentLength.
    // See: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#putObject-property
  },
};
var uploader = client.uploadFile(params);
uploader.on('error', function(err) {
  console.error("unable to upload:", err.stack);
});
uploader.on('progress', function() {
  console.log("progress", uploader.progressMd5Amount,
      uploader.progressAmount, uploader.progressTotal);
});
uploader.on('end', function() {
  console.log("done uploading");
});
```
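The raw `progressAmount` / `progressTotal` counters are convenient to turn into a percentage inside the `'progress'` handler. A minimal helper sketch (our own code, not part of s3-client; note progress can go backwards on retries, so clamping is worthwhile):

```js
// Convert the emitter's byte counters into a 0-100 percentage.
function progressPercent(amount, total) {
  if (!total) return 0; // total is 0 before sizes are known
  return Math.min(100, Math.round((amount / total) * 100));
}

// e.g. inside uploader.on('progress', ...):
console.log(progressPercent(5242880, 20971520) + "%"); // → 25%
```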
Download a file from S3:

```js
var params = {
  localFile: "some/local/file",

  s3Params: {
    Bucket: "s3 bucket name",
    Key: "some/remote/file",
    // other options supported by getObject
    // See: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#getObject-property
  },
};
var downloader = client.downloadFile(params);
downloader.on('error', function(err) {
  console.error("unable to download:", err.stack);
});
downloader.on('progress', function() {
  console.log("progress", downloader.progressAmount, downloader.progressTotal);
});
downloader.on('end', function() {
  console.log("done downloading");
});
```
Sync a directory to S3:

```js
var params = {
  localDir: "some/local/dir",
  deleteRemoved: true, // default false, whether to remove s3 objects
                       // that have no corresponding local file.

  s3Params: {
    Bucket: "s3 bucket name",
    Prefix: "some/remote/dir/",
    // other options supported by putObject, except Body and ContentLength.
    // See: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#putObject-property
  },
};
var uploader = client.uploadDir(params);
uploader.on('error', function(err) {
  console.error("unable to sync:", err.stack);
});
uploader.on('progress', function() {
  console.log("progress", uploader.progressAmount, uploader.progressTotal);
});
uploader.on('end', function() {
  console.log("done uploading");
});
```
Consider increasing the socket pool size in the http and https global agents. This will improve bandwidth when using the uploadDir and downloadDir functions. For example:

```js
http.globalAgent.maxSockets = https.globalAgent.maxSockets = 20;
```
This module contains a reference to the aws-sdk module. It is a valid use case to use both this module and the lower level aws-sdk module in tandem.
s3.createClient(options)

Creates an S3 client.

options:

- s3Client - optional, an instance of AWS.S3. Leave blank if you provide s3Options.
- s3Options - optional. Leave blank if you provide s3Client. Passed to new AWS.S3(). See: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Config.html#constructor-property
- maxAsyncS3 - maximum number of simultaneous requests this client will ever have open to S3. Defaults to 20.
- s3RetryCount - how many times to try an S3 operation before giving up. Default 3.
- s3RetryDelay - how many milliseconds to wait before retrying an S3 operation. Default 1000.
- multipartUploadThreshold - if a file is this many bytes or greater, it will be uploaded via a multipart request. Default is 20MB. Minimum is 5MB. Maximum is 5GB.
- multipartUploadSize - when uploading via multipart, this is the part size. The minimum size is 5MB. The maximum size is 5GB. Default is 15MB. Note that S3 has a maximum of 10000 parts for a multipart upload, so if this value is too small, it will be ignored in favor of the minimum necessary value required to upload the file.

s3.getPublicUrl(bucket, key, [bucketLocation])

- bucket - S3 bucket
- key - S3 key
- bucketLocation - string, one of the S3 bucket location constraints. You can find out your bucket location programmatically by using this API: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#getBucketLocation-property

Returns a string which looks like this:

https://s3.amazonaws.com/bucket/key

or maybe this if you are not in US Standard:

https://s3-eu-west-1.amazonaws.com/bucket/key
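The two URL shapes above can be illustrated with plain string construction. This is a sketch of the documented return formats, not the library's actual implementation, and `publicUrl` is our own hypothetical helper:

```js
// Build a path-style public URL for a bucket/key pair, following the
// two documented shapes: US Standard vs. a regional endpoint.
function publicUrl(bucket, key, bucketLocation) {
  if (!bucketLocation) {
    // US Standard: no region in the hostname.
    return "https://s3.amazonaws.com/" + bucket + "/" + encodeURI(key);
  }
  // Regional endpoint: region embedded in the hostname.
  return "https://s3-" + bucketLocation + ".amazonaws.com/" + bucket + "/" + encodeURI(key);
}

console.log(publicUrl("bucket", "key"));              // → https://s3.amazonaws.com/bucket/key
console.log(publicUrl("bucket", "key", "eu-west-1")); // → https://s3-eu-west-1.amazonaws.com/bucket/key
```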
s3.getPublicUrlHttp(bucket, key)

- bucket - S3 Bucket
- key - S3 Key

Works for any region, and returns a string which looks like this:

http://bucket.s3.amazonaws.com/key
client.uploadFile(params)

See http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#putObject-property

params:

- s3Params: params to pass to AWS SDK putObject.
- localFile: path to the file on disk you want to upload to S3.
- defaultContentType: Unless you explicitly set the ContentType parameter in s3Params, it will be automatically set for you based on the file extension of localFile. If the extension is unrecognized, defaultContentType will be used instead. Defaults to application/octet-stream.

The difference between using AWS SDK putObject and this one: it sets the ContentType based on file extension if you do not provide it.

Returns an EventEmitter with these properties:

- progressMd5Amount
- progressAmount
- progressTotal

And these events:

- 'error' (err)
- 'end' (data) - emitted when the file is uploaded successfully. data is the same object that you get from putObject in AWS SDK.
- 'progress' - emitted when the progressMd5Amount, progressAmount, and progressTotal properties change. Note that it is possible for progress to go backwards when an upload fails and must be retried.
- 'fileOpened' (fdSlicer) - emitted when localFile has been opened. The file is opened with the fd-slicer module because we might need to read from multiple locations in the file at the same time. fdSlicer is an object for which you can call createReadStream(options). See the fd-slicer README for more information.
- 'fileClosed' - emitted when localFile has been closed.

And these methods:

- abort() - call this to stop the upload.

client.downloadFile(params)

See http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#getObject-property
params:

- localFile - the destination path on disk to write the s3 object into
- s3Params: params to pass to AWS SDK getObject.

The difference between using AWS SDK getObject and this one: retries, progress reporting, and MD5 checking are handled for you.

Returns an EventEmitter with these properties:

- progressAmount
- progressTotal

And these events:

- 'error' (err)
- 'end' - emitted when the file is downloaded successfully
- 'progress' - emitted when the progressAmount and progressTotal properties change.

client.downloadBuffer(s3Params)

See http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#getObject-property
s3Params: params to pass to AWS SDK getObject.

The difference between using AWS SDK getObject and this one: retries, progress reporting, and MD5 checking are handled for you.

Returns an EventEmitter with these properties:

- progressAmount
- progressTotal

And these events:

- 'error' (err)
- 'end' (buffer) - emitted when the file is downloaded successfully. buffer is a Buffer containing the object data.
- 'progress' - emitted when the progressAmount and progressTotal properties change.

client.downloadStream(s3Params)

See http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#getObject-property
s3Params: params to pass to AWS SDK getObject.

The difference between using AWS SDK getObject and this one: if you want retries, progress, or MD5 checking, you must code it yourself.

Returns a ReadableStream with these additional events:

- 'httpHeaders' (statusCode, headers) - contains the HTTP response headers and status code.

client.listObjects(params)

See http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#listObjects-property
params:

- s3Params - params to pass to AWS SDK listObjects.
- recursive - true or false, whether or not you want to recurse into directories. Default false.

Note that if you set Delimiter in s3Params then you will get a list of objects and folders in the directory you specify. You probably do not want to set recursive to true at the same time as specifying a Delimiter, because this will cause a request per directory. If you want all objects that share a prefix, leave the Delimiter option null or undefined.

Be sure that s3Params.Prefix ends with a trailing slash (/) unless you are requesting the top-level listing, in which case s3Params.Prefix should be an empty string.
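The Prefix/Delimiter semantics described above can be simulated on an in-memory key list. This sketch makes no S3 calls and only illustrates the listing behavior; `listKeys` is our own toy function, not part of s3-client or the AWS SDK:

```js
// Simulate an S3 listing: keys directly under the prefix go into
// Contents; keys with a delimiter beyond the prefix collapse into
// CommonPrefixes ("folders"). No delimiter means a flat prefix listing.
function listKeys(keys, prefix, delimiter) {
  var contents = [];
  var commonPrefixes = new Set();
  keys.filter(function (k) { return k.indexOf(prefix) === 0; })
      .forEach(function (k) {
        var rest = k.slice(prefix.length);
        var i = delimiter ? rest.indexOf(delimiter) : -1;
        if (i === -1) contents.push(k);                         // direct object
        else commonPrefixes.add(prefix + rest.slice(0, i + 1)); // "subdirectory"
      });
  return { Contents: contents, CommonPrefixes: Array.from(commonPrefixes) };
}

var keys = ["a/1.txt", "a/b/2.txt", "a/b/3.txt", "c.txt"];
console.log(listKeys(keys, "a/", "/"));
// → { Contents: [ 'a/1.txt' ], CommonPrefixes: [ 'a/b/' ] }
console.log(listKeys(keys, "a/", null).Contents.length); // → 3 (flat prefix listing)
```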
The difference between using AWS SDK listObjects and this one: retries and recursive listing are handled for you, and multiple requests are made as needed.

Returns an EventEmitter with these properties:

- progressAmount
- objectsFound
- dirsFound

And these events:

- 'error' (err)
- 'end' - emitted when done listing and no more 'data' events will be emitted.
- 'data' (data) - emitted when a batch of objects are found. This is the same as the data object in AWS SDK.
- 'progress' - emitted when the progressAmount, objectsFound, and dirsFound properties change.

And these methods:

- abort() - call this to stop the find operation.

client.deleteObjects(s3Params)

See http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#deleteObjects-property
s3Params are the same.

The difference between using AWS SDK deleteObjects and this one: it makes multiple requests as needed and reports progress along the way.

Returns an EventEmitter with these properties:

- progressAmount
- progressTotal

And these events:

- 'error' (err)
- 'end' - emitted when all objects are deleted.
- 'progress' - emitted when the progressAmount or progressTotal properties change.
- 'data' (data) - emitted when a request completes. There may be more.

client.uploadDir(params)

Syncs an entire directory to S3.
params:

- localDir - source path on local file system to sync to S3
- s3Params
  - Prefix (required)
  - Bucket (required)
- deleteRemoved - delete s3 objects with no corresponding local file. Default false.
- getS3Params - function which will be called for every file that needs to be uploaded. You can use this to skip some files. See below.
- defaultContentType: Unless you explicitly set the ContentType parameter in s3Params, it will be automatically set for you based on the file extension of localFile. If the extension is unrecognized, defaultContentType will be used instead. Defaults to application/octet-stream.
- followSymlinks - Set this to false to ignore symlinks. Defaults to true.

```js
function getS3Params(localFile, stat, callback) {
  // call callback like this:
  var err = new Error(...); // only if there is an error
  var s3Params = { // if there is no error
    ContentType: getMimeType(localFile), // just an example
  };
  // pass `null` for `s3Params` if you want to skip uploading this file.
  callback(err, s3Params);
}
```
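uploadDir compares a sorted S3 listing against a sorted local file list to decide what to upload and, when deleteRemoved is set, what to delete. A toy sketch of that compare step (illustrative only; the real implementation also considers MD5 sums and file sizes, and `diffSorted` is our own name):

```js
// Walk two sorted name lists in parallel and classify each entry:
// local-only names need uploading; remote-only names are deletion
// candidates when deleteRemoved is set.
function diffSorted(localFiles, s3Keys) {
  var upload = [], remove = [];
  var i = 0, j = 0;
  while (i < localFiles.length || j < s3Keys.length) {
    if (j >= s3Keys.length || (i < localFiles.length && localFiles[i] < s3Keys[j])) {
      upload.push(localFiles[i++]); // exists locally only
    } else if (i >= localFiles.length || s3Keys[j] < localFiles[i]) {
      remove.push(s3Keys[j++]);     // exists remotely only (deleteRemoved)
    } else {
      i++; j++;                     // present on both sides
    }
  }
  return { upload: upload, remove: remove };
}

console.log(diffSorted(["a.txt", "b.txt"], ["b.txt", "c.txt"]));
// → { upload: [ 'a.txt' ], remove: [ 'c.txt' ] }
```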
Returns an EventEmitter with these properties:

- progressAmount
- progressTotal
- progressMd5Amount
- progressMd5Total
- deleteAmount
- deleteTotal
- filesFound
- objectsFound
- doneFindingFiles
- doneFindingObjects
- doneMd5

And these events:

- 'error' (err)
- 'end' - emitted when all files are uploaded
- 'progress' - emitted when any of the above progress properties change.
- 'fileUploadStart' (localFilePath, s3Key) - emitted when a file begins uploading.
- 'fileUploadEnd' (localFilePath, s3Key) - emitted when a file successfully finishes uploading.

uploadDir works like this:

- Start listing all S3 objects for the given Prefix. S3 guarantees returned objects to be in sorted order.
- Meanwhile, recursively find all files in localDir.
- Compare the two lists, uploading local files that are new or changed and, if deleteRemoved is set, deleting remote objects whose corresponding local files are missing.

client.downloadDir(params)

Syncs an entire directory from S3.
params:

- localDir - destination directory on local file system to sync to
- s3Params
  - Prefix (required)
  - Bucket (required)
- deleteRemoved - delete local files with no corresponding s3 object. Default false.
- getS3Params - function which will be called for every object that needs to be downloaded. You can use this to skip downloading some objects. See below.
- followSymlinks - Set this to false to ignore symlinks. Defaults to true.

```js
function getS3Params(localFile, s3Object, callback) {
  // localFile is the destination path where the object will be written to
  // s3Object is same as one element in the `Contents` array from here:
  // http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#listObjects-property

  // call callback like this:
  var err = new Error(...); // only if there is an error
  var s3Params = { // if there is no error
    VersionId: "abcd", // just an example
  };
  // pass `null` for `s3Params` if you want to skip downloading this object.
  callback(err, s3Params);
}
```
Returns an EventEmitter with these properties:

- progressAmount
- progressTotal
- progressMd5Amount
- progressMd5Total
- deleteAmount
- deleteTotal
- filesFound
- objectsFound
- doneFindingFiles
- doneFindingObjects
- doneMd5

And these events:

- 'error' (err)
- 'end' - emitted when all files are downloaded
- 'progress' - emitted when any of the progress properties above change
- 'fileDownloadStart' (localFilePath, s3Key) - emitted when a file begins downloading.
- 'fileDownloadEnd' (localFilePath, s3Key) - emitted when a file successfully finishes downloading.

downloadDir works like this:

- Start listing all S3 objects for the given Prefix. S3 guarantees returned objects to be in sorted order.
- Meanwhile, recursively find all files in localDir.
- Compare the two lists, downloading objects that are new or changed and, if deleteRemoved is set, deleting local files whose corresponding objects are missing.

client.deleteDir(s3Params)

Deletes an entire directory on S3.
s3Params:

- Bucket
- Prefix
- MFA

Returns an EventEmitter with these properties:

- progressAmount
- progressTotal

And these events:

- 'error' (err)
- 'end' - emitted when all objects are deleted.
- 'progress' - emitted when the progressAmount or progressTotal properties change.

deleteDir works like this: it lists the objects under the given Prefix and deletes them in batches with deleteObjects.
client.copyObject(s3Params)

See http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#copyObject-property

s3Params are the same. Don't forget that CopySource must contain the source bucket name as well as the source key name.

The difference between using AWS SDK copyObject and this one: it is retried according to the client's retry settings.

Returns an EventEmitter with these events:

- 'error' (err)
- 'end' (data)
client.moveObject(s3Params)

See http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#copyObject-property

s3Params are the same. Don't forget that CopySource must contain the source bucket name as well as the source key name.

Under the hood, this uses copyObject and then deleteObjects only if the copy succeeded.

Returns an EventEmitter with these events:

- 'error' (err)
- 'copySuccess' (data)
- 'end' (data)
Checking whether a file exists

Using the AWS SDK, you can send a HEAD request, which will tell you if a file exists at Key.

See http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#headObject-property

```js
var client = require('s3-client').createClient({ /* options */ });
client.s3.headObject({
  Bucket: 's3 bucket name',
  Key: 'some/remote/file'
}, function(err, data) {
  if (err) {
    // file does not exist (err.statusCode == 404)
    return;
  }
  // file exists
});
```
Testing

S3_KEY=<valid_s3_key> S3_SECRET=<valid_s3_secret> S3_BUCKET=<valid_s3_bucket> npm test

Tests upload and download large amounts of data to and from S3. The test timeout is set to 40 seconds because Internet connectivity varies wildly.
No vulnerabilities found.

OpenSSF Scorecard results:

- no binaries found in the repo
- license file detected
- 4 existing vulnerabilities detected
- Found 0/30 approved changesets -- score normalized to 0
- no SAST tool detected
- 0 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 0
- no effort to earn an OpenSSF best practices badge detected
- security policy file not detected
- project is not fuzzed
- branch protection not enabled on development/release branches

Last scanned on 2024-12-16.

The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.