This package provides the class SftpClient, an SFTP client for Node.js. It is a promise-based decorator class around the excellent SSH2 package, which provides a pure node JavaScript event-based ssh2 implementation.
Documentation on the methods and available options in the underlying modules can be found on the SSH2 project pages. As the ssh2-sftp-client package is just a wrapper around the ssh2 module, you will find lots of useful information, tips and examples in the ssh2 repository.
The current stable release is v11.0.0.
Code has been tested against Node versions 18.20.4, 20.16.0 and 22.5.1. Node versions prior to v18.x are not supported. Also note there is currently a deprecation warning when using Node v22+. This comes from the ssh2 package and is outside the control of this package.
If you find this module useful and you would like to support the on-going maintenance and support of users, please consider making a small donation.
The main change in v11 concerns how the package manages events raised by the ssh2 package it depends on. Managing events within the context of asynchronous code, as occurs when using promises, is challenging. To understand some of the more subtle issues involved, it is recommended you read the Asynchronous vs. synchronous and Error events sections of the Events chapter from the node documentation.
Previous versions of this package used a global event handler approach to manage events raised outside the execution of any promise. However, this approach is problematic as all the global listeners can really do is raise an error. Attempting to catch errors raised inside event handlers is extremely difficult to manage within client code because those errors are raised inside a separate asynchronous execution context. In version 11, this approach has been replaced by a mechanism whereby the client code can pass in application specific global handlers if desired. If no handlers are defined, default handlers will log the event and, when necessary, invalidate any existing connection objects. See section 1.2 for details.
In basic terms, ssh2-sftp-client is a simple wrapper around the ssh2 package which provides a promise-based API for interacting with a remote SFTP server. The ssh2 package provides an event-based API for interacting with the ssh protocol. The ssh2-sftp-client package uses the sftp subsystem of this protocol to implement the basic operations typically associated with an sftp client.
Wrapping an event-based API with a promise-based API comes with a number of challenges, in particular, efficiently and reliably managing events within the context of asynchronous code execution. This package uses the following strategies:
All direct interactions with the ssh2 API are wrapped in promise objects. When the API call succeeds, the associated promise is successfully resolved. When an error occurs, the promise is rejected.
An error can either be due to a low level network error, such as a lost connection to the remote server, or due to an operational error, such as a file not existing or not having the appropriate permissions for access.
Each of the available SftpClient methods wraps the method call inside a promise. In creating each promise, the class adds temporary event listeners for the error, end and close events and links those listeners to the method's promise via its reject() method.
If a promise is waiting to be fulfilled when either of the two types of errors occurs, the error will be communicated back to client code via a rejected promise.
When the ssh2 emitter raises an event outside the context of any promise, that event will be handled by global event handlers. By default, these event handlers will log the event and will invalidate any existing socket connection objects, preventing any further API calls until a new connection is established.
The SftpClient class constructor supports an optional second argument which is an object that can have any of three properties representing event callback functions, which will be executed for each of the possible events error, end and close.
The need for both global listeners and temporary promise listeners is because network end, close or error events can occur at any time, including in-between API calls. During an API call, a promise is active and can be used to communicate event information back to the calling code via normal promise communication means i.e. async/await with try/catch or a promise chain's then/catch mechanism. However, outside API calls, no promise exists and there is no reliable mechanism to return error and other event information back to calling code. You cannot reliably use try/catch to catch errors thrown inside event listeners as you lack control over when the listener code runs. Your try/catch block can easily complete before the error is raised as there is no equivalent await type functionality in this situation.
As there is no simple default way to return error and other event information back to the calling code, ssh2-sftp-client doesn't try to. Instead, the default action is to just log the event information and invalidate any existing sftp connections. This strategy is often sufficient for many use cases. For those cases where it isn't, client code can pass in end, close and/or error listener functions when instantiating the SftpClient object. If provided, these listeners will be executed whenever the default global listeners are executed, which is whenever the ssh2 event emitter raises an end, close or error event which is not handled by one of the temporary promise linked event listeners.
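For example, application specific global handlers could be supplied along the following lines. This is a minimal sketch; the callback property names error, end and close are assumed from the description above, and the logging shown is purely illustrative.

const Client = require('ssh2-sftp-client');

// optional second constructor argument: application specific global event callbacks
const sftp = new Client('my-client', {
  error: err => {
    // called for error events not handled by a promise linked listener
    console.error(`global error handler: ${err.message}`);
  },
  end: () => {
    console.log('connection ended outside of any API call');
  },
  close: () => {
    console.log('connection closed outside of any API call');
  }
});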
Version 11 of ssh2-sftp-client also changes the behaviour of the temporary promise linked end and close listeners. Prior to version 11, these listeners did not reject promises. They would only invalidate the underlying ssh connection object. Only the error listener would actually reject the associated promise. This was done because you cannot guarantee the order in which events are responded to.
In most cases, events will occur in the order error, end and then close. The error event would contain details about the cause of the error while the end and close events just communicate that these events have been raised. The normal flow would be:
1. The error event occurs, including a description of the error cause. The listener catches the event, creates an error object and calls the associated promise's reject function to reject the promise. The calling process receives a rejected promise object.
2. The end event occurs. The end listener catches the event and marks the connection object as invalid, as the socket connection has been ended. There is no need to call the reject method of the associated promise as it has already been called by the error listener and you can only call one promise resolution function.
3. The close event occurs. This event means the socket connection has been closed. The listener will ensure any connection information has been invalidated. Again, there is no need to call the reject method of the associated promise as it has already been called by the error listener.
In some cases, no end event is raised and you only get an error event followed by a close event. In versions of ssh2-sftp-client prior to version 11, neither the end nor the close listeners attempted to call the reject method of the associated promise. It was assumed that all sftp servers would raise an error event whenever a connection was unexpectedly ended or closed. Unfortunately, it turns out some sftp servers are not well behaved and will terminate the connection without providing any error information or raising an error event. When this occurred in versions prior to version 11, it could result in either an API call hanging because its associated promise never gets rejected or resolved, or the call being rejected with a timeout error after a significant delay.
In order to handle the possible hanging issue in version 11, the temporary promise linked end and close listeners have been updated to always call the promise's reject function if they fire. While this works, it can cause a minor issue. As we cannot guarantee the order in which events are responded to by listeners, it is possible that either the end or close listener may be executed before the error listener. When this occurs, the promise is rejected, but the only information we have at that point is that the promise was rejected due to either an end or close event. We don't yet have any details regarding what error has caused the unexpected end or close event. Furthermore, because only the first promise resolution function call has any effect, calling reject within the error listener (assuming an error event does eventually arrive) has no effect and does not communicate error information back to the caller. This means that in some circumstances, especially when working with some poorly behaved sftp servers, an sftp connection will be lost/closed with no indication as to the reason. This can make diagnosis and bug tracking frustrating.
npm install ssh2-sftp-client
let Client = require('ssh2-sftp-client');
let sftp = new Client();

sftp.connect({
  host: '127.0.0.1',
  port: '8080',
  username: 'username',
  password: '******'
}).then(() => {
  return sftp.list('/pathname');
}).then(data => {
  console.log(data, 'the data info');
}).catch(err => {
  console.log(err, 'catch error');
});
The connection options are the same as those offered by the underlying SSH2 module, with just a couple of additional properties added to tweak the retry parameters, add a debug function and set the promiseLimit property. For full details on the other properties, please see SSH2 client methods. In particular, see the ssh2 documentation for details relating to setting various key exchange and encryption/signing algorithms used as part of the ssh2 protocol.
All the methods will return a Promise, except for on(), removeListener(), createReadStream and createWriteStream, which are typically only used in special use cases.
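Because the methods return Promises, they can also be consumed with async/await. The following minimal sketch (the host details and remote path are illustrative) lists a remote directory and ensures the connection is always closed.

const Client = require('ssh2-sftp-client');

async function listRemote(config, remoteDir) {
  const sftp = new Client();
  try {
    await sftp.connect(config);
    return await sftp.list(remoteDir);
  } finally {
    // always release the connection, even if an operation failed
    await sftp.end();
  }
}

listRemote(
  { host: 'example.com', port: 22, username: 'donald', password: 'my-secret' },
  '/upload'
)
  .then(listing => console.log(listing))
  .catch(err => console.error(err.message));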
Note that I don't use Typescript and I don't maintain any typescript definition files. There are some typescript type definition files for this module, but they are maintained separately and have nothing to do with this project. Therefore, please do not log any issues arising from the use of these definition files with this project. Instead, refer your issues to the maintainers of those modules.
The convention with both FTP and SFTP is that paths are specified using a 'nix' style i.e. use / as the path separator. This means that even if your SFTP server is running on a win32 platform, you should use / instead of \ as the path separator. For example, for a win32 path of C:\Users\fred you would actually use /C:/Users/fred. If your win32 server does not support the 'nix' path convention, you can try setting the remotePathSep property of the SftpClient object to the path separator of your remote server. This might work, but has not been tested. Please let me know if you need to do this and provide details of the SFTP server so that I can try to create an appropriate environment and adjust things as necessary. At this point, I'm not aware of any win32 based SFTP servers which do not support the 'nix' path convention.
All remote paths must either be absolute e.g. /absolute/path/to/file, or they can be relative with a prefix of either ./ (relative to the current remote directory) or ../ (relative to the parent of the current remote directory) e.g. ./relative/path/to/file or ../relative/to/parent/file. It is also possible to do things like ../../../file to specify the parent of the parent of the parent of the current remote directory. The shell tilde (~) and common environment variables like $HOME are NOT supported.
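As an illustration of the path rules above (the remote paths are hypothetical and a connected client object is assumed):

async function pathExamples(sftp) {
  // absolute path
  await sftp.exists('/absolute/path/to/file');
  // relative to the current remote directory
  await sftp.list('./relative/path');
  // relative to the parent of the current remote directory
  await sftp.list('../relative/to/parent');
  // win32 server: use the 'nix' form of the path
  await sftp.list('/C:/Users/fred');
}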
It is important to recognise that the current remote directory may not always be what you may expect. A lot will depend on the remote platform of the SFTP server and how the SFTP server has been configured. When things don't seem to be working as expected, it is often a good idea to verify your assumptions regarding the remote directory and remote paths. One way to do this is to login using a command line program like sftp or lftp.
There is a small performance hit for using ./ and ../ as the module must query the remote server to determine what the root path is and derive the absolute path. Using absolute paths is therefore more efficient and likely more robust.
When specifying file paths, ensure you include the full path i.e. include the remote file name. Don't expect the module to append the local file name to the path you provide. For example, the following will not work:
client.put('/home/fred/test.txt', '/remote/dir');
This will not result in the file test.txt being copied to /remote/dir/test.txt. You need to specify the target file name as well e.g.
client.put('/home/fred/test.txt', '/remote/dir/test.txt');
Note that the remote file name does not have to be the same as the local file name. The following works fine;
client.put('/home/fred/test.txt', '/remote/dir/test-copy.txt');
This will copy the local file test.txt to the remote file test-copy.txt in the directory /remote/dir.
Constructor to create a new ssh2-sftp-client object. An optional name string can be provided, which will be used in error messages to help identify which client has thrown the error.
Constructor Arguments
If no event callbacks are provided, the default global handlers simply log event information via console.log().
Example Use
'use strict';

const Client = require('ssh2-sftp-client');

const config = {
  host: 'example.com',
  username: 'donald',
  password: 'my-secret'
};

const sftp = new Client('example-client');

sftp.connect(config)
  .then(() => {
    return sftp.cwd();
  })
  .then(p => {
    console.log(`Remote working directory is ${p}`);
    return sftp.end();
  })
  .catch(err => {
    console.log(`Error: ${err.message}`); // error message will include 'example-client'
  });
Connect to an sftp server. Full documentation for connection options is available here
Connection Options
This module is based on the excellent SSH2 module. That module is a general SSH2 client and server library and provides much more functionality than just SFTP connectivity. Many of the connect options provided by that module are less relevant for SFTP connections. It is recommended you keep the config options to the minimum needed and stick to the options listed in the commonOpts below.
The retries, retry_factor and retry_minTimeout options are not part of the SSH2 module. These are part of the configuration for the retry package and are used to enable retrying of sftp connection attempts. See the documentation for that package for an explanation of these values.
The promiseLimit is another option which is not part of the ssh2 module and is specific to ssh2-sftp-client. It is a property used to limit the maximum number of concurrent promises possible when either downloading or uploading a directory tree using the downloadDir() or uploadDir() methods. The default setting for this property is 10. NOTE: bigger does not mean better. Many factors can affect what is the ideal setting for promiseLimit. If it is too large, any benefits are lost while node spends time switching contexts and/or with the overheads associated with creating and cleaning up promises. Lots of factors can affect what the setting should be, including size of files, number of files, speed of network, version of node, capabilities of the remote sftp server etc. A setting of 10 seems to be a reasonably good default and should be adequate for most use cases. However, if you feel it needs to be changed, I highly recommend that you benchmark different values to work out what is the best maximum size before you begin to see a performance drop off.
// common options

let commonOpts = {
  host: 'localhost', // string Hostname or IP of server.
  port: 22, // Port number of the server.
  forceIPv4: false, // boolean (optional) Only connect via IPv4 address
  forceIPv6: false, // boolean (optional) Only connect via IPv6 address
  username: 'donald', // string Username for authentication.
  password: 'borsch', // string Password for password-based user authentication
  agent: process.env.SSH_AGENT, // string - Path to ssh-agent's UNIX socket
  privateKey: fs.readFileSync('/path/to/key'), // Buffer or string that contains a private key
  passphrase: 'a pass phrase', // string - For an encrypted private key
  readyTimeout: 20000, // integer How long (in ms) to wait for the SSH handshake
  strictVendor: true, // boolean - Performs a strict server vendor check
  debug: myDebug, // function - Set this to a function that receives a single
  // string argument to get detailed (local) debug information.
  retries: 2, // integer. Number of times to retry connecting
  retry_factor: 2, // integer. Time factor used to calculate time between retries
  retry_minTimeout: 2000, // integer. Minimum timeout between attempts
  promiseLimit: 10, // max concurrent promises for downloadDir/uploadDir
};

// rarely used options

let advancedOpts = {
  localAddress,
  localPort,
  hostHash,
  hostVerifier,
  agentForward,
  localHostname,
  localUsername,
  tryKeyboard,
  authHandler,
  keepaliveInterval,
  keepaliveCountMax,
  sock,
  algorithms,
  compress
};
Example Use
sftp.connect({
  host: 'example.com',
  port: 22,
  username: 'donald',
  password: 'youarefired'
});
Retrieves a directory listing. This method returns a Promise which, once resolved, returns an array of objects representing the items in the remote directory.
Example Use
const Client = require('ssh2-sftp-client');

const config = {
  host: 'example.com',
  port: 22,
  username: 'red-don',
  password: 'my-secret'
};

let sftp = new Client();

sftp.connect(config)
  .then(() => {
    return sftp.list('/path/to/remote/dir');
  })
  .then(data => {
    console.log(data);
  })
  .then(() => {
    sftp.end();
  })
  .catch(err => {
    console.error(err.message);
  });
Return Objects
The objects in the array returned by list() have the following properties:
{
  type: '-', // file type(-, d, l)
  name: 'example.txt', // file name
  size: 43, // file size
  modifyTime: 1675645360000, // file timestamp of modified time
  accessTime: 1675645360000, // file timestamp of access time
  rights: {
    user: 'rw',
    group: 'r',
    other: 'r',
  },
  owner: 1000, // user ID
  group: 1000, // group ID
  longname: '-rw-r--r-- 1 fred fred 43 Feb 6 12:02 example.txt', // like ls -l line
}
Tests to see if remote file or directory exists. Returns type of remote object if it exists or false if it does not.
Example Use
const Client = require('ssh2-sftp-client');

const config = {
  host: 'example.com',
  port: 22,
  username: 'red-don',
  password: 'my-secret'
};

let sftp = new Client();

sftp.connect(config)
  .then(() => {
    return sftp.exists('/path/to/remote/dir');
  })
  .then(data => {
    console.log(data); // will be false or d, -, l (dir, file or link)
  })
  .then(() => {
    sftp.end();
  })
  .catch(err => {
    console.error(err.message);
  });
Returns the attributes associated with the object pointed to by path.
Attributes
The stat() method returns an object with the following properties:
let stats = {
  mode: 33279, // integer representing type and permissions
  uid: 1000, // user ID
  gid: 985, // group ID
  size: 5, // file size
  accessTime: 1566868566000, // Last access time. milliseconds
  modifyTime: 1566868566000, // last modify time. milliseconds
  isDirectory: false, // true if object is a directory
  isFile: true, // true if object is a file
  isBlockDevice: false, // true if object is a block device
  isCharacterDevice: false, // true if object is a character device
  isSymbolicLink: false, // true if object is a symbolic link
  isFIFO: false, // true if object is a FIFO
  isSocket: false // true if object is a socket
};
Example Use
let client = new Client();

client.connect(config)
  .then(() => {
    return client.stat('/path/to/remote/file');
  })
  .then(data => {
    // do something with data
  })
  .then(() => {
    client.end();
  })
  .catch(err => {
    console.error(err.message);
  });
Retrieve a file from a remote SFTP server. The dst argument defines the destination and can be either a string, a stream object or undefined. If it is a string, it is interpreted as the path to a location on the local file system (the path should include the file name). If it is a stream object, the remote data is passed to it via a call to pipe(). If dst is undefined, the method will put the data into a buffer and return that buffer when the Promise is resolved. If dst is defined, it is returned when the Promise is resolved.
In general, if you're going to pass in a string as the destination, you are better off using the fastGet() method.
get() command (see below).
Options
The options argument can be used to pass options to the underlying streams and the pipe call used by this method. The argument is an object with three possible properties, readStreamOptions, writeStreamOptions and pipeOptions. The values for each of these properties should be an object containing the required options. For example, possible read stream and pipe options could be defined as
let options = {
  readStreamOptions: {
    flags: 'r',
    encoding: null,
    handle: null,
    mode: 0o666,
    autoClose: true
  },
  pipeOptions: {
    end: false
  }
};
Most of the time, you won't want to use any options. Sometimes, it may be useful to set the encoding. For example, to 'utf-8'. However, it is important not to do this for binary files to avoid data corruption.
Example Use
let client = new Client();

let remotePath = '/remote/server/path/file.txt';
let dst = fs.createWriteStream('/local/file/path/copy.txt');

client.connect(config)
  .then(() => {
    return client.get(remotePath, dst);
  })
  .then(() => {
    client.end();
  })
  .catch(err => {
    console.error(err.message);
  });
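If no destination is supplied, the resolved value is a buffer containing the file data, as described above. A minimal sketch, assuming an already connected client and an illustrative remote path:

async function getAsBuffer(client) {
  // with dst undefined, get() resolves to a Buffer holding the remote file contents
  const buf = await client.get('/remote/server/path/file.txt');
  console.log(`downloaded ${buf.length} bytes`);
  return buf.toString('utf-8'); // only sensible for text files
}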
For example, by passing a zlib.createGunzip() writeable stream, you can both download and decompress a gzip file 'on the fly'.
Downloads a file at remotePath to localPath using parallel reads for faster throughput. This is the simplest method if you just want to download a file. However, fastGet functionality depends heavily on remote sftp server capabilities and not all servers have the concurrency support required. See the Platform Quirks & Warnings section of this README.
Bottom line, when it works, it tends to work reliably. However, for many servers, it simply won't work or will result in truncated/corrupted data.
fastGet() (see below).
Options
{
  concurrency: 64, // integer. Number of concurrent reads to use
  chunkSize: 32768, // integer. Size of each read in bytes
  step: function(total_transferred, chunk, total) {} // callback called each time a chunk is transferred
}
Sample Use
let client = new Client();
let remotePath = '/server/path/file.txt';
let localPath = '/local/path/file.txt';

client.connect(config)
  .then(() => {
    return client.fastGet(remotePath, localPath);
  })
  .then(() => {
    client.end();
  })
  .catch(err => {
    console.error(err.message);
  });
Upload data from the local system to the remote server. If the src argument is a string, it is interpreted as a local file path to be used for the data to transfer. If the src argument is a buffer, the contents of the buffer are copied to the remote file and if it is a readable stream, the contents of that stream are piped to the remotePath on the server.
Options
The options object supports three properties, readStreamOptions, writeStreamOptions and pipeOptions. The value for each property should be an object with options as properties and their associated values representing the option value. For example, you might use the following to set writeStream options.
{
  writeStreamOptions: {
    flags: 'w', // w - write and a - append
    encoding: null, // use null for binary files
    mode: 0o666, // mode to use for created file (rwx)
  }
}
The most common options to use are mode and encoding. The values shown above are the defaults. You do not have to set encoding to utf-8 for text files, null is fine for all file types. However, using utf-8 encoding for binary files will often result in data corruption.
Note that you cannot set autoClose: false for writeStreamOptions. If you attempt to set this property to false, it will be ignored. This is necessary to avoid a race condition which may exist when setting autoClose to false on the writeStream. As there is no easy way to access the writeStream once the promise has been resolved, setting autoClose to false is not terribly useful as there is no easy way to manually close the stream after the promise has been resolved.
Example Use
let client = new Client();

let data = fs.createReadStream('/path/to/local/file.txt');
let remote = '/path/to/remote/file.txt';

client.connect(config)
  .then(() => {
    return client.put(data, remote);
  })
  .then(() => {
    return client.end();
  })
  .catch(err => {
    console.error(err.message);
  });
fastPut().
Uploads the data in the file at localPath to a new file on the remote server at remotePath using concurrency. The options object allows tweaking of the fast put process. Note that this functionality is heavily dependent on the capabilities of the remote sftp server, which must support the concurrency operations used by this method. This is not part of the standard and therefore is not available in all sftp servers. See the Platform Quirks & Warnings for more details.
Bottom line, when it works, it tends to work well. However, when it doesn't work, it may fail completely or it may result in truncated or corrupted data transfers.
Options
{
  concurrency: 64, // integer. Number of concurrent reads
  chunkSize: 32768, // integer. Size of each read in bytes
  mode: 0o755, // mixed. Integer or string representing the file mode to set
  step: function(total_transferred, chunk, total) {} // function. Called every time a part of a file was transferred
}
Example Use
let localFile = '/path/to/file.txt';
let remoteFile = '/path/to/remote/file.txt';
let client = new Client();

client.connect(config)
  .then(() => {
    return client.fastPut(localFile, remoteFile);
  })
  .then(() => {
    client.end();
  })
  .catch(err => {
    console.error(err.message);
  });
Append the input data to an existing remote file. There is no integrity checking performed apart from normal writeStream checks. This function simply opens a writeStream on the remote file in append mode and writes the data passed in to the file.
Options
The following options are supported:
{
  flags: 'a', // w - write and a - append
  encoding: null, // use null for binary files
  mode: 0o666, // mode to use for created file (rwx)
  autoClose: true // automatically close the write stream when finished
}
The most common options to use are mode and encoding. The values shown above are the defaults. You do not have to set encoding to utf-8 for text files, null is fine for all file types. Generally, I would not attempt to append binary files.
Example Use
let remotePath = '/path/to/remote/file.txt';
let client = new Client();

client.connect(config)
  .then(() => {
    return client.append(Buffer.from('Hello world'), remotePath);
  })
  .then(() => {
    return client.end();
  })
  .catch(err => {
    console.error(err.message);
  });
Create a new directory. If the recursive flag is set to true, the method will create any directories in the path which do not already exist. Recursive flag defaults to false.
Example Use
let remoteDir = '/path/to/new/dir';
let client = new Client();

client.connect(config)
  .then(() => {
    return client.mkdir(remoteDir, true);
  })
  .then(() => {
    return client.end();
  })
  .catch(err => {
    console.error(err.message);
  });
Remove a directory. If the recursive flag is set to true, the specified directory and all sub-directories and files will be deleted. If set to false and the directory has sub-directories or files, the action will fail.
Note: There has been at least one report that some SFTP servers will allow non-empty directories to be removed even without the recursive flag being set to true. While this is not standard behaviour, it is recommended that users verify the behaviour of rmdir if there are plans to rely on the recursive flag to prevent removal of non-empty directories.
Example Use
let remoteDir = '/path/to/remote/dir';
let client = new Client();

client.connect(config)
  .then(() => {
    return client.rmdir(remoteDir, true);
  })
  .then(() => {
    return client.end();
  })
  .catch(err => {
    console.error(err.message);
  });
Delete a file on the remote server.
path: string. Path to remote file to be deleted.
noErrorOK: boolean. If true, no error is raised when you try to delete a non-existent file. Default is false.
Example Use
let remoteFile = '/path/to/remote/file.txt';
let client = new Client();

client.connect(config)
  .then(() => {
    return client.delete(remoteFile);
  })
  .then(() => {
    return client.end();
  })
  .catch(err => {
    console.error(err.message);
  });
Rename a file or directory from fromPath to toPath. You must have the necessary permissions to modify the remote file.
Example Use
let from = '/remote/path/to/old.txt';
let to = '/remote/path/to/new.txt';
let client = new Client();

client.connect(config)
  .then(() => {
    return client.rename(from, to);
  })
  .then(() => {
    return client.end();
  })
  .catch(err => {
    console.error(err.message);
  });
This method uses the openssh POSIX rename extension introduced in OpenSSH 4.8. The advantage of this version of rename over standard SFTP rename is that it is an atomic operation and will allow renaming a resource where the destination name exists. The POSIX rename will also work on some file systems which do not support standard SFTP rename because they don't support the system hardlink() call. The POSIX rename extension is available on all openSSH servers from 4.8 and some other implementations. This is an extension to the standard SFTP protocol and therefore is not supported on all sftp servers.
let from = '/remote/path/to/old.txt';
let to = '/remote/path/to/new.txt';
let client = new Client();

client.connect(config)
  .then(() => {
    return client.posixRename(from, to);
  })
  .then(() => {
    return client.end();
  })
  .catch(err => {
    console.error(err.message);
  });
Change the mode (read, write or execute permissions) of a remote file or directory.
Example Use
let path = '/path/to/remote/file.txt';
let newMode = 0o644; // rw-r--r--
let client = new Client();

client.connect(config)
  .then(() => {
    return client.chmod(path, newMode);
  })
  .then(() => {
    return client.end();
  })
  .catch(err => {
    console.error(err.message);
  });
Converts a relative path to an absolute path on the remote server. This method is mainly used internally to resolve remote path names.
Warning: Currently, there is a platform inconsistency with this method on win32 platforms. For servers running on non-win32 platforms, providing a path which does not exist on the remote server will result in an empty e.g. '', absolute path being returned. On servers running on win32 platforms, a normalised path will be returned even if the path does not exist on the remote server. It is therefore advised not to use this method to also verify a path exists. Instead, use the exists() method.
Returns what the server believes is the current remote working directory.
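A minimal sketch showing cwd() and realPath() together, assuming an already connected client; the relative path used is illustrative:

async function showRemotePaths(client) {
  // current remote working directory as reported by the server
  const d = await client.cwd();
  console.log(`Remote working directory: ${d}`);
  // resolve a relative path to an absolute one
  const abs = await client.realPath('./some/relative/dir');
  console.log(`Resolved path: ${abs}`);
}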
Upload the directory specified by srcDir to the remote directory specified by dstDir. The dstDir will be created if necessary. Any sub-directories within srcDir will also be uploaded. Any existing files in the remote path will be overwritten.
The upload process also emits 'upload' events. These events are fired for each successfully uploaded file. The upload event calls listeners with one argument, an object which has source and destination properties. The source property is the path of the file uploaded and the destination property is the path to where the file was uploaded. The purpose of this event is to provide some way for client code to get feedback on the upload progress. You can add your own listener using the on() method.
The 3rd argument is an options object with two supported properties, filter and useFastput.
The filter option is a function which will be called for each item to be uploaded. The function will be called with two arguments. The first argument is the full path of the item to be uploaded and the second argument is a boolean, which will be true if the target path is for a directory. The filter function will be called for each item in the source path. If the function returns true, the item will be uploaded. If it returns false, it will be filtered and not uploaded. The filter function is called via the Array.filter method. These array comprehension methods are known to be unsafe for asynchronous functions. Therefore, only synchronous filter functions are supported at this time.
The useFastput option is a boolean option. If true, the method will use the faster fastPut() method to upload files. Although this method is faster, it is not supported by all SFTP servers. Enabling this option when unsupported by the remote SFTP server will result in failures.
The two supported properties are filter and useFastput. The filter property is a predicate function which is called for each item in the source path. The function will receive two arguments. The first is the full path to the item and the second is a boolean which will be true if the item is a directory. If the function returns true, the item will be uploaded, otherwise it will be filtered out and ignored. The useFastput property is a boolean. If true, the method will use the faster, but less supported, fastPut() method to transfer files. The default is to use the slightly slower, but better supported, put() method.
Example
'use strict';

// Example of using the uploadDir() method to upload a directory
// to a remote SFTP server

const path = require('path');
const SftpClient = require('../src/index');

const dotenvPath = path.join(__dirname, '..', '.env');
require('dotenv').config({path: dotenvPath});

const config = {
  host: process.env.SFTP_SERVER,
  username: process.env.SFTP_USER,
  password: process.env.SFTP_PASSWORD,
  port: process.env.SFTP_PORT || 22
};

async function main() {
  const client = new SftpClient('upload-test');
  const src = path.join(__dirname, '..', 'test', 'testData', 'upload-src');
  const dst = '/home/tim/upload-test';

  try {
    await client.connect(config);
    client.on('upload', info => {
      console.log(`Listener: Uploaded ${info.source}`);
    });
    let rslt = await client.uploadDir(src, dst);
    return rslt;
  } catch (err) {
    console.error(err);
  } finally {
    client.end();
  }
}

main()
  .then(msg => {
    console.log(msg);
  })
  .catch(err => {
    console.log(`main error: ${err.message}`);
  });
Download the content of the remote directory specified by srcDir to the local file system directory specified by dstDir. The dstDir directory will be created if required. All sub-directories within srcDir will also be copied. Any existing files in the local path will be overwritten. No files in the local path will be deleted.
The method also emits download events to provide a way to monitor download progress. The download event listener is called with one argument, an object with two properties, source and destination. The source property is the path to the remote file that has been downloaded and the destination is the local path to where the file was downloaded. You can add a listener for this event using the on() method.
The options argument is an options object with two supported properties, filter and useFastget. The filter argument is a predicate function which will be called with two arguments for each potential item to be downloaded. The first argument is the full path of the item and the second argument is a boolean, which will be true if the item is a directory. If the function returns true, the item will be included in the download. If it returns false, it will be filtered and ignored. The filter function is called via the Array.filter method. These array comprehension methods are known to be unsafe for asynchronous functions. Therefore, only synchronous filter functions are supported at this time.
If the useFastget property is set to true, the method will use fastGet() to transfer files. The fastGet method is faster, but not supported by all SFTP services.
The two supported properties are filter and useFastget. The filter property is a function accepting two arguments, the full path to an item and a boolean value which will be true if the item is a directory. The function is called for each item in the download path and should return true to include the item and false to exclude it from the download. The useFastget property is a boolean. If true, the fastGet() method will be used to transfer files. If false (the default), the slower but better supported get() method is used.
Example
'use strict';

// Example of using the downloadDir() method to download a directory
// from a remote SFTP server

const path = require('path');
const SftpClient = require('../src/index');

const dotenvPath = path.join(__dirname, '..', '.env');
require('dotenv').config({path: dotenvPath});

const config = {
  host: process.env.SFTP_SERVER,
  username: process.env.SFTP_USER,
  password: process.env.SFTP_PASSWORD,
  port: process.env.SFTP_PORT || 22
};

async function main() {
  const client = new SftpClient('upload-test');
  const dst = '/tmp';
  const src = '/home/tim/upload-test';

  try {
    await client.connect(config);
    client.on('download', info => {
      console.log(`Listener: Download ${info.source}`);
    });
    let rslt = await client.downloadDir(src, dst);
    return rslt;
  } finally {
    client.end();
  }
}

main()
  .then(msg => {
    console.log(msg);
  })
  .catch(err => {
    console.log(`main error: ${err.message}`);
  });
Returns a read stream object which is attached to the remote file specified by the remotePath argument. This is a low level method which just returns a read stream object. Client code is fully responsible for managing and releasing the resources associated with the stream once finished i.e. closing files, removing listeners etc.
Returns a write stream object which is attached to the remote file specified in the remotePath argument. This is a low level function which just returns the stream object. Client code is fully responsible for managing that object, including closing any file descriptors and removing listeners etc.
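A minimal sketch of using createReadStream() to copy a remote file to the local file system, assuming an already connected client and illustrative paths. As noted above, the client code is responsible for cleaning up once the transfer has finished.

const fs = require('fs');

const remoteStream = client.createReadStream('/remote/path/data.log');
const localStream = fs.createWriteStream('/local/path/data.log');

remoteStream.on('error', err => console.error(`read error: ${err.message}`));
localStream.on('error', err => console.error(`write error: ${err.message}`));

localStream.on('finish', () => {
  // release resources when done - e.g. end the session if no further calls are needed
  client.end();
});

remoteStream.pipe(localStream);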
Perform a remote file copy. The file identified by the srcPath argument will be copied to the file specified as the dstPath argument. The directory where dstPath will be placed must exist, but the actual file must not i.e. no overwrites allowed.
Ends the current client session, releasing the client socket and associated resources. This function also removes all listeners associated with the client.
Example Use
let client = new Client();

client.connect(config)
  .then(() => {
    // do some sftp stuff
  })
  .then(() => {
    return client.end();
  })
  .catch(err => {
    console.error(err.message);
  });
Although normally not required, you can add and remove custom listeners on the ssh2 client object. This object supports a number of events, but only a few of them have any meaning in the context of SFTP. These are
on(eventType, listener)
Adds the specified listener to the specified event type. If the event type is error, the listener should accept one argument, which will be an Error object. The event handlers for end and close events have no arguments.
The handlers will be added to the beginning of the list of event handlers, so they will be called before any of the ssh2-sftp-client listeners.
removeListener(eventType, listener)
Removes the specified listener from the event specified in eventType. Note that the end() method automatically removes all listeners from the client object.
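A minimal sketch of adding and later removing a custom error listener; the handler body is purely illustrative.

const Client = require('ssh2-sftp-client');
const sftp = new Client();

const onError = err => {
  // runs ahead of the module's own listeners for the error event
  console.error(`custom error listener: ${err.message}`);
};

sftp.on('error', onError);

// ... perform sftp operations ...

// remove it when no longer needed (end() also removes all listeners)
sftp.removeListener('error', onError);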
Not all SFTP servers and platforms are equal. Some facilities provided by ssh2-sftp-client depend either on capabilities of the remote server or on the underlying capabilities of the remote server platform. As an example, consider chmod(). This command depends on a remote file system which implements the 'nix' concept of users and groups. The win32 platform does not have the same concept of users and groups, so chmod() will not behave in the same way.
One way to determine whether an issue you are encountering is due to ssh2-sftp-client or due to the remote server or server platform is to use a simple CLI sftp program, such as openSSH's sftp command. If you observe the same behaviour using plain sftp on the command line, the issue is likely due to server or remote platform limitations. Note that you should not use a GUI sftp client, like Filezilla or winSCP, as such GUI programs often attempt to hide these server and platform incompatibilities and will take additional steps to simulate missing functionality etc. You want to use a CLI program which does as little as possible.
fastPut() and fastGet() Methods
The fastPut() and fastGet() methods are known to be somewhat dependent on SFTP server capabilities. Some SFTP servers just do not work correctly with concurrent connections and some are known to have issues with negotiating packet sizes. These issues can sometimes be resolved by tweaking the options supplied to the methods, such as setting the number of concurrent connections or a specific packet size.
To see an example of the type of issues you can observe with fastPut() or fastGet(), have a look at issue 407, which describes the experiences of one user. Bottom line, when it works, it tends to work well and be significantly faster than using just get() or put(). However, when developing code to run against different SFTP servers, especially where you are unable to test against each server, you are likely better off just using get() and put(), or structuring your code so that users can select which method to use (this is what ssh2-sftp-client does - for example, see the downloadDir() and uploadDir() methods).
One of the challenges in providing a Promise based API over a module like SSH2, which is event based, is how to ensure events are handled appropriately. The challenge is due to the asynchronous nature of events. You cannot use try/catch for events because you have no way of knowing when the event might fire. For example, it could easily fire after your try/catch block has completed execution.
Things become even more complicated once you mix in Promises. When you define a promise, you have two methods which can be called to fulfil the promise, resolve and reject. Only one can be called - once you call resolve, you cannot call reject (well, you can call it, but it won't have any impact on the fulfilment status of the promise). The problem arises when an event, for example an error event, is fired either after you have resolved a promise or possibly in-between promises. If you don't catch the error event, your script will likely crash with an uncaught exception error.
To make matters worse, some servers, particularly servers running on a Windows platform, will raise multiple errors for the same error event. For example, when you attempt to connect with a bad username or password, you will get an All authentication methods have failed exception. However, under Windows, you will also get a Connection reset by peer exception. If we reject the connect promise based on the authentication failure exception, what do we do with the reset by peer exception? More critically, what will handle that exception given the promise has already been fulfilled and completed? To make matters worse, it seems that Windows based servers also raise an error event for non-errors. For example, when you call the end() method, the connection is closed. On Windows, this also results in a connection reset by peer error. While it could be argued that the remote server resetting the connection after receiving a disconnect request is not an error, it doesn't change the fact that one is raised and we need to somehow deal with it.
To handle this, ssh2-sftp-client implements a couple of strategies. Firstly, when you call one of the module's methods, it adds error, end and close event listeners which will call the reject method on the enclosing promise. It also keeps track of whether an error has been handled and if it has, it ignores any subsequent errors until the promise ends. Typically, the first error caught has the most relevant information and any subsequent error events are less critical or informative, so ignoring them has no negative impact. Provided one of the events is raised before the promise is fulfilled, these handlers will consume the event and deal with it appropriately.
In testing, it was found that in some situations, particularly during connect operations, subsequent errors fired with a small delay. This prevents the errors from being handled by the event handlers associated with the connect promise. To deal with this, a small 500ms delay has been added to the connect() method, which effectively delays the removal of the event handlers until all events have been caught.
The other area where additional events are fired is during the end() call. To deal with these events, the end() method sets up listeners which will simply ignore additional error, end and close events. It is assumed that once you have called end(), you really only care about any main error which occurs and no longer care about other errors that may be raised as the connection is terminated.
In addition to the promise based event handlers, ssh2-sftp-client also implements global event handlers which will catch any error, end or close events. Essentially, these global handlers only reset the sftp property of the client object, effectively ensuring any subsequent calls are rejected and, in the case of an error, send the error to the console.
While the above strategies appear to work for the majority of use cases, there are always going to be edge cases which require more flexible or powerful event handling. To support this, the on() and removeListener() methods are provided. Any event listener added using the on() method will be added at the beginning of the list of handlers for that event, ensuring it will be called before any global or promise local listeners. See the documentation for the on() method for details.
It appears that when the sftp server is running on Windows, an ECONNRESET error signal is raised when the end() method is called. Unfortunately, this signal is raised after a considerable delay. This means we cannot remove the error handler used in the end() promise, as otherwise you will get an uncaught exception error. Leaving the handler in place, even though we will ignore this error, solves that issue, but unfortunately introduces a new problem. Because we are not removing the listener, if you re-use the client object for subsequent connections, an additional error handler will be added. If this happens more than 11 times, you will eventually see the Node warning about a possible memory leak. This is because node monitors the number of error handlers and if it sees more than 11 added to an object, it assumes there is a problem and generates the warning.
The best way to avoid this issue is to not re-use client objects. Always generate a new sftp client object for each new connection.
Due to an issue with ECONNRESET error signals when connecting to Windows based SFTP servers, it is not possible to remove the error handler in the end() method. This means that if you re-use the SftpClient object for multiple connections e.g. calling connect(), then end(), then connect() etc, you run the risk of multiple error handlers being added to the SftpClient object. After 11 handlers have been added, Node will generate a possible memory leak warning.
To avoid this problem, don't re-use SftpClient objects. Generate a new SftpClient object for each connection. You can perform multiple actions with a single connection e.g. upload multiple files, download multiple files etc, but after you have called end(), you should not try to re-use the object with a further connect() call. Create a new object instead.
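One way to follow this advice is to wrap each unit of work in a helper which creates a fresh client, does the work and then calls end(). A minimal sketch, with an illustrative remote path:

const Client = require('ssh2-sftp-client');

async function withSftp(config, task) {
  const client = new Client(); // a brand new client object for every connection
  try {
    await client.connect(config);
    return await task(client);
  } finally {
    await client.end(); // the object is discarded afterwards, never reconnected
  }
}

// each call gets its own client object
withSftp(config, c => c.list('/upload'))
  .then(listing => console.log(listing))
  .catch(err => console.error(err.message));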
Many SFTP servers have rate limiting protection which will drop connections once a limit has been reached. In particular, openSSH has the setting MaxStartups, which can be a tuple of the form max:drop:full where max is the maximum allowed unauthenticated connections, drop is a percentage value which specifies the percentage of connections to be dropped once max connections has been reached and full is the number of connections at which point all subsequent connections will be dropped. e.g. 10:30:60 means allow up to 10 unauthenticated connections, after which drop 30% of connection attempts until reaching 60 unauthenticated connections, at which time, drop all attempts.
Clients first make an unauthenticated connection to the SFTP server to begin negotiation of protocol settings (cipher, authentication method etc). If you are creating multiple connections in a script, it is easy to exceed the limit, resulting in some connections being dropped. As SSH2 only raises an 'end' event for these dropped connections, no error is detected. The ssh2-sftp-client now listens for end events during the connection process and if one is detected, will reject the connection promise.
One way to avoid this type of issue is to add a delay between connection attempts. It does not need to be a very long delay - just sufficient to permit the previous connection to be authenticated. In fact, the default setting for openSSH is 10:30:60, so you really just need to have enough delay to ensure that the 1st connection has completed authentication before the 11th connection is attempted.
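A minimal sketch of spacing out connection attempts with a small delay; the 500ms value is illustrative and should be tuned for the server in question.

const Client = require('ssh2-sftp-client');

const delay = ms => new Promise(resolve => setTimeout(resolve, ms));

async function connectAll(configs) {
  const clients = [];
  for (const config of configs) {
    const client = new Client();
    await client.connect(config);
    clients.push(client);
    // give the previous connection time to finish authenticating before
    // the next unauthenticated connection is attempted
    await delay(500);
  }
  return clients;
}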
If the dst argument passed to the get method is a writeable stream, the remote file will be piped into that writeable. If the writeable you pass in is a writeable stream created with fs.createWriteStream(), the data will be written to the file specified in the constructor call to createWriteStream().
The writeable stream can be any type of write stream. For example, the below code will convert all the characters in the remote file to upper case before it is saved to the local file system. This could just as easily be something like a gunzip stream from zlib, enabling you to decompress remote zipped files as you bring them across before saving to the local file system.
'use strict';

// Example of using a writeable with get to retrieve a file.
// This code will read the remote file, convert all characters to upper case
// and then save it to a local file

const Client = require('../src/index.js');
const path = require('path');
const fs = require('fs');
const through = require('through2');

const config = {
  host: 'arch-vbox',
  port: 22,
  username: 'tim',
  password: 'xxxx'
};

const sftp = new Client();
const remoteDir = '/home/tim/testServer';

function toupper() {
  return through(function(buf, enc, next) {
    next(null, buf.toString().toUpperCase());
  });
}

sftp
  .connect(config)
  .then(() => {
    return sftp.list(remoteDir);
  })
  .then(data => {
    // list of files in testServer
    console.dir(data);
    let remoteFile = path.join(remoteDir, 'test.txt');
    let upperWtr = toupper();
    let fileWtr = fs.createWriteStream(path.join(__dirname, 'loud-text.txt'));
    upperWtr.pipe(fileWtr);
    return sftp.get(remoteFile, upperWtr);
  })
  .then(() => {
    return sftp.end();
  })
  .catch(err => {
    console.error(err.message);
  });
There are a couple of ways to do this. Essentially, you want to setup SSH keys and use these for authentication to the remote server.
One solution, provided by @KalleVuorjoki, is to use the SSH agent process. Note: SSH_AUTH_SOCK is normally created by your OS when you load the ssh-agent as part of the login session.
let sftp = new Client();
sftp.connect({
  host: 'YOUR-HOST',
  port: 'YOUR-PORT',
  username: 'YOUR-USERNAME',
  agent: process.env.SSH_AUTH_SOCK
}).then(() => {
  sftp.fastPut(/* ... */);
});
Another alternative is to just pass in the SSH key directly as part of the configuration.
let sftp = new Client();
sftp.connect({
  host: 'YOUR-HOST',
  port: 'YOUR-PORT',
  username: 'YOUR-USERNAME',
  privateKey: fs.readFileSync('/path/to/ssh/key')
}).then(() => {
  sftp.fastPut(/* ... */);
});
This solution was provided by @jmorino.
When a SOCKS 5 client is connected it must be ingested by ssh2-sftp-client immediately, otherwise a timeout occurs.
import { SocksClient } from 'socks';
import SFTPClient from 'ssh2-sftp-client';

const host = 'my-sftp-server.net';
const port = 22; // default SSH/SFTP port on remote server

// connect to SOCKS 5 proxy
const { socket } = await SocksClient.createConnection({
  proxy: {
    host: 'my.proxy', // proxy hostname
    port: 1080, // proxy port
    type: 5, // for SOCKS v5
  },
  command: 'connect',
  destination: { host, port } // the remote SFTP server
});

const client = new SFTPClient();
client.connect({
  host,
  sock: socket, // pass the socket to proxy here (see ssh2 doc)
  // other config options
});

// client is connected
Some users have encountered the error 'Timeout while waiting for handshake' or 'Handshake failed, no matching client->server ciphers'. This is often due to the client not having the correct configuration for the transport layer algorithms used by ssh2. One of the connect options provided by the ssh2 module is algorithms, which is an object that allows you to explicitly set the key exchange, ciphers, hmac and compression algorithms as well as the server host key used to establish the initial secure connection. See the SSH2 documentation for details. Getting these parameters correct usually resolves the issue.
When encountering this type of problem, one worthwhile approach is to use openSSH's CLI sftp program with the -v switch to raise logging levels. This will show you what algorithms the CLI is using. You can then use this information to match the names with the accepted algorithm names documented in the ssh2 README to set the properties in the algorithms object.
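For example, explicit algorithm lists can be supplied via the algorithms connect option. This is a minimal sketch; the host details and the algorithm names shown are only examples and must be chosen from the names accepted by ssh2 (see its README) to match what your server offers.

const Client = require('ssh2-sftp-client');
const sftp = new Client();

sftp.connect({
  host: 'legacy-server.example.com', // illustrative host
  port: 22,
  username: 'donald',
  password: 'my-secret',
  algorithms: {
    kex: ['diffie-hellman-group14-sha256', 'diffie-hellman-group14-sha1'],
    cipher: ['aes128-ctr', 'aes256-ctr'],
    serverHostKey: ['ssh-ed25519', 'ssh-rsa'],
    hmac: ['hmac-sha2-256', 'hmac-sha1']
  }
}).then(() => {
  console.log('connected');
  return sftp.end();
}).catch(err => {
  console.error(err.message);
});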
If you want to limit the amount of bandwidth used during upload/download of data, you can use a stream to limit throughput. The following example was provided by kennylbj. Note that there is a caveat that we must set the autoClose flag to false to avoid calling an extra _read() on a closed stream, which may cause a 'Permission Denied' error in ssh2-streams.
const { createWriteStream } = require('fs');
const Throttle = require('throttle');
const progress = require('progress-stream');

// limit download speed
const throttleStream = new Throttle(config.throttle);

// download progress stream
const progressStream = progress({
  length: fileSize,
  time: 500,
});
progressStream.on('progress', (progress) => {
  console.log(progress.percentage.toFixed(2));
});

const outStream = createWriteStream(localPath);

// pipe streams together
throttleStream.pipe(progressStream).pipe(outStream);

try {
  // set autoClose to false
  await client.get(remotePath, throttleStream, { autoClose: false });
} catch (e) {
  console.log('sftp error', e);
} finally {
  await client.end();
}
This was contributed by Ladislav Jacho. Thanks.
A symptom of this issue is that you are able to upload small files, but uploading larger ones fails. You probably have an MTU/fragmentation problem. For each network interface on both client and server, set the MTU to 576, e.g. ifconfig eth0 mtu 576. If that works, you need to find the largest MTU which will work for your network. An MTU which is too small will adversely affect throughput speed. A common value to use is an MTU of 1400.
For more explanation, see issue #342.
This project does not use TypeScript. However, TypeScript definition files are provided by third parties. Sometimes, these definition files have not kept up to date with the current version of this module. If you encounter this issue, you need to report it to the party responsible for the definition file, not this project.
I have started collecting example scripts in the example directory of the repository. These are mainly scripts I have put together in order to investigate issues or provide samples for users. They are not robust, lack adequate error handling and may contain errors. However, I think they are still useful for helping developers see how the module and API can be used.
The `ssh2-sftp-client` module is essentially a wrapper around the `ssh2` and `ssh2-streams` modules, providing a higher level promise based API. When you run into issues, it is important to try and determine where the issue lies - in the `ssh2-sftp-client` module or in the underlying `ssh2` and `ssh2-streams` modules. One way to do this is to first identify a minimal reproducible example which reproduces the issue. Once you have that, try to replicate the functionality using just the `ssh2` and `ssh2-streams` modules, as in the sketch below. If the issue still occurs, you can be fairly confident it is something related to those two modules and therefore an issue which should be referred to the maintainer of those modules.
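For example, a bare-bones check using only the `ssh2` module might look like the following. This is only a sketch: it assumes `config` holds valid connection settings and simply lists the remote working directory.

```javascript
// Minimal reproduction using ssh2 directly (no ssh2-sftp-client).
// Assumes `config` contains a valid host and credentials.
const { Client } = require('ssh2');

const conn = new Client();
conn
  .on('ready', () => {
    conn.sftp((err, sftp) => {
      if (err) throw err;
      sftp.readdir('.', (err, list) => {
        if (err) throw err;
        console.log(list.map((item) => item.filename));
        conn.end();
      });
    });
  })
  .on('error', (err) => {
    console.error('ssh2 error:', err.message);
  })
  .connect(config);
```

If this behaves the same way as your `ssh2-sftp-client` code, the problem most likely lies in the underlying modules or in the connection configuration.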
The `ssh2` and `ssh2-streams` modules are very solid, high quality modules with a large user base. Most of the time, issues with those modules are due to client misconfiguration. It is therefore very important when trying to diagnose an issue to also check the documentation for both `ssh2` and `ssh2-streams`. While these modules have good defaults, the flexibility of the ssh2 protocol means that not all options are enabled by default. You may need to tweak the connection options, ssh2 algorithms, ciphers etc. for some remote servers. The documentation for both the `ssh2` and `ssh2-streams` modules is quite comprehensive and there is lots of valuable information in the issue logs.
If you run into an issue which is not repeatable with just the `ssh2` and `ssh2-streams` modules, then please log an issue against the `ssh2-sftp-client` module and I will investigate. Please note the next section on logging issues.
Note also that the repository contains two useful directories. The first is the examples directory, which contains some examples of using `ssh2-sftp-client` to perform common tasks. A few minutes reviewing these examples can provide that additional bit of detail to help fix any problems you are encountering.
The second is the validation directory. It contains some very simple scripts which perform basic tasks using only the `ssh2` module (no `ssh2-sftp-client` module). These can be useful when trying to determine whether the issue is with the underlying `ssh2` module or with the `ssh2-sftp-client` wrapper.
There are some common mistakes people make when using Promises or async/await. These are by far the most common problems found in issues logged against this module. Please check for them before logging your issue.
All methods in `ssh2-sftp-client` return a Promise, which means they execute asynchronously. When you call a method inside the `then()` block of a promise chain, it is critical that you return the Promise that call generates. Failing to do this will result in the `then()` block completing and your code starting execution of the next `then()`, `catch()` or `finally()` block before your promise has been fulfilled. For example, the following will not do what you expect
```javascript
sftp.connect(config)
  .then(() => {
    sftp.fastGet('foo.txt', 'bar.txt');
  }).then(rslt => {
    console.log(rslt);
    sftp.end();
  }).catch(e => {
    console.error(e.message);
  });
```
In the above code, the `sftp.end()` method will almost certainly be called before `sftp.fastGet()` has been fulfilled (unless foo.txt is really small!). In fact, the whole promise chain will complete and exit even before the `sftp.end()` call has been fulfilled. The correct code would be something like
```javascript
sftp.connect(config)
  .then(() => {
    return sftp.fastGet('foo.txt', 'bar.txt');
  }).then(rslt => {
    console.log(rslt);
    return sftp.end();
  }).catch(e => {
    console.error(e.message);
  });
```
Note the `return` statements. These ensure that the Promise returned by the client method is returned into the promise chain. It is this promise that the next block in the chain waits on before it executes. Without the return statement, the block returns its own default promise, which essentially says this block has been fulfilled; what you really want is the promise which says your sftp client method call has been fulfilled.
A common symptom of this type of error is for file uploads or downloads to fail to complete, or for the data in those files to be truncated. What is happening is that the connection is ended before the transfer has completed.
Another common error is to mix Promise chains and async/await calls. This is rarely a great idea. While you can do this, it tends to create complicated and difficult to maintain code. Select one approach and stick with it. Both approaches are functionally equivalent, so there is no reason to mix up the two paradigms. My personal preference would be to use async/await as I think that is more natural for most developers. For example, the following is more complex and difficult to follow than necessary (and has a bug!)
```javascript
sftp.connect(config)
  .then(() => {
    return sftp.cwd();
  }).then(async (d) => {
    console.log(`Remote directory is ${d}`);
    try {
      await sftp.fastGet(`${d}/foo.txt`, `./bar.txt`);
    } catch (e) {
      console.error(e.message);
    }
  }).catch(e => {
    console.error(e.message);
  }).finally(() => {
    sftp.end();
  });
```
The main bug in the above code is that the `then()` block is not returning the Promise generated by the call to `sftp.fastGet()`. What it is actually returning is a fulfilled promise which says the `then()` block has been run (note that the await'ed promise is not being returned and is therefore outside the main Promise chain). As a result, the `finally()` block will be executed before the awaited promise has been fulfilled.
Using async/await inside the promise chain has created unnecessary complexity and leads to incorrect assumptions about how the code will execute. A quick glance at the code is likely to give the impression that execution will wait for the `sftp.fastGet()` call to be fulfilled before continuing. This is not the case. The code would be more clearly expressed as either
```javascript
sftp.connect(config)
  .then(() => {
    return sftp.cwd();
  }).then(d => {
    console.log(`remote dir ${d}`);
    return sftp.fastGet(`${d}/foo.txt`, 'bar.txt');
  }).catch(e => {
    console.error(e.message);
  }).finally(() => {
    return sftp.end();
  });
```
or, using async/await
```javascript
const Client = require('ssh2-sftp-client');

async function doSftp() {
  const sftp = new Client();
  try {
    await sftp.connect(config);
    let d = await sftp.cwd();
    console.log(`remote dir is ${d}`);
    await sftp.fastGet(`${d}/foo.txt`, 'bar.txt');
  } catch (e) {
    console.error(e.message);
  } finally {
    await sftp.end();
  }
}
```
Another common error is to try and use a try/catch block to catch event signals, such as an error event. In general, you cannot use try/catch blocks for asynchronous code and expect errors to be caught by the `catch` block. Handling errors in asynchronous code is one of the key reasons we now have the Promise and async/await frameworks.
The basic problem is that the try/catch block will have completed execution before the asynchronous code has completed. If the asynchronous code has not completed, then there is a potential for it to raise an error. However, as the try/catch block has already completed, there is no catch waiting to catch the error. It will bubble up and probably result in your script exiting with an uncaught exception error.
Error events are essentially asynchronous code. You don't know when such events will fire. Therefore, you cannot use a try/catch block to catch such event errors. Even creating an error handler which then throws an exception won't help as the key problem is that your try/catch block has already executed. There are a number of alternative ways to deal with this situation. However, the key symptom is that you see occasional uncaught error exceptions that cause your script to exit abnormally despite having try/catch blocks in your script. What you need to do is look at your code and find where errors are raised asynchronously and use an event handler or some other mechanism to manage any errors raised.
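The sketch below, which is not specific to `ssh2-sftp-client`, illustrates the point: the try/catch finishes long before the error event fires, so only a listener registered on the emitter ever sees the error.

```javascript
const { EventEmitter } = require('node:events');

const emitter = new EventEmitter();

// register a listener - this is what actually handles the error
emitter.on('error', (e) => console.error('handled by listener:', e.message));

try {
  // schedule an 'error' event to fire after the try/catch has completed
  setTimeout(() => emitter.emit('error', new Error('boom')), 100);
} catch (e) {
  // never reached - the try block finished before the event was emitted
  console.error('caught:', e.message);
}
```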
Not all SFTP servers are the same. Like most standards, the SFTP protocol has some level of interpretation and allows different levels of compliance. This means there can be differences in behaviour between different servers, and code which works with one server will not work the same with another. For example, the value returned by realpath for non-existent objects can differ significantly. Some servers will throw an error for a particular operation while others will just return null; some servers support concurrent operations (such as used by fastGet/fastPut) while others will not; and of course, the text of error messages can vary significantly. In particular, we have noticed significant differences across different platforms. It is therefore advisable to do comprehensive testing when the SFTP server is moved to a new platform. This includes moving to a cloud based service even if the underlying platform remains the same. I have noticed that some cloud platforms can generate unexpected events, possibly related to additional functionality or features associated with the cloud implementation. For example, it appears SFTP servers running under Azure will generate an error event when the connection is closed, even when the client has requested the connection be terminated. The same SFTP server running natively on Windows does not appear to exhibit such behaviour.
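If you need to know how a particular server behaves, a quick probe like the following can help. This is only an illustrative sketch using an already connected client and a hypothetical missing path.

```javascript
// Illustrative probe: behaviour for non-existent paths varies between
// SFTP server implementations, so check rather than assume.
const missing = '/no/such/path'; // hypothetical path used for the probe
const resolved = await sftp
  .realPath(missing)
  .catch((e) => `error: ${e.message}`); // some servers error, others resolve
console.log('realPath for missing path ->', resolved);
console.log('exists() ->', await sftp.exists(missing)); // false when not found
```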
Technically, SFTP should be able to perform multiple operations concurrently. As node is single threaded, what we are really talking about is running multiple execution contexts as a pool, where node switches contexts when one is blocked waiting on network data etc. However, I have found this to be extremely unreliable and of very little benefit from a performance perspective. My recommendation is therefore to avoid executing multiple requests over the same connection in parallel (for example, generating multiple `get()` promises and using something like `Promise.all()` to resolve them).
If you are going to try and perform concurrent operations, you need to test extensively and ensure you are using data which is large enough that context switching does occur (i.e. the request is not completed in a single run). Some SFTP servers will handle concurrent operations better than others.
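If you simply want to process a batch of files reliably, a sequential loop over awaited calls is usually sufficient. The helper below is a hypothetical sketch; the function name and the shape of the file list are illustrative, not part of the API.

```javascript
// Hypothetical helper: transfer files one at a time over a single connection
// rather than firing off concurrent get() calls with Promise.all().
async function downloadSequentially(client, files) {
  for (const { remote, local } of files) {
    // each transfer is fulfilled before the next one starts
    await client.get(remote, local);
  }
}

// usage (paths are illustrative)
// await downloadSequentially(sftp, [
//   { remote: '/upload/a.txt', local: './a.txt' },
//   { remote: '/upload/b.txt', local: './b.txt' }
// ]);
```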
You can add a `debug` property to the config object passed to `connect()` to turn on debugging. This will generate quite a lot of output. The value of the property should be a function which accepts a single string argument. For example;
```javascript
config.debug = (msg) => {
  console.error(msg);
};
```
Enabling debugging can generate a lot of output. If you use `console.error()` as the output (as in the example above), you can redirect the output to a file using shell redirection, e.g.
```sh
node script.js 2> debug.log
```
If you just want to see debug messages from `ssh2-sftp-client` and exclude debug messages from the underlying `ssh2` and `ssh2-streams` modules, you can filter based on messages which start with 'CLIENT', e.g.
```javascript
{
  debug: (msg) => {
    if (msg.startsWith('CLIENT')) {
      console.error(msg);
    }
  }
}
```
Please log an issue for all bugs, questions, feature and enhancement requests. Please ensure you include the module version, node version and platform.
I am happy to try and help diagnose and fix any issues you encounter while using the `ssh2-sftp-client` module. However, I will only put in the effort if you are prepared to put in the effort to provide the information necessary to reproduce the issue. Things which will help:
Perhaps the best assistance is a minimal reproducible example of the issue. Once the issue can be readily reproduced, it can usually be fixed very quickly.
Pull requests are always welcomed. However, please ensure your changes pass all tests and if you're adding a new feature, that tests for that feature are included. Likewise, for new features or enhancements, please include any relevant documentation updates.
Note: The `README.md` file is generated from the `README.org` file. Therefore, any documentation updates or fixes need to be made to the `README.org` file. This file is tangled using Emacs org mode. If you don't use Emacs or org-mode, don't be too concerned; the org-mode syntax is straightforward and similar to markdown. I will verify any updates to `README.org` and generate a new `README.md` when necessary. The main point to note is that any changes made directly to `README.md` will not persist and will be lost when a new version is generated, so don't modify that file.
This module will adopt a standard semantic versioning policy. Please indicate in your pull request what level of change it represents i.e.
Major: Change to API or major change in functionality which will require an increase in major version number.
Minor: Minor change, enhancement or new feature which does not change existing API and will not break existing client code.
Bug Fix: No change to functionality or features. Simple fix of an existing bug.
This module was initially written by jyu213. On August 23rd, 2019, theophilusx took over responsibility for maintaining this module. A number of other people have contributed to this module, but until now, this was not tracked. My intention is to credit anyone who contributes going forward.
Thanks to the following for their contributions -