Gathering detailed insights and metrics for @healthadvisor/r7insight_node
npm install @healthadvisor/r7insight_node
Supply Chain: 75
Quality: 98.1
Maintenance: 77
Vulnerability: 100
License: 99.6
Languages: JavaScript (100%)
Total Downloads: 8,450
Last Day: 14
Last Week: 104
Last Month: 516
Last Year: 5,141
7 Stars · 49 Commits · 13 Forks · 44 Watching · 6 Branches · 225 Contributors
Latest Version: 3.3.0
Package Id: @healthadvisor/r7insight_node@3.3.0
Unpacked Size: 901.10 kB
Size: 33.65 kB
File Count: 17
NPM Version: 8.12.1
Node Version: 14.19.1
Cumulative downloads:
Last day: 14 (16.7% compared to previous day)
Last week: 104 (7.2% compared to previous week)
Last month: 516 (67% compared to previous month)
Last year: 5,141 (130.2% compared to previous year)
Allows you to send logs to your Insight Platform (or Logentries) account from Node.js.
This client is not backwards-compatible with Le_Node.
An upgrade guide can be found on the wiki.
There’s a separate client intended for use in the browser, called r7insight_js, which uses http and is optimized for browser-specific logging needs.
```js
const Logger = require('r7insight_node');

const logger = new Logger({ token: '<token>', region: '<region>' });

logger.warning("I'll put this over here, with the rest of the fire.");
```
The workflow is as follows:

- `npm install` for installing the packages
- `npm test` for testing
- `npm version [major|minor|patch]` for bumping the version
```sh
# You can simply paste all this into your terminal
npm uninstall -g r7insight_node
npm pack
npm i -g r7insight_node-*.tgz
npm install -g dts-gen
dts-gen -m r7insight_node -f index.d.ts -o
```
The options object you provide to the constructor only requires your access token, but you can configure its behavior further.
All of the following except `token`, `levels` and `secure` can also be configured after instantiation as settable properties on the client. They are accessors, though, and invalid values will be ignored.
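For instance, a minimal sketch of reconfiguring a client after construction, assuming the option names listed below double as property names on the client:

```js
const Logger = require('r7insight_node');

const logger = new Logger({ token: '<token>', region: '<region>' });

// Options (other than token, levels and secure) are exposed as settable
// accessors; assigning an invalid value is simply ignored.
logger.console = true;  // assumed property name, mirroring the `console` option
logger.minLevel = 2;    // assumed property name, mirroring the `minLevel` option
```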
- `region`: e.g. `eu`, `us`, etc.
- `console`: log events are also sent to `console.log`, `console.warn` and `console.error` as appropriate. Default: `false`.
- You can also pass the level within the log object rather than calling a different function per level:

```js
// Rather than call different functions based on level:
logger.warn({message: 'hello'});
// You can call the same function with different levels within object:
logger.log({level: 'warn', message: 'hello'});
```

- `bufferSize`: default: `16192`.
- Reconnection backoff strategy: `fibonacci` or `exponential`. Default: `fibonacci`.
- `flatten`: default: `false`. More details on this below.
- `flattenArrays`: if `flatten` is true, you can also indicate whether arrays should be subject to the same process. Defaults to `true` if `flatten` is `true`; otherwise meaningless.
- `withStack`: if the log entry is or contains an `Error` object, setting this to `true` will cause the stack trace to be included. Default: `false`.
- `port`: the default depends on `secure`; 443 if it's true.
- `debug`: setting this to `true` will enable debug logging with a default stdout logger's `log` method.

The default log levels are:
You can provision the constructor with custom names for these levels with either an array or an object hash:
```js
[ 'boring', 'yawn', 'eh', 'hey' ]

{ boring: 0, yawn: 1, eh: 2, hey: 3 }
```
In the former case, the index corresponds to the numeric level, so sparse arrays are valid. In either case, missing levels will be filled in with the defaults.
The `minLevel` option respects either the level number (e.g. `2`) or the name (e.g. `'eh'`).
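For instance, a sketch combining the two, using the toy level names above (the token and region are placeholders):

```js
const Logger = require('r7insight_node');

const logger = new Logger({
  token: '<token>',
  region: '<region>',
  // Index corresponds to the numeric level; missing entries fall back to defaults.
  levels: [ 'boring', 'yawn', 'eh', 'hey' ],
  // Accepts the level number (2) or its name ('eh').
  minLevel: 'eh',
});

// Each custom level name becomes a method on the client:
logger.hey('worth recording');   // level 3, at or above minLevel
logger.yawn('filtered out');     // level 1, below minLevel, so it should be skipped
```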
The level names each become methods on the client, which are just sugar for calling `client.log(lvl, logentry)` with the first argument curried.
Since these names will appear on the client, they can’t collide with existing properties. Not that you’re particularly likely to try naming a log level ‘hasOwnProperty’ or ‘_write’ but I figured I should mention it.
So the following three are equivalent:
```js
logger.notice('my msg');
logger.log('notice', 'my msg');
logger.log(2, 'my msg');
```
It’s also possible to forgo log levels altogether. Just call `log` with a single argument and it will be interpreted as the log entry. When used this way, the `minLevel` setting is ignored.
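A quick sketch of this level-less form, assuming a client constructed as in the earlier examples:

```js
const Logger = require('r7insight_node');

const logger = new Logger({ token: '<token>', region: '<region>' });

// A single argument is treated as the log entry itself;
// minLevel is not consulted for these calls.
logger.log('plain entry with no level');
logger.log({ event: 'heartbeat', ok: true });
```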
These events are also exported in the `Logger`, so you can access them using `Logger.errorEvent`, `Logger.bufferDrainEvent`, etc. Example:
```js
logger.notice({ type: 'server', event: 'shutdown' });
logger.once(Logger.bufferDrainEvent, () => {
  logger.closeConnection();
  logger.on(Logger.disconnectedEvent, () => {
    process.exit();
  });
});
```
'error'
The client is an EventEmitter, so you should (as always) make sure you have a listener on 'error'. Error events can occur when there’s been a problem with the connection or if a method was called with invalid parameters. Note that errors that occur during instantiation, as opposed to operation, will throw.
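For example, a minimal listener (the handler body is just a placeholder):

```js
const Logger = require('r7insight_node');

const logger = new Logger({ token: '<token>', region: '<region>' });

// Attach an 'error' listener so connection problems or invalid calls
// surface here instead of as uncaught 'error' events.
logger.on('error', (err) => {
  console.error('Insight logger error:', err);
});
```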
'log'
Triggered when a log is about to be written to the underlying connection. The prepared log object or string is supplied as an argument.
'connected', 'disconnected' and 'timed out'
These indicate when a new connection to the host is established, destroyed, or timed out due to client-side inactivity. An inactivity timeout is normal when the connection has been idle for a configurable period of time (see `inactivityTimeout`); the connection will be reopened when needed again. A disconnection can be the result of either socket inactivity or a network failure.
'drain', 'finish', 'pipe', and 'unpipe'
These are events inherited from `Writable`.
'buffer drain'
This event is emitted when the underlying ring buffer has been fully consumed and the `Socket.write` callback has been called. It can be useful when it’s time for the application to terminate but you want to be sure any pending logs have finished writing.
```js
logger.notice({ type: 'server', event: 'shutdown' });
logger.once('buffer drain', () => {
  logger.closeConnection();
  logger.on('disconnected', () => {
    process.exit();
  });
});
```
'buffer shift'
The buffer shift event is emitted when the internal buffer is shifted because it has reached `bufferSize` events. You may want to listen for this event for security or operational reasons: each time it is emitted, a log event is discarded and will never make it to the Insight Platform.
```js
logger.ringBuffer.on('buffer shift', () => {
  // PagerDuty or send an email
});
```
Log entries can be strings or objects. If the log argument is an array, it will be interpreted as multiple log events.

In the case of objects, the native JSON.stringify serialization is augmented in several ways. In addition to handling circular references, it will automatically take care of a variety of objects and primitives which otherwise wouldn’t serialize correctly, like Error, RegExp, Set, Map, Infinity, NaN, etc.
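For illustration, a small sketch of an entry that plain JSON.stringify would choke on but which the augmented serialization described above is meant to handle:

```js
const Logger = require('r7insight_node');

const logger = new Logger({ token: '<token>', region: '<region>' });

const entry = {
  attempts: Infinity,        // plain JSON.stringify would turn this into null
  pattern: /^user-\d+$/,     // RegExp
  seen: new Set(['a', 'b']), // Set
  err: new Error('boom'),    // Error
};
entry.self = entry;          // circular reference

logger.log(entry);
```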
If you choose to set `withStack` to true, errors will include their stacktraces as an array (so that they are not painful to look at). Be sure to turn on "expand JSON" (meaning pretty print) in the options in the Insight Platform:

![stack trace as seen in logentries app][screen1]
You can adjust this further by supplying your own custom `replacer`. This is a standard argument to JSON.stringify -- see MDN: JSON > Stringify > The Replacer Parameter for details. In the event that you supply a custom replacer, it is applied prior to the built-in replacer described above, so you can override its behavior.
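A sketch, assuming the replacer is supplied as a `replacer` constructor option (the exact option name is an assumption here); it follows the standard JSON.stringify replacer signature:

```js
const Logger = require('r7insight_node');

const logger = new Logger({
  token: '<token>',
  region: '<region>',
  // Assumed option name; applied before the built-in replacer,
  // so returning a different value here overrides its handling.
  replacer(key, value) {
    return key === 'password' ? '[REDACTED]' : value;
  },
});

logger.log({ user: 'sam', password: 'hunter2' });  // password is masked
```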
Two options are available, `timestamp` and `withLevel`, which will add data to your log events. For objects, these are added as properties (non-mutatively). For strings, these values are prepended. If the name of a property would cause a collision with an existing property, it will be prepended with an underscore.
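A sketch of turning both on, assuming they are simple boolean flags; the exact field names added to each entry aren't spelled out here, so the comments only restate the behaviour described above:

```js
const Logger = require('r7insight_node');

const logger = new Logger({
  token: '<token>',
  region: '<region>',
  timestamp: true,   // adds a timestamp to each event
  withLevel: true,   // adds the level to each event
});

// Objects get the extra data added as properties (non-mutatively);
// strings get the values prepended.
logger.warning({ event: 'disk-space-low' });
logger.warning('plain string entry');
```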
In some cases it will end up being easier to query your data if objects aren’t deeply nested. With the `flatten` and `flattenArrays` options, you can tell the client to transform objects like so:

`{ "a": 1, "b": { "c": 2 } }` => `{ "a": 1, "b.c": 2 }`

If `flattenArrays` has not been set to false, this transformation will apply to arrays as well:

`{ "a": [ "b", { "c": 3 } ] }` => `{ "a.0": "b", "a.1.c": 3 }`
In addition to `log` and its arbitrary sugary cousins, you can call `closeConnection` to explicitly close an open connection if one exists; you might wish to do this as part of a graceful exit. The connection will reopen if you log further.
Also, because the client is actually a writable stream, you can call `write` directly. This gives you lower-level access to writing entries. It is in object mode, but this means it expects discrete units (one call = one entry), not actual objects; you should pass in strings. This is useful if you want to pipe stdout, for example.
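For example, a sketch of piping a child process's output straight into the client; each string chunk written becomes one entry (the command here is arbitrary):

```js
const { spawn } = require('child_process');
const Logger = require('r7insight_node');

const logger = new Logger({ token: '<token>', region: '<region>' });

// Pipe another stream in: one written chunk = one log entry.
const child = spawn('ls', ['-l']);
child.stdout.setEncoding('utf8');   // make sure strings, not Buffers, are written
child.stdout.pipe(logger);

// Or write an entry directly:
logger.write('one discrete entry');
```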
If there’s a problem with the connection (network loss or congestion), entries will be buffered in an internal ring buffer, to a maximum of 16192 (`bufferSize`) entries by default. After that, the ring buffer will shift records so that only the last `bufferSize` records are kept in memory. A log indicating that the buffer was full will be sent to the internal logger once this happens.

If `console` is true, these log entries will still display there, but they will not make it to the Insight Platform.

You can adjust the maximum size of the buffer with the `bufferSize` option. You’ll want to raise it if you’re dealing with very high volume (either a high number of logs per second, or when log entries are unusually long on average). Outside of these situations, exceeding the max buffer size is more likely an indication of creating logs in a synchronous loop (which seems like a bad idea).
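For example, a sketch of raising the limit for a high-volume service (the figure is arbitrary):

```js
const Logger = require('r7insight_node');

const logger = new Logger({
  token: '<token>',
  region: '<region>',
  bufferSize: 65536,   // arbitrary example; the default is 16192 entries
});
```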
If the connection fails, the client will keep retrying with a fibonacci backoff by default. Connection retries start with a delay of `reconnectInitialDelay`, and the delay between retries grows along the fibonacci sequence up to a maximum of `reconnectMaxDelay`. The backoff strategy can be changed to `exponential` through the constructor if necessary.
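A sketch of tuning reconnection; `reconnectInitialDelay` and `reconnectMaxDelay` are named in this README, while the option name for the backoff strategy itself is an assumption:

```js
const Logger = require('r7insight_node');

const logger = new Logger({
  token: '<token>',
  region: '<region>',
  reconnectInitialDelay: 1000,              // delay before the first retry (ms)
  reconnectMaxDelay: 15 * 1000,             // ceiling on the delay between retries (ms)
  reconnectBackoffStrategy: 'exponential',  // assumed option name for switching strategies
});
```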
A connection to the host does not guarantee that your logs are transmitting successfully. If you have a bad token, there is no feedback from the server to indicate this. The only way to confirm that your token is working is to check the live tail in InsightOps. I will investigate this further to see if there’s some other means with which a token can be tested for validity.
Make sure you have `winston` and `winston-transport` installed.
```js
const winston = require('winston');

// If Winston is included in your package.json dependencies,
// you can just require the Insight Logger
// to initialize it.
require('r7insight_node');

const token = '00112233-4455-6677-8899-aabbccddeeff';
const transports = [];

transports.push(
  new winston.transports.Console({
    format: winston.format.simple(),
    level: 'debug',
  })
);

transports.push(
  new winston.transports.Insight({
    token,
    region: 'eu',
    level: 'debug',
  })
);

const logger = winston.createLogger({
  transports,
});

logger.info('hello there');
```
The Insight client will place the transport constructor at `winston.transports`, even if Winston itself hasn’t yet been required.
```js
const assert = require('assert');
const Logger = require('r7insight_node');
const winston = require('winston');

assert(winston.transports.Insight);
```
Winston is an optional dependency in `r7insight_node`, and if included it requires `winston-transport` for the `InsightTransport` to extend it.
When adding a new Insight transport, the options argument passed to Winston’s `add` method supports the usual options in addition to those which are Winston-specific. If custom levels are not provided, Winston’s defaults will be used.
```js
winston.add(new winston.transports.Insight({ token: '<token>', region: '<region>' }));
```
Alternatively, you can call `provisionWinston` like this:

```js
const winston = require('winston');

const Logger = require('r7insight_node');

Logger.provisionWinston();
```
For Bunyan it’s like so:
```js
const bunyan = require('bunyan');
const Logger = require('r7insight_node');

const loggerDefinition = Logger.bunyanStream({ token: '<token>', region: '<region>' });

// One stream
const logger1 = bunyan.createLogger(loggerDefinition);

// Multiple streams
const logger2 = bunyan.createLogger({
  name: 'my leg',
  streams: [ loggerDefinition, otherLoggerDefinition ]
});
```
As with Winston, the options argument takes the normal constructor options (with the exception of `timestamp`, which is an option you should set on Bunyan itself instead). Bunyan uses six log levels, so the seventh and eighth, if provided, will be ignored; by default Bunyan’s level names will be used.
The object returned by `bunyanStream` is the Bunyan logging ‘channel’ definition in total. If you want to futz with this you can -- you can change its `name` or get the `stream` object itself from here.
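For instance, a small sketch of adjusting the returned definition, using the `name` and `stream` members mentioned above:

```js
const bunyan = require('bunyan');
const Logger = require('r7insight_node');

const loggerDefinition = Logger.bunyanStream({ token: '<token>', region: '<region>' });

loggerDefinition.name = 'insight';          // rename the channel
const rawStream = loggerDefinition.stream;  // or grab the underlying stream itself

const logger = bunyan.createLogger(loggerDefinition);
```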
For Ts.ED logger it's like so:
```ts
import {Logger} from "@tsed/logger";
import "@tsed/logger-insight";

const logger = new Logger("loggerName");

logger.appenders.set("stdout", {
  type: "insight",
  level: ["info"],
  options: {
    token: "the token",
    region: "us"
    // other options of insight
  }
});
```
As with Winston, the options argument takes the normal constructor options.
See more details on Ts.ED logger
No vulnerabilities found.
OpenSSF Scorecard checks:

- no dangerous workflow patterns detected
- no binaries found in the repo
- license file detected
- Found 14/19 approved changesets -- score normalized to 7
- dependency not pinned by hash detected -- score normalized to 3
- 0 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 0
- detected GitHub workflow tokens with excessive permissions
- no effort to earn an OpenSSF best practices badge detected
- security policy file not detected
- project is not fuzzed
- branch protection not enabled on development/release branches
- 10 existing vulnerabilities detected
- SAST tool is not run on all commits -- score normalized to 0
Last scanned on 2025-02-03
The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.