Installation

```sh
npm install frox-winston-elasticsearch
```
Developer Guide
- TypeScript: No
- Module System: CommonJS
- Min. Node Version: >= 8.0.0
- Node Version: 8.17.0
- NPM Version: 6.13.4
Languages
- JavaScript (100%)

Developer
- vanthome
Download Statistics
- Total Downloads: 301
- Last Day: 1
- Last Week: 2
- Last Month: 6
- Last Year: 46
GitHub Statistics
- Stars: 271
- Commits: 384
- Forks: 131
- Watching: 7
- Branches: 4
- Contributors: 50
Bundle Size
- Minified: 455.51 kB
- Minified + Gzipped: 86.82 kB
Package Meta Information
- Latest Version: 0.15.7
- Package Id: frox-winston-elasticsearch@0.15.7
- Unpacked Size: 41.18 kB
- Size: 13.28 kB
- File Count: 9
- NPM Version: 6.13.4
- Node Version: 8.17.0
winston-elasticsearch
An elasticsearch transport for the winston logging toolkit.
Features
- Logstash-compatible message structure.
- Thus consumable with Kibana.
- Date-pattern-based index names.
- Custom transformer function to transform logged data into a different message structure.
- Buffering of messages while ES is unavailable. All unwritten messages are kept in memory, so the buffer is limited only by available memory.
Compatibility
- For Winston 3.x and Elasticsearch 7.0 and later, use version >= 0.7.0.
- For Elasticsearch 6.0 and later, use version 0.6.0.
- For Elasticsearch 5.0 and later, use version 0.5.9.
- For earlier versions, use the 0.4.x series.
Unsupported / Todo
- Querying.
Installation
```sh
npm install --save winston winston-elasticsearch
```
Usage
```javascript
const winston = require('winston');
const { ElasticsearchTransport } = require('winston-elasticsearch');

const esTransportOpts = {
  level: 'info'
};
const esTransport = new ElasticsearchTransport(esTransportOpts);
const logger = winston.createLogger({
  transports: [
    esTransport
  ]
});
// Compulsory error handling
logger.on('error', (error) => {
  console.error('Error caught', error);
});
esTransport.on('warning', (error) => {
  console.error('Error caught', error);
});
```
The winston API for logging can be used with one restriction: only a single JS object can be logged and indexed as such. If multiple objects are provided as arguments, their contents are stringified.
Options
- `level` [`info`] Messages logged with a severity greater than or equal to the given one are logged to ES; others are discarded.
- `index` [none | `logs-app-default` when `dataStream` is `true`] The index to be used. This option is mutually exclusive with `indexPrefix`.
- `indexPrefix` [`logs`] The prefix used to generate the index name according to the pattern `<indexPrefix>-<indexSuffixPattern>`. Can be a string or a function returning the string to use.
- `indexSuffixPattern` [`YYYY.MM.DD`] A Day.js-compatible date/time pattern.
- `transformer` [see below] A transformer function to transform logged data into a different message structure.
- `useTransformer` [`true`] If set to `true`, the given `transformer` will be used (or the default). Set to `false` if you want to apply custom transformers during Winston's `createLogger`.
- `ensureIndexTemplate` [`true`] If set to `true`, the given `indexTemplate` is checked/uploaded to ES when the module sends the first log message, to make sure the log messages are mapped in a sensible manner.
- `indexTemplate` [see file `index-template-mapping.json`] The mapping template to be ensured as parsed JSON. The default template is used if `ensureIndexTemplate` is `true` and `indexTemplate` is `undefined`.
- `flushInterval` [`2000`] Time span between bulk writes, in ms.
- `retryLimit` [`400`] Number of retries to connect to ES before giving up.
- `healthCheckTimeout` [`30s`] Timeout for one health check (health checks will be retried forever).
- `healthCheckWaitForStatus` [`yellow`] Status to wait for when checking health. See its API docs for supported options.
- `healthCheckWaitForNodes` [`>=1`] Nodes to wait for when checking health. See its API docs for supported options.
- `client` An elasticsearch client instance. If given, the `clientOpts` are ignored.
- `clientOpts` An object passed to the ES client. See its docs for supported options.
- `waitForActiveShards` [`1`] Sets the number of shard copies that must be active before proceeding with the bulk operation.
- `pipeline` [none] Sets the pipeline id to pre-process incoming documents with. See the bulk API docs.
- `buffering` [`true`] Boolean flag to enable or disable message buffering. The `bufferLimit` option is ignored if set to `false`.
- `bufferLimit` [`null`] Limit for the number of log messages in the buffer.
- `apm` [`null`] Inject an APM client to link elastic logs with Elastic APM traces.
- `dataStream` [`false`] Use Elasticsearch data streams.
- `source` [none] The source of the log message. This can be useful for microservices to understand from which service a log message originates.
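Putting several of these options together, a transport configuration for daily indices might look like this (the values below are illustrative examples, not the module's defaults):

```javascript
// Illustrative transport options: daily indices named app-logs-YYYY.MM.DD,
// buffered bulk writes flushed every second, buffer capped at 1000 messages.
const esTransportOpts = {
  level: 'info',
  indexPrefix: 'app-logs',          // index becomes app-logs-<suffix>
  indexSuffixPattern: 'YYYY.MM.DD', // Day.js-compatible date pattern
  flushInterval: 1000,              // bulk write every 1000 ms
  buffering: true,
  bufferLimit: 1000,                // cap buffered messages while ES is down
  clientOpts: { node: 'http://localhost:9200' }
};
```

Pass this object to `new ElasticsearchTransport(esTransportOpts)` as shown in the Usage section.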
Logging of ES Client
The default client and options will log through `console`.
Interdependencies of Options
When changing the `indexPrefix` and/or the `transformer`, make sure to provide a matching `indexTemplate`.
Transformer
The transformer function allows mutation of log data as provided by winston into a shape more appropriate for indexing in Elasticsearch.
The default transformer generates a `@timestamp` and rolls any `meta` objects into an object called `fields`.
Params:
- `logdata` An object with the data to log. Properties are:
  - `timestamp` [`new Date().toISOString()`] The timestamp of the log entry
  - `level` The log level of the entry
  - `message` The message for the log entry
  - `meta` The meta data for the log entry

Returns: an object with the following properties:
- `@timestamp` The timestamp of the log entry
- `severity` The log level of the entry
- `message` The message for the log entry
- `fields` The meta data for the log entry
The default transformer function's transformation is shown below.
Input A:
```
{
  "message": "Some message",
  "level": "info",
  "meta": {
    "method": "GET",
    "url": "/sitemap.xml",
    ...
  }
}
```
Output A:
```
{
  "@timestamp": "2019-09-30T05:09:08.282Z",
  "message": "Some message",
  "severity": "info",
  "fields": {
    "method": "GET",
    "url": "/sitemap.xml",
    ...
  }
}
```
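The mapping from Input A to Output A can be sketched as a plain function (a simplified illustration of the default transformer's behavior, not the module's actual source):

```javascript
// Simplified sketch of what the default transformer does: add @timestamp,
// rename level to severity, and roll meta into a "fields" object.
function transform(logData) {
  return {
    '@timestamp': logData.timestamp || new Date().toISOString(),
    message: logData.message,
    severity: logData.level,
    fields: logData.meta || {}
  };
}

const out = transform({
  message: 'Some message',
  level: 'info',
  meta: { method: 'GET', url: '/sitemap.xml' }
});
console.log(out.severity);      // 'info'
console.log(out.fields.method); // 'GET'
```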
The default transformer can be imported and extended.
Example
```javascript
const { ElasticsearchTransformer } = require('winston-elasticsearch');
const esTransportOpts = {
  transformer: (logData) => {
    const transformed = ElasticsearchTransformer(logData);
    transformed.fields.customField = 'customValue';
    return transformed;
  }
};
const esTransport = new ElasticsearchTransport(esTransportOpts);
```
Note that in current Logstash versions, the only "standard fields" are `@timestamp` and `@version`; anything else is free-form.
A custom transformer function can be provided in the options at initialization.
Events
- `error`: emitted in case of any error.
Example
An example assuming default settings.
Log Action
```javascript
logger.info('Some message', {});
```
Only JSON objects are logged from the `meta` field. Any non-object is ignored.
Generated Message
The log message generated by this module has the following structure:
```json
{
  "@timestamp": "2019-09-30T05:09:08.282Z",
  "message": "Some log message",
  "severity": "info",
  "fields": {
    "method": "GET",
    "url": "/sitemap.xml",
    "headers": {
      "host": "www.example.com",
      "user-agent": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
      "accept": "*/*",
      "accept-encoding": "gzip,deflate",
      "from": "googlebot(at)googlebot.com",
      "if-modified-since": "Tue, 30 Sep 2019 11:34:56 GMT",
      "x-forwarded-for": "66.249.78.19"
    }
  }
}
```
Target Index
This message would be POSTed to the following endpoint:
http://localhost:9200/logs-2019.09.30/log/
So the default mapping uses an index pattern `logs-*`.
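Under the default settings, the index name derivation can be sketched like this (illustrative only; the module itself formats the suffix with a Day.js-compatible pattern):

```javascript
// Illustrative: derive an index name <indexPrefix>-<YYYY.MM.DD> in UTC,
// matching the default indexPrefix ("logs") and suffix pattern.
function indexNameFor(indexPrefix, date) {
  const pad = (n) => String(n).padStart(2, '0');
  const suffix = `${date.getUTCFullYear()}.${pad(date.getUTCMonth() + 1)}.${pad(date.getUTCDate())}`;
  return `${indexPrefix}-${suffix}`;
}

console.log(indexNameFor('logs', new Date('2019-09-30T05:09:08.282Z')));
// 'logs-2019.09.30'
```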
Logs correlation with Elastic APM
Instrument your code
- Install the official Node.js client for Elastic APM:
```sh
yarn add elastic-apm-node
# or
npm install elastic-apm-node
```
Then, before any other require in your code, do:
```javascript
const apm = require("elastic-apm-node").start({
  serverUrl: "<apm server http url>"
});

// Set up the logger
const winston = require('winston');
const Elasticsearch = require('winston-elasticsearch');

const esTransportOpts = {
  apm,
  level: 'info',
  clientOpts: { node: "<elastic server>" }
};
const logger = winston.createLogger({
  transports: [
    new Elasticsearch(esTransportOpts)
  ]
});
```
Inject apm traces into logs
```javascript
logger.info('Some log message');
```
Will produce:
```json
{
  "@timestamp": "2021-03-13T20:35:28.129Z",
  "message": "Some log message",
  "severity": "info",
  "fields": {},
  "transaction": {
    "id": "1f6c801ffc3ae6c6"
  },
  "trace": {
    "id": "1f6c801ffc3ae6c6"
  }
}
```
Notice
Some "custom" logs may not have the apm trace.
If that is the case, you can retrieve traces using `apm.currentTraceIds` like so:
```javascript
logger.info("Some log message", { ...apm.currentTraceIds })
```
The transformer function (see above) will place the APM trace in the root object so that Kibana can link logs to APM traces.
Custom traces WILL TAKE PRECEDENCE
If you are using a custom transformer, you should add the following code into it:
```javascript
if (logData.meta['transaction.id']) transformed.transaction = { id: logData.meta['transaction.id'] };
if (logData.meta['trace.id']) transformed.trace = { id: logData.meta['trace.id'] };
if (logData.meta['span.id']) transformed.span = { id: logData.meta['span.id'] };
```
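Put together with the default transformation shown earlier, a custom transformer that preserves trace ids might look like this (an illustrative sketch, not the module's source):

```javascript
// Sketch: custom transformer that hoists APM trace ids to the root object,
// combining the default-style transformation with the trace handling above.
function apmTransformer(logData) {
  const meta = logData.meta || {};
  const transformed = {
    '@timestamp': logData.timestamp || new Date().toISOString(),
    message: logData.message,
    severity: logData.level,
    fields: { ...meta }
  };
  if (meta['transaction.id']) transformed.transaction = { id: meta['transaction.id'] };
  if (meta['trace.id']) transformed.trace = { id: meta['trace.id'] };
  if (meta['span.id']) transformed.span = { id: meta['span.id'] };
  return transformed;
}

const doc = apmTransformer({
  message: 'after',
  level: 'debug',
  meta: { 'trace.id': '1f6c801ffc3ae6c6', 'transaction.id': '1f6c801ffc3ae6c6' }
});
console.log(doc.trace.id); // '1f6c801ffc3ae6c6'
```

Such a function would be passed as the `transformer` option when creating the transport.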
This scenario may happen on a server (e.g. restify) where you want to log the query after the response was sent to the client (e.g. using `server.on('after', (req, res, route, error) => log.debug("after", { route, error }))`). In that case the trace ids will no longer be available, because the trace ends once the server has sent the response to the client. In that scenario, you could do something like this:
```javascript
server.use((req, res, next) => {
  req.apm = apm.currentTraceIds
  next()
})
server.on("after", (req, res, route, error) => log.debug("after", { route, error, ...req.apm }))
```
Manual Flushing
Flushing can be manually triggered like this:
```javascript
const esTransport = new ElasticsearchTransport(esTransportOpts);
esTransport.flush();
```
Datastreams
Elasticsearch version 7.9 and higher supports data streams.
When `dataStream: true` is set, bulk indexing happens with `create` instead of `index`, and the default naming convention is `logs-*-*`, which matches the built-in index template and ILM policy, automatically creating a data stream.
By default, the data stream will be named `logs-app-default`, but alternatively, you can set the `index` option to anything that matches `logs-*-*` to make use of the built-in template and ILM policy.
If `dataStream: true` is enabled, AND (you are using Elasticsearch < 7.9, OR (you have set a custom `index` that does not match `logs-*-*` AND you have not created a custom matching template in Elasticsearch)), a normal index will be created.
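Whether a custom `index` falls under the built-in `logs-*-*` naming scheme can be checked with a simple pattern test (an illustrative helper, not part of the module):

```javascript
// Illustrative: does an index name match the built-in logs-*-* data stream
// scheme (logs-<dataset>-<namespace>)?
function matchesLogsDataStream(name) {
  return /^logs-.+-.+$/.test(name);
}

console.log(matchesLogsDataStream('logs-app-default'));     // true
console.log(matchesLogsDataStream('logs-myapp-production')); // true
console.log(matchesLogsDataStream('my-custom-index'));       // false
```

Names failing this test would need a custom matching template in Elasticsearch to become a data stream.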
No vulnerabilities found.

OpenSSF Scorecard
Score: 2.5/10. Last scanned on 2025-01-20.
- Binary Artifacts: no binaries found in the repo.
- License: license file detected.
  - Info: project has a license file: LICENSE:0
  - Info: FSF or OSI recognized license: MIT License: LICENSE:0
- Vulnerabilities: 6 existing vulnerabilities detected.
  - Warn: Project is vulnerable to: GHSA-grv7-fg5c-xmjg
  - Warn: Project is vulnerable to: GHSA-pxg6-pf52-xh8x
  - Warn: Project is vulnerable to: GHSA-3xgq-45jj-v275
  - Warn: Project is vulnerable to: GHSA-p8p7-x288-28g6
  - Warn: Project is vulnerable to: GHSA-72xf-g2v4-qvf3
  - Warn: Project is vulnerable to: GHSA-c76h-2ccp-4975
- Code Review: found 6/22 approved changesets -- score normalized to 2.
- Maintained: 0 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 0.
- Best Practices: no effort to earn an OpenSSF best practices badge detected.
- Fuzzing: project is not fuzzed (no fuzzer integrations found).
- Branch Protection: not enabled on development/release branches (not enabled for branch 'master').
- Security Policy: no security policy file detected.
- SAST: tool is not run on all commits (0 of 14 commits checked) -- score normalized to 0.

The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.