npm install micro

10,593 Stars · 458 Forks · 178 Watching · 435 Commits · 1 Branch · 126 Contributors
Updated on 27 Nov 2024
Languages: TypeScript (82.34%), JavaScript (17.66%)
Downloads: 217,907 last day (+19% vs previous day) · 1,061,398 last week (+7.1% vs previous week) · 4,247,979 last month (+9.1% vs previous month) · 32,248,752 last year (+209% vs previous year)
Disclaimer: Micro was created for use within containers and is not intended for use in serverless environments. For those using Vercel, this means that there is no requirement to use Micro in your projects, as the benefits it provides are not applicable to the platform. Utility features provided by Micro, such as json, are readily available in the form of Serverless Function helpers.
Important: Micro is only meant to be used in production. In development, you should use micro-dev, which provides you with a tool belt specifically tailored for developing microservices.
To prepare your microservice for running in the production environment, first install micro:

npm install --save micro
Create an index.js file and export a function that accepts the standard http.IncomingMessage and http.ServerResponse objects:

module.exports = (req, res) => {
  res.end('Welcome to Micro');
};
Micro provides useful helpers but also handles return values – so you can write it even shorter!
module.exports = () => 'Welcome to Micro';
Next, ensure that the main property inside package.json points to your microservice (which is inside index.js in this example case) and add a start script:

{
  "main": "index.js",
  "scripts": {
    "start": "micro"
  }
}
Once all of that is done, the server can be started like this:
npm start
And go to this URL: http://localhost:3000 - 🎉
micro - Asynchronous HTTP microservices
USAGE
$ micro --help
$ micro --version
$ micro [-l listen_uri [-l ...]] [entry_point.js]
By default micro will listen on 0.0.0.0:3000 and will look first
for the "main" property in package.json and subsequently for index.js
as the default entry_point.
Specifying a single --listen argument will overwrite the default, not supplement it.
OPTIONS
--help shows this help message
-v, --version displays the current version of micro
-l, --listen listen_uri specify a URI endpoint on which to listen (see below) -
more than one may be specified to listen in multiple places
ENDPOINTS
Listen endpoints (specified by the --listen or -l options above) instruct micro
to listen on one or more interfaces/ports, UNIX domain sockets, or Windows named pipes.
For TCP (traditional host/port) endpoints:
$ micro -l tcp://hostname:1234
For UNIX domain socket endpoints:
$ micro -l unix:/path/to/socket.sock
For Windows named pipe endpoints:
$ micro -l pipe:\\.\pipe\PipeName
async & await
Micro is built for usage with async/await.
const sleep = require('then-sleep');

module.exports = async (req, res) => {
  await sleep(500);
  return 'Ready!';
};
When you want to set the port using an environment variable you can use:
micro -l tcp://0.0.0.0:$PORT
Optionally you can add a default if it suits your use case:
micro -l tcp://0.0.0.0:${PORT-3000}

${PORT-3000} will allow a fallback to port 3000 when $PORT is not defined.
Note that this only works in Bash.
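When starting a server programmatically rather than via the micro CLI, the same fallback can be expressed in plain Node.js; resolvePort is an illustrative helper name, not part of micro's API:

```javascript
// Mirror of the ${PORT-3000} Bash expansion: fall back to 3000
// when the PORT environment variable is not defined.
function resolvePort(env) {
  return parseInt(env.PORT, 10) || 3000;
}

console.log(resolvePort(process.env));
```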
For parsing the incoming request body, we include the async functions buffer, text, and json:
const { buffer, text, json } = require('micro');

module.exports = async (req, res) => {
  const buf = await buffer(req);
  console.log(buf);
  // <Buffer 7b 22 70 72 69 63 65 22 3a 20 39 2e 39 39 7d>
  const txt = await text(req);
  console.log(txt);
  // '{"price": 9.99}'
  const js = await json(req);
  console.log(js.price);
  // 9.99
  return '';
};
buffer(req, { limit = '1mb', encoding = 'utf8' })
text(req, { limit = '1mb', encoding = 'utf8' })
json(req, { limit = '1mb', encoding = 'utf8' })
Each helper is an async function that can be run with await.

- limit is the maximum amount of data that is aggregated before parsing. It can be a Number of bytes or a string like '1mb'. If the limit is exceeded, an Error is thrown with statusCode set to 413 (see Error Handling).
- If the body cannot be parsed, an Error is thrown with statusCode set to 400 (see Error Handling).

For other types of data, check the examples.
So far we have used return to send data to the client. return 'Hello World' is the equivalent of send(res, 200, 'Hello World').
const { send } = require('micro');

module.exports = async (req, res) => {
  const statusCode = 400;
  const data = { error: 'Custom error message' };

  send(res, statusCode, data);
};
send(res, statusCode, data = null)

Available as require('micro').send.

- statusCode is a Number with the HTTP status code, and must always be supplied.
- If data is supplied, it is sent in the response. Different input types are processed appropriately, and Content-Type and Content-Length are automatically set.
  - Stream: data is piped as an octet-stream. Note: it is your responsibility to handle the error event in this case (usually, simply logging the error and aborting the response is enough).
  - Buffer: data is written as an octet-stream.
  - object: data is serialized as JSON.
  - string: data is written as-is.
- Otherwise, a 400 error is thrown. See Error Handling.

You can use Micro programmatically by requiring Micro directly:
const http = require('http');
const sleep = require('then-sleep');
const { serve } = require('micro');

const server = new http.Server(
  serve(async (req, res) => {
    await sleep(500);
    return 'Hello world';
  }),
);

server.listen(3000);
serve(fn)

Available as require('micro').serve.

- fn is a function with the (req, res) => void signature; the provided function is used as the request handler.
- The return value of the function is processed with await, so it can be async.
sendError(req, res, error)

Available as require('micro').sendError.

- Takes the statusCode from error.statusCode.
- Uses error.message as the body.
- Stacks are printed with console.error and, during development (when NODE_ENV is set to 'development'), also sent in responses.

createError(statusCode, message, orig)

Creates an error object that can be thrown with throw. Available as require('micro').createError.

- statusCode sets error.statusCode.
- orig sets error.originalError, which identifies the original error (if any).

Micro allows you to write robust microservices. This is accomplished primarily by bringing sanity back to error handling and avoiding callback soup.
If an error is thrown and not caught by you, the response will automatically be 500. Important: error stacks will be printed with console.error and, during development mode (if the env variable NODE_ENV is 'development'), they will also be included in the responses.
If the Error
object that's thrown contains a statusCode
property, that's used as the HTTP code to be sent. Let's say you want to write a rate limiting module:
const rateLimit = require('my-rate-limit');

module.exports = async (req, res) => {
  await rateLimit(req);
  // ... your code
};
If the API endpoint is abused, it can throw an error with createError like so:

if (tooMany) {
  throw createError(429, 'Rate limit exceeded');
}
Alternatively you can create the Error object yourself:

if (tooMany) {
  const err = new Error('Rate limit exceeded');
  err.statusCode = 429;
  throw err;
}
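For illustration, a createError-style helper could be sketched like this; this is a guess at the shape it produces, not micro's actual source:

```javascript
// Hypothetical re-implementation sketch of a createError-style helper,
// showing the error shape micro's error handling works with.
function createError(statusCode, message, originalError) {
  const err = new Error(message);
  err.statusCode = statusCode;
  err.originalError = originalError;
  return err;
}

const err = createError(429, 'Rate limit exceeded');
console.log(err.statusCode); // 429
console.log(err.message); // Rate limit exceeded
```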
The nice thing about this model is that the statusCode
is merely a suggestion. The user can override it:
try {
  await rateLimit(req);
} catch (err) {
  if (429 == err.statusCode) {
    // perhaps send 500 instead?
    send(res, 500);
  }
}
If the error is based on another error that Micro caught, like a JSON.parse
exception, then originalError
will point to it. If a generic error is caught, the status will be set to 500
.
In order to set up your own error handling mechanism, you can use composition in your handler:
const { send } = require('micro');

const handleErrors = (fn) => async (req, res) => {
  try {
    return await fn(req, res);
  } catch (err) {
    console.log(err.stack);
    send(res, 500, 'My custom error!');
  }
};

module.exports = handleErrors(async (req, res) => {
  throw new Error('What happened here?');
});
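Because such a wrapper is just a higher-order function, wrappers compose by plain function application. A self-contained sketch (the wrapper and handler names are illustrative, and the catch branch returns a string instead of calling micro's send so it runs standalone):

```javascript
// Each wrapper takes a handler and returns a handler, so they stack.
const handleErrors = (fn) => async (req, res) => {
  try {
    return await fn(req, res);
  } catch (err) {
    return `caught: ${err.message}`;
  }
};

const withLogging = (fn) => async (req, res) => {
  console.log('request received');
  return fn(req, res);
};

const handler = withLogging(
  handleErrors(async () => {
    throw new Error('boom');
  }),
);

handler({}, {}).then((result) => console.log(result)); // caught: boom
```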
Micro makes tests compact and a pleasure to read and write. We recommend Node TAP or AVA, a highly parallel test framework with built-in support for async tests:
const http = require('http');
const { send, serve } = require('micro');
const test = require('ava');
const listen = require('test-listen');
const fetch = require('node-fetch');

test('my endpoint', async (t) => {
  const service = new http.Server(
    serve(async (req, res) => {
      send(res, 200, {
        test: 'woot',
      });
    }),
  );

  const url = await listen(service);
  const response = await fetch(url);
  const body = await response.json();

  t.deepEqual(body.test, 'woot');
  service.close();
});
Look at test-listen for a function that returns a URL with an ephemeral port every time it's called.
To develop Micro itself, run npm link inside your clone of the repository; then, in the project where you want to test your local version, run npm link micro. Instead of the default one from npm, Node.js will now use your clone of Micro!

You can run the tests using: npm test
Thanks to Tom Yandell and Richard Hodgson for donating the name "micro" on npm!
No vulnerabilities found.

OpenSSF Scorecard (last scanned on 2024-11-18):

- no binaries found in the repo
- no dangerous workflow patterns detected
- license file detected
- security policy file detected
- Found 17/30 approved changesets -- score normalized to 5
- detected GitHub workflow tokens with excessive permissions
- 0 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 0
- dependency not pinned by hash detected -- score normalized to 0
- no effort to earn an OpenSSF best practices badge detected
- project is not fuzzed
- SAST tool is not run on all commits -- score normalized to 0
- 66 existing vulnerabilities detected

The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.