Gathering detailed insights and metrics for @nestjs/throttler
nestjs-throttler-storage-redis: Redis storage provider for the @nestjs/throttler package
nestjs-throttler-storage-mongo: Mongo storage provider for nestjs/throttler
nestjs-throttler: A Rate-Limiting module for NestJS to work on Express, Fastify, Websockets, Socket.IO, and GraphQL, all rolled up into a simple package.
@nest-lab/throttler-storage-redis: Redis storage provider for the @nestjs/throttler package
A rate limiting module for NestJS to work with Fastify, Express, GQL, Websockets, and RPC 🧭
npm install @nestjs/throttler
Supply Chain: 96.1
Quality: 97.7
Maintenance: 83.3
Vulnerability: 100
License: 100
TypeScript (98.36%)
JavaScript (1.64%)
Total Downloads: 35,489,161
Last Day: 75,866
Last Week: 496,063
Last Month: 2,036,206
Last Year: 18,158,390
MIT License
656 Stars
2,443 Commits
68 Forks
6 Watchers
7 Branches
34 Contributors
Updated on May 09, 2025
Latest Version: 6.4.0
Package Id: @nestjs/throttler@6.4.0
Unpacked Size: 226.22 kB
Size: 54.28 kB
File Count: 49
NPM Version: 10.8.2
Node Version: 20.18.1
Published on: Jan 22, 2025
Cumulative downloads:
Last Day: 75,866 (12.1% compared to previous day)
Last Week: 496,063 (11.2% compared to previous week)
Last Month: 2,036,206 (0.3% compared to previous month)
Last Year: 18,158,390 (65.1% compared to previous year)
A progressive Node.js framework for building efficient and scalable server-side applications.
A Rate-Limiter for NestJS, regardless of the context.
For an overview of the community storage providers, see Community Storage Providers.
This package comes with a couple of goodies worth mentioning, the first being the ThrottlerModule.
```bash
$ npm i --save @nestjs/throttler
```
@nestjs/throttler@^1 is compatible with Nest v7, while @nestjs/throttler@^2 is compatible with both Nest v7 and Nest v8; it is suggested to use it only with v8 in case of unforeseen breaking changes against v7. For NestJS v10, please use version 4.1.0 or above.
Once the installation is complete, the ThrottlerModule can be configured as any other Nest package, with the forRoot or forRootAsync methods.
```typescript
// app.module.ts
@Module({
  imports: [
    ThrottlerModule.forRoot([{
      ttl: 60000,
      limit: 10,
    }]),
  ],
})
export class AppModule {}
```
The above will set the global options for the ttl, the time to live in milliseconds, and the limit, the maximum number of requests within the ttl, for the routes of your application that are guarded.
Once the module has been imported, you can then choose how you would like to bind the ThrottlerGuard. Any kind of binding as mentioned in the guards section is fine. If you wanted to bind the guard globally, for example, you could do so by adding this provider to any module:
```typescript
{
  provide: APP_GUARD,
  useClass: ThrottlerGuard,
}
```
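In context, this provider goes into a module's providers array, with APP_GUARD imported from @nestjs/core. A minimal sketch of the full wiring (the ttl and limit values are illustrative):

```typescript
// app.module.ts — sketch of binding ThrottlerGuard globally
import { Module } from '@nestjs/common';
import { APP_GUARD } from '@nestjs/core';
import { ThrottlerGuard, ThrottlerModule } from '@nestjs/throttler';

@Module({
  imports: [
    ThrottlerModule.forRoot([{ ttl: 60000, limit: 10 }]),
  ],
  providers: [
    // Registers the guard for every route in the application.
    { provide: APP_GUARD, useClass: ThrottlerGuard },
  ],
})
export class AppModule {}
```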
There may come a time when you want to set up multiple throttling definitions, like no more than 3 calls in a second, 20 calls in 10 seconds, and 100 calls in a minute. To do so, you can set up your definitions in the array with named options, which can later be referenced in the @SkipThrottle() and @Throttle() decorators to change the options again.
```typescript
// app.module.ts
@Module({
  imports: [
    ThrottlerModule.forRoot([
      {
        name: 'short',
        ttl: 1000,
        limit: 3,
      },
      {
        name: 'medium',
        ttl: 10000,
        limit: 20,
      },
      {
        name: 'long',
        ttl: 60000,
        limit: 100,
      },
    ]),
  ],
})
export class AppModule {}
```
There may be a time when you want to bind the guard to a controller or globally, but want to disable rate limiting for one or more of your endpoints. For that, you can use the @SkipThrottle() decorator to negate the throttler for an entire class or a single route. The @SkipThrottle() decorator can also take in an object of string keys with boolean values, for cases where you want to exclude most of a controller but not every route, and configure it per throttler set if you have more than one. If you do not pass an object, the default is to use { default: true }.
```typescript
@SkipThrottle()
@Controller('users')
export class UsersController {}
```
This @SkipThrottle() decorator can be used to skip a route or a class, or to negate the skipping of a route in a class that is skipped.
```typescript
@SkipThrottle()
@Controller('users')
export class UsersController {
  // Rate limiting is applied to this route.
  @SkipThrottle({ default: false })
  dontSkip() {
    return 'List users work with Rate limiting.';
  }
  // This route will skip rate limiting.
  doSkip() {
    return 'List users work without Rate limiting.';
  }
}
```
There is also the @Throttle() decorator, which can be used to override the limit and ttl set in the global module, to give tighter or looser security options. This decorator can be used on a class or a function as well. With version 5 and onwards, the decorator takes in an object with the string relating to the name of the throttler set, and an object with the limit and ttl keys and integer values, similar to the options passed to the root module. If you do not have a name set in your original options, use the string 'default'. You have to configure it like this:
```typescript
// Override default configuration for Rate limiting and duration.
@Throttle({ default: { limit: 3, ttl: 60000 } })
@Get()
findAll() {
  return "List users works with custom rate limiting.";
}
```
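When multiple named throttler sets are configured (as in the 'short'/'medium'/'long' example earlier), the same decorator can override a specific set by its name. A sketch, assuming those named sets; the route and return string are illustrative:

```typescript
// Tighten only the 'short' throttler for this route;
// the 'medium' and 'long' sets keep their global options.
@Throttle({ short: { limit: 1, ttl: 1000 } })
@Get('sensitive')
findSensitive() {
  return 'This route allows only 1 request per second.';
}
```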
If your application runs behind a proxy server, check the specific HTTP adapter options (express and fastify) for the trust proxy option and enable it. Doing so will allow you to get the original IP address from the X-Forwarded-For header, and you can override the getTracker() method to pull the value from the header rather than from req.ip. The following example works with both express and fastify:
```typescript
// throttler-behind-proxy.guard.ts
import { ThrottlerGuard } from '@nestjs/throttler';
import { Injectable } from '@nestjs/common';

@Injectable()
export class ThrottlerBehindProxyGuard extends ThrottlerGuard {
  protected getTracker(req: Record<string, any>): Promise<string> {
    return new Promise<string>((resolve, reject) => {
      const tracker = req.ips.length > 0 ? req.ips[0] : req.ip; // individualize IP extraction to meet your own needs
      resolve(tracker);
    });
  }
}

// app.controller.ts
import { ThrottlerBehindProxyGuard } from './throttler-behind-proxy.guard';

@UseGuards(ThrottlerBehindProxyGuard)
export class AppController {}
```
info Hint You can find the API of the req Request object for express here and for fastify here.
This module can work with websockets, but it requires some class extension. You can extend the ThrottlerGuard and override the handleRequest method like so:
```typescript
@Injectable()
export class WsThrottlerGuard extends ThrottlerGuard {
  async handleRequest(requestProps: ThrottlerRequest): Promise<boolean> {
    const { context, limit, ttl, throttler, blockDuration, getTracker, generateKey } = requestProps;

    const client = context.switchToWs().getClient();
    const tracker = client._socket.remoteAddress;
    const key = generateKey(context, tracker, throttler.name);
    const { totalHits, timeToExpire, isBlocked, timeToBlockExpire } =
      await this.storageService.increment(key, ttl, limit, blockDuration, throttler.name);

    const getThrottlerSuffix = (name: string) => (name === 'default' ? '' : `-${name}`);

    // Throw an error when the user reached their limit.
    if (isBlocked) {
      await this.throwThrottlingException(context, {
        limit,
        ttl,
        key,
        tracker,
        totalHits,
        timeToExpire,
        isBlocked,
        timeToBlockExpire,
      });
    }

    return true;
  }
}
```
info Hint If you are using ws, it is necessary to replace the _socket with conn.
There are a few things to keep in mind when working with WebSockets:

- You cannot bind the guard with APP_GUARD or app.useGlobalGuards()
- When a limit is reached, Nest will emit an exception event, so make sure there is a listener ready for this

info Hint If you are using the @nestjs/platform-ws package you can use client._socket.remoteAddress instead.
The ThrottlerGuard can also be used to work with GraphQL requests. Again, the guard can be extended, but this time the getRequestResponse method will be overridden:
```typescript
@Injectable()
export class GqlThrottlerGuard extends ThrottlerGuard {
  getRequestResponse(context: ExecutionContext) {
    const gqlCtx = GqlExecutionContext.create(context);
    const ctx = gqlCtx.getContext();
    return { req: ctx.req, res: ctx.res };
  }
}
```
However, when using Apollo Server (on Express or Fastify) or Mercurius, it's important to configure the context correctly in the GraphQLModule to avoid any problems.
For Apollo Server running on Express, you can set up the context in your GraphQLModule configuration as follows:
```typescript
GraphQLModule.forRoot({
  // ... other GraphQL module options
  context: ({ req, res }) => ({ req, res }),
});
```
When using Apollo Server with Fastify or Mercurius, you need to configure the context differently, using the request and reply objects. Here's an example:
```typescript
GraphQLModule.forRoot({
  // ... other GraphQL module options
  context: (request, reply) => ({ request, reply }),
});
```
The following options are valid for the objects passed to the array of the ThrottlerModule's options:
| Option | Description |
| --- | --- |
| name | the name for internal tracking of which throttler set is being used. Defaults to `default` if not passed |
| ttl | the number of milliseconds that each request will last in storage |
| limit | the maximum number of requests within the TTL |
| blockDuration | the number of milliseconds that a request will be blocked for once the limit is reached |
| ignoreUserAgents | an array of regular expressions of user-agents to ignore when it comes to throttling requests |
| skipIf | a function that takes in the ExecutionContext and returns a boolean to short-circuit the throttler logic. Like @SkipThrottle(), but based on the request |
| getTracker | a function that takes in the Request and returns a string to override the default logic of the getTracker method |
| generateKey | a function that takes in the ExecutionContext, the tracker string, and the throttler name as a string, and returns a string to override the final key which will be used to store the rate limit value. This overrides the default logic of the generateKey method |
If you need to set up storage instead, or want to apply some of the above options more globally, to every throttler set, you can pass the options via the throttlers option key and use the table below:
| Option | Description |
| --- | --- |
| storage | a custom storage service for where the throttling should be kept track of. See here. |
| ignoreUserAgents | an array of regular expressions of user-agents to ignore when it comes to throttling requests |
| skipIf | a function that takes in the ExecutionContext and returns a boolean to short-circuit the throttler logic. Like @SkipThrottle(), but based on the request |
| throttlers | an array of throttler sets, defined using the table above |
| errorMessage | a string OR a function that takes in the ExecutionContext and the ThrottlerLimitDetail and returns a string which overrides the default throttler error message |
| getTracker | a function that takes in the Request and returns a string to override the default logic of the getTracker method |
| generateKey | a function that takes in the ExecutionContext, the tracker string, and the throttler name as a string, and returns a string to override the final key which will be used to store the rate limit value. This overrides the default logic of the generateKey method |
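Taken together, a root configuration that combines global options with the throttlers key might look like the following sketch; the error message, header name, and skip predicate are illustrative assumptions, not values from the package:

```typescript
ThrottlerModule.forRoot({
  // Global options, applied across every throttler set.
  errorMessage: 'Too many requests, slow down.',
  ignoreUserAgents: [/googlebot/i],
  skipIf: (context) => {
    // Hypothetical example: skip throttling for requests
    // carrying an internal header.
    const req = context.switchToHttp().getRequest();
    return req.headers['x-internal-call'] === 'true';
  },
  throttlers: [
    { name: 'default', ttl: 60000, limit: 10 },
  ],
}),
```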
You may want to get your rate-limiting configuration asynchronously instead of synchronously. You can use the forRootAsync() method, which allows for dependency injection and async methods.
One approach would be to use a factory function:
```typescript
@Module({
  imports: [
    ThrottlerModule.forRootAsync({
      imports: [ConfigModule],
      inject: [ConfigService],
      useFactory: (config: ConfigService) => [
        {
          ttl: config.get('THROTTLE_TTL'),
          limit: config.get('THROTTLE_LIMIT'),
        },
      ],
    }),
  ],
})
export class AppModule {}
```
You can also use the useClass syntax:
```typescript
@Module({
  imports: [
    ThrottlerModule.forRootAsync({
      imports: [ConfigModule],
      useClass: ThrottlerConfigService,
    }),
  ],
})
export class AppModule {}
```
This is doable, as long as ThrottlerConfigService implements the interface ThrottlerOptionsFactory.
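A minimal sketch of such a class, assuming the THROTTLE_TTL and THROTTLE_LIMIT config keys from the factory example above; the fallback values are illustrative:

```typescript
// throttler-config.service.ts — sketch of a ThrottlerOptionsFactory
import { Injectable } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import { ThrottlerModuleOptions, ThrottlerOptionsFactory } from '@nestjs/throttler';

@Injectable()
export class ThrottlerConfigService implements ThrottlerOptionsFactory {
  constructor(private readonly config: ConfigService) {}

  createThrottlerOptions(): ThrottlerModuleOptions {
    // Read the throttling values from configuration at startup.
    return [
      {
        ttl: this.config.get<number>('THROTTLE_TTL', 60000),
        limit: this.config.get<number>('THROTTLE_LIMIT', 10),
      },
    ];
  }
}
```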
The built-in storage is an in-memory cache that keeps track of the requests made until they have passed the TTL set by the global options. You can drop in your own storage option to the storage option of the ThrottlerModule, so long as the class implements the ThrottlerStorage interface.
info Note ThrottlerStorage can be imported from @nestjs/throttler.
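As a rough illustration of that contract, here is a minimal in-memory sketch. The record shape (totalHits, timeToExpire, isBlocked, timeToBlockExpire) mirrors what the guard destructures from increment() in the WebSocket example above; the class and field names here are hypothetical, not the package's implementation:

```typescript
// Assumed record shape, mirroring what the guard reads from increment().
interface StorageRecord {
  totalHits: number;
  timeToExpire: number;      // seconds until the window resets
  isBlocked: boolean;
  timeToBlockExpire: number; // seconds until a blocked key is released
}

class InMemoryThrottlerStorage {
  private hits = new Map<string, { count: number; expiresAt: number; blockedUntil: number }>();

  async increment(
    key: string,
    ttl: number,
    limit: number,
    blockDuration: number,
    _throttlerName: string,
  ): Promise<StorageRecord> {
    const now = Date.now();
    let entry = this.hits.get(key);

    // Start a fresh window when the key is new or the TTL has elapsed.
    if (!entry || entry.expiresAt <= now) {
      entry = { count: 0, expiresAt: now + ttl, blockedUntil: 0 };
      this.hits.set(key, entry);
    }

    entry.count += 1;
    const isBlocked = entry.count > limit;
    if (isBlocked && entry.blockedUntil === 0) {
      entry.blockedUntil = now + blockDuration;
    }

    return {
      totalHits: entry.count,
      timeToExpire: Math.ceil((entry.expiresAt - now) / 1000),
      isBlocked,
      timeToBlockExpire: isBlocked ? Math.ceil((entry.blockedUntil - now) / 1000) : 0,
    };
  }
}
```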
There are a couple of helper methods to make the timings more readable if you prefer to use them over the direct definition. @nestjs/throttler exports five different helpers: seconds, minutes, hours, days, and weeks. To use them, simply call seconds(5) or any of the other helpers, and the correct number of milliseconds will be returned.
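For example, the 'short'/'medium'/'long' configuration from earlier can be rewritten with the helpers, keeping the same millisecond values:

```typescript
import { ThrottlerModule, seconds, minutes } from '@nestjs/throttler';

ThrottlerModule.forRoot([
  { name: 'short', ttl: seconds(1), limit: 3 },    // 1000 ms
  { name: 'medium', ttl: seconds(10), limit: 20 }, // 10000 ms
  { name: 'long', ttl: minutes(1), limit: 100 },   // 60000 ms
]),
```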
For most people, wrapping your options in an array will be enough. If you are using a custom storage, you should wrap your ttl and limit in an array and assign it to the throttlers property of the options object.
Any @SkipThrottle() should now take in an object with string: boolean props. The strings are the names of the throttlers. If you do not have a name, pass the string 'default', as this is what will be used under the hood otherwise.
Any @Throttle() decorators should also now take in an object with string keys, relating to the names of the throttler contexts (again, 'default' if no name), and values of objects that have limit and ttl keys.
Warning Important The ttl is now in milliseconds. If you want to keep your ttl in seconds for readability, use the seconds helper from this package. It just multiplies the ttl by 1000 to convert it to milliseconds.
For more info, see the Changelog
Feel free to submit a PR with your custom storage provider being added to this list.
Nest is MIT licensed.
No vulnerabilities found.
Reasons:
- no dangerous workflow patterns detected
- no binaries found in the repo
- 30 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 10
- license file detected
- 9 existing vulnerabilities detected
- detected GitHub workflow tokens with excessive permissions
- dependency not pinned by hash detected -- score normalized to 0
- no effort to earn an OpenSSF best practices badge detected
- security policy file not detected
- project is not fuzzed
- branch protection not enabled on development/release branches
- SAST tool is not run on all commits -- score normalized to 0
Last Scanned on 2025-05-05
The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.