Gathering detailed insights and metrics for riot-ratelimiter-temp
Related packages:
- limiter: A generic rate limiter for the web and Node.js. Useful for API clients, web crawling, or other tasks that need to be throttled
- ratelimiter: abstract rate limiter backed by Redis
- temp-dir: Get the real path of the system temp directory
- @types/ratelimiter: TypeScript definitions for ratelimiter
npm install riot-ratelimiter-temp
Repository stats: 15 stars, 47 commits, 6 forks, 4 watching, 5 branches, 2 contributors. Updated on 16 Jun 2024.
Languages: TypeScript (90.86%), JavaScript (9.14%)
Downloads:
- Last day: 10 (-85.9% vs previous day)
- Last week: 149 (-46.8% vs previous week)
- Last month: 1,211 (-2.3% vs previous month)
- Last year: 17,467 (-1.5% vs previous year)
A rate limiter handling the rate limits enforced by the Riot Games API. It automatically creates and updates rate limiters on a per-region and per-method basis, respecting app limits, method limits, and generic backoff on service/underlying-service limits.
npm install riot-ratelimiter
const RiotRateLimiter = require('riot-ratelimiter')
const limiter = new RiotRateLimiter()

limiter.executing({
  url: 'validRiotApiUrl',
  token: '<RIOT_API_KEY>',
  // Resolves the Promise with the full API response object.
  // Omit or set to false if you are only interested in the data.
  // In case of an error (404/500/503) the Promise will always be
  // rejected with the full response.
  resolveWithFullResponse: true
})
You need to know how to work with Promises. This module uses request-promise for the actual requests.
The region and method used will be determined from the given URL. When a new rate limiter is created (on the first request to a region and method), a single synchronisation request will be executed to find out the relevant limits and the current limit counts for that method/region on your API key.
This ensures you cannot hit more than one 429 (per RiotRateLimiter instance) when starting up your app. The received limits and counts will then be set accordingly and used when processing further requests.
See "Choosing the right strategy" below for information on additional synchronisation requests done depending on strategy.
We currently offer two strategies for your limiting needs: STRATEGY.SPREAD (default) and STRATEGY.BURST.
SPREAD will ensure you don't hit the rate limit by spreading out the requests to fit into the given time window and remaining limit count. For example, if a limit resets every 10 seconds and you can do 100 requests in that window, one request will be executed every 0.1 seconds (actually there is a margin of x% to really ensure you don't hit the limit). This spread interval is calculated on a per-request basis using the current limit counts and limits received from the Riot API.
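The interval calculation described above can be sketched roughly as follows; the function name, parameters, and margin handling are illustrative assumptions, not the library's internals:

```javascript
// Sketch of a SPREAD-style interval: spread the remaining request budget
// evenly over the time left in the window, inflated by a safety margin
// so the limit is never actually reached.
function spreadIntervalMs(remainingRequests, msUntilReset, marginPct = 10) {
  const requestsLeft = Math.max(remainingRequests, 1);
  const baseIntervalMs = msUntilReset / requestsLeft;
  return baseIntervalMs * (100 + marginPct) / 100;
}

// 100 requests left in a 10-second window -> 100 ms base interval,
// stretched to 110 ms by a 10% margin.
console.log(spreadIntervalMs(100, 10000)); // 110
```

Because the inputs come from the most recent response headers, the interval adapts as the limit counts change during the window.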
This basically means every request done when using STRATEGY.SPREAD acts as a synchronisation request, which should prevent most issues when using it in a multi-instance scenario.
BURST will try to execute all the requests you pass in immediately, if possible. This strategy can easily get out of sync with the actual limit counts on the Riot API edge, so it should be used with care (and will need improvement over time). Each time a limit resets, the next request acts as a synchronisation request, to prevent throwing a huge number of requests into an exceeded limit.
It is recommended for the following scenarios:
RiotRateLimiter will keep track of the reset timer for the method, starting from the synchronisation request. Because no limit-window information is given by Riot, this timer might wait longer than necessary when the rate limit is approached (the full reset time), even if there are only a few requests left in the limit count. All requests that would exceed the limit will be queued up to be executed as soon as the limit resets.
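The queue-until-reset behaviour above can be sketched like this; the factory name and object shape are illustrative assumptions, not the library's API:

```javascript
// Minimal sketch: run tasks while budget remains in the current window,
// queue anything that would exceed the limit, and drain the queue when
// the window resets.
function makeLimitQueue(limitCount) {
  let remaining = limitCount;
  const queue = [];
  return {
    // Run the task now if budget remains; otherwise queue it for the reset.
    execute(task) {
      if (remaining > 0) { remaining--; task(); }
      else queue.push(task);
    },
    // Called when the limit window resets: refill and drain the queue.
    reset() {
      remaining = limitCount;
      while (queue.length > 0 && remaining > 0) {
        remaining--;
        queue.shift()();
      }
    },
  };
}
```

In the real library the reset is driven by a timer derived from the synchronisation request rather than called manually.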
If a 429 response includes a Retry-After header, the affected requests will be rescheduled (first in queue) and the executing limiter will back off for the duration given by that header.
If a 429 arrives without a Retry-After header, the requests will be rescheduled (first in queue) and the executing limiter will back off generically.
This means it will start with a backoff timer on the first try (e.g. 1000 ms) and increase the backoff time exponentially with each unsuccessful try (2000, 4000, 8000, ...).
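That exponential growth can be sketched as follows; the function name and the base delay are assumptions, not the library's internals:

```javascript
// Generic backoff as described above: start at a base delay and double it
// with each consecutive unsuccessful try (1000, 2000, 4000, 8000, ... ms).
function backoffDelayMs(failedTries, baseMs = 1000) {
  return baseMs * 2 ** failedTries;
}

console.log(backoffDelayMs(0)); // 1000
console.log(backoffDelayMs(3)); // 8000
```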
Other errors will be passed back to the caller by rejecting the promise returned from .executing.
RateLimit instances are exposed through RiotRateLimiter#getLimits and RiotRateLimiter#getLimitsForPlatformId.
For App RateLimits, the same instances are shared across all RateLimiters.
Each ApiMethod has its own RateLimiter instance with its own method RateLimits.
RateLimits and RateLimiters communicate about changes, to keep things in sync internally and to be able to work with each other.
Each RateLimiter has public access to all of its RateLimit instances, and each RateLimit instance has public access to all RateLimiter instances that are connected to it.
This strong coupling is desired a) to keep the probably unnecessarily complicated codebase easier to understand and modify, and b) to be able to work directly on references for easy propagation of rate-limit changes, for example in the App RateLimits.
Because of the tight coupling and the fact that RateLimit instances are exposed, you also have public access to the internal RateLimiters. If you have a special use case that temporarily requires extra-strict RateLimits, or you just want a bit more control and transparency into what's going on, you can introduce your own RateLimits. Just be aware that the public interface is not deliberately designed yet, so there might be breaking changes at some point, but it will follow Semantic Versioning.
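As a rough illustration of that coupling (these classes are a simplified sketch, not the library's actual implementation), sharing one App RateLimit instance across limiters means an update through one limiter is immediately visible to every other limiter holding the same reference:

```javascript
// Simplified sketch of the bidirectional coupling: each limiter keeps
// references to its RateLimit instances, and each RateLimit keeps
// references back to every limiter it is attached to.
class RateLimit {
  constructor(name) {
    this.name = name;
    this.limiters = [];
  }
}

class RateLimiter {
  constructor(limits) {
    this.limits = limits;
    for (const limit of limits) limit.limiters.push(this);
  }
}

// One shared app limit, two method-level limiters attached to it.
const appLimit = new RateLimit('app');
const limiterA = new RateLimiter([appLimit]);
const limiterB = new RateLimiter([appLimit]);

console.log(appLimit.limiters.length); // 2
console.log(limiterA.limits[0] === limiterB.limits[0]); // true
```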
You will need to add your (development) API key to a file /src/API_KEY to run the tests within src/RiotRateLimiter.
We use SemVer for versioning. For the versions available, see the tags on this repository.
This project is licensed under the MIT License - see the LICENSE file for details
No vulnerabilities found.
OpenSSF Scorecard findings (last scanned on 2024-11-25):
- no binaries found in the repo
- license file detected
- Found 2/28 approved changesets -- score normalized to 0
- 0 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 0
- no effort to earn an OpenSSF best practices badge detected
- security policy file not detected
- project is not fuzzed
- branch protection not enabled on development/release branches
- SAST tool is not run on all commits -- score normalized to 0
- 84 existing vulnerabilities detected
The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.