Gathering detailed insights and metrics for @slonik/dataloaders
A Node.js PostgreSQL client with runtime and build time type safety, and composable SQL.
npm install @slonik/dataloaders
Typescript
Module System
Min. Node Version
Node Version
NPM Version
@slonik/dataloaders@48.1.2
Updated on Jun 01, 2025
@slonik/types@48.1.2
Updated on Jun 01, 2025
slonik-interceptor-query-cache@48.1.2
Updated on Jun 01, 2025
slonik-interceptor-query-logging@48.1.2
Updated on Jun 01, 2025
@slonik/utilities@48.1.2
Updated on Jun 01, 2025
@slonik/sql-tag@48.1.2
Updated on Jun 01, 2025
TypeScript (97.19%)
JavaScript (2.57%)
Dockerfile (0.22%)
Shell (0.02%)
Total Downloads
0
Last Day
0
Last Week
0
Last Month
0
Last Year
0
NOASSERTION License
4,793 Stars
1,534 Commits
144 Forks
27 Watchers
60 Branches
55 Contributors
Updated on Jul 15, 2025
Latest Version
48.1.2
Package Id
@slonik/dataloaders@48.1.2
Unpacked Size
142.79 kB
Size
29.89 kB
File Count
82
NPM Version
10.8.2
Node Version
20.19.2
Published on
Jun 01, 2025
Utilities for creating DataLoaders using Slonik. These DataLoaders abstract away some of the complexity of working with cursor-style pagination when working with a SQL database, while still maintaining the flexibility that comes with writing raw SQL statements.
createNodeByIdLoaderClass
Example usage:
```typescript
const UserByIdLoader = createNodeByIdLoaderClass({
  query: sql.type(User)`
    SELECT
      *
    FROM user
  `,
});

const pool = createPool("postgresql://");
const loader = new UserByIdLoader(pool);
const user = await loader.load(99);
```
By default, the loader will look for an integer column named `id` to use as the key. You can specify a different column to use like this:
```typescript
const UserByIdLoader = createNodeByIdLoaderClass({
  column: {
    name: 'unique_id',
    type: 'text',
  },
  query: sql.type(User)`
    SELECT
      *
    FROM user
  `,
});
```
createConnectionLoaderClass
Example usage
```typescript
const UserConnectionLoader = createConnectionLoaderClass<User>({
  query: sql.type(User)`
    SELECT
      *
    FROM user
  `,
});

const pool = createPool("postgresql://");
const loader = new UserConnectionLoader(pool);
const connection = await loader.load({
  where: ({ firstName }) => sql.fragment`${firstName} = 'Susan'`,
  orderBy: ({ firstName }) => [[firstName, "ASC"]],
});
```
When calling `load`, you can include `where` and `orderBy` expression factories that will be used to generate each respective clause. These factory functions allow for type-safe loader usage and abstract away the actual table alias used inside the generated SQL query. Note that the column names passed to each factory reflect the type provided when creating the loader class (i.e. `User` in the example above); however, each column name is transformed using `columnNameTransformer` as described below.
Usage example with forward pagination:
```typescript
const connection = await loader.load({
  orderBy: ({ firstName }) => [[firstName, "ASC"]],
  limit: first,
  cursor: after,
});
```
Usage example with backward pagination:
```typescript
const connection = await loader.load({
  orderBy: ({ firstName }) => [[firstName, "ASC"]],
  limit: last,
  cursor: before,
  reverse: true,
});
```
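In a GraphQL resolver, the choice between the two modes is typically derived from Relay-style connection arguments. A minimal sketch of that mapping (the `ConnectionArgs` shape and `toLoadOptions` helper are illustrative, not part of the library):

```typescript
// Illustrative helper: map Relay-style connection arguments onto the
// loader's load() options. Backward pagination (last/before) sets
// reverse: true, matching the examples above.
type ConnectionArgs = {
  first?: number;
  after?: string;
  last?: number;
  before?: string;
};

const toLoadOptions = (args: ConnectionArgs) =>
  args.last != null || args.before != null
    ? { limit: args.last, cursor: args.before, reverse: true }
    : { limit: args.first, cursor: args.after };
```

The result can then be spread into `loader.load({ ...toLoadOptions(args), orderBy })`.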
In addition to the standard `edges` and `pageInfo` fields, each connection returned by the loader also includes a `count` field. This field reflects the total number of results that would be returned if no limit were applied. In order to fetch both the edges and the count, the loader makes two separate database queries. However, the loader can determine whether it needs to run only one or both of the queries by looking at the GraphQL fields that were actually requested. To do this, we pass in the `GraphQLResolveInfo` parameter provided to every GraphQL resolver:
```typescript
const connection = await loader.load({
  orderBy: ({ firstName }) => [[firstName, "ASC"]],
  limit: first,
  cursor: after,
  info,
});
```
It's possible to request columns that will be exposed as fields on the edge type in your schema, as opposed to on the node type. These fields should be included in your query and in the TypeScript type provided to the loader. The loader returns each row of the results as both the `edge` and the `node`, so all requested columns are available inside the resolvers for either type. Note: each requested column should be unique, so if there's a name conflict, use an appropriate alias. For example:
```typescript
const UserConnectionLoader = createConnectionLoaderClass<
  User & { edgeCreatedAt: Date }
>({
  query: sql.unsafe`
    SELECT
      user.id,
      user.name,
      user.created_at,
      friend.created_at edge_created_at
    FROM user
    INNER JOIN friend ON
      user.id = friend.user_id
  `,
});
```
In the example above, if the field on the Edge type in the schema is named `createdAt`, we just need to write a resolver for it and resolve the value to that of the `edgeCreatedAt` property.
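Concretely, that resolver just reads the aliased column off the edge. A minimal sketch (the `FriendEdge` type and resolver map are illustrative; only `edgeCreatedAt` comes from the query above):

```typescript
// Illustrative: since the loader exposes each row as both edge and node,
// the aliased edge_created_at column (camelCased to edgeCreatedAt) is
// readable directly on the edge object inside the Edge type's resolvers.
type FriendEdge = { cursor: string; edgeCreatedAt: Date };

const FriendEdgeResolvers = {
  createdAt: (edge: FriendEdge): Date => edge.edgeCreatedAt,
};
```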
columnNameTransformer
Both types of loaders also accept a `columnNameTransformer` option. By default, the transformer used is snake-case. The default assumes:

- you're using conventional snake_cased column names in your database; and
- you're using an interceptor like `slonik-interceptor-field-name-transformation` or the `slonik-interceptor-preset`, which means the columns are returned camelCased in the query results.

By using the `columnNameTransformer` (snake case), fields can be referenced by their names as they appear in the results when calling the loader, while still referencing the correct columns inside the query itself. If your usage doesn't meet the above two criteria, consider providing an alternative transformer, like an identity function.
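For illustration, the two transformer behaviors described above can be sketched as plain functions (these sketch the behavior; they are not the library's internal implementation):

```typescript
// Sketch of the default behavior: a camelCased field name from the result
// type is converted to the snake_cased column name used inside the query.
const toSnakeCase = (name: string): string =>
  name.replace(/[A-Z]/g, (char) => `_${char.toLowerCase()}`);

// Alternative: an identity transformer, for when your query already
// returns columns under the exact names you reference.
const identity = (name: string): string => name;
```

With the default, a field like `createdAt` resolves to the `created_at` column; with `identity`, field names are used verbatim.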
This library was originally developed by @danielrearden (https://github.com/danielrearden/slonik-dataloaders) and has been ported (with adjustments) to the Slonik monorepo with Daniel's permission.
No vulnerabilities found.
Reason
30 commit(s) and 5 issue activity found in the last 90 days -- score normalized to 10
Reason
no dangerous workflow patterns detected
Reason
no binaries found in the repo
Reason
license file detected
Details
Reason
3 existing vulnerabilities detected
Details
Reason
Found 0/20 approved changesets -- score normalized to 0
Reason
detected GitHub workflow tokens with excessive permissions
Details
Reason
no effort to earn an OpenSSF best practices badge detected
Reason
security policy file not detected
Details
Reason
project is not fuzzed
Details
Reason
dependency not pinned by hash detected -- score normalized to 0
Details
Reason
SAST tool is not run on all commits -- score normalized to 0
Details
Score
Last Scanned on 2025-07-07
The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.