```
npm install graphql-directive-pagination
```
Manipulates SQL clauses to provide pagination ability to fields in GraphQL. Easy to integrate.
This is similar to Relay's `@connection` directive (see graphql-directive-connection). The difference is the use of `limit` and `offset` rather than `after`, `first`, `before`, `last`. The reason for this change is to integrate better with the LIMIT and OFFSET clauses in MySQL and PostgreSQL.
In addition to a `@pagination` directive, this package also bundles JS functions to help you build the GraphQL resolvers that go along with your `@pagination` fields.

Here are instructions on how to use this package to implement a paginated table or an infinite scroller. It also supports scenarios where the data source has inserted new rows or documents while the client has already started paginating/scrolling.
These are the changes you need to make to your existing typeDefs.
Steps:

1. Add the `@pagination` directive definition, either inlined into your typeDefs or by including `paginationDirectiveTypeDefs` in `makeExecutableSchema`.
2. Apply `paginationDirectiveTransform` to your schema.
```js
import paginationDirective from 'graphql-directive-pagination'
import { makeExecutableSchema } from '@graphql-tools/schema'

const {
  paginationResolver,
  paginationDirectiveTransform,
  paginationDirectiveTypeDefs
} = paginationDirective('pagination', {
  // The timezone of your SQL server.
  // This can be 'local', 'utc', or an offset in the form +HH:MM or -HH:MM. (Default: 'utc')
  timezone: 'utc'
})

// Wrap your resolvers with paginationResolver
export const pagination = paginationResolver

const typeDefs = `
  directive @pagination on FIELD_DEFINITION

  type User {
    userId: Int

    """
    The type given to a field tagged with @pagination does not matter.
    It will be overwritten.
    Any args already present, such as this orderBy arg, will be kept.
    """
    posts: [Post!]! @pagination
  }

  type Post {
    postId: Int
  }

  type Query {
    user: User
  }
`

const resolvers = {}
let schema = makeExecutableSchema({
  typeDefs,
  // You can also use this:
  // typeDefs: [typeDefs, paginationDirectiveTypeDefs]
  resolvers
})

// You must also pass your resolvers into the transform:
schema = paginationDirectiveTransform(schema, resolvers)

export default schema
```
When you run this script, the original typeDefs you define will be transformed in this manner:
- Fields tagged with `@pagination` will be untagged, since the transform has been applied.
- Fields tagged with `@pagination` will have their return type changed to the Pagination type.

These types will be added to your schema:
```graphql
type PaginationInfo {
  # Tells whether or not there are more rows available.
  # You can get these rows by performing a new query after incrementing
  # your client-side offset.
  # E.G. offset += limit
  hasMore: Boolean!

  # moreOffset is relative to nextOffsetRelativeTo.
  # It is the offset of the next non-negative row the client doesn't have yet.
  # If hasMore is false, moreOffset is still set to the next row so that the
  # client can make a LoadMore request after waiting some time.
  # In most cases, the client should do `setOffset(moreOffset)` after each response
  # in order to be prepared for a LoadMore event.
  moreOffset: Int!

  # Tells whether or not there are new rows available. "New rows" are not
  # the same as "more rows".
  # See the section on New Rows.
  hasNew: Int!

  # In order to get new rows,
  # the client should send a request with:
  # offset: $minusCountNew,
  # limit: $countNew,
  # offsetRelativeTo: $offsetRelativeTo
  countNew: Int!

  # The client should echo nextOffsetRelativeTo back in the next request.
  nextOffsetRelativeTo: String!
}

type PostPagination {
  nodes: [Post!]!
  info: PaginationInfo!
}

input PaginationOrdering {
  index: String!
  direction: String!
}
```
In addition, the User type has been modified:
```graphql
# Before:
type User {
  posts: [Post!]! @pagination
}

# After:
type User {
  posts(
    # offset can be negative. See the section on New Rows.
    offset: Int!,
    limit: Int!,
    # countNewLimit is used when checking how many new rows there are.
    # See the section on New Rows.
    countNewLimit: Int,
    # At least 1 ordering should be included.
    orderings: [PaginationOrdering!]!,
    # The number of rows the client has should be passed into countLoaded on every request.
    countLoaded: Int!
    # After the initial page load, the server will respond with
    # `nextOffsetRelativeTo`.
    # This value should be echoed back in subsequent requests.
    offsetRelativeTo: String
  ): PostPagination!
}
```
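The comments in `PaginationInfo` describe the bookkeeping a client is expected to do between requests. Here is a minimal sketch of that bookkeeping; the `PaginationState` shape and `applyResponse` helper are illustrative, not part of this package:

```ts
// Illustrative client-side state; the shape is an assumption, not this package's API.
interface PaginationState {
  offset: number
  // null on first load; echoed back on every request afterwards.
  offsetRelativeTo: string | null
}

// Mirrors the PaginationInfo type above.
interface PaginationInfo {
  hasMore: boolean
  moreOffset: number
  hasNew: number
  countNew: number
  nextOffsetRelativeTo: string
}

// Per the comments above: do `setOffset(moreOffset)` after each response,
// and echo nextOffsetRelativeTo back in the next request.
function applyResponse(state: PaginationState, info: PaginationInfo): PaginationState {
  return {
    offset: info.moreOffset,
    offsetRelativeTo: info.nextOffsetRelativeTo,
  }
}
```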
This package comes with built-in functions for building resolvers for `@pagination` fields.
Let's suppose you are refactoring a field which was already implemented using a regular array. Your typeDefs and resolvers looked like this originally:
```ts
// userSchema.js

export const typeDefs = `
  type User {
    posts: [Post!]!
  }
`

export const resolvers = {
  User: {
    posts: async (user, args, ctx, info): Promise<Array<Post>> => {
      const posts = await PostDB.getByUserId(user.userId)
      return posts
    },
  },
}
```
Now you want to make this a paginated field, so you add the `@pagination` directive, and this is how your resolvers would change:
```ts
// userSchema.js

export const typeDefs = `
  type User {
    posts: [Post!]! @pagination
  }
`

export const resolvers = {
  User: {
    posts: paginationResolver(async (user, args, ctx, info): Promise<Array<Post>> => {
      // See the SQL Queries section for instructions on how to use `args.clauses.mysql`.
      const posts = await PostDB.getByUserId(user.userId, args.clauses.mysql)
      return posts
    }),
  },
}
```
The `pagination` function takes in your original resolver, provides it with `args.clauses`, and converts the return value into a `PostPagination`. You do not have to use `args.clauses`. See the SQL Queries section for more details.
This package is primarily designed for usage with MySQL or Postgres. By performing sorting, offsetting, and limiting in the database, work is offloaded from the Node server so that it remains lightweight and performant.
If you use these, you can pass `args.clauses.mysql` or `args.clauses.postgres` into your SQL query builder.
This package will then manipulate the clauses to do things such as:

- If no `offsetRelativeTo` is present in the request, it performs a SELECT to determine one.
- Once `offsetRelativeTo` is determined, it performs one SELECT to get paginated rows, i.e. rows associated with a non-negative offset.
- Using that same `offsetRelativeTo`, it performs another SELECT to look ahead and check whether there are any rows associated with a negative offset. This is used to determine `countNew`.

For more information on the difference between positive and negative offset rows, see the section on New Rows.
If you do not use MySQL or Postgres, you can still use this package. The `pagination` function can detect whether you use `args.clauses`, and it behaves differently depending on whether you do:

- If you use `args.clauses`, then the `pagination` function expects your resolver to return an array which has already had sort, offset, and limit applied to it.
- If you do not use `args.clauses`, then `pagination` expects an array which has not yet been sorted, offset, or limited. `pagination` will perform those operations on the returned array (see the sketch below).

Both `args.clauses.mysql` and `args.clauses.postgres` have been SQL escaped/sanitized using `mysql.format`.
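Here is a hedged sketch of that second path: the resolver below never touches `args.clauses`, so `paginationResolver` sorts, offsets, and limits the array in memory. The `PostDB.getAllByUserId` helper and `Post` type are hypothetical:

```ts
import paginationDirective from 'graphql-directive-pagination'

// Hypothetical data-access layer, for illustration only.
type Post = { postId: number; createdAt: string }
declare const PostDB: { getAllByUserId(userId: number): Promise<Post[]> }

const { paginationResolver } = paginationDirective('pagination', { timezone: 'utc' })

export const resolvers = {
  User: {
    // This resolver never reads args.clauses, so paginationResolver will
    // sort, offset, and limit the returned array itself.
    posts: paginationResolver(async (user, args, ctx, info) => {
      return PostDB.getAllByUserId(user.userId)
    }),
  },
}
```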
Here is an example of how you should edit your SQL queries to integrate with this package:
```ts
/*
  `clauses` is passed in from the resolver's `args.clauses.mysql` or
  `args.clauses.postgres`.

  type Clauses = {
    mysql: {
      where?: string
      orderBy: string
      limit: string
    }
    postgres: {
      where?: string
      orderBy: string
      offset: string
      limit: string
    }
  }
*/
async function getPosts(userId: string, clauses: Clauses['mysql']) {
  const query = sql`
    SELECT * FROM Posts
    WHERE userId = ${userId}
    ${clauses.where ? `AND ${clauses.where}` : ''}
    ORDER BY ${clauses.orderBy}
    LIMIT ${clauses.limit};
  `
  const rows = await db.all(query)
  return rows
}
```
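Under Postgres, the generated clauses keep OFFSET and LIMIT separate (see the `Clauses` type in the comment above). A sketch of the same query using `args.clauses.postgres`, assuming the same `sql` tag and `db` helper as above:

```ts
async function getPostsPg(userId: string, clauses: Clauses['postgres']) {
  // Postgres uses separate OFFSET and LIMIT clauses, unlike MySQL's
  // combined `LIMIT offset, count` form.
  const query = sql`
    SELECT * FROM Posts
    WHERE userId = ${userId}
    ${clauses.where ? `AND ${clauses.where}` : ''}
    ORDER BY ${clauses.orderBy}
    LIMIT ${clauses.limit}
    OFFSET ${clauses.offset};
  `
  const rows = await db.all(query)
  return rows
}
```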
In the client, you might have these three queries for the three kinds of events that can happen:

- PageLoad: The server treats a request with `offsetRelativeTo: null` as a page load. It will establish the `offsetRelativeTo` and return it in the response as `nextOffsetRelativeTo`.
- LoadMore: An `offsetRelativeTo` has been established, and now the client can load pages associated with positive offsets.
- LoadNew: The client requests the rows associated with a negative offset, i.e. new rows. See the section on New Rows.

See the picture at the top of this document for reference. Here is what the queries for these events look like:
```graphql
query PageLoad (
  # startingOffset = startingPage * pageSize
  $startingOffset: Int!
  $pageSize: Int!
  $orderings: [PaginationOrdering!]!
  $countLoaded: Int!
) {
  user {
    id
    posts(
      offset: $startingOffset,
      limit: $pageSize,
      orderings: $orderings,
      countLoaded: $countLoaded
      # notice offsetRelativeTo is blank
    ) {
      nodes {
        id
      }
      info {
        hasMore
        hasNew
        countNew
        moreOffset
        nextOffsetRelativeTo
      }
    }
  }
}
```
```graphql
query LoadMore (
  # pageOffset = currentPage * pageSize
  $pageOffset: Int!
  $pageSize: Int!
  $orderings: [PaginationOrdering!]!
  $countLoaded: Int!
  # The client should echo back the value of nextOffsetRelativeTo
  # which the server provided in the PageLoad response.
  $offsetRelativeTo: String!
) {
  user {
    id
    posts(
      offset: $pageOffset,
      limit: $pageSize,
      orderings: $orderings,
      countLoaded: $countLoaded,
      offsetRelativeTo: $offsetRelativeTo
    ) {
      nodes {
        id
      }
      info {
        hasMore
        hasNew
        countNew
        moreOffset
        nextOffsetRelativeTo
      }
    }
  }
}
```
```graphql
query LoadNew (
  $minusCountNew: Int!
  $countNew: Int!
  $orderings: [PaginationOrdering!]!
  $countLoaded: Int!
  # The client should echo back the value of nextOffsetRelativeTo
  # which the server provided in the PageLoad response.
  $offsetRelativeTo: String!
) {
  user {
    id
    posts(
      offset: $minusCountNew,
      limit: $countNew,
      orderings: $orderings,
      countLoaded: $countLoaded,
      offsetRelativeTo: $offsetRelativeTo
    ) {
      nodes {
        id
      }
      info {
        hasMore
        hasNew
        countNew
        moreOffset
        nextOffsetRelativeTo
      }
    }
  }
}
```
For information about `LoadNew`, `countNew`, and why a negative offset is used to load new rows, see the section on New Rows.
Responses will include `countNew`. This tells how many rows there are associated with a negative offset relative to `offsetRelativeTo`.
When the page first loads, assuming the pagination is starting on page 0, the client has not yet established an `offsetRelativeTo`, so it sends a query with `offsetRelativeTo: null`. The server should determine an `offsetRelativeTo` and then send this back to the client using `nextOffsetRelativeTo`.

Once the client has established an `offsetRelativeTo`, it is possible for the database to insert more rows after the client has started paginating, and these are associated with a negative offset.
In the example of a paginated table, a negative offset would mean going to a page before the first page: there are new results which cannot be accessed, and one way to get them would be a full page refresh. In the case of Twitter's infinite scroller, you can scroll back to the top of the feed, where there is a button which says, "Show N New Tweets". This is what is meant by new rows and a negative offset.
Instead of doing a full page refresh, it is possible to gracefully show these new rows to the end user.
A naive way to do this would be to repeat what is done on initial page load: the client could send `offsetRelativeTo: null` to the server, and the server would respond with a new `nextOffsetRelativeTo` and `moreOffset`, which the client should accept.
This is naive because the `limit` in the request can cause there to be a gap in results. For example, if there are 35 new rows, and the client requests `limit: 10, offsetRelativeTo: null`, then the first 10 of those 35 rows will be returned, leaving a gap of 25 rows. Rows after those 25 have already been loaded by the client.
The server does respond with `countNew`, but a gap can still happen even if the client requests `limit: $countNew, offsetRelativeTo: null`. At first glance, this appears to get all the new rows so that there is no gap, but the problem is that `countNew` is old information by the time the client sends it back up in the request's `limit`.
`countNew` is reported every time the client asks for more rows, but the time between the last `LoadMore` event and the `LoadNew` event could be long.
In order to get new rows without any possibility of gaps in the data, the client must not pass `offsetRelativeTo: null`. Instead, the client should keep `offsetRelativeTo` in the request as it does for `LoadMore` events, but when handling `LoadNew` events, it should pass in a negative `offset`.
The server will flip the inequality in the WHERE clause and flip the sort directions in the ORDER BY clause in order to get new rows relative to `offsetRelativeTo`. This avoids gaps.
To recap: to get new rows, the client request should include `offset: $minusCountNew, limit: $countNew, offsetRelativeTo: $offsetRelativeTo`.
See the section on Client-Side Events for examples of the `LoadMore` and `LoadNew` queries.
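Concretely, a client handler for a "Show N New Rows" button might build the `LoadNew` variables like this. A sketch; the `info` argument is the `info` object from the most recent response:

```ts
// Builds variables for the LoadNew query shown in the Client-Side Events section.
function buildLoadNewVariables(info: { countNew: number; nextOffsetRelativeTo: string }) {
  return {
    // A negative offset selects the rows "above" offsetRelativeTo, i.e. the new rows.
    minusCountNew: -info.countNew,
    countNew: info.countNew,
    // Echo back the established offsetRelativeTo; never send null here,
    // otherwise gaps become possible.
    offsetRelativeTo: info.nextOffsetRelativeTo,
  }
}
```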
How does the server determine `countNew`?

If `offsetRelativeTo` is included in the request, it includes a WHERE statement in the SQL query. This is equivalent to cutting the database table into two partitions: rows above the WHERE statement's inequality, and rows below. See the image at the top of this document for reference. The rows above are associated with a negative offset, and the rows below are associated with a non-negative offset.
By flipping the WHERE statement's inequality, the server can SELECT either of the two partitions, the negative offset rows or the non-negative offset rows.
So this is what this package does: it sends two SQL queries on every request which has `offsetRelativeTo`, as sketched below. It returns the positive offset rows, along with the count of the negative offset rows.
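As a rough illustration of those two queries, suppose the primary ordering is `createdAt DESC` and `offsetRelativeTo` resolves to a concrete `createdAt` value. The SQL below illustrates the idea only; it is not the package's actual generated output:

```ts
// Non-negative partition: the page of rows the client asked for.
const pageQuery = `
  SELECT * FROM Posts
  WHERE createdAt <= :offsetRelativeTo
  ORDER BY createdAt DESC
  LIMIT :offset, :limit`

// Negative partition: flip the inequality (and the sort direction),
// then count up to the lookahead limit. This count becomes countNew.
const countNewQuery = `
  SELECT COUNT(*) FROM (
    SELECT 1 FROM Posts
    WHERE createdAt > :offsetRelativeTo
    ORDER BY createdAt ASC
    LIMIT :countNewLimit
  ) AS newRows`
```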
When `offsetRelativeTo` is not included in the request, the server will use the first row, which is associated with offset 0, to obtain the `offsetRelativeTo`. To do this, it performs a SQL query without a WHERE statement, so `countNew` must be 0 in requests with `offsetRelativeTo: null`.
Limiting `countNew`

Taking Twitter as an example again: if someone were to leave the page open, close their laptop, come back to it days later, scroll to the top, and press "Show N New Tweets", then an incorrect implementation might cause the server and database to process more data than it should, because many rows could have been added while the user was inactive.

When the server asks the database for a list of new rows, i.e. rows associated with a negative offset, it must include a LIMIT on this query to avoid a scenario where there are potentially hundreds of new rows and the Node server ends up doing too much work.
By default, this package will use the `limit` field in the request, so the server can only look ahead as far as `limit`. It cannot see new rows past the `limit`.
(As an aside: when asking for new rows, in addition to flipping the inequality in the WHERE clause, the ORDER BY directions are also flipped.)
Because of this usage of `limit`, the server response's `countNew` won't be greater than the request's `limit`. This can impact your frontend display of `countNew`. In Twitter's example, they display "Show N New Tweets", where N is `countNew`. If the `limit` of your requests is your page size, then `countNew` is capped at your page size.
If you would like to override this default, you can include `countNewLimit` in the request, as sketched below. `countNewLimit` will only be used in the SQL query for counting new rows; it will not affect the query for rows associated with a non-negative offset. This allows you to display "Show N New Rows" to your end users, where the maximum value N can take on is specified by `countNewLimit`.
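For example, a `LoadMore` request could carry a larger lookahead by forwarding a `$countNewLimit` variable into the field's `countNewLimit` argument (the `LoadMore` query above would need that extra variable added; the values here are illustrative):

```ts
// Variables for a LoadMore request that also counts up to 50 new rows.
function buildLoadMoreVariables(nextOffsetRelativeTo: string, currentPage: number) {
  const pageSize = 10
  return {
    pageOffset: currentPage * pageSize,
    pageSize,
    orderings: [{ index: 'createdAt', direction: 'desc' }], // direction strings assumed
    countLoaded: currentPage * pageSize, // rows the client already has
    offsetRelativeTo: nextOffsetRelativeTo, // from the previous response
    countNewLimit: 50, // look ahead farther than pageSize when counting new rows
  }
}
```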
Additional note: on a `LoadNew` request, i.e. a request with a negative offset, the response will never contain rows that were associated with a non-negative offset. For example, if there were 35 new rows, it doesn't matter what `offset`, `limit`, or `countNewLimit` are set to: only up to 35 rows can be returned. The row that was associated with `nextOffsetRelativeTo` from the previous response, echoed back in the negative-offset request's `offsetRelativeTo`, will never be included in the response.
This package is designed to support a feature where the end user can select which column to sort by, for example a table which has multiple columns like `email`, `name`, and `dateCreated`. However, there is an issue with sorting when multiple rows have the same sort value and tie.
Suppose 3 rows have the same `dateCreated`. If you were to sort based on `dateCreated`, and the limit was 1, then the algorithm is non-deterministic, and we aren't sure which of the 3 rows will be returned.
This kind of issue will probably not affect most use cases. It only becomes a problem when there are more tied rows than the limit. For example, if there are 10 rows with the same `dateCreated`, and the limit is 5, then the first query, with offset 0, will randomly return 5 of those 10 rows. The second query for the next page, offset 5, will again randomly return 5 of those 10 rows.
For something like `dateCreated`, it might be very rare for your application to ever see 10 rows tied. For something like the `email` field, you will probably never see ties, since emails are unique in most cases. Every use case is different, however, so make sure to watch out for this issue, and carefully consider whether your combination of sort index and limit will run into this problem.
One way to fix this and return to deterministic behavior is to break ties by including more orderings in the `orderings` array (see the sketch below). The primary key column of your table is unique, so you could include it as a deterministic tie breaker in your `orderings`. Keep in mind that the first ordering in your `orderings` array is the primary sorting column and is also used for determining `offsetRelativeTo`, so usually your tie breaker should not be the first ordering.
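For example, assuming `postId` is Post's unique primary key and that `direction` accepts 'asc'/'desc' strings (an assumption; check the package for the exact accepted values), an `orderings` array with a deterministic tie breaker might look like:

```ts
const orderings = [
  // Primary sorting column; also used to determine offsetRelativeTo.
  { index: 'dateCreated', direction: 'desc' },
  // Unique primary key as a deterministic tie breaker.
  { index: 'postId', direction: 'asc' },
]
```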
Another solution, which is not deterministic, is to not worry about it and simply increase the limit until you feel confident your application won't run into this issue.
This is a feature for those using Apollo GraphQL.
By default, `cacheControl` directives are not generated on Edge object types or inside pagination fields, which results in cache arguments being completely ignored. Enabling `defaultMaxAge` for all types/fields across your GraphQL implementation partially solves the problem; however, it might not be the best option.
It is possible to enable `cacheControl` directive support by passing a `useCacheControl: true` flag to the `paginationDirective` function. The package will then use the largest `maxAge` across the pagination fields with custom types and apply it to the `nodes` and `info` fields.
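A sketch of enabling it, assuming the flag sits alongside `timezone` in the options object from the setup example:

```ts
import paginationDirective from 'graphql-directive-pagination'

const { paginationResolver, paginationDirectiveTransform, paginationDirectiveTypeDefs } =
  paginationDirective('pagination', {
    timezone: 'utc',
    // Apply the largest maxAge across the pagination fields to `nodes` and `info`.
    useCacheControl: true,
  })
```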