Gathering detailed insights and metrics for interwebs-graphql-lambda-subscriptions
Graphql-WS compatible Lambda Powered Subscriptions
npm install interwebs-graphql-lambda-subscriptions
| Metric | Value |
| --- | --- |
| Latest Version | 1.0.0 |
| Package Id | interwebs-graphql-lambda-subscriptions@1.0.0 |
| Unpacked Size | 295.73 kB |
| Size | 68.27 kB |
| File Count | 6 |
| NPM Version | 6.14.17 |
| Node Version | 14.19.3 |
| Languages | TypeScript (91.05%), JavaScript (8.69%), Arc (0.26%) |
| Total Downloads | 220 (1 last day, 4 last week, 8 last month, 47 last year) |
| Repository | 171 commits, 27 branches, 1 watching |
Amazon Lambda Powered GraphQL Subscriptions. This is an Amazon Lambda Serverless equivalent to `graphql-ws`. It follows the `graphql-ws` protocol, is tested with the Architect Sandbox against `graphql-ws` directly, and is run in production today. For many applications `graphql-lambda-subscriptions` should do what `graphql-ws` does for you today, without having to run a server. This started as a fork of `subscriptionless`, another library with similar goals.

As `subscriptionless`'s tagline goes:

> Have all the functionality of GraphQL subscriptions on a stateful server without the cost.

I had different requirements and needed more features. This project wouldn't exist without `subscriptionless` and you should totally check it out.
Since there are many ways to deploy to Amazon Lambda I'm going to have to get opinionated in the quick start and pick Architect. `graphql-lambda-subscriptions` should work on Lambda regardless of your deployment and packaging framework. Take a look at the arc-basic-events mock used for integration testing for an example of using it with Architect.
API documentation can be found in our docs folder. You'll want to start with `makeServer()` and `subscribe()`.
```ts
import { makeServer } from 'graphql-lambda-subscriptions'

// define a schema with resolvers (maybe look at '@graphql-tools/schema')
// and create a configured DynamoDB instance from aws-sdk

const subscriptionServer = makeServer({
  dynamodb,
  schema,
})
```
```ts
export const handler = subscriptionServer.webSocketHandler
```
Set up API Gateway to route WebSocket events to the exported handler.
For example, with Architect:

```arc
@app
basic-events

@ws
```
Or with the Serverless Framework:

```yaml
functions:
  websocket:
    name: my-subscription-lambda
    handler: ./handler.handler
    events:
      - websocket:
          route: $connect
      - websocket:
          route: $disconnect
      - websocket:
          route: $default
```
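Once deployed, any `graphql-ws` compatible client should be able to connect. A minimal sketch, assuming a placeholder WebSocket URL for your API Gateway stage and the `mySubscription` field used in the examples below:

```ts
import { createClient } from 'graphql-ws'

// Connect to the deployed WebSocket API (URL is a placeholder).
const client = createClient({
  url: 'wss://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/production',
})

// Start a subscription and log each event as it arrives.
const unsubscribe = client.subscribe(
  { query: 'subscription { mySubscription }' },
  {
    next: (data) => console.log(data),
    error: (error) => console.error(error),
    complete: () => console.log('subscription completed'),
  },
)
```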
In-flight connections and subscriptions need to be persisted. Use the `tableNames` argument to override the default table names.
```ts
const instance = makeServer({
  /* ... */
  tableNames: {
    connections: 'my_connections',
    subscriptions: 'my_subscriptions',
  },
})

// or use an async function to retrieve the names

const fetchTableNames = async () => {
  // do some work to get your table names
  return {
    connections,
    subscriptions,
  }
}

const instance = makeServer({
  /* ... */
  tableNames: fetchTableNames(),
})
```
With Architect, declare the tables and indexes in the app manifest:

```arc
@tables
Connection
  id *String
  ttl TTL
Subscription
  id *String
  ttl TTL

@indexes
Subscription
  connectionId *String
  name ConnectionIndex

Subscription
  topic *String
  name TopicIndex
```
The generated table names can then be fetched at runtime:

```ts
import { tables } from '@architect/functions'

const fetchTableNames = async () => {
  // avoid shadowing the imported `tables` helper
  const tableClient = await tables()

  const ensureName = (table) => {
    const actualTableName = tableClient.name(table)
    if (!actualTableName) {
      throw new Error(`No table found for ${table}`)
    }
    return actualTableName
  }

  return {
    connections: ensureName('Connection'),
    subscriptions: ensureName('Subscription'),
  }
}

const subscriptionServer = makeServer({
  dynamodb: tables.db,
  schema,
  tableNames: fetchTableNames(),
})
```
With the Serverless Framework:

```yaml
resources:
  Resources:
    # Table for tracking connections
    connectionsTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:provider.environment.CONNECTIONS_TABLE}
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
    # Table for tracking subscriptions
    subscriptionsTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:provider.environment.SUBSCRIPTIONS_TABLE}
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
          - AttributeName: topic
            AttributeType: S
          - AttributeName: connectionId
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        GlobalSecondaryIndexes:
          - IndexName: ConnectionIndex
            KeySchema:
              - AttributeName: connectionId
                KeyType: HASH
            Projection:
              ProjectionType: ALL
            ProvisionedThroughput:
              ReadCapacityUnits: 1
              WriteCapacityUnits: 1
          - IndexName: TopicIndex
            KeySchema:
              - AttributeName: topic
                KeyType: HASH
            Projection:
              ProjectionType: ALL
            ProvisionedThroughput:
              ReadCapacityUnits: 1
              WriteCapacityUnits: 1
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
```
Or with Terraform:

```hcl
resource "aws_dynamodb_table" "connections-table" {
  name           = "graphql_connections"
  billing_mode   = "PROVISIONED"
  read_capacity  = 1
  write_capacity = 1
  hash_key       = "id"

  attribute {
    name = "id"
    type = "S"
  }
}

resource "aws_dynamodb_table" "subscriptions-table" {
  name           = "graphql_subscriptions"
  billing_mode   = "PROVISIONED"
  read_capacity  = 1
  write_capacity = 1
  hash_key       = "id"

  attribute {
    name = "id"
    type = "S"
  }

  attribute {
    name = "topic"
    type = "S"
  }

  attribute {
    name = "connectionId"
    type = "S"
  }

  global_secondary_index {
    name            = "ConnectionIndex"
    hash_key        = "connectionId"
    write_capacity  = 1
    read_capacity   = 1
    projection_type = "ALL"
  }

  global_secondary_index {
    name            = "TopicIndex"
    hash_key        = "topic"
    write_capacity  = 1
    read_capacity   = 1
    projection_type = "ALL"
  }
}
```
graphql-lambda-subscriptions uses its own PubSub implementation. Use the `subscribe` function to associate incoming subscriptions with a topic.
```ts
import { subscribe } from 'graphql-lambda-subscriptions'

export const resolver = {
  Subscription: {
    mySubscription: {
      subscribe: subscribe('MY_TOPIC'),
      resolve: (event, args, context) => {/* ... */},
    },
  },
}
```
Use `subscribe` with `SubscribeOptions` to allow for filtering. Note: if a function is provided, it will be called on subscription start and must return a serializable object.
```ts
import { subscribe } from 'graphql-lambda-subscriptions'

// Subscription agnostic filter
subscribe('MY_TOPIC', {
  filter: {
    attr1: '`attr1` must have this value',
    attr2: {
      attr3: 'Nested attributes work fine',
    },
  },
})

// Subscription specific filter
subscribe('MY_TOPIC', {
  filter: (root, args, context, info) => ({
    userId: args.userId,
  }),
})
```
Use the `publish()` function on your graphql-lambda-subscriptions server to publish events to active subscriptions. Payloads must be of type `Record<string, any>` so they can be filtered and stored.
```ts
subscriptionServer.publish({
  topic: 'MY_TOPIC',
  payload: {
    message: 'Hey!',
  },
})
```
Events can come from many sources:

```ts
// SNS Event
export const snsHandler = (event) =>
  Promise.all(
    event.Records.map((r) =>
      subscriptionServer.publish({
        topic: r.Sns.TopicArn.substring(r.Sns.TopicArn.lastIndexOf(':') + 1), // Get topic name (e.g. "MY_TOPIC")
        payload: JSON.parse(r.Sns.Message),
      })
    )
  )

// Manual Invocation
export const invocationHandler = (payload) => subscriptionServer.publish({ topic: 'MY_TOPIC', payload })
```
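Any Lambda trigger that can call `publish()` works. For example, a sketch of an EventBridge-triggered publisher (the rule, the topic naming, and the detail shape are illustrative assumptions, not part of the library):

```ts
// EventBridge rule target: forward the event detail to subscribers.
// Assumes the rule's detail-type matches a topic name such as "MY_TOPIC".
export const eventBridgeHandler = (event) =>
  subscriptionServer.publish({
    topic: event['detail-type'],
    payload: event.detail,
  })
```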
Use the `complete()` function on your graphql-lambda-subscriptions server to complete active subscriptions. Payloads are optional and match against filters like events do.
```ts
subscriptionServer.complete({
  topic: 'MY_TOPIC',
  // optional payload
  payload: {
    message: 'Hey!',
  },
})
```
Context is provided on the `ServerArgs` object when creating a server. The values are accessible in all callback and resolver functions (e.g. `resolve`, `filter`, `onAfterSubscribe`, `onSubscribe` and `onComplete`).
Assuming no `context` argument is provided when creating the server, the default value is an object with the `connectionInitPayload` and `connectionId` properties and the `publish()` and `complete()` functions. These properties are merged into a provided object or passed into a provided function.
An object can be provided via the `context` attribute when calling `makeServer`.
```ts
const instance = makeServer({
  /* ... */
  context: {
    myAttr: 'hello',
  },
})
```
The default values (above) will be appended to this object prior to execution.
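As an illustration only (a sketch that assumes the `myAttr` object from the snippet above), a resolver then sees both the provided and the default values on its context argument:

```ts
import { subscribe } from 'graphql-lambda-subscriptions'

export const resolver = {
  Subscription: {
    mySubscription: {
      subscribe: subscribe('MY_TOPIC'),
      resolve: (event, args, context) => {
        console.log(context.myAttr)       // 'hello' (from the provided context object)
        console.log(context.connectionId) // default value merged in by the server
        return event.payload
      },
    },
  },
}
```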
A function (optionally async) can be provided via the `context` attribute when calling `makeServer`. The default context value is passed as an argument.
```ts
const instance = makeServer({
  /* ... */
  context: ({ connectionInitPayload }) => ({
    myAttr: 'hello',
    user: connectionInitPayload.user,
  }),
})
```
The context values are then available in your `subscribe` callbacks and resolvers:

```ts
export const resolver = {
  Subscription: {
    mySubscription: {
      subscribe: subscribe('GREETINGS', {
        filter(_root, _args, context) {
          console.log(context.connectionId) // the connectionId
        },
        async onAfterSubscribe(_root, _args, { connectionId, publish }) {
          await publish('GREETINGS', { message: `HI from ${connectionId}!` })
        },
      }),
      resolve: (event, args, context) => {
        console.log(context.connectionInitPayload) // payload from connection_init
        return event.payload.message
      },
    },
  },
}
```
Side effect handlers can be declared on subscription fields to handle `onSubscribe` (start) and `onComplete` (stop) events.
```ts
export const resolver = {
  Subscription: {
    mySubscription: {
      resolve: (event, args, context) => {
        /* ... */
      },
      subscribe: subscribe('MY_TOPIC', {
        // filter?: object | ((...args: SubscribeArgs) => object)
        // onSubscribe?: (...args: SubscribeArgs) => void | Promise<void>
        // onComplete?: (...args: SubscribeArgs) => void | Promise<void>
        // onAfterSubscribe?: (...args: SubscribeArgs) => PubSubEvent | Promise<PubSubEvent> | undefined | Promise<undefined>
      }),
    },
  },
}
```
Global events can be provided when calling `makeServer` to track the execution cycle of the lambda.
Called when a WebSocket connection is first established.
```ts
const instance = makeServer({
  /* ... */
  onConnect: ({ event }) => {
    /* */
  },
})
```
Called when a WebSocket connection is disconnected.
```ts
const instance = makeServer({
  /* ... */
  onDisconnect: ({ event }) => {
    /* */
  },
})
```
`onConnectionInit` can be used to verify the `connection_init` payload prior to persistence. Note: any sensitive data in the incoming message should be removed at this stage.
```ts
const instance = makeServer({
  /* ... */
  onConnectionInit: ({ message }) => {
    const token = message.payload.token

    if (!myValidation(token)) {
      throw Error('Token validation failed')
    }

    // Prevent sensitive data from being written to DB
    return {
      ...message.payload,
      token: undefined,
    }
  },
})
```
By default, the (optionally parsed) payload will be accessible via context.
Called when any subscription message is received.
```ts
const instance = makeServer({
  /* ... */
  onSubscribe: ({ event, message }) => {
    /* */
  },
})
```
Called when any complete message is received.
```ts
const instance = makeServer({
  /* ... */
  onComplete: ({ event, message }) => {
    /* */
  },
})
```
Called when any error is encountered.
```ts
const instance = makeServer({
  /* ... */
  onError: (error, context) => {
    /* */
  },
})
```
For whatever reason, AWS API Gateway does not support WebSocket protocol-level ping/pong, so you can use Step Functions to do this instead. See `pingPong`.
API Gateway considers an idle connection to be one where no messages have been sent on the socket for a fixed duration (currently 10 minutes). The WebSocket spec has support for detecting idle connections (ping/pong) but API Gateway doesn't use it. This means, in the case where both parties are connected, and no message is sent on the socket for the defined duration (direction agnostic), API Gateway will close the socket. A fix for this is to set up immediate reconnection on the client side.
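On the client side, a minimal reconnection sketch with `graphql-ws` (the URL is a placeholder; the retry options shown are standard `graphql-ws` client options, not part of this library):

```ts
import { createClient } from 'graphql-ws'

// Re-establish the socket whenever API Gateway closes it for being idle.
const client = createClient({
  url: 'wss://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/production',
  retryAttempts: Infinity, // keep retrying instead of giving up after a few attempts
  shouldRetry: () => true, // retry on every close event, including "normal" idle closes
})
```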
API Gateway doesn't support custom reasons or codes for WebSockets being closed, so the codes and reason strings won't match `graphql-ws`.