Related packages:

- graphql-subscriptions: GraphQL subscriptions for Node.js
- @httptoolkit/subscriptions-transport-ws: a WebSocket transport for GraphQL subscriptions
- @graphql-yoga/typed-event-target: an internal package; don't use it directly, as it may introduce unexpected breaking changes
- graphql-redis-subscriptions: a graphql-subscriptions PubSub engine using Redis
```sh
npm install graphql-lambda-subscriptions-fix
```
Repository stats: 49 stars, 1,552 commits, 13 forks, 3 watching, 20 branches, 3 contributors. Updated on 26 Nov 2024.
Languages: TypeScript (96.82%), JavaScript (2.94%), Arc (0.24%).
Downloads:

| Period | Downloads | Change vs. previous period |
| --- | --- | --- |
| Last day | 165 | 25% |
| Last week | 810 | 79.6% |
| Last month | 2,644 | 12.6% |
| Last year | 40,983 | 454.3% |
Amazon Lambda Powered GraphQL Subscriptions. This is an Amazon Lambda serverless equivalent to graphql-ws. It follows the graphql-ws protocol, is tested with the Architect Sandbox against graphql-ws directly, and is run in production today. For many applications graphql-lambda-subscriptions should do what graphql-ws does for you today, without having to run a server. This started as a fork of subscriptionless, another library with similar goals.

As subscriptionless's tagline goes:

> Have all the functionality of GraphQL subscriptions on a stateful server without the cost.

I had different requirements and needed more features. This project wouldn't exist without subscriptionless and you should totally check it out.
Since there are many ways to deploy to Amazon Lambda, I'm going to have to get opinionated in the quick start and pick Architect. graphql-lambda-subscriptions should work on Lambda regardless of your deployment and packaging framework. Take a look at the arc-basic-events mock used for integration testing for an example of using it with Architect.
The API docs can be found in our docs folder. You'll want to start with `makeServer()` and `subscribe()`.
```ts
import { makeServer } from 'graphql-lambda-subscriptions'

// define a schema and create a configured DynamoDB instance from aws-sdk
// and make a schema with resolvers (maybe look at '@graphql-tools/schema')

const subscriptionServer = makeServer({
  dynamodb,
  schema,
})
```
```ts
export const handler = subscriptionServer.webSocketHandler
```
Set up API Gateway to route WebSocket events to the exported handler.
```arc
@app
basic-events

@ws
```
```yaml
functions:
  websocket:
    name: my-subscription-lambda
    handler: ./handler.handler
    events:
      - websocket:
          route: $connect
      - websocket:
          route: $disconnect
      - websocket:
          route: $default
```
In-flight connections and subscriptions need to be persisted. Use the `tableNames` argument to override the default table names.
```ts
const instance = makeServer({
  /* ... */
  tableNames: {
    connections: 'my_connections',
    subscriptions: 'my_subscriptions',
  },
})

// or use an async function to retrieve the names

const fetchTableNames = async () => {
  // do some work to get your table names
  return {
    connections,
    subscriptions,
  }
}

const instance = makeServer({
  /* ... */
  tableNames: fetchTableNames(),
})
```
```arc
@tables
Connection
  id *String
  ttl TTL
Subscription
  id *String
  ttl TTL

@indexes

Subscription
  connectionId *String
  name ConnectionIndex

Subscription
  topic *String
  name TopicIndex
```
```ts
import { tables as arcTables } from '@architect/functions'

const fetchTableNames = async () => {
  const tables = await arcTables()

  const ensureName = (table) => {
    const actualTableName = tables.name(table)
    if (!actualTableName) {
      throw new Error(`No table found for ${table}`)
    }
    return actualTableName
  }

  return {
    connections: ensureName('Connection'),
    subscriptions: ensureName('Subscription'),
  }
}

const subscriptionServer = makeServer({
  dynamodb: tables._db,
  schema,
  tableNames: fetchTableNames(),
})
```
```yaml
resources:
  Resources:
    # Table for tracking connections
    connectionsTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:provider.environment.CONNECTIONS_TABLE}
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1

    # Table for tracking subscriptions
    subscriptionsTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:provider.environment.SUBSCRIPTIONS_TABLE}
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
          - AttributeName: topic
            AttributeType: S
          - AttributeName: connectionId
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        GlobalSecondaryIndexes:
          - IndexName: ConnectionIndex
            KeySchema:
              - AttributeName: connectionId
                KeyType: HASH
            Projection:
              ProjectionType: ALL
            ProvisionedThroughput:
              ReadCapacityUnits: 1
              WriteCapacityUnits: 1
          - IndexName: TopicIndex
            KeySchema:
              - AttributeName: topic
                KeyType: HASH
            Projection:
              ProjectionType: ALL
            ProvisionedThroughput:
              ReadCapacityUnits: 1
              WriteCapacityUnits: 1
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
```
1resource "aws_dynamodb_table" "connections-table" { 2 name = "graphql_connections" 3 billing_mode = "PROVISIONED" 4 read_capacity = 1 5 write_capacity = 1 6 hash_key = "id" 7 8 attribute { 9 name = "id" 10 type = "S" 11 } 12} 13 14resource "aws_dynamodb_table" "subscriptions-table" { 15 name = "graphql_subscriptions" 16 billing_mode = "PROVISIONED" 17 read_capacity = 1 18 write_capacity = 1 19 hash_key = "id" 20 21 attribute { 22 name = "id" 23 type = "S" 24 } 25 26 attribute { 27 name = "topic" 28 type = "S" 29 } 30 31 attribute { 32 name = "connectionId" 33 type = "S" 34 } 35 36 global_secondary_index { 37 name = "ConnectionIndex" 38 hash_key = "connectionId" 39 write_capacity = 1 40 read_capacity = 1 41 projection_type = "ALL" 42 } 43 44 global_secondary_index { 45 name = "TopicIndex" 46 hash_key = "topic" 47 write_capacity = 1 48 read_capacity = 1 49 projection_type = "ALL" 50 } 51}
graphql-lambda-subscriptions uses its own PubSub implementation. Use the `subscribe` function to associate incoming subscriptions with a topic.
```ts
import { subscribe } from 'graphql-lambda-subscriptions'

export const resolver = {
  Subscribe: {
    mySubscription: {
      subscribe: subscribe('MY_TOPIC'),
      resolve: (event, args, context) => {/* ... */},
    },
  },
}
```
Use `subscribe` with `SubscribeOptions` to allow for filtering. Note: if a function is provided, it will be called on subscription start and must return a serializable object.
```ts
import { subscribe } from 'graphql-lambda-subscriptions'

// Subscription agnostic filter
subscribe('MY_TOPIC', {
  filter: {
    attr1: '`attr1` must have this value',
    attr2: {
      attr3: 'Nested attributes work fine',
    },
  },
})

// Subscription specific filter
subscribe('MY_TOPIC', {
  filter: (root, args, context, info) => ({
    userId: args.userId,
  }),
})
```
Use the `publish()` function on your graphql-lambda-subscriptions server to publish events to active subscriptions. Payloads must be of type `Record<string, any>` so they can be filtered and stored.
```ts
subscriptionServer.publish({
  topic: 'MY_TOPIC',
  payload: {
    message: 'Hey!',
  },
})
```
Events can come from many sources:
```ts
// SNS Event
export const snsHandler = (event) =>
  Promise.all(
    event.Records.map((r) =>
      subscriptionServer.publish({
        topic: r.Sns.TopicArn.substring(r.Sns.TopicArn.lastIndexOf(':') + 1), // Get topic name (e.g. "MY_TOPIC")
        payload: JSON.parse(r.Sns.Message),
      })
    )
  )

// Manual Invocation
export const invocationHandler = (payload) => subscriptionServer.publish({ topic: 'MY_TOPIC', payload })
```
Use the `complete()` function on your graphql-lambda-subscriptions server to complete active subscriptions. Payloads are optional and match against filters like events do.
```ts
subscriptionServer.complete({
  topic: 'MY_TOPIC',
  // optional payload
  payload: {
    message: 'Hey!',
  },
})
```
Context is provided on the `ServerArgs` object when creating a server. The values are accessible in all callback and resolver functions (e.g. `resolve`, `filter`, `onAfterSubscribe`, `onSubscribe` and `onComplete`).
If no `context` argument is provided when creating the server, the default value is an object with the `connectionInitPayload` and `connectionId` properties and the `publish()` and `complete()` functions. These properties are merged into a provided object or passed into a provided function.
An object can be provided via the `context` attribute when calling `makeServer`.
```ts
const instance = makeServer({
  /* ... */
  context: {
    myAttr: 'hello',
  },
})
```
The default values (above) will be appended to this object prior to execution.
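To make the merge concrete, here is a minimal sketch (not from the original README) of a resolver reading both the provided attribute and the default values, assuming the `context: { myAttr: 'hello' }` configuration above:

```ts
import { subscribe } from 'graphql-lambda-subscriptions'

// Sketch only: shows the merged context inside a subscription resolver.
// Assumes the server was created with `context: { myAttr: 'hello' }` as above;
// `connectionId` and `publish` are default context values added by the server.
export const resolver = {
  Subscribe: {
    mySubscription: {
      subscribe: subscribe('MY_TOPIC'),
      resolve: (event, args, context) => {
        console.log(context.myAttr)       // 'hello' (provided attribute)
        console.log(context.connectionId) // default context value
        return event.payload
      },
    },
  },
}
```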
A function (optionally async) can be provided via the `context` attribute when calling `makeServer`. The default context value is passed as an argument.
```ts
const instance = makeServer({
  /* ... */
  context: ({ connectionInitPayload }) => ({
    myAttr: 'hello',
    user: connectionInitPayload.user,
  }),
})
```
```ts
export const resolver = {
  Subscribe: {
    mySubscription: {
      subscribe: subscribe('GREETINGS', {
        filter(_root, _args, context) {
          console.log(context.connectionId) // the connectionId
        },
        async onAfterSubscribe(_root, _args, { connectionId, publish }) {
          await publish('GREETINGS', { message: `HI from ${connectionId}!` })
        },
      }),
      resolve: (event, args, context) => {
        console.log(context.connectionInitPayload) // payload from connection_init
        return event.payload.message
      },
    },
  },
}
```
Side effect handlers can be declared on subscription fields to handle `onSubscribe` (start) and `onComplete` (stop) events.
```ts
export const resolver = {
  Subscribe: {
    mySubscription: {
      resolve: (event, args, context) => {
        /* ... */
      },
      subscribe: subscribe('MY_TOPIC', {
        // filter?: object | ((...args: SubscribeArgs) => object)
        // onSubscribe?: (...args: SubscribeArgs) => void | Promise<void>
        // onComplete?: (...args: SubscribeArgs) => void | Promise<void>
        // onAfterSubscribe?: (...args: SubscribeArgs) => PubSubEvent | Promise<PubSubEvent> | undefined | Promise<undefined>
      }),
    },
  },
}
```
Global events can be provided when calling `makeServer` to track the execution cycle of the lambda.
Called when a WebSocket connection is first established.
```ts
const instance = makeServer({
  /* ... */
  onConnect: ({ event }) => {
    /* */
  },
})
```
Called when a WebSocket connection is disconnected.
```ts
const instance = makeServer({
  /* ... */
  onDisconnect: ({ event }) => {
    /* */
  },
})
```
`onConnectionInit` can be used to verify the `connection_init` payload prior to persistence. Note: any sensitive data in the incoming message should be removed at this stage.
```ts
const instance = makeServer({
  /* ... */
  onConnectionInit: ({ message }) => {
    const token = message.payload.token

    if (!myValidation(token)) {
      throw Error('Token validation failed')
    }

    // Prevent sensitive data from being written to DB
    return {
      ...message.payload,
      token: undefined,
    }
  },
})
```
By default, the (optionally parsed) payload will be accessible via context.
Called when any subscription message is received.
```ts
const instance = makeServer({
  /* ... */
  onSubscribe: ({ event, message }) => {
    /* */
  },
})
```
Called when any complete message is received.
```ts
const instance = makeServer({
  /* ... */
  onComplete: ({ event, message }) => {
    /* */
  },
})
```
Called when any error is encountered.
```ts
const instance = makeServer({
  /* ... */
  onError: (error, context) => {
    /* */
  },
})
```
For whatever reason, AWS API Gateway does not support WebSocket protocol-level ping/pong, so you can use Step Functions to keep connections alive instead. See `pingPong`.
API Gateway considers a connection idle when no messages have been sent on the socket for a fixed duration (currently 10 minutes). The WebSocket spec has support for detecting idle connections (ping/pong), but API Gateway doesn't use it. This means that even when both parties are still connected, if no message is sent on the socket for the defined duration (in either direction), API Gateway will close the socket. A fix for this is to set up immediate reconnection on the client side.
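One way to do that is to configure the graphql-ws client to retry aggressively. The sketch below is illustrative only; the endpoint URL is a placeholder and the retry settings are assumptions, not values prescribed by this library:

```ts
import { createClient } from 'graphql-ws'

// Sketch only: client-side auto-reconnect to work around API Gateway's idle timeout.
// The URL is a placeholder; tune the retry behaviour for your application.
const client = createClient({
  url: 'wss://example.execute-api.us-east-1.amazonaws.com/production',
  retryAttempts: Infinity,  // keep trying to re-establish the socket
  shouldRetry: () => true,  // retry even on abnormal closures (e.g. idle timeout)
  keepAlive: 60_000,        // send periodic pings while subscriptions are active
})
```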
API Gateway doesn't support custom reasons or codes for WebSockets being closed, so the close codes and reason strings won't match graphql-ws.
No vulnerabilities found.

OpenSSF Scorecard findings (last scanned on 2024-11-18):

- 30 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 10
- no dangerous workflow patterns detected
- no binaries found in the repo
- security policy file detected
- license file detected
- dependency not pinned by hash detected -- score normalized to 3
- Found 0/30 approved changesets -- score normalized to 0
- no SAST tool detected
- detected GitHub workflow tokens with excessive permissions
- no effort to earn an OpenSSF best practices badge detected
- project is not fuzzed
- branch protection not enabled on development/release branches
- 11 existing vulnerabilities detected

The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.