Type-safe DynamoDB query builder for TypeScript. Designed with single-table architecture in mind.
```shell
npm install dynamo-builder
```
Languages: TypeScript (98.06%), JavaScript (1.28%), Shell (0.66%)
- License: Apache-2.0
- Stars: 3 · Commits: 177 · Watchers: 1 · Branches: 12 · Contributors: 8
- Updated on Feb 02, 2023
- Latest version: 1.1.3 (dynamo-builder@1.1.3)
- Unpacked size: 326.75 kB (68.85 kB packed), 228 files
- Published with npm 8.19.2 on Node 16.18.1
A type-safe DynamoDB query builder for TypeScript. This library is inspired by the Beyonce library and extends it further, giving clients the ability to configure the table's partition key schema design.

Features included in this library:

- Low boilerplate. Define your tables, partitions, indexes and models in YAML, and the codegen emits TypeScript definitions for you.
- Store heterogeneous models in the same table. Unlike most DynamoDB libraries, this one doesn't force you into a one-model-per-table paradigm. It supports storing related models in the same table partition, which allows you to "precompute joins" and retrieve those models with a single roundtrip query to the db.
- Type-safe API. dynamo-builder's API is type-safe. It's aware of which models live under your partition and sort keys (even for global secondary indexes).
When you `get`, `batchGet` or `query`, the result types are automatically inferred. And when you apply filters on your `query`, the attribute names are automatically type-checked.
To test out dev changes here in the consuming package, run:

```shell
npm run build
npm pack
```

Then copy the tgz produced into the root of the consuming project and refer to it from the package.json of the consuming package as:

```json
"dynamo-builder": "file:dynamo-builder-1.0.0.tgz",
```

Remember to run `npm install` in the consuming package too.
First install dynamo-builder: `npm install dynamo-builder`.

Define your `tables`, `models` and `partitions` in YAML:
```yaml
tables:
  # We have a single DynamoDB Table named "Library".
  Library:
    partitionKeyName: pk # pk is the default value of partitionKeyName of the table
    sortKeyName: sk # sk is the default value of sortKeyName of the table

    # Let's add two models to our Library table: Author and Book.
    models:
      Author:
        id: string
        name: string

      Book:
        id: string
        authorId: string
        name: string

    # Now, imagine we want to be able to retrieve an Author + all their Books
    # in a single DynamoDB Query operation.

    # To do that, we need a specific Author and all their Books to live under the same partition key.
    # How about we use "Author-$id" as the partition key? Great, let's go with that.

    # dynamo-builder calls a group of models that share the same partition key a "partition".
    # Let's define one now, and name it "Authors"
    partitions:
      Authors:
        # All dynamo-builder partition keys are prefixed (to help you avoid collisions)
        # We said above we want our final partition key to be "Author-$id",
        # so we set "Author" as our prefix here
        partitionKeyPrefix: Author

        # And now we can put a given Author and all their Books into the same partition
        models:
          Author:
            partitionKey: [$id] # "Author-$id"
            sortKey: [Author, $id]

          Book:
            partitionKey: [$authorId] # "Author-$authorId"
            sortKey: [Book, $id]
```
`partitionKey` and `sortKey` syntax: dynamo-builder expects you to specify your partition and sort keys using arrays, e.g. `[Author, $id]`. The first element in this example is interpreted as a string literal, while the second substitutes the value of a specific model instance's `id` field. In addition, it prefixes partition keys with the `partitionKeyPrefix` set on the "partition" configured in your YAML file.
In our example above, we set the Author partition's `partitionKeyPrefix` to `"Author"` and the Author model's `partitionKey` field to `[$id]`. Thus the full partition key at runtime is `Author-$id` (it uses `-` as a delimiter by default; you can override this by passing a `delimiter` key in the table definition). Supported values for `delimiter`: `"-"`, `"#"`.
```yaml
tables:
  Library:
    delimiter: "#"
    models:
      ...
    partitions:
      ...
```
If you'd like to form a composite partition or sort key using multiple model fields, that is supported as well, e.g. `[$id, $name]`.
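As a rough sketch of how these keys are assembled, the prefix, string literals, and substituted field values are joined with the table's delimiter. The helper below is hypothetical, not part of the library's API; it just illustrates the rule described above:

```typescript
// Hypothetical sketch of dynamo-builder's key assembly, based on the docs above.
// A key spec like [Author, $id] mixes string literals with $-prefixed field names.
function buildKey(
  prefix: string | undefined,
  spec: string[],
  fields: Record<string, string>,
  delimiter: "-" | "#" = "-"
): string {
  const parts = spec.map((part) =>
    part.startsWith("$") ? fields[part.slice(1)] : part
  )
  if (prefix !== undefined) {
    parts.unshift(prefix)
  }
  return parts.join(delimiter)
}

// The Author partition key: prefix "Author" + [$id] => "Author-1"
const pk = buildKey("Author", ["$id"], { id: "1" })

// A composite key like [$id, $name] joins multiple fields: "1-Jane"
const composite = buildKey(undefined, ["$id", "$name"], { id: "1", name: "Jane" })
```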
If your table(s) have GSI's you can specify them like this:
```yaml
tables:
  Library:
    models:
      ...
    partitions:
      ...

    gsis:
      byName: # must match your GSI's name
        partitionKey: $name # name field must exist on at least one model
        sortKey: $id # same here
```
Note: the library currently assumes that your GSI indexes project all model attributes, which will be reflected in the return types of your queries.
You can specify external types you need to import like so:
```yaml
Author:
  ...
  address: Address from author/Address
```
Which transforms into `import { Address } from "author/address"`.
```shell
npx dynamo-builder --in src/models.yaml --out src/generated/models.ts
```
```typescript
import { LibraryTable } from "generated/models"

const dynamo = new DynamoDB({ endpoint: "...", region: "..." })

await dynamo
  .createTable(LibraryTable.asCreateTableInput("PAY_PER_REQUEST"))
  .promise()
```
Now you can write partition-aware, type-safe queries with abandon:

```typescript
import { Beyonce } from "dynamo-builder"
import { DynamoDB } from "aws-sdk"
import { LibraryTable } from "generated/models"

const beyonce = new Beyonce(LibraryTable, dynamo)
```
```typescript
import {
  AuthorModel,
  BookModel,
} from "generated/models"
```
```typescript
const author = AuthorModel.create({
  id: "1",
  name: "Jane Austen"
})

await beyonce.put(author)
```
```typescript
const author = await beyonce.get(AuthorModel.key({ id: "1" }))
```
Note: the key prefix (`"Author"` from our earlier example) will be automatically prepended.
Beyoncé supports type-safe partial updates on items, without having to read the item from the db first. And it works, even through nested attributes:
```typescript
const updatedAuthor = await beyonce.update(AuthorModel.key({ id: "1" }), (author) => {
  author.name = "Jack London"
  author.details.description = "American novelist"
  delete author.details.someDeprecatedField
})
```
Here `author` is an intelligent proxy object (thus we avoid having to read the full item from the DB prior to updating it). And `beyonce.update(...)` returns the full `Author`, with the updated fields.
Beyoncé supports type-safe `query` operations that either return a single model type or all model types that live under a given partition key. You can `query` for a single type of model like so:
```typescript
import { BookModel } from "generated/models"

// Get all Books for an Author
const results = await beyonce
  .query(BookModel.partitionKey({ authorId: "1" }))
  .exec() // returns { Book: Book[] }
```
To reduce the amount of data retrieved by DynamoDB, Beyoncé automatically applies a `KeyConditionExpression` that uses the `sortKey` prefix provided in your model definitions. For example, if the YAML definition for the `Book` model contains `sortKey: [Book, $id]`, then the generated `KeyConditionExpression` will contain a clause like `#partitionKey = :partitionKey AND begins_with(#sortKey, Book)`.
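As an illustration, the resulting DynamoDB Query input would look roughly like the object below. The placeholder names and exact shape are assumptions for illustration, not the library's verbatim output:

```typescript
// Illustrative sketch of the kind of DynamoDB Query input dynamo-builder
// might generate for the Book example above (names are assumptions).
const queryInput = {
  TableName: "Library",
  KeyConditionExpression:
    "#partitionKey = :partitionKey AND begins_with(#sortKey, :sortKeyPrefix)",
  ExpressionAttributeNames: {
    "#partitionKey": "pk", // pk/sk are the default key names from the YAML above
    "#sortKey": "sk",
  },
  ExpressionAttributeValues: {
    ":partitionKey": "Author-1", // partitionKeyPrefix + delimiter + $authorId
    ":sortKeyPrefix": "Book",    // the literal prefix from sortKey: [Book, $id]
  },
}
```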
You can also query for all models that live in a partition, like so:
```typescript
import { AuthorPartition } from "generated/models"

// Get an Author + their books
const results = await beyonce
  .query(AuthorPartition.key({ id: "1" }))
  .exec() // returns { Author: Author[], Book: Book[] }
```
Note that in this case the generated `KeyConditionExpression` will not include a clause for the sort key, since DynamoDB does not support OR-ing key conditions.
You can filter results from a query like so:
```typescript
// Get an Author + filter on their books
const authorWithFilteredBooks = await beyonce
  .query(AuthorPartition.key({ id: "1" }))
  .attributeNotExists("title") // type-safe fields
  .or("title", "=", "Brave New World") // type-safe fields + operators
  .exec()
```
When you call `.exec()`, Beyoncé will automatically page through all the results and return them to you. If you would like to step through pages manually (e.g. to throttle reads), use the `.iterator()` method instead:
```typescript
const iterator = beyonce
  .query(AuthorPartition.key({ id: "1" }))
  .iterator({ pageSize: 1 })

// Step through each page 1 by 1
for await (const { items, errors } of iterator) {
  // ...
}
```
The `errors` field above contains any exceptions thrown while attempting to load the next iterator "page". So it's up to you, the caller, to decide whether to continue walking the iterator, or give up and exit. Important: when an error is encountered within the iterator, you might get a partial result that contains one or more `items` and one or more `errors`. Thus, you should always check `errors.length`.
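A minimal sketch of that pattern, using a stubbed iterator in place of Beyoncé's. The `{ items, errors }` page shape comes from the docs above; the stub and item values are purely illustrative:

```typescript
// Stub page type mirroring the { items, errors } shape described above
type Page = { items: string[]; errors: Error[] }

// Stand-in for beyonce.query(...).iterator(...): the second page is a
// partial result, with an item loaded before an error occurred
async function* stubIterator(): AsyncGenerator<Page> {
  yield { items: ["Author-1"], errors: [] }
  yield { items: ["Book-1"], errors: [new Error("throttled")] }
}

async function collect(): Promise<{ loaded: string[]; failed: boolean }> {
  const loaded: string[] = []
  for await (const { items, errors } of stubIterator()) {
    // A partial page can contain both items and errors, so take the
    // items first, then decide whether to bail out
    loaded.push(...items)
    if (errors.length > 0) {
      return { loaded, failed: true }
    }
  }
  return { loaded, failed: false }
}
```

Here `collect()` keeps the items from the partial page before giving up, which is exactly why checking `errors.length` on every page matters.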
Each time you call `.next()` on the iterator, you'll also get a `cursor` back, which you can use to create a new iterator that picks up where you left off:
```typescript
const iterator1 = beyonce
  .query(AuthorPartition.key({ id: "1" }))
  .iterator({ pageSize: 1 })

const firstPage = await iterator1.next()
const { items, cursor } = firstPage.value // do something with these

// Later...
const iterator2 = beyonce
  .query(AuthorPartition.key({ id: "1" }))
  .iterator({ cursor, pageSize: 1 })

const secondPage = await iterator2.next()
```
```typescript
import { byNameGSI } from "generated/models"

const prideAndPrejudice = await beyonce
  .queryGSI(byNameGSI.name, byNameGSI.key("Jane Austen"))
  .where("title", "=", "Pride and Prejudice")
  .exec()
```
You can `scan` every record in your DynamoDB table using an API that closely mirrors the `query` API. For example:
```typescript
import { AuthorPartition } from "generated/models"

// Scan through everything in the table and load it into memory (not recommended for prod)
const results = await beyonce
  .scan()
  .exec() // returns { Author: Author[], Book: Book[] }
```
```typescript
const iterator = beyonce
  .scan()
  .iterator({ pageSize: 1 })

// Step through each page 1 by 1
for await (const { items } of iterator) {
  // ...
}
```
You can perform "parallel scans" by passing a `parallel` config option to the `.scan` method, like so:
```typescript
// Somewhere inside of Worker 1
const segment1 = beyonce
  .scan({ parallel: { segmentId: 0, totalSegments: 2 } })
  .iterator()

for await (const results of segment1) {
  // ...
}

// Somewhere inside of Worker 2
const segment2 = beyonce
  .scan({ parallel: { segmentId: 1, totalSegments: 2 } })
  .iterator()

for await (const results of segment2) {
  // ...
}
```
These options mirror the underlying DynamoDB API.
You can retrieve records in bulk via `batchGet`. DynamoDB allows retrieving a maximum of 100 items per `batchGet` query. So, if you ask for more than 100 keys in a single Beyonce `batchGet` call, Beyonce will automatically split the DynamoDB calls into N concurrent requests and join the results for you.
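The splitting behavior can be sketched as a plain chunking helper. This illustrates the idea described above, not the library's internal code:

```typescript
// Illustrative sketch: split a key list into DynamoDB-sized batches of 100,
// the way the docs describe Beyonce fanning out large batchGet calls.
const MAX_BATCH_GET_KEYS = 100

function chunk<T>(keys: T[], size: number = MAX_BATCH_GET_KEYS): T[][] {
  const batches: T[][] = []
  for (let i = 0; i < keys.length; i += size) {
    batches.push(keys.slice(i, i + size))
  }
  return batches
}

// 250 keys => 3 concurrent requests of 100, 100, and 50 keys
const batches = chunk(Array.from({ length: 250 }, (_, i) => `key-${i}`))
```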
```typescript
// Batch get several items
const { items, unprocessedKeys } = await beyonce.batchGet({
  keys: [
    // Get 2 authors
    AuthorModel.key({ id: "1" }),
    AuthorModel.key({ id: "2" }),

    // And a specific book from each
    BookModel.key({ authorId: "1", id: "1" }),
    BookModel.key({ authorId: "2", id: "2" })
  ]
})

// And the return type is:
// { Author: Author[], Book: Book[] }
const { Author, Book } = items
```
If the `unprocessedKeys` array isn't empty, you can retry via:
```typescript
await beyonce.batchGet({ keys: unprocessedKeys })
```
You can batch put/delete records using `batchWrite`. If any operations can't be processed, you'll get a populated `unprocessedPuts` array and/or an `unprocessedDeletes` array back.
```typescript
// Batch put or delete several items at once
const author1 = AuthorModel.create({
  id: "1",
  name: "Jane Austen"
})

const author2 = AuthorModel.create({
  id: "2",
  name: "Charles Dickens"
})

const {
  unprocessedPuts,
  unprocessedDeletes
} = await beyonce.batchWrite({
  putItems: [author1],
  deleteItems: [AuthorModel.key({ id: author2.id })]
})
```
If you'd like to batch put/delete records in an atomic transaction, you can use `batchWriteWithTransaction`. All operations will either succeed, or fail together.
```typescript
await beyonce.batchWriteWithTransaction({
  putItems: [author1],
  deleteItems: [AuthorModel.key({ id: author2.id })]
})
```
You can also pass a string `clientRequestToken` to `batchWriteWithTransaction` to force your operations to be idempotent, per the AWS docs.
Beyonce supports consistent reads via an optional parameter on `get`, `batchGet` and `query`, e.g. `get(..., { consistentRead: true })`.
And if you'd like to always make consistent reads by default, you can set this as the default when you create a Beyonce instance:
```typescript
new Beyonce(table, dynamo, { consistentReads: true })
```
Note: when you enable `consistentReads` on a Beyonce instance, you can still override it on a per-operation basis by setting the method-level `consistentRead` option.
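The precedence can be sketched as a tiny resolution rule (the helper below is hypothetical, not library code): a per-operation setting wins when present, otherwise the instance default applies.

```typescript
// Hypothetical sketch of the consistent-read precedence described above:
// a per-operation consistentRead overrides the instance-level default.
function resolveConsistentRead(
  instanceDefault: boolean,
  perOperation?: boolean
): boolean {
  return perOperation !== undefined ? perOperation : instanceDefault
}

// Instance default true, operation overrides to false
const overridden = resolveConsistentRead(true, false) // => false

// No per-operation setting: instance default applies
const inherited = resolveConsistentRead(true) // => true
```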
When using DynamoDB, you often want to "pre-compute" joins by sticking a set of heterogeneous models into the same table, under the same partition key. This allows for retrieving related records using a single query instead of N. Unfortunately most existing DynamoDB libraries, like DynamoDBMapper, don't support this use case, as they follow the SQL convention of sticking each model into a separate table.
For example, we might want to fetch an `Author` + all their `Book`s in a single query. And we'd accomplish that by sticking both models under the same partition key, e.g. `author-${id}`.
AWS's guidelines take this to the extreme:

> ...most well-designed applications require only one table
Keep in mind that the primary reason they recommend this is to avoid forcing the application layer to perform in-memory joins. Due to Amazon's scale, they are highly motivated to minimize the number of roundtrip db calls.
You are probably not Amazon scale. And thus probably don't need to shove everything into a single table.
But you might want to keep a few related models in the same table, under the same partition key and fetch those models in a type-safe way. Beyonce makes that easy.
You can enable AWS XRay tracing like so:
```typescript
const beyonce = new Beyonce(
  LibraryTable,
  dynamo,
  { xRayTracingEnabled: true }
)
```
No vulnerabilities found.