Gathering detailed insights and metrics for @netlify/blobs
Package stats (updated 25 Nov 2024): overall score 71.6 (Supply Chain 99.6, Quality 91.1, Maintenance 100, Vulnerability 100). GitHub: 16 stars, 221 commits, 5 forks, 4 watching, 28 branches, 14 contributors. Languages: TypeScript (99.06%), JavaScript (0.94%).

Downloads: 38,297 last day (-0.5% vs. previous day), 234,820 last week (+5.8%), 996,788 last month (-2.8%), 8,476,985 last year (+2,591%).
A TypeScript client for Netlify Blobs.
You can install @netlify/blobs via npm:

```sh
npm install @netlify/blobs
```
To start reading and writing data, you must first get a reference to a store using the `getStore` method. This method takes an options object that lets you configure the store for different access modes.
Rather than explicitly passing the configuration context to the `getStore` method, it can be read from the execution environment. This is particularly useful for setups where the configuration data is held by one system and the data needs to be accessed in another system, with no direct communication between the two.

To do this, the system that holds the configuration data should set a global variable called `netlifyBlobsContext` or an environment variable called `NETLIFY_BLOBS_CONTEXT` with a Base64-encoded, JSON-stringified representation of an object with the following properties:
- `apiURL` (optional) or `edgeURL`: URL of the Netlify API (for API access) or the edge endpoint (for Edge access)
- `token`: Access token for the corresponding access mode
- `siteID`: ID of the Netlify site

This data is automatically populated by Netlify in the execution environment for both serverless and edge functions.
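As an illustration of what the configuring system might do, here is a sketch of building that environment variable by hand. The URL, token, and site ID values are placeholders, not real credentials; in a Netlify environment this variable is populated for you.

```typescript
// Sketch: constructing NETLIFY_BLOBS_CONTEXT manually. All values below
// are placeholders; in practice Netlify populates this variable for you.
const context = {
  edgeURL: 'https://blobs.example.com', // placeholder edge endpoint
  token: 'MY_TOKEN',
  siteID: 'MY_SITE_ID',
}

// The client expects a Base64-encoded, JSON-stringified object
process.env.NETLIFY_BLOBS_CONTEXT = Buffer.from(JSON.stringify(context)).toString('base64')
```

Any process that later calls `getStore('my-store')` in this environment would pick up the configuration without an explicit options object.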
With this in place, the `getStore` method can be called just with the store name. No configuration object is required, since it'll be read from the environment.
```js
import { getStore } from '@netlify/blobs'

const store = getStore('my-store')

console.log(await store.get('my-key'))
```
The environment is not configured automatically when running functions in the Lambda compatibility mode. To use Netlify Blobs, you must initialize the environment manually by calling the `connectLambda` method with the Lambda event as a parameter. You should call this method immediately before calling `getStore` or `getDeployStore`.
```js
import { connectLambda, getStore } from '@netlify/blobs'

export const handler = async (event) => {
  connectLambda(event)

  const store = getStore('my-store')
  const value = await store.get('my-key')

  return {
    statusCode: 200,
    body: value,
  }
}
```
You can interact with the blob store through the Netlify API. This is the recommended method if you need strongly-consistent access to data and latency is not mission-critical, since requests will always go to a single, non-distributed origin.
Create a store for API access by calling `getStore` with the following parameters:

- `name` (string): Name of the store
- `siteID` (string): ID of the Netlify site
- `token` (string): Personal access token to access the Netlify API
- `apiURL` (string): URL of the Netlify API (optional, defaults to `https://api.netlify.com`)

```js
import { getStore } from '@netlify/blobs'

const store = getStore({
  name: 'my-store',
  siteID: 'MY_SITE_ID',
  token: 'MY_TOKEN',
})

console.log(await store.get('some-key'))
```
You can also interact with the blob store using a distributed network that caches entries at the edge. This is the recommended method if you're looking for fast reads across multiple locations, knowing that reads will be eventually-consistent with a drift of up to 60 seconds.
Create a store for edge access by calling `getStore` with the following parameters:

- `name` (string): Name of the store
- `siteID` (string): ID of the Netlify site
- `token` (string): Access token to the edge endpoint
- `edgeURL` (string): URL of the edge endpoint

```js
import { Buffer } from 'node:buffer'

import { getStore } from '@netlify/blobs'

// Serverless function using the Lambda compatibility mode
export const handler = async (event) => {
  const rawData = Buffer.from(event.blobs, 'base64')
  const data = JSON.parse(rawData.toString('ascii'))
  const store = getStore({
    edgeURL: data.url,
    name: 'my-store',
    token: data.token,
    siteID: 'MY_SITE_ID',
  })
  const item = await store.get('some-key')

  return {
    statusCode: 200,
    body: item,
  }
}
```
By default, stores exist at the site level, which means that data can be read and written across different deploys and deploy contexts. Users are responsible for managing that data, since the platform doesn't have enough information to know whether an item is still relevant or safe to delete.
But sometimes it's useful to have data pegged to a specific deploy, and shift to the platform the responsibility of managing that data — keep it as long as the deploy is around, and wipe it if the deploy is deleted.
You can opt in to this behavior by creating the store using the `getDeployStore` method.
```js
import assert from 'node:assert'

import { getDeployStore } from '@netlify/blobs'

// Using API access
const store1 = getDeployStore({
  deployID: 'MY_DEPLOY_ID',
  token: 'MY_API_TOKEN',
})

await store1.set('my-key', 'my value')

// Using environment-based configuration
const store2 = getDeployStore()

assert.equal(await store2.get('my-key'), 'my value')
```
fetch
The client uses the web platform `fetch()` to make HTTP calls. By default, it will use any globally-defined instance of `fetch`, but you can choose to provide your own. You can do this by supplying a `fetch` property to the `getStore` method.
```js
import { fetch } from 'whatwg-fetch'

import { getStore } from '@netlify/blobs'

const store = getStore({
  fetch,
  name: 'my-store',
})

console.log(await store.get('my-key'))
```
get(key: string, { type?: string }): Promise<any>
Retrieves an object with the given key.

Depending on the most convenient format for you to access the value, you may choose to supply a `type` property as a second parameter, with one of the following values:

- `arrayBuffer`: Returns the entry as an `ArrayBuffer`
- `blob`: Returns the entry as a `Blob`
- `json`: Parses the entry as JSON and returns the resulting object
- `stream`: Returns the entry as a `ReadableStream`
- `text` (default): Returns the entry as a string of plain text

If an object with the given key is not found, `null` is returned.
```js
const entry = await store.get('some-key', { type: 'json' })

console.log(entry)
```
getWithMetadata(key: string, { etag?: string, type?: string }): Promise<{ data: any, etag: string, metadata: object }>
Retrieves an object with the given key, the ETag value for the entry, and any metadata that has been stored with the entry.

Depending on the most convenient format for you to access the value, you may choose to supply a `type` property as a second parameter, with one of the following values:

- `arrayBuffer`: Returns the entry as an `ArrayBuffer`
- `blob`: Returns the entry as a `Blob`
- `json`: Parses the entry as JSON and returns the resulting object
- `stream`: Returns the entry as a `ReadableStream`
- `text` (default): Returns the entry as a string of plain text

If an object with the given key is not found, `null` is returned.
```js
const blob = await store.getWithMetadata('some-key', { type: 'json' })

console.log(blob.data, blob.etag, blob.metadata)
```
The `etag` input parameter lets you implement conditional requests, where the blob is only returned if it differs from a version you have previously obtained.
```js
// Mock implementation of a system for locally persisting blobs and their etags
const cachedETag = getFromMockCache('some-key')

// Get entry from the blob store only if its ETag is different from the one you
// have locally, which means the entry has changed since you last obtained it
const { data, etag } = await store.getWithMetadata('some-key', { etag: cachedETag })

if (etag === cachedETag) {
  // `data` is `null` because the local blob is fresh
} else {
  // `data` contains the new blob, store it locally alongside the new ETag
  writeInMockCache('some-key', data, etag)
}
```
getMetadata(key: string, { etag?: string, type?: string }): Promise<{ etag: string, metadata: object }>
Retrieves any metadata associated with a given key and its ETag value.
If an object with the given key is not found, `null` is returned.
This method can be used to check whether a key exists without having to actually retrieve it and transfer a potentially-large blob.
```js
const blob = await store.getMetadata('some-key')

console.log(blob.etag, blob.metadata)
```
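Since `getMetadata` resolves to `null` for a missing key, an existence check can be built on top of it. Below is a minimal sketch: `blobExists` is a hypothetical helper, not part of the library, and the stub object stands in for a real store from `getStore`.

```typescript
type MetadataResult = { etag: string; metadata: object } | null

// Hypothetical helper (not part of @netlify/blobs): a key exists exactly
// when getMetadata() resolves to a non-null value. Works with any object
// exposing getMetadata(), such as a store returned by getStore().
async function blobExists(
  store: { getMetadata: (key: string) => Promise<MetadataResult> },
  key: string,
): Promise<boolean> {
  return (await store.getMetadata(key)) !== null
}

// In-memory stub standing in for a real store
const stubStore = {
  getMetadata: async (key: string): Promise<MetadataResult> =>
    key === 'some-key' ? { etag: 'etag1', metadata: {} } : null,
}

;(async () => {
  console.log(await blobExists(stubStore, 'some-key')) // true
  console.log(await blobExists(stubStore, 'missing-key')) // false
})()
```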
set(key: string, value: ArrayBuffer | Blob | string, { metadata?: object }): Promise<void>
Creates an object with the given key and value.
If an entry with the given key already exists, its value is overwritten.
```js
await store.set('some-key', 'This is a string value')
```
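The `metadata` option from the signature above is not shown in the example. The sketch below demonstrates the round-trip shape of the call using an in-memory stand-in for the store; the stand-in and the metadata field names are illustrative, not the real client.

```typescript
// In-memory stand-in mimicking the set()/getWithMetadata() shapes from the
// signatures above; illustrative only, not the real @netlify/blobs client.
const entries = new Map<string, { value: string; metadata: object }>()

const store = {
  async set(key: string, value: string, options?: { metadata?: object }) {
    entries.set(key, { value, metadata: options?.metadata ?? {} })
  },
  async getWithMetadata(key: string) {
    const entry = entries.get(key)
    return entry ? { data: entry.value, etag: 'etag1', metadata: entry.metadata } : null
  },
}

;(async () => {
  // Attach arbitrary metadata (an object) alongside the value...
  await store.set('some-key', 'This is a string value', {
    metadata: { uploadedBy: 'cli', version: 2 },
  })

  // ...and read it back later with getWithMetadata() or getMetadata()
  const blob = await store.getWithMetadata('some-key')
  console.log(blob?.metadata) // { uploadedBy: 'cli', version: 2 }
})()
```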
setJSON(key: string, value: any, { metadata?: object }): Promise<void>
Convenience method for creating a JSON-serialized object with the given key.
If an entry with the given key already exists, its value is overwritten.
```js
await store.setJSON('some-key', {
  foo: 'bar',
})
```
delete(key: string): Promise<void>
Deletes an object with the given key, if one exists. The return value is always `undefined`, regardless of whether or not there was an object to delete.
```js
await store.delete('my-key')
```
list(options?: { directories?: boolean, paginate?: boolean, prefix?: string }): Promise<{ blobs: BlobResult[], directories: string[] }> | AsyncIterable<{ blobs: BlobResult[], directories: string[] }>
Returns a list of blobs in a given store.
```js
const { blobs } = await store.list()

// [ { etag: 'etag1', key: 'some-key' }, { etag: 'etag2', key: 'another-key' } ]
console.log(blobs)
```
To filter down the entries that should be returned, an optional `prefix` parameter can be supplied. When used, only the entries whose key starts with that prefix are returned.
```js
const { blobs } = await store.list({ prefix: 'some' })

// [ { etag: 'etag1', key: 'some-key' } ]
console.log(blobs)
```
Optionally, you can choose to group blobs together under a common prefix and then browse them hierarchically when listing a store, just like grouping files in a directory. To do this, use the `/` character in your keys to group them into directories.
Take the following list of keys as an example:
- `cats/garfield.jpg`
- `cats/tom.jpg`
- `mice/jerry.jpg`
- `mice/mickey.jpg`
- `pink-panther.jpg`
By default, calling `store.list()` will return all five keys.
```js
const { blobs } = await store.list()

// [
//   { etag: "etag1", key: "cats/garfield.jpg" },
//   { etag: "etag2", key: "cats/tom.jpg" },
//   { etag: "etag3", key: "mice/jerry.jpg" },
//   { etag: "etag4", key: "mice/mickey.jpg" },
//   { etag: "etag5", key: "pink-panther.jpg" },
// ]
console.log(blobs)
```
But if you want to list entries hierarchically, use the `directories` parameter.
```js
const { blobs, directories } = await store.list({ directories: true })

// [ { etag: "etag1", key: "pink-panther.jpg" } ]
console.log(blobs)

// [ "cats", "mice" ]
console.log(directories)
```
To drill down into a directory and get a list of its items, you can use the directory name as the `prefix` value.
```js
const { blobs, directories } = await store.list({ directories: true, prefix: 'cats/' })

// [ { etag: "etag1", key: "cats/garfield.jpg" }, { etag: "etag2", key: "cats/tom.jpg" } ]
console.log(blobs)

// [ ]
console.log(directories)
```
Note that we're only interested in entries under the `cats` directory, which is why we're using a trailing slash. Without it, other keys like `catsuit` would also match.
For performance reasons, the server groups results into pages of up to 1,000 entries. By default, the `list()` method automatically retrieves all pages, meaning you'll always get the full list of results.

If you'd like to handle this pagination manually, you can supply the `paginate` parameter, which makes `list()` return an `AsyncIterator`.
```js
const blobs = []

for await (const entry of store.list({ paginate: true })) {
  blobs.push(...entry.blobs)
}

// [
//   { etag: "etag1", key: "cats/garfield.jpg" },
//   { etag: "etag2", key: "cats/tom.jpg" },
//   { etag: "etag3", key: "mice/jerry.jpg" },
//   { etag: "etag4", key: "mice/mickey.jpg" },
//   { etag: "etag5", key: "pink-panther.jpg" },
// ]
console.log(blobs)
```
We provide a Node.js server that implements the Netlify Blobs server interface backed by the local filesystem. This is useful if you want to write automated tests that involve the Netlify Blobs API without interacting with a live store.
The `BlobsServer` export lets you construct and initialize a server. You can then use its address to initialize a store.
```js
import { BlobsServer, getStore } from '@netlify/blobs'

// Choose any token for protecting your local server from
// extraneous requests
const token = 'some-token'

// Create a server by providing a local directory where all
// blobs and metadata should be persisted
const server = new BlobsServer({
  directory: '/path/to/blobs/directory',
  port: 1234,
  token,
})

await server.start()

// Get a store and provide the address of the local server
const store = getStore({
  edgeURL: 'http://localhost:1234',
  name: 'my-store',
  token,
})

await store.set('my-key', 'This is a local blob')

console.log(await store.get('my-key'))
```
Contributions are welcome! If you encounter any issues or have suggestions for improvements, please open an issue or submit a pull request on the GitHub repository.
Netlify Blobs is open-source software licensed under the MIT license.
No vulnerabilities found.