Detailed insights and metrics for pgsql-test
Install: npm install pgsql-test
Languages: PLpgSQL (46.29%), TypeScript (45.96%), JavaScript (5.21%), CSS (1.44%), Rich Text Format (0.39%), Makefile (0.32%), Shell (0.23%), Dockerfile (0.13%), SCSS (0.01%), Vim Script (0.01%)
MIT License · 2 Stars · 1,440 Commits · 1 Watcher · 30 Branches · 2 Contributors · Updated on Jul 15, 2025
Latest Version: 2.5.0 · Package Id: pgsql-test@2.5.0
Unpacked Size: 72.80 kB · Size: 17.55 kB · File Count: 42
NPM Version: lerna/4.0.0/node@v22.13.1+arm64 (darwin) · Node Version: 22.13.1 · Published on Jul 15, 2025
pgsql-test
gives you instant, isolated PostgreSQL databases for each test — with automatic transaction rollbacks, context switching, and clean seeding. Forget flaky tests and brittle environments. Write real SQL. Get real coverage. Stay fast.
```sh
npm install pgsql-test
```
- Per-test context switching with .setContext()
- Seeding from .sql files, programmatic seeds, or even loaded fixtures
- Works with Jest, Mocha, etc.

Part of the LaunchQL ecosystem, pgsql-test is built to pair seamlessly with our TypeScript-based Sqitch engine rewrite.
getConnections()
```ts
import { getConnections } from 'pgsql-test';

let db, teardown;

beforeAll(async () => {
  ({ db, teardown } = await getConnections());
  await db.query(`SELECT 1`); // ✅ Ready to run queries
});

afterAll(() => teardown());
```
getConnections() Overview

```ts
import { getConnections } from 'pgsql-test';

// Complete object destructuring
const { pg, db, admin, teardown, manager } = await getConnections();

// Most common pattern
const { db, teardown } = await getConnections();
```
The getConnections() helper sets up a fresh PostgreSQL test database and returns a structured object with:

- pg: a PgTestClient connected as the root or superuser — useful for administrative setup or introspection
- db: a PgTestClient connected as the app-level user — used for running tests with RLS and granted permissions
- admin: a DbAdmin utility for managing database state, extensions, roles, and templates
- teardown(): a function that shuts down the test environment and database pool
- manager: a shared connection pool manager (PgTestConnector) behind both clients

Together, these allow fast, isolated, role-aware test environments with per-test rollback and full control over setup and teardown.
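For instance, a minimal sketch (the accounts table and its grant are illustrative, not part of the library) that uses the superuser client for privileged setup and the app-level client for the test itself:

```ts
import { getConnections } from 'pgsql-test';

let pg, db, teardown;

beforeAll(async () => {
  ({ pg, db, teardown } = await getConnections());

  // Privileged setup through the root/superuser client
  await pg.query(`
    CREATE TABLE accounts (id SERIAL PRIMARY KEY, owner TEXT NOT NULL);
    GRANT SELECT ON accounts TO PUBLIC; -- keep the sketch simple: any role may read it
    INSERT INTO accounts (owner) VALUES ('alice');
  `);
});

afterAll(() => teardown());

test('app-level client can read the seeded row', async () => {
  // Unprivileged access goes through the app-level client
  const res = await db.query('SELECT COUNT(*) FROM accounts');
  expect(res.rows[0].count).toBe('1');
});
```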
The PgTestClient returned by getConnections() is a fully-featured wrapper around pg.Pool.
PgTestClient API Overview

```ts
let pg: PgTestClient;
let teardown: () => Promise<void>;

beforeAll(async () => {
  ({ pg, teardown } = await getConnections());
});

beforeEach(() => pg.beforeEach());
afterEach(() => pg.afterEach());
afterAll(() => teardown());
```
The PgTestClient returned by getConnections() wraps a pg.Client and provides convenient helpers for query execution, test isolation, and context switching:
- query(sql, values?) – Run a raw SQL query and get the QueryResult
- beforeEach() – Begins a transaction and sets a savepoint (called at the start of each test)
- afterEach() – Rolls back to the savepoint and commits the outer transaction (cleans up test state)
- setContext({ key: value }) – Sets PostgreSQL config variables (like role) to simulate RLS contexts
- any, one, oneOrNone, many, manyOrNone, none, result – Typed query helpers for specific result expectations

These methods make it easier to build expressive and isolated integration tests with strong typing and error handling.
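As a quick illustration of the typed helpers, here is a sketch that reuses the seeded users table from the complete example below and assumes the helpers accept the same (sql, values?) arguments as query() and follow the semantics their names suggest (one() expects exactly one row, manyOrNone() tolerates an empty result):

```ts
test('typed query helpers', async () => {
  // one(): exactly one row expected, resolves to that row object
  const alice = await db.one('SELECT id, name FROM users WHERE name = $1', ['Alice']);
  expect(alice.name).toBe('Alice');

  // manyOrNone(): zero or more rows, resolves to an array
  const nobody = await db.manyOrNone('SELECT * FROM users WHERE name = $1', ['Nobody']);
  expect(nobody).toEqual([]);
});
```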
A complete example using the PgTestClient with savepoint-based rollback per test:
```ts
import { getConnections } from 'pgsql-test';

let db; // A fully wrapped PgTestClient using pg.Pool with savepoint-based rollback per test
let teardown;

beforeAll(async () => {
  ({ db, teardown } = await getConnections());

  await db.query(`
    CREATE TABLE users (id SERIAL PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id SERIAL PRIMARY KEY, user_id INT REFERENCES users(id), content TEXT);

    INSERT INTO users (name) VALUES ('Alice'), ('Bob');
    INSERT INTO posts (user_id, content) VALUES (1, 'Hello world!'), (2, 'Graphile is cool!');
  `);
});

afterAll(() => teardown());

beforeEach(() => db.beforeEach());
afterEach(() => db.afterEach());

test('user count starts at 2', async () => {
  const res = await db.query('SELECT COUNT(*) FROM users');
  expect(res.rows[0].count).toBe('2');
});
```
The pgsql-test framework provides powerful tools to simulate authentication contexts during tests, which is particularly useful when testing Row-Level Security (RLS) policies.
Use setContext() to simulate different user roles and JWT claims:
```ts
db.setContext({
  role: 'authenticated',
  'jwt.claims.user_id': '123',
  'jwt.claims.org_id': 'acme'
});
```
This applies the settings using SET LOCAL statements, ensuring they persist only for the current transaction and maintain proper isolation between tests.
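As a point of reference, the context above amounts to transaction-scoped settings like the following (a sketch of equivalent SQL; the library issues these statements for you):

```sql
-- Equivalent transaction-scoped settings; set_config(name, value, true) behaves
-- like SET LOCAL, so the values reset automatically at commit or rollback.
SELECT set_config('role', 'authenticated', true);
SELECT set_config('jwt.claims.user_id', '123', true);
SELECT set_config('jwt.claims.org_id', 'acme', true);
```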
```ts
describe('authenticated role', () => {
  beforeEach(async () => {
    db.setContext({ role: 'authenticated' });
    await db.beforeEach();
  });

  afterEach(() => db.afterEach());

  it('runs as authenticated', async () => {
    const res = await db.query(`SELECT current_setting('role', true) AS role`);
    expect(res.rows[0].role).toBe('authenticated');
  });
});
```
For non-superuser testing, use the connection options described in the options section. The db.connection property allows you to customize the non-privileged user account for your tests.
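For example, a sketch that overrides the app-level connection when creating the test database (the values are illustrative; see the getConnections() options below):

```ts
const { db, teardown } = await getConnections({
  db: {
    connection: { user: 'test_user', password: 'secret', role: 'authenticated' }
  }
});
```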
Use setContext() to simulate Role-Based Access Control (RBAC) during tests. This is useful when testing Row-Level Security (RLS) policies. Your actual server should manage role/user claims via secure tokens (e.g., setting current_setting('jwt.claims.user_id')), but this interface helps emulate those behaviors in test environments.
This approach enables testing various access patterns.
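For example, a sketch of an owner-only access test, assuming a hypothetical posts table protected by an RLS policy keyed on jwt.claims.user_id:

```ts
describe('owner-only access (hypothetical posts policy)', () => {
  beforeEach(async () => {
    db.setContext({
      role: 'authenticated',
      'jwt.claims.user_id': '1' // pretend to be user 1
    });
    await db.beforeEach();
  });

  afterEach(() => db.afterEach());

  it('sees only its own posts', async () => {
    // Assumes a policy along the lines of:
    //   USING (user_id = current_setting('jwt.claims.user_id')::int)
    const res = await db.query('SELECT * FROM posts');
    res.rows.forEach(row => expect(row.user_id).toBe(1));
  });
});
```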
Note: While this interface helps simulate RBAC for testing, your production server should manage user/role claims via secure authentication tokens, typically by setting values like current_setting('jwt.claims.user_id') through proper authentication middleware.
The second argument to getConnections() is an optional array of SeedAdapter objects:
```ts
const { db, teardown } = await getConnections(getConnectionOptions, seedAdapters);
```
This array lets you fully customize how your test database is seeded. You can compose multiple strategies:
- seed.sqlfile() – Execute raw .sql files from disk
- seed.fn() – Run JavaScript/TypeScript logic to programmatically insert data
- seed.csv() – Load tabular data from CSV files
- seed.json() – Use in-memory objects as seed data
- seed.sqitch() – Deploy a Sqitch-compatible migration project
- seed.launchql() – Apply a LaunchQL module using deployFast() (compatible with sqitch)

✨ Default Behavior: If no SeedAdapter[] is passed, LaunchQL seeding is assumed. This makes pgsql-test zero-config for LaunchQL-based projects.
This composable system allows you to mix-and-match data setup strategies for flexible, realistic, and fast database tests.
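As an illustration, a sketch that composes a SQL schema file with a programmatic seed (file and table names are placeholders):

```ts
import path from 'path';
import { getConnections, seed } from 'pgsql-test';

let db;
let teardown;

beforeAll(async () => {
  ({ db, teardown } = await getConnections({}, [
    // 1. Apply the schema from disk
    seed.sqlfile([path.join(__dirname, 'sql', 'schema.sql')]),
    // 2. Layer programmatic fixtures on top
    seed.fn(async ({ pg }) => {
      await pg.query(`INSERT INTO users (name) VALUES ('Composed Seed User');`);
    })
  ]));
});

afterAll(() => teardown());
```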
Use .sql files to set up your database state before tests:
```ts
import path from 'path';
import { getConnections, seed } from 'pgsql-test';

const sql = (f: string) => path.join(__dirname, 'sql', f);

let db;
let teardown;

beforeAll(async () => {
  ({ db, teardown } = await getConnections({}, [
    seed.sqlfile([
      sql('schema.sql'),
      sql('fixtures.sql')
    ])
  ]));
});

afterAll(async () => {
  await teardown();
});
```
Use JavaScript functions to insert seed data:
```ts
import { getConnections, seed } from 'pgsql-test';

let db;
let teardown;

beforeAll(async () => {
  ({ db, teardown } = await getConnections({}, [
    seed.fn(async ({ pg }) => {
      await pg.query(`
        INSERT INTO users (name) VALUES ('Seeded User');
      `);
    })
  ]));
});
```
You can load tables from CSV files using seed.csv({ ... }). CSV headers must match the table column names exactly. This is useful for loading stable fixture data for integration tests or CI environments.
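For reference, a users.csv matching the users table created below might look like this (hypothetical contents; the header row must match the column names):

```csv
id,name
1,Alice
2,Bob
```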
```ts
import path from 'path';
import { getConnections, seed } from 'pgsql-test';

const csv = (file: string) => path.resolve(__dirname, '../csv', file);

let db;
let teardown;

beforeAll(async () => {
  ({ db, teardown } = await getConnections({}, [
    // Create schema
    seed.fn(async ({ pg }) => {
      await pg.query(`
        CREATE TABLE users (
          id SERIAL PRIMARY KEY,
          name TEXT NOT NULL
        );

        CREATE TABLE posts (
          id SERIAL PRIMARY KEY,
          user_id INT REFERENCES users(id),
          content TEXT NOT NULL
        );
      `);
    }),
    // Load from CSV
    seed.csv({
      users: csv('users.csv'),
      posts: csv('posts.csv')
    }),
    // Adjust SERIAL sequences to avoid conflicts
    seed.fn(async ({ pg }) => {
      await pg.query(`SELECT setval(pg_get_serial_sequence('users', 'id'), (SELECT MAX(id) FROM users));`);
      await pg.query(`SELECT setval(pg_get_serial_sequence('posts', 'id'), (SELECT MAX(id) FROM posts));`);
    })
  ]));
});

afterAll(() => teardown());

it('has loaded rows', async () => {
  const res = await db.query('SELECT COUNT(*) FROM users');
  expect(+res.rows[0].count).toBeGreaterThan(0);
});
```
You can seed tables using in-memory JSON objects. This is useful when you want fast, inline fixtures without managing external files.
```ts
import { getConnections, seed } from 'pgsql-test';

let db;
let teardown;

beforeAll(async () => {
  ({ db, teardown } = await getConnections({}, [
    // Create schema
    seed.fn(async ({ pg }) => {
      await pg.query(`
        CREATE SCHEMA custom;
        CREATE TABLE custom.users (
          id SERIAL PRIMARY KEY,
          name TEXT NOT NULL
        );

        CREATE TABLE custom.posts (
          id SERIAL PRIMARY KEY,
          user_id INT REFERENCES custom.users(id),
          content TEXT NOT NULL
        );
      `);
    }),
    // Seed with in-memory JSON
    seed.json({
      'custom.users': [
        { id: 1, name: 'Alice' },
        { id: 2, name: 'Bob' }
      ],
      'custom.posts': [
        { id: 1, user_id: 1, content: 'Hello world!' },
        { id: 2, user_id: 2, content: 'Graphile is cool!' }
      ]
    }),
    // Fix SERIAL sequences
    seed.fn(async ({ pg }) => {
      await pg.query(`SELECT setval(pg_get_serial_sequence('custom.users', 'id'), (SELECT MAX(id) FROM custom.users));`);
      await pg.query(`SELECT setval(pg_get_serial_sequence('custom.posts', 'id'), (SELECT MAX(id) FROM custom.posts));`);
    })
  ]));
});

afterAll(() => teardown());

it('has loaded rows', async () => {
  const res = await db.query('SELECT COUNT(*) FROM custom.users');
  expect(+res.rows[0].count).toBeGreaterThan(0);
});
```
Note: While compatible with Sqitch syntax, LaunchQL uses its own high-performance TypeScript-based deploy engine, which we encourage using for Sqitch projects.
You can seed your test database from a Sqitch project, with significantly improved performance, by leveraging LaunchQL's TypeScript deployment engine:
```ts
import path from 'path';
import { getConnections, seed } from 'pgsql-test';

const cwd = path.resolve(__dirname, '../path/to/sqitch');

let db;
let teardown;

beforeAll(async () => {
  ({ db, teardown } = await getConnections({}, [
    seed.sqitch(cwd)
  ]));
});
```
This works for any Sqitch-compatible module, now accelerated by LaunchQL's deployment tooling.
If your project uses LaunchQL modules with a precompiled launchql.plan, you can use pgsql-test with zero configuration. Just call getConnections() — and it just works:
```ts
import { getConnections } from 'pgsql-test';

let db, teardown;

beforeAll(async () => {
  ({ db, teardown } = await getConnections()); // 🚀 LaunchQL deployFast() is used automatically - up to 10x faster than traditional Sqitch!
});
```
This works out of the box because pgsql-test uses the high-speed deployFast() function by default, applying any compiled LaunchQL schema located in the current working directory (process.cwd()).
If you want to specify a custom path to your LaunchQL module, use seed.launchql() explicitly:
```ts
import path from 'path';
import { getConnections, seed } from 'pgsql-test';

const cwd = path.resolve(__dirname, '../path/to/launchql');

let db;
let teardown;

beforeAll(async () => {
  ({ db, teardown } = await getConnections({}, [
    seed.launchql(cwd) // uses deployFast() - up to 10x faster than traditional Sqitch!
  ]));
});
```
LaunchQL provides the best of both worlds: by maintaining Sqitch compatibility while supercharging performance, it enables you to keep your existing migration patterns while enjoying the speed benefits of our TypeScript engine.
getConnections() Options

The following tables document the available options for the getConnections() function. The options are passed as a combination of pg and db configuration objects.
db Options (PgTestConnectionOptions)

| Option | Type | Default | Description |
|---|---|---|---|
| db.extensions | string[] | [] | Array of PostgreSQL extensions to include in the test database |
| db.cwd | string | process.cwd() | Working directory used for LaunchQL/Sqitch projects |
| db.connection.user | string | 'app_user' | User for simulating RLS via setContext() |
| db.connection.password | string | 'app_password' | Password for RLS test user |
| db.connection.role | string | 'anonymous' | Default role used during setContext() |
| db.template | string | undefined | Template database used for faster test DB creation |
| db.rootDb | string | 'postgres' | Root database used for administrative operations (e.g., creating databases) |
| db.prefix | string | 'db-' | Prefix used when generating test database names |
pg Options (PgConfig)

Environment variables will override these options when available: PGHOST, PGPORT, PGUSER, PGPASSWORD, PGDATABASE

| Option | Type | Default | Description |
|---|---|---|---|
| pg.user | string | 'postgres' | Superuser for PostgreSQL |
| pg.password | string | 'password' | Password for the PostgreSQL superuser |
| pg.host | string | 'localhost' | Hostname for PostgreSQL |
| pg.port | number | 5423 | Port for PostgreSQL |
| pg.database | string | 'postgres' | Default database used when connecting initially |
```ts
const { conn, db, teardown } = await getConnections({
  pg: { user: 'postgres', password: 'secret' },
  db: {
    extensions: ['uuid-ossp'],
    cwd: '/path/to/project',
    connection: { user: 'test_user', password: 'secret', role: 'authenticated' },
    template: 'test_template',
    prefix: 'test_',
    rootDb: 'postgres'
  }
});
```
The broader LaunchQL ecosystem also includes tooling that injects session context (via SET LOCAL) into queries—ideal for setting role, jwt.claims, and other session settings; parses SQL into parse trees using libpg_query; and builds SELECT, INSERT, UPDATE, DELETE, and stored procedure calls, with support for advanced SQL features like JOIN, GROUP BY, and schema-qualified queries.

AS DESCRIBED IN THE LICENSES, THE SOFTWARE IS PROVIDED "AS IS", AT YOUR OWN RISK, AND WITHOUT WARRANTIES OF ANY KIND.
No developer or entity involved in creating this software will be liable for any claims or damages whatsoever associated with your use, inability to use, or your interaction with other users of the code, including any direct, indirect, incidental, special, exemplary, punitive or consequential damages, or loss of profits, cryptocurrencies, tokens, or anything else of value.
No vulnerabilities found.

OpenSSF Scorecard (last scanned on 2025-07-07):

- no dangerous workflow patterns detected
- 30 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 10
- no binaries found in the repo
- license file detected
- detected GitHub workflow tokens with excessive permissions
- Found 0/11 approved changesets -- score normalized to 0
- no effort to earn an OpenSSF best practices badge detected
- security policy file not detected
- project is not fuzzed
- branch protection not enabled on development/release branches
- dependency not pinned by hash detected -- score normalized to 0
- SAST tool is not run on all commits -- score normalized to 0
- 11 existing vulnerabilities detected

The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.