Gathering detailed insights and metrics for next-sql
Apache-2.0 License
4 Stars, 130 Commits, 1 Fork, 2 Watchers, 2 Branches, 1 Contributor
Language: JavaScript (100%)
Updated on May 22, 2025

Latest Version: 1.0.2
Package Id: next-sql@1.0.2
Unpacked Size: 129.66 kB
Size: 28.65 kB
File Count: 31
NPM Version: 9.3.1
Node Version: 18.14.0
Published on: Nov 03, 2023
The project is still in the pre-alpha stage.
🏃🏻♂️ Work in progress... 💪🏻
For more details, please see the roadmap.

next-sql is a next-generation relational database connector.
🏃🏻♂️ Work in progress...
See our roadmap
- Module customization
- To support more databases in the future, such as Postgres, MSSQL, MariaDB, SQLite3, Oracle, Amazon Redshift
- To support One from Many
```bash
npm i -S next-sql
```

OR

```bash
yarn add next-sql
```
⚠️⚠️⚠️ Serverless Reminder ⚠️⚠️⚠️
To optimize your serverless setup, consider using a service like PlanetScale that allows for HTTP-based connections. By doing so, you can avoid encountering numerous connection issues on your database server.
⚠️⚠️⚠️ Edge Runtime Reminder ⚠️⚠️⚠️
Since the original MySQL connection is socket-based, avoid using packages like mysql or mysql2. Instead, opt for a solution like PlanetScale that enables HTTP-based connections, ensuring compatibility with your edge runtime environment.
We will pass your config into mysql / mysql2 / database-js directly.
You can find more details at the following links:
https://github.com/mysqljs/mysql#connection-options
https://github.com/mysqljs/mysql#pool-options
https://github.com/sidorares/node-mysql2#using-connection-pools
https://github.com/planetscale/database-js#usage
Options:
- All config at this level will be applied to each host.
- These config options are the same as the mysql connection options and pool options.
- default: The default key of hosts
- hosts:
  - key: The key of this host
  - value: The config of this host only; all config at this level will override the default config
```js
const xsql = require('next-sql')
require('next-sql/clients/mysql2')
require('next-sql/clients/database-js')

// It will create a PoolCluster for each host.
xsql.init({
  // Each connection created will use the following default config
  port: 3306,
  connectionLimit: 5,
  waitForConnections: true,
  acquireTimeout: 120000,
  timeout: 120000,
  charset: 'utf8mb4',
  default: 'staging', // <- The default host id

  // Configs for each host
  hosts: {
    // At least one host config is required
    // The required default host id here
    staging: {
      client: 'database-js', // <- Required
      host: 'example.com',
      user: 'username',
      password: 'password',
      database: 'dbname',
    },
    // Another host id
    dev: {
      client: 'mysql2', // <- Required
      host: 'example2.com',
      user: 'username',
      password: 'password',
      database: 'dbname',
      timeout: 30000, // <- You can override default config
    },
  },
})
```
```js
const xsql = require('next-sql')
```
```js
const rows = await xsql().read('table')
```
```js
// Will return the original raw data from mysql/mysql2/database-js
const result = await xsql().query('SELECT * FROM `user` WHERE id = ?', [5])
```
```js
const hostA_tableA_rows = await xsql('hostA').read('tableA')
const hostB_tableB_rows = await xsql('hostB').read('tableB')
```
⚠️ Not yet supported at this moment
🏃🏻♂️ Work in progress...
```js
const thirdPartyModule = require('thirdPartyModule')
xsql.loadModule(thirdPartyModule)
```
```js
const users = await xsql().read('users')
```
Equivalent to the following SQL statement
```sql
SELECT * FROM `users`
```
Result
```js
users = [
  {
    id: 1,
    name: 'Peter',
    computer: 50,
    pets: '20,21',
    gender: 'M',
    age: 20,
    birthAt: '2001-01-01T00:00:00.000Z',
  },
  {
    id: 2,
    name: 'Tom',
    computer: null,
    pets: null,
    gender: 'M',
    age: 56,
    birthAt: '1965-01-01T00:00:00.000Z',
  },
  ...
]
```
Example:
```js
const [user] = await xsql().where({ id: 5 }).read('users')
```
Equivalent to the following SQL statement
```sql
SELECT * FROM `users` WHERE `id` = 5
```
Result
```js
user = {
  id: 5,
  name: 'Sam',
  computer: null,
  pets: null,
  gender: 'M',
  age: 32,
  birthAt: '1989-01-01T00:00:00.000Z',
}
```
We provide a new way to query the database, so you can focus on business logic without worrying about building SQL statements.

- A function or (q) => {} is equal to a bracket ()
- q is the current instance; it is only required for the first bracket ()
- where() is equal to AND
- .and() is equal to AND
- .or() is equal to OR
- You can use .where(), and(), and or() anywhere (the AND/OR will be rendered in front of the conditional)

Example:
```js
const users = await xsql()
  .select('`name`, `age`, DATE_FORMAT(`birthAt`, "%Y") AS birthYear')
  .where({ isActive: 1, isEnable: 1 })
  .where('pets', 'NOT', null)
  .and((q) => {
    q.or(() => {
      q.and('age', 'between', [40, 45])
      q.and('age', 'between', [50, 60])
    })
    q.or('age', 'between', [18, 25])
  })
  .read('users')
```
Equivalent to the following SQL statement
```sql
SELECT `name`, `age`, DATE_FORMAT(`birthAt`, "%Y") AS birthYear
FROM `users`
WHERE `isActive` = ?
AND `isEnable` = ?
AND `pets` NOT NULL
AND (
  (
    `age` between ? AND ?
    `age` between ? AND ?
  )
  OR `age` between ? AND ?
)
# Query Params
# [1, 1, 40, 45, 50, 60, 18, 25]
```
Result
```js
users = [
  { name: 'Peter', age: 20, birthYear: '2001' },
  { name: 'Mary', age: 42, birthYear: '1979' },
]
```
We also provide JSON support.

Syntax:
- {fieldName}.{jsonKey}.{jsonKey}
  For values of type string, number, boolean, null
- {fieldName}[] || {fieldName}.{jsonKey}[]
  For values of type string[], number[], null
```js
// Only return the matching records
const users = await xsql()
  .where({ 'notificationSetting.enable': true })
  .and('joinedGroups.id', 'in', [56, 57, 58])
  .or('joinedChannel[]', 'find_in_set', 101)
  .read('users')

// Auto parse into a javascript object
const [user] = await xsql().read('users', {
  jsonKeys: ['notificationSetting'],
})
// Output
user.notificationSetting = {
  enable: true,
  promotion: true,
}

// Extract JSON value
const [user] = await xsql()
  .select('notificationSetting.enable as notifyEnable')
  .read('users')
// Output
user.notifyEnable = true

// Insert or Update or BatchInsert
// Will auto apply JSON.stringify
const [user] = await xsql().insert('table', data, {
  jsonKeys: ['fieldName'],
})
const [user] = await xsql().update('table', data, {
  jsonKeys: ['fieldName'],
})
const [user] = await xsql().batchInsert('table', data, {
  jsonKeys: ['fieldName'],
})
```
Runs before the relationships are fetched, meaning you CANNOT get data from a relationship field; you only get the original row data.
Example:
```js
const users = await xsql()
  .filter((row) => ({
    id: row.id,
    age: row.age,
    birth: {
      year: row.birthAt.getFullYear(),
      month: row.birthAt.getMonth() + 1,
      day: row.birthAt.getDate(),
      timestamp: row.birthAt.getTime(),
    },
  }))
  .where({ id: 1 })
  .read('users')
```
Equivalent to the following SQL statement
```sql
SELECT * FROM `users` WHERE `id` = 1
```
Result
```js
users = [
  {
    id: 1,
    age: 20,
    birth: {
      year: 2001,
      month: 1,
      day: 1,
      timestamp: 978307200000,
    },
  },
]
```
Runs after the relationships are fetched, meaning you can get the data from a relationship field.
Example:
```js
const users = await xsql()
  .toOne('car:cars.id') // <- relationship field
  .map((row) => ({
    id: row.id,
    age: row.age,
    carColor: row.car.color, // <- relationship field
    birth: {
      year: row.birthAt.getFullYear(),
      month: row.birthAt.getMonth() + 1,
      day: row.birthAt.getDate(),
      timestamp: row.birthAt.getTime(),
    },
  }))
  .where({ id: 1 })
  .read('users')
```
Equivalent to the following SQL statement
```sql
SELECT * FROM `users` WHERE `id` = 1
```
Result
```js
users = [
  {
    id: 1,
    age: 20,
    carColor: 'red',
    birth: {
      year: 2001,
      month: 1,
      day: 1,
      timestamp: 978307200000,
    },
  },
]
```
Example:
```js
const users = await xsql()
  .select('`gender`, AVG(`age`) AS averageAge')
  .groupBy('`gender`')
  .orderBy('`gender` DESC, `averageAge`')
  .read('users')
```
Equivalent to the following SQL statement
```sql
SELECT `gender`, AVG(`age`) AS averageAge
FROM `users`
GROUP BY `gender`
ORDER BY `gender` DESC, `averageAge`
```
Result
```js
users = [
  { gender: 'M', averageAge: 46 },
  { gender: 'F', averageAge: 30 },
]
```
Example:
```js
const users = await xsql()
  .select('`id`, `name`')
  .limit(1)
  .offset(3)
  .read('users')
```
Equivalent to the following SQL statement
```sql
SELECT `id`, `name`
FROM `users`
LIMIT 1 OFFSET 3
```
Result
```js
users = [{ id: 4, name: 'Kitty' }]
```
Example:
```js
const users = await xsql().log(false).read('users')
```
It will disable the log.
Example:
```js
// Frequently used queries
const linkImg = (query) => {
  query
    .select('userId,userName,userAvatar,userAlbum')
    .toOne('userAvatar:imgTable.imgId', {
      query: (q) => q.select('imgId,imgUrl'),
    })
    .toMany('userAlbum:imgTable.imgId', {
      query: (q) => q.select('imgId,imgUrl'),
    })
}
// Apply on query
const users = await xsql().where({ userId: 1 }).extend(linkImg).read('users')
```
You can import frequently used queries and apply them via extend().
Result
```js
users = [
  {
    userId: 1,
    userName: 'Foo Bar',
    userAvatar: { imgId: 1, imgUrl: 'img.png' },
    userAlbum: [
      { imgId: 2, imgUrl: 'img.png' },
      { imgId: 3, imgUrl: 'img.png' },
      { imgId: 4, imgUrl: 'img.png' },
    ],
  },
]
```
Automatically manage pagination.
Demo:
🏃🏻♂️ Work in progress...
Will override the limit() and offset() settings! Can only be used with read().
Example:
```js
const users = await xsql()
  .pagination({
    // The current page
    currPage: 2,
    // How many rows per page
    rowStep: 10,
    // How many pages will be shown on the navigation bar
    navStep: 4,
  })
  .read('users')
```
Result
```js
// Users of current page
users = [...UserObject]

/*
Case 1: Normal
    Current Page : 6
     Total users : 100
Range of user id : 51 to 60
*/
users.pagination = {
  isOutOfRange: false,
  currPage: 6,
  rowStep: 10,
  navStep: 4,
  row: {
    record: { from: 51, to: 60 },
    index: { from: 50, to: 59 },
  },
  page: {
    from: 5,
    current: 6,
    to: 8,
    hasPrev: true,
    hasNext: true,
  },
  nav: {
    current: 2,
    hasPrev: true,
    hasNext: true,
    buttons: [
      { value: 5, label: '«', className: 'page-prev' },
      { value: 4, label: '...', className: 'nav-prev' },
      { value: 5, label: '5', className: '' },
      { value: 6, label: '6', className: 'current active' },
      { value: 7, label: '7', className: '' },
      { value: 8, label: '8', className: '' },
      { value: 9, label: '...', className: 'nav-next' },
      { value: 7, label: '»', className: 'page-next' },
    ],
  },
}

/*
Case 2: Out of range
    Current Page : 11
     Total users : 100
Range of user id : ---
*/
users.pagination = {
  isOutOfRange: true,
  currPage: 11,
  rowStep: 10,
  navStep: 4,
  row: {
    record: { from: 101, to: 110 },
    index: { from: 100, to: 109 },
  },
  page: {
    from: 9,
    current: 11,
    to: 10,
    hasPrev: true,
    hasNext: false,
  },
  nav: {
    current: 3,
    hasPrev: true,
    hasNext: false,
    buttons: [
      { value: 10, label: '«', className: 'page-prev' },
      { value: 8, label: '...', className: 'nav-prev' },
      { value: 9, label: '9', className: '' },
      { value: 10, label: '10', className: '' },
      { value: 12, label: '»', className: 'page-next disabled' },
    ],
  },
}
```
{currentField}:{targetTable}.{targetField}

- currentField: The field name of the current table you want to map
- targetTable: Which table do you want to map?
- targetField: The field name of the target table

Example:
When mapping computer into user:

Users Table (Current Table)

| id | name | computer |
|----|------|----------|
| 1  | Tom  | 50       |

Computers Table (Target Table)

| id | name  | ip            |
|----|-------|---------------|
| 50 | Win10 | 192.168.0.123 |
```js
await xsql().toOne('computer:computers.id').read('users')
```
Each row linked to one foreign item

Parameters:
- mapper: The mapper string
- options: The options for this relationship mapping
  - filter: (row) => (row)
  - query: (q) => {}
    - q of the callback is a new instance of xsql()
  - addonKey
  - omitMapperKey: [default=false]
  - override: (q, currentIds, currentRows) => Row[]
    Override the original mapping query and return the rows result.
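A minimal sketch (not taken from the original README) of toOne with the options above; the selected columns and the exact behaviour of omitMapperKey are assumptions:

```js
// Hedged example: narrow the columns fetched for the mapped `computers` rows,
// and (assumption) drop the mapper key field `computer` from the parent rows.
const users = await xsql()
  .toOne('computer:computers.id', {
    query: (q) => q.select('`id`, `name`, `ip`'), // assumed column names
    omitMapperKey: true,
  })
  .read('users')
```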
Each row linked to many foreign items

Parameters:
- mapper: The mapper string
- options: The options for this relationship mapping
  - arrayMapper: (array) => string[]
    When using a JSON array, you can use this method to map the array value to a string array
  - splitter: ',' || '$[]' || '$.key.key[]'
    Use ',' for a comma-separated string field, or use a JSON path to provide the mapping data.
    The JSON must eventually return string[], number[], or null.
    - '$[]': the field itself is a JSON array
    - '$.key.key[]': e.g. for $.too[] the too is a JSON array; for $.foo.bar[] the bar is a JSON array
  - filter: (row) => (row)
  - query: (q) => {}
    - q of the callback is a new instance of xsql()
  - addonKey
  - omitMapperKey: [default=false]
  - override: (q, currentIds, currentRows) => Row[]
    Override the original mapping query and return the rows result.
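A minimal sketch (not taken from the original README) of toMany with the splitter option, using the sample data above where users.pets stores '20,21':

```js
// Hedged example: `users.pets` stores a comma-separated id string such as '20,21',
// so the ',' splitter expands it before mapping against `pets`.`id`.
const users = await xsql()
  .toMany('pets:pets.id', { splitter: ',' })
  .read('users')

// If the field held a JSON array instead (an assumption), '$[]' could be used:
// .toMany('pets:pets.id', { splitter: '$[]' })
```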
Each foreign item linked to one current row

Parameters:
- addonKey: You must provide a key to store all incoming data; this key will be added to the end of the current row object
- mapper: The mapper string
- options: The options for this relationship mapping
  - filter: (row) => (row)
  - query: (q) => {}
    - q of the callback is a new instance of xsql()
  - omitMapperKey: [default=false]
  - override: (q, currentIds, currentRows) => Row[]
    Override the original mapping query and return the rows result.
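A minimal sketch of the simplest fromOne usage, mirroring the fromOne call in the combined example further below; the addonKey name ownedCars is an assumption:

```js
// Hedged example: collect rows from `cars` whose `user` column matches `users`.`id`,
// and attach them to each user under the assumed addonKey `ownedCars`.
const users = await xsql()
  .fromOne('ownedCars', 'id:cars.user')
  .read('users')
```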
🔄 Coming Soon...
Temporarily not supported based on performance considerations; it may be supported in the future.
```js
const users = await xsql()
  .filter(({ id, name, age }) => ({ id, name, age }))
  .toOne('computer:computers.id', {
    filter: ({ id, name, ip }) => ({ id, name, ip }),
  })
  .toMany('pets:pets.id', {
    filter: ({ id, type, name }) => ({ id, type, name }),
  })
  .fromOne('primaryCar', 'id:cars.user', {
    query: (q) => {
      q.select('`id`, `model`')
      q.where({ isPrimary: 1 })
      q.toOne('brand:brands.id', {
        filter: ({ id, name }) => ({ id, name }),
      })
    },
    filter: ({ id, model }) => ({ id, model }),
  })
  .read('users')
```
Equivalent to the following SQL statement
```sql
# Master Query
SELECT * FROM `users`

# toOne Query
SELECT * FROM `computers` WHERE `id` IN (50, 51)

# toMany Query
SELECT * FROM `pets` WHERE `id` IN (20, 21, 22, 23)

# fromOne Query
SELECT `id`, `model`
FROM `cars`
WHERE `user` IN (1, 2, 3, 4, 5, 6)
AND isPrimary = 1

# toOne query inside fromOne query
SELECT * FROM `brands` WHERE `id` = 25
```
Result
```js
users = [
  {
    id: 1,
    name: 'Tom',
    age: 20,
    // toOne()
    computer: {
      id: 50,
      name: 'Windows 10',
      ip: '192.168.1.123',
    },
    // toMany()
    pets: [
      { id: 20, type: 'dog', name: 'Foo' },
      { id: 21, type: 'cat', name: 'Bar' },
    ],
    // fromOne()
    primaryCar: [
      {
        id: 101,
        model: 'Model S',
        // toOne()
        brand: {
          id: 25,
          name: 'Tesla',
        },
      },
    ],
  },
  {
    id: 2,
    name: 'Peter',
    age: 20,
    computer: null,
    pets: null,
    primaryCar: null,
  },
  ...
]
```
```js
const newUser = {
  name: 'Bar',
  age: 28,
  computer: 56,
  pets: '69,70',
}
await xsql().insert('users', newUser)
```
🚫 Pay Attention 🚫
- The key length of each row must be the same
- The order of the keys must be the same
```js
const newUsers = [
  { name: 'Foo', age: 28 },
  { name: 'Bar', age: 32 },
]
await xsql().batchInsert('users', newUsers)
```
🚫 Pay Attention 🚫
- The key length of each row must be the same
- The order of the keys must be the same
```js
const newComputers = [
  // Insert record
  { id: null, name: 'MacOS', ip: '192.168.1.125' },

  // Update record
  { id: 50, name: 'Win10', ip: '192.168.1.124' },

  /* 🚫 Will throw errors due to different key lengths 🚫
  { name: 'Win10', ip: '192.168.1.124' } */

  /* 🚫 Will update the wrong data due to different key order 🚫
  { ip: '192.168.1.124', name: 'Win10', id: 50 } */
]
await xsql().batchInsert('computers', newComputers, {
  primaryKeys: 'id',
})
```
🚫 Pay Attention 🚫
- The key length of each row must be the same
- The order of the keys must be the same
```js
const wallets = [
  { user: 1, cash: 50 },
  { user: 2, cash: -50 },
]
await xsql().batchInsert('wallets', wallets, {
  primaryKeys: 'user',
  sumKeys: ['cash'],
})
```
```js
await xsql().where({ id: 1 }).update('users', {
  name: 'Tom',
})
```
⚠️ Not yet supported at this moment
🏃🏻♂️ Work in progress...
```js
await xsql()
  .where({ id: 1 })
  .update(
    'users',
    {
      name: 'Tom',
      cash: 50,
    },
    {
      sumKeys: ['cash'],
    }
  )
```
```js
await xsql().update('users', { wallet: 0 })
```
```js
await xsql().where({ id: 1 }).delete('users')
```
```js
await xsql().delete('users')
```
Commit
Rollback
```js
// [Tom] transfers $50 to [Mary]
const tomId = 1
const maryId = 2
const amount = 50
await xsql().transaction(async (t) => {
  // Extract $50 from Tom
  await t()
    .where({ id: tomId })
    .update(
      'users',
      { wallet: -amount }, // <- negative number
      { sumKeys: ['wallet'] },
    )

  // Read the value of Tom's wallet
  const [tom] = await t()
    .where({ id: tomId })
    .read('users')

  // Rollback when there is not enough money
  if (tom.wallet < 0) {
    throw new Error('Not enough money')
  }

  // Deposit $50 into Mary
  await t()
    .where({ id: maryId })
    .update(
      'users',
      { wallet: amount },
      { sumKeys: ['wallet'] },
    )

  // Log into database
  const logAt = Date.now()
  await t().batchInsert('walletLogs', [
    { type: 'EXTRACT', user: tomId, change: -amount, logAt },
    { type: 'DEPOSIT', user: maryId, change: amount, logAt },
  ])
})
```
No vulnerabilities found.
Scorecard check results:
- no dangerous workflow patterns detected
- no binaries found in the repo
- license file detected
- Found 0/30 approved changesets -- score normalized to 0
- 0 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 0
- detected GitHub workflow tokens with excessive permissions
- no SAST tool detected
- dependency not pinned by hash detected -- score normalized to 0
- no effort to earn an OpenSSF best practices badge detected
- security policy file not detected
- project is not fuzzed
- branch protection not enabled on development/release branches
- 11 existing vulnerabilities detected
Last Scanned on 2025-07-07
The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.