Gathering detailed insights and metrics for mongodb-schema
npm install mongodb-schema
141 Stars · 412 Commits · 15 Forks · 23 Watching · 25 Branches · 20 Contributors

Updated on 14 Nov 2024

Languages: TypeScript (92.91%), JavaScript (7.09%)
Cumulative downloads:

Period | Downloads | Change vs. previous period
---|---|---
Last day | 2,238 | -28.7%
Last week | 14,368 | -21.8%
Last month | 70,794 | +11.2%
Last year | 765,705 | +41.5%
Infer a probabilistic schema for a MongoDB collection. `mongodb-schema` can be used as a command line tool or programmatically in your application as a node module.

To install mongodb-schema for command line use, run `npm install -g mongodb-schema`. This will add a new shell script which you can run directly from the command line.

The command line tool expects a MongoDB connection URI and a namespace in the form `<database>.<collection>`.
Without further arguments, it will sample 100 random documents from the collection and print a schema of
the collection in JSON format to stdout.
```
mongodb-schema mongodb://localhost:27017 mongodb.fanclub
```
Additional arguments let you:

- change the number of samples (`--number`)
- print additional statistics about the schema analysis (`--stats`)
- switch to a different output format (`--format`)
- suppress the schema output altogether (`--no-output`) if you are only interested in the schema statistics
- enable semantic type discovery (`--semantic-types`)
- disable value collection (`--no-values`)
For more information, run `mongodb-schema --help`.
The following example demonstrates how `mongodb-schema` can be used programmatically from your node application. You need to additionally install the MongoDB node driver to follow along with this example.

Make sure you have a `mongod` running on localhost on port 27017 (or change the example below accordingly).

From your application folder, install the driver and `mongodb-schema` locally:

```
npm install --save mongodb mongodb-schema
```
(optional) If you don't have any data in your MongoDB instance yet, you can create a `test.data` collection with this command:

```
mongosh --eval "db.data.insertMany([{_id: 1, a: true}, {_id: 2, a: 'true'}, {_id: 3, a: 1}, {_id: 4}])" localhost:27017/test
```
Create a new file `parse-schema.js` and paste in the following code:
```javascript
const { parseSchema } = require('mongodb-schema');
const { MongoClient } = require('mongodb');

const dbName = 'test';
const uri = `mongodb://localhost:27017/${dbName}`;
const client = new MongoClient(uri);

async function run() {
  try {
    const database = client.db(dbName);
    const documentStream = database.collection('data').find();

    // Here we are passing in a cursor as the first argument. You can
    // also pass in a stream or an array of documents directly.
    const schema = await parseSchema(documentStream);

    console.log(JSON.stringify(schema, null, 2));
  } finally {
    await client.close();
  }
}

run().catch(console.dir);
```
When we run the above with `node ./parse-schema.js`, we'll see output similar to this (some fields omitted here for clarity):
```javascript
{
  "count": 4,                 // parsed 4 documents
  "fields": [                 // an array of Field objects, @see `./lib/field.js`
    {
      "name": "_id",
      "count": 4,             // 4 documents counted with _id
      "type": "Number",       // the type of _id is `Number`
      "probability": 1,       // all documents had an _id field
      "hasDuplicates": false, // therefore no duplicates
      "types": [              // an array of Type objects, @see `./lib/types/`
        {
          "name": "Number",   // name of the type
          "count": 4,         // 4 numbers counted
          "probability": 1,
          "unique": 4,
          "values": [1, 2, 3, 4] // array of encountered values
        }
      ]
    },
    {
      "name": "a",
      "count": 3,             // only 3 documents with field `a` counted
      "probability": 0.75,    // hence probability 0.75
      "type": [               // found these types
        "Boolean",
        "String",
        "Number",
        "Undefined"           // for convenience, we treat Undefined as its own type
      ],
      "hasDuplicates": false, // there were no duplicate values
      "types": [
        {
          "name": "Boolean",
          "count": 1,
          "probability": 0.25, // probabilities for types are calculated factoring in Undefined
          "unique": 1,
          "values": [true]
        },
        {
          "name": "String",
          "count": 1,
          "probability": 0.25,
          "unique": 1,
          "values": ["true"]
        },
        {
          "name": "Number",
          "count": 1,
          "probability": 0.25,
          "unique": 1,
          "values": [1]
        },
        {
          "name": "Undefined",
          "count": 1,
          "probability": 0.25,
          "unique": 0
        }
      ]
    }
  ]
}
```
A high-level view of the schema tree structure is as follows:
`mongodb-schema` supports all BSON types. Check out the tests for more usage examples.
As of version 6.1.0, mongodb-schema has a new feature called "Semantic Type Detection". It allows overriding the type identification of a value, enabling users to apply specific domain knowledge of their data while still using the underlying flexible BSON representation, including nested documents and arrays.
One of the built-in semantic types is GeoJSON, which traditionally would just be detected as a "Document" type. With the new option `semanticTypes` enabled, these sub-documents are instead treated as atomic values of type "GeoJSON". The original BSON type name is still available under the `bsonType` field.

To enable this mode, use the `-t` or `--semantic-types` flag at the command line. When using the API, pass an options object as the second parameter with the `semanticTypes` flag set to `true`:
```javascript
const schema = await parseSchema(db.collection('data').find(), { semanticTypes: true });
```
This mode is disabled by default.
It is also possible to provide custom semantic type detector functions. This is useful for encoding domain knowledge, for example to detect trees or graphs, special string encodings of data, etc.

The detector function is called with `value` and `path` (the full field path in dot notation) as arguments, and must return a truthy value if the data type applies to this field or value.

Here is an example to detect email addresses:
```javascript
const emailRegex = /[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?/;

function emailDetector(value, path) {
  return emailRegex.test(value);
}

const schema = await parseSchema(db.collection('data').find(), {
  semanticTypes: { EmailAddress: emailDetector }
});
```
This returns a schema with the following content (only partially shown):
```javascript
{
  "name": "email",
  "path": "email",
  "count": 100,
  "types": [
    {
      "name": "EmailAddress", // custom type "EmailAddress" was recognized
      "bsonType": "String",   // original BSON type available as well
      "path": "email",
      "count": 100,
      "values": [
        "twinbutterfly28@aim.com",
        "funkymoney45@comcast.net",
        "beauty68@msn.com",
        "veryberry8@hotmail.com",
```
As can be seen, the field name "email" was correctly identified as a custom type "EmailAddress".
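Detectors also receive the field path, so a detector can key off where a value lives rather than what it looks like. The following is a hypothetical sketch (the type name `LocationString` and the path convention are illustrative assumptions, not part of mongodb-schema):

```javascript
// Hypothetical detector: flags string values whose dot-notation path
// ends in "location" as a custom "LocationString" semantic type.
function locationDetector(value, path) {
  return typeof value === 'string' && /(^|\.)location$/.test(path);
}

// It would be passed to parseSchema the same way as the email example:
// parseSchema(cursor, { semanticTypes: { LocationString: locationDetector } })
```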
As of version 6.1.0, mongodb-schema supports analysing only the structure of the documents, without sampling collection values. To enable this mode, use the `--no-values` flag at the command line. When using the API, pass an options object as the second parameter with the `storeValues` flag set to `false`.

Value collection (`storeValues: true`) is enabled by default.
To compare schemas quantitatively we introduce the following measurable metrics on a schema:
The schema depth is defined as the maximum number of nested levels of keys in the schema. It does not matter if the subdocuments are nested directly or as elements of an array. An empty document has a depth of 0, whereas a document with some top-level keys but no nested subdocuments has a depth of 1.
The schema width is defined as the number of individual keys, added up over all nesting levels of the schema. Array values do not count towards the schema width.
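These definitions can be made concrete with a small sketch. The helpers below are purely illustrative (they are not part of mongodb-schema's API, which computes its statistics internally) and apply the depth and width rules to a single plain object:

```javascript
// Illustrative helpers, not part of mongodb-schema's API.
function isDocument(v) {
  return v !== null && typeof v === 'object' && !Array.isArray(v);
}

// Depth: maximum number of nested levels of keys. Subdocuments count
// whether they are nested directly or as elements of an array.
function schemaDepth(doc) {
  if (Object.keys(doc).length === 0) return 0;
  let depth = 1;
  for (const value of Object.values(doc)) {
    for (const v of Array.isArray(value) ? value : [value]) {
      if (isDocument(v)) depth = Math.max(depth, 1 + schemaDepth(v));
    }
  }
  return depth;
}

// Width: number of individual keys summed over all nesting levels.
// Array values themselves do not count, but keys of subdocuments
// found inside arrays do.
function schemaWidth(doc) {
  let width = Object.keys(doc).length;
  for (const value of Object.values(doc)) {
    for (const v of Array.isArray(value) ? value : [value]) {
      if (isDocument(v)) width += schemaWidth(v);
    }
  }
  return width;
}
```

Running these helpers against the example documents below reproduces the depth and width values shown in each table.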
```javascript
{}
```

Statistic | Value
---|---
Schema Depth | 0
Schema Width | 0
```javascript
{
  one: 1
}
```

Statistic | Value
---|---
Schema Depth | 1
Schema Width | 1
```javascript
{
  one: [
    "foo",
    "bar",
    {
      two: {
        three: 3
      }
    },
    "baz"
  ],
  foo: "bar"
}
```

Statistic | Value
---|---
Schema Depth | 3
Schema Width | 4
```javascript
{
  a: 1,
  b: false,
  one: {
    c: null,
    two: {
      three: {
        four: 4,
        e: "deepest nesting level"
      }
    }
  },
  f: {
    g: "not the deepest level"
  }
}
```

Statistic | Value
---|---
Schema Depth | 4
Schema Width | 10
```javascript
// first document
{
  foo: [
    {
      bar: [1, 2, 3]
    }
  ]
},
// second document
{
  foo: 0
}
```

Statistic | Value
---|---
Schema Depth | 2
Schema Width | 2
```
npm test
```
Apache 2.0
No vulnerabilities found.

OpenSSF Scorecard findings:

- no dangerous workflow patterns detected
- no binaries found in the repo
- license file detected
- SAST tool detected but not run on all commits
- Found 11/24 approved changesets -- score normalized to 4
- 8 existing vulnerabilities detected
- dependency not pinned by hash detected -- score normalized to 1
- 0 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 0
- detected GitHub workflow tokens with excessive permissions
- no effort to earn an OpenSSF best practices badge detected
- security policy file not detected
- project is not fuzzed
- branch protection not enabled on development/release branches

Last Scanned on 2024-11-18

The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.