@prisma/prisma-schema-wasm
Related packages:

- `@prisma/prisma-fmt-wasm`: The WASM package for prisma-fmt
- `@prisma/engines`: This package is intended for Prisma's internal use
- `@prisma/client`: Prisma Client is an auto-generated, type-safe and modern JavaScript/TypeScript ORM for Node.js that's tailored to your data. Supports PostgreSQL, CockroachDB, MySQL, MariaDB, SQL Server, SQLite & MongoDB databases.
- `@mrleebo/prisma-ast`: This library uses an abstract syntax tree to parse schema.prisma files into an object in JavaScript. It is similar to [@prisma/sdk](https://github.com/prisma/prisma/tree/master/src/packages/sdk) except that it preserves comments and model attributes.
🚂 Engine components of Prisma ORM
npm install @prisma/prisma-schema-wasm
Repository stats: 1,195 stars, 9,992 commits, 240 forks, 18 watching, 318 branches, 115 contributors. Updated on 27 Nov 2024.
Languages: Rust (99.22%), TypeScript (0.38%), Makefile (0.13%), Shell (0.12%), Nix (0.09%), HTML (0.01%), TSQL (0.01%), Dockerfile (0.01%), JavaScript (0.01%)
Downloads:

| Period | Downloads | Change vs. previous period |
|---|---|---|
| Last day | 54,872 | -6.3% |
| Last week | 295,537 | -3% |
| Last month | 1,199,051 | +15.1% |
| Last year | 9,762,766 | +973% |
No dependencies detected.
This repository contains a collection of engines that power the core stack for Prisma, most prominently Prisma Client and Prisma Migrate.
If you're looking for how to install Prisma or any of the engines, the Getting Started guide might be useful.
This document describes some of the internals of the engines, and how to build and test them.
This repository contains four engines:

- `query-engine`: the Query Engine binary
- `query-engine-node-api`: the Query Engine as a Node-API library
- `schema-engine`: the Schema Engine
- `prisma-fmt`: Prisma Format
Additionally, `psl` (Prisma Schema Language) is the library that defines what the language looks like, how it's parsed, and so on.
You'll also find:
- a `docker-compose.yml` file that's helpful for running tests and bringing up containers for various databases
- a `flake.nix` file for bringing up all dependencies and making it easy to build the code in this repository (the use of this file and nix is entirely optional, but can be a good and easy way to get started)
- a `.envrc` file to make it easier to set everything up, including the nix shell
The API docs (cargo doc) are published on our fabulous repo page.
Prerequisites:

- `direnv` installed, then run `direnv allow` on the repository root
- or source `./.envrc` manually in your shell

Note for nix users: it should be enough to `direnv allow`.
How to build:

To build all engines, simply execute `cargo build` on the repository root. This builds non-production debug binaries. If you want to build the optimized binaries in release mode, the command is `cargo build --release`.

Depending on how you invoked `cargo` in the previous step, you can find the compiled binaries inside the repository root in the `target/debug` (without `--release`) or `target/release` (with `--release`) directories:
| Prisma Component | Path to Binary |
|---|---|
| Query Engine | `./target/[debug\|release]/query-engine` |
| Schema Engine | `./target/[debug\|release]/schema-engine` |
| Prisma Format | `./target/[debug\|release]/prisma-fmt` |
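A minimal sketch of that flow, if you just want to confirm the binaries built (passing `--help`, as described below for the Query Engine):

```shell
# Build debug binaries, then optimized release binaries.
cargo build
cargo build --release

# The compiled binaries land under ./target/<profile>/:
./target/debug/query-engine --help
```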
The Prisma Schema Language is a library which defines the data structures and parsing rules for prisma files, including the available database connectors. For more technical details, please check the library README.
The PSL is used throughout the schema engine, as well as in Prisma Format. The DataModeL (DML), which is an annotated version of the PSL, is also used as input for the query engine.
The Query Engine is how Prisma Client queries are executed. Here's a brief description of what it does:
When used through Prisma Client, there are two ways for the Query Engine to be executed:

- as a binary (`./query-engine/query-engine`)
- as a Node-API library (`./query-engine/query-engine-node-api`)

You can also run the Query Engine as a stand-alone GraphQL server.
Warning: There is no guaranteed API stability. If you use it in production, please be aware that the API and the query language can change at any time.
Notable environment flags:

- `RUST_LOG_FORMAT=(devel|json)` sets the log format. By default outputs `json`.
- `QE_LOG_LEVEL=(info|debug|trace)` sets the log level for the Query Engine. If you need Query Graph debugging logs, set it to "trace".
- `FMT_SQL=1` enables logging formatted SQL queries.
- `PRISMA_DML_PATH=[path_to_datamodel_file]` should point to the datamodel file location. This or `PRISMA_DML` is required for the Query Engine to run.
- `PRISMA_DML=[base64_encoded_datamodel]` is an alternative way to provide a datamodel for the server.
- `RUST_BACKTRACE=(0|1)` if set to 1, error backtraces will be printed to STDERR.
- `LOG_QUERIES=[anything]` if set, SQL queries will be written to the INFO log. Needs the right log level enabled to be seen from the terminal.
- `RUST_LOG=[filter]` sets the filter for the logger. Can be one of `trace`, `debug`, `info`, `warning` or `error`; that will output ALL logs from every crate at that level. The `.envrc` in this repo shows how to log different parts of the system in a more granular way.

Starting the Query Engine:
The engine can be started either by using the `cargo` build tool, or by pre-building a binary and running it directly. If using `cargo`, replace whatever command starts with `./query-engine` with `cargo run --bin query-engine --`.

You can also pass `--help` to find out more options to run the engine.
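For example, a minimal sketch of launching the engine with some of the flags above (the schema path is a placeholder; substitute your own):

```shell
# Point the engine at a datamodel and turn up logging.
export PRISMA_DML_PATH=./schema.prisma
export QE_LOG_LEVEL=debug
export FMT_SQL=1

# Run via cargo; pass --help to list the supported options.
cargo run --bin query-engine -- --help
```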
Running `make show-metrics` will start Prometheus and Grafana with a default metrics dashboard. Prometheus will scrape the `/metrics` endpoint to collect the engine's metrics. Navigate to http://localhost:3000 to view the Grafana dashboard.
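A short sketch of that flow, assuming Docker and `make` are available:

```shell
make show-metrics                # brings up Prometheus and Grafana
# Prometheus scrapes the engine's /metrics endpoint;
# the Grafana dashboard is served on port 3000:
xdg-open http://localhost:3000   # or `open` on macOS
```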
The Schema Engine does a couple of things:

The engine uses:

- the `_prisma_migrations` table
- the `prisma/migrations` directory, which acts as a database of existing migrations

Prisma Format can format Prisma schema files. It also comes as a WASM module via a node package. You can read more here.
When trying to debug code, here are a few things that might be useful:

- `dbg!()` statements to validate code paths, inspect variables, etc.
- the `RUST_LOG` environment variable; see the documentation
- the `test-cli` to test migration and introspection without having to go through the `prisma` npm package

There are two test suites for the engines: unit tests and integration tests.
Unit tests: they test internal functionality of individual crates and components. You can find them across the whole codebase, usually in `./tests` folders at the root of modules. These tests can be executed via `cargo test`. Note that some of them will require the `TEST_DATABASE_URL` environment variable to be set.
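For instance, a sketch of running the unit tests of a single crate (the connection string is an example; point `TEST_DATABASE_URL` at your own database):

```shell
export TEST_DATABASE_URL="postgresql://postgres:prisma@localhost:5432/tests"
cargo test -p psl    # one crate, e.g. the Prisma Schema Language library
cargo test           # or everything, from the repository root
```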
Integration tests: they run GraphQL queries against isolated instances of the Query Engine and assert that the responses are correct. You can find them at `./query-engine/connector-test-kit-rs`.
Prerequisites:

- `direnv` installed, then run `direnv allow` on the repository root
- or source `./.envrc` manually in your shell

Setup:
There are helper `make` commands to set up a test environment for a specific database connector you want to test. The commands set up a container (if needed) and write the `.test_config` file, which is picked up by the integration tests:

- `make dev-mysql`: MySQL 5.7
- `make dev-mysql8`: MySQL 8
- `make dev-postgres`: PostgreSQL 10
- `make dev-sqlite`: SQLite
- `make dev-mongodb_5`: MongoDB 5

On Windows:

If not using WSL, `make` is not available and you should just see what your command does and do it manually. Basically this means editing the `.test_config` file and starting the needed Docker containers.
To actually get the tests working, read the contents of `.envrc`. Then edit the environment variables for your account from Windows settings, and add at least the correct values for the following variables:

- `WORKSPACE_ROOT` should point to the root directory of the `prisma-engines` project.
- `PRISMA_BINARY_PATH` is usually `%WORKSPACE_ROOT%\target\release\query-engine.exe`.
- `SCHEMA_ENGINE_BINARY_PATH` should be `%WORKSPACE_ROOT%\target\release\schema-engine.exe`.

Other variables may or may not be useful.
Run:
Run `cargo test` in the repository root.
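Putting setup and run together, a typical end-to-end flow for one connector might look like this (assuming Docker is running):

```shell
make dev-postgres    # start a PostgreSQL container and write .test_config
cargo test           # run the test suites against the configured connector
```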
Please refer to the Testing driver adapters section in the connector-test-kit-rs README.
ℹ️ Important note on developing features that require changes to both the query engine and the driver adapters code

As explained in the Testing driver adapters section, running `DRIVER_ADAPTER=$adapter make test-qe` will ensure you have prisma checked out in your filesystem in the same directory as prisma-engines. This is needed because the driver adapters code is symlinked in prisma-engines.
When working on a feature or bugfix spanning adapters code and query-engine code, you will need to open sibling PRs in prisma/prisma
and prisma/prisma-engines
respectively.
Locally, each time you run `DRIVER_ADAPTER=$adapter make test-qe`, tests will run using the driver adapters built from the source code in the working copy of prisma/prisma. All good.
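For example (the adapter name below is illustrative; use whichever adapter you are working on):

```shell
# Runs the query-engine test suite against a driver adapter built from
# the prisma/prisma working copy sitting next to prisma-engines.
DRIVER_ADAPTER=planetscale make test-qe
```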
In CI, though, we need to specify which branch of prisma/prisma to use for tests, since there is no working copy of prisma/prisma before the tests run. The CI jobs clone the prisma/prisma `main` branch by default, which doesn't include your local changes. To test the integration, we can tell CI to use the branch of prisma/prisma containing the changes in adapters. To do that, you can use a simple convention in commit messages, like this:

git commit -m "DRIVER_ADAPTERS_BRANCH=prisma-branch-with-changes-in-adapters [...]"

GitHub Actions will then pick up the branch name and use it to clone that branch of prisma/prisma and build the driver adapters code from there.
When it's time to merge the sibling PRs, you'll need to merge the prisma/prisma PR first, so that when you merge the engines PR, the adapters code is already in the prisma/prisma `main` branch.
You can trigger releases from this repository to npm that can be used for testing the engines in prisma/prisma, either automatically or manually:
Any branch name starting with `integration/` will, first, run the full test suite in GH Actions and, second, run the release workflow (build and upload engines to S3 & R2).
To trigger the release on any other branch, you have two options; one is to include the `[integration]` string anywhere in your commit messages. The journey through the pipeline is the same as for a commit on the `main` branch.
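A sketch of that commit-message convention (message and branch name are placeholders):

```shell
git commit -m "test nested transactions end to end [integration]"
git push origin my-feature-branch
```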
The pipeline then continues:

- a new commit lands in prisma/engines-wrapper, publishing a new `@prisma/engines-version` npm package on the `integration` tag
- that triggers prisma/prisma to create a `chore(Automated Integration PR): [...]` PR with a branch name also starting with `integration/`
- in prisma/prisma the publish pipeline is likewise triggered when a branch name starts with `integration/`, publishing all prisma/prisma monorepo packages to npm on the `integration` tag

This end-to-end run takes a minimum of ~1h20 to complete, but is completely automated 🤖
Notes:

The tests and publishing workflows run in both the prisma/prisma-engines and prisma/prisma repositories, so it is possible that the engines are published and only then the test suite discovers a defect. It is advised to keep an eye on both the test and publishing workflows.

In addition to the automated integration release for `integration/` branches, you can also trigger a publish manually in the Buildkite `[Test] Prisma Engines` job if it succeeds for any branch name. Click "🚀 Publish binaries" at the bottom of the test list to unlock the publishing step. When all the jobs in `[Release] Prisma Engines` succeed, you also have to unlock the next step by clicking "🚀 Publish client". This will then trigger the same journey as described above.
When rust-analyzer runs `cargo check`, it locks the build directory and stops any other cargo commands from running until it has completed. This makes the build process feel a lot longer. You can avoid this by setting a different build path for rust-analyzer: open the VSCode settings, search for `Check on Save: Extra Args`, find the `Rust-analyzer › Check On Save: Extra Args` setting, and add a separate target directory for rust-analyzer. Something like:

--target-dir:/tmp/rust-analyzer-check
To trigger an automated or a manual integration release from this repository to npm, branches of forks need to be pulled into this repository so the Buildkite job is triggered. You can use these GitHub and git CLI commands to achieve that easily:
gh pr checkout 4375
git checkout -b integration/sql-nested-transactions
git push --set-upstream origin integration/sql-nested-transactions
If there is a need to re-create this branch because it has been updated, deleting it and re-creating it will make sure the content is identical and avoid any conflicts.
git branch --delete integration/sql-nested-transactions
gh pr checkout 4375
git checkout -b integration/sql-nested-transactions
git push --set-upstream origin integration/sql-nested-transactions --force
If you have a security issue to report, please contact us at security@prisma.io.
No vulnerabilities found.
OpenSSF Scorecard check results (last scanned on 2024-11-18):

- all changesets reviewed
- 30 commit(s) and 6 issue activity found in the last 90 days -- score normalized to 10
- no dangerous workflow patterns detected
- license file detected
- no binaries found in the repo
- security policy file detected
- branch protection is not maximal on development and all release branches
- no effort to earn an OpenSSF best practices badge detected
- detected GitHub workflow tokens with excessive permissions
- project is not fuzzed
- dependency not pinned by hash detected -- score normalized to 0
- SAST tool is not run on all commits -- score normalized to 0
- 16 existing vulnerabilities detected
The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.