Gathering detailed insights and metrics for @bufferapp/datadog-ci
npm install @bufferapp/datadog-ci
Package scores:
- Supply Chain: 49.9
- Quality: 86.9
- Maintenance: 76.4
- Vulnerability: 25
- License: 80.7
Languages: TypeScript (99.01%), JavaScript (0.97%), Shell (0.01%), Dockerfile (0.01%)
GitHub repository:
- License: Apache-2.0
- Stars: 133
- Commits: 3,982
- Forks: 57
- Watchers: 543
- Branches: 90
- Contributors: 366
- Last updated: Feb 17, 2025
Latest release:
- Version: 0.13.4
- Package id: @bufferapp/datadog-ci@0.13.4
- Unpacked size: 388.22 kB
- Size: 69.12 kB
- File count: 73
- NPM version: 6.14.12
- Node version: 14.16.1
Downloads (change vs. the previous period in parentheses):
- Last day: 1 (0%)
- Last week: 3 (-40%)
- Last month: 17 (+112.5%)
- Last year: 102 (-42.4%)
- Total (cumulative): 611
Execute commands with Datadog from within your Continuous Integration/Continuous Deployment scripts. It is a good way to perform end-to-end tests of your application before applying your changes or deploying. It currently features running Synthetics tests and waiting for the results.
The package is under @datadog/datadog-ci and can be installed through NPM or Yarn:
```bash
# NPM
npm install --save-dev @datadog/datadog-ci

# Yarn
yarn add --dev @datadog/datadog-ci
```
```
Usage: datadog-ci <command> <subcommand> [options]

Available commands:
  - dependencies
  - lambda
  - sourcemaps
  - synthetics
```
Each command allows interacting with a product of the Datadog platform. The commands are defined in the src/commands folder.
Further documentation for each command can be found in its folder (e.g. src/commands/synthetics/README.md).
Pull requests for bug fixes are welcome, but before submitting new features or changes to current functionality, open an issue and discuss your ideas or propose the changes you wish to make. After a resolution is reached, a PR can be submitted for review.
When developing the tool, it is possible to run commands using yarn launch. It relies on ts-node, so the project does not need to be rebuilt after every change.
```bash
yarn launch synthetics run-tests --config dev/global.config.json
```
This tool uses clipanion to handle the different commands.
The tests are written using jest.
The coding style is checked with tslint, and the configuration can be found in the tslint.json file.
Commands are stored in the src/commands folder.
The skeleton of a command is composed of a README, an index.ts, and a folder for the tests.
```
src/
└── commands/
    └── fakeCommand/
        ├── __tests__/
        │   └── index.test.ts
        ├── README.md
        └── index.ts
```
Documentation of the command must be placed in the README.md file, and the current README must be updated to link to the new command's README.
The index.ts file must export classes extending the Command class of clipanion. The commands of all src/commands/*/index.ts files will then be imported and made available in the datadog-ci tool. A sample index.ts file for a new command would be:
```typescript
import {Command} from 'clipanion'

export class HelloWorldCommand extends Command {
  public async execute() {
    this.context.stdout.write('Hello world!')
  }
}

module.exports = [HelloWorldCommand]
```
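For a command that accepts a subcommand path and options, a minimal sketch could use the clipanion 2.x decorator API (assumed here to be the version in use; the greet command, its --name option, and the GreetCommand class are hypothetical names for illustration only):

```typescript
import {Command} from 'clipanion'

export class GreetCommand extends Command {
  // Hypothetical option: registers a --name string flag for this command
  @Command.String('--name')
  public name?: string

  // Hypothetical path: makes the command reachable as `datadog-ci greet`
  @Command.Path('greet')
  public async execute() {
    this.context.stdout.write(`Hello ${this.name || 'world'}!\n`)
  }
}

module.exports = [GreetCommand]
```

Without a @Command.Path decorator, as in the HelloWorldCommand above, clipanion should treat the command as the default entry point.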
Lastly, test files must be created in the __tests__/ folder. jest is used to run the tests, and a CI has been set up using GitHub Actions to ensure all tests pass when merging a Pull Request. The tests can then be launched through the yarn test command; it will find all files with a filename ending in .test.ts in the repo and execute them.
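As an illustration, a jest test for the HelloWorldCommand above could stub the command's context and assert on what was written to stdout. This is a minimal sketch; the inline fake context is an assumption for illustration, not the project's actual test helpers:

```typescript
// src/commands/fakeCommand/__tests__/index.test.ts
import {HelloWorldCommand} from '../index'

describe('hello-world', () => {
  test('writes the greeting to stdout', async () => {
    const command = new HelloWorldCommand()
    let output = ''
    // Minimal fake clipanion context: the command only uses context.stdout.write
    ;(command as any).context = {
      stdout: {write: (chunk: string) => { output += chunk }},
    }

    await command.execute()

    expect(output).toBe('Hello world!')
  })
})
```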
The CI performs tests to avoid regressions by building the project, running unit tests, and running one end-to-end test. The end-to-end test installs the package in a new project, configures it (using files in the .github/workflows/e2e folder), and runs a synthetics run-tests command in a Datadog org (Synthetics E2E Testing Org) to verify the command is able to perform a test.
The synthetics tests run are a browser test (id neg-qw9-eut) and an API test (id v5u-56k-hgk), both loading a page which outputs the headers of the request and verifying the X-Fake-Header header is present. This header is configured as an override in the .github/workflows/e2e/test.synthetics.json file. The API and Application keys used by the command are stored in GitHub Secrets named datadog_api_key and datadog_app_key. The goal of this test is to verify the command is able to run tests and wait for their results as expected, as well as handle configuration overrides.
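For illustration, the override file might look roughly like the following sketch. The tests/config/headers structure is an assumed shape for a *.synthetics.json overrides file, and "fake-value" is a placeholder; this is not a copy of the repository's actual file:

```json
{
  "tests": [
    {
      "id": "neg-qw9-eut",
      "config": {
        "headers": {"X-Fake-Header": "fake-value"}
      }
    },
    {
      "id": "v5u-56k-hgk",
      "config": {
        "headers": {"X-Fake-Header": "fake-value"}
      }
    }
  ]
}
```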
```bash
# Compile and watch
yarn watch

# Run the tests
yarn jest

# Build code
yarn build

# Format code
yarn format

# Make bin executable
yarn prepack
```
Releasing a new version of datadog-ci unfolds as follows: run yarn version [--patch|--minor|--major] depending on the nature of the changes introduced; you may refer to Semantic Versioning to determine which to increment. If you need to create a pre-release or release in a different channel, here's how it works:
- Pick a channel name (alpha, beta, ...).
- Set the version to version-channel; it can be 0.10.9-alpha or 1-beta, etc.
- Create the release tagged version-channel and check the "This is a pre-release" checkbox.

No vulnerabilities found.
OpenSSF Scorecard results (last scanned on 2025-02-10):
- all changesets reviewed
- no dangerous workflow patterns detected
- 30 commit(s) and 4 issue activity found in the last 90 days -- score normalized to 10
- license file detected
- 1 existing vulnerabilities detected
- branch protection is not maximal on development and all release branches
- no effort to earn an OpenSSF best practices badge detected
- detected GitHub workflow tokens with excessive permissions
- binaries present in source code
- security policy file not detected
- project is not fuzzed
- Project has not signed or included provenance with any releases.
- dependency not pinned by hash detected -- score normalized to 0
- SAST tool is not run on all commits -- score normalized to 0
The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.