@brandonkal/pulumi-command
A Pulumi package for running arbitrary commands.
@unmango/pulumi-commandx
Mostly helper functions for creating statically typed `Command` resources.
@unmango/pulumi-kubernetes-the-hard-way
This is a Pulumi implementation of Kelsey Hightower's [Kubernetes the Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way). It attempts to provide a set of building blocks to build a Kubernetes cluster from scratch.
pulumi-one-password-native-unofficial
This is a Pulumi provider that allows you to use the [OnePassword CLI](https://support.1password.com/command-line-getting-started/), as well as the OnePassword Connect Server, to manage secrets in your Pulumi stack.
```shell
npm install @pulumi/command
```
Java (33.24%)
Python (25.2%)
Go (14.64%)
C# (14.08%)
TypeScript (11.66%)
Makefile (1.17%)
Apache-2.0 License
71 Stars
409 Commits
31 Forks
19 Watchers
275 Branches
113 Contributors
Updated on Jul 14, 2025
Latest Version: 1.1.0
Package Id: @pulumi/command@1.1.0
Unpacked Size: 147.11 kB
Size: 27.39 kB
File Count: 53
NPM Version: 10.8.2
Node Version: 20.19.1
Published on: May 27, 2025
The Pulumi Command Provider enables you to execute commands and scripts either locally or remotely as part of the Pulumi resource model. Resources in the command package can run scripts on `create` and `destroy` operations, providing stateful local and remote command execution.
There are many scenarios where the Command package can be useful. Some users may have experience with Terraform "provisioners", and the Command package offers support for similar scenarios. However, the Command package provides independent resources which can be combined with other resources in many interesting ways. This has many strengths, but also some differences: for example, a failing Command resource does not cause the resource it is operating on to fail.
You can use the Command package from a Pulumi program written in any Pulumi language: C#, Go, JavaScript/TypeScript, Python, and YAML. You'll need to install and configure the Pulumi CLI if you haven't already.
NOTE: The Command package is in preview. The API design may change ahead of general availability based on user feedback.
The simplest use case for `local.Command` is to run a command on `create`, which can return some value that will be stored in the state file and persist for the life of the stack (or until the resource is destroyed or replaced). The example below uses this as an alternative to the `random` package to create some randomness which is stored in Pulumi state.
```typescript
import { local } from "@pulumi/command";

const random = new local.Command("random", {
    create: "openssl rand -hex 16",
});

export const output = random.stdout;
```
```go
package main

import (
	"github.com/pulumi/pulumi-command/sdk/go/command/local"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		random, err := local.NewCommand(ctx, "random", &local.CommandArgs{
			Create: pulumi.String("openssl rand -hex 16"),
		})
		if err != nil {
			return err
		}

		ctx.Export("output", random.Stdout)
		return nil
	})
}
```
This example creates an EC2 instance, and then uses `remote.Command` and `remote.CopyFile` to run commands and copy files to the remote instance (via SSH). Similar things are possible with Azure, Google Cloud, and other cloud providers' virtual machines. Support for Windows-based VMs is being tracked here.
Note that implicit and explicit (`dependsOn`) dependencies can be used to control the order in which these `Command` and `CopyFile` resources are constructed relative to each other and to the cloud resources they depend on. This ensures that the `create` operations run after all dependencies are created, and the `delete` operations run before all dependencies are deleted.

Because the `Command` and `CopyFile` resources replace on changes to their connection, if the EC2 instance is replaced, the commands will all re-run on the new instance (and the `delete` operations will run on the old instance).

Note also that `deleteBeforeReplace` can be composed with `Command` resources to ensure that the `delete` operation on an "old" instance is run before the `create` operation of the new instance, in case a scarce resource is managed by the command. Similarly, other resource options can naturally be applied to `Command` resources, like `ignoreChanges`.
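As a minimal sketch of combining these options (shown in Pulumi YAML; the resource name and script path are hypothetical), a `Command` accepts `deleteBeforeReplace` and `ignoreChanges` like any other resource:

```yaml
resources:
  provision:
    type: command:local:Command
    properties:
      create: ./provision.sh
    options:
      # Run the old resource's delete before creating its replacement,
      # in case the command manages a scarce resource.
      deleteBeforeReplace: true
      # Don't trigger updates when the script text itself changes.
      ignoreChanges:
        - create
```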
```typescript
import { interpolate, Config } from "@pulumi/pulumi";
import { local, remote, types } from "@pulumi/command";
import * as aws from "@pulumi/aws";
import * as fs from "fs";
import * as os from "os";
import * as path from "path";
import { size } from "./size";

const config = new Config();
const keyName = config.get("keyName") ?? new aws.ec2.KeyPair("key", { publicKey: config.require("publicKey") }).keyName;
const privateKeyBase64 = config.get("privateKeyBase64");
const privateKey = privateKeyBase64 ? Buffer.from(privateKeyBase64, 'base64').toString('ascii') : fs.readFileSync(path.join(os.homedir(), ".ssh", "id_rsa")).toString("utf8");

const secgrp = new aws.ec2.SecurityGroup("secgrp", {
    description: "Foo",
    ingress: [
        { protocol: "tcp", fromPort: 22, toPort: 22, cidrBlocks: ["0.0.0.0/0"] },
        { protocol: "tcp", fromPort: 80, toPort: 80, cidrBlocks: ["0.0.0.0/0"] },
    ],
});

const ami = aws.ec2.getAmiOutput({
    owners: ["amazon"],
    mostRecent: true,
    filters: [{
        name: "name",
        values: ["amzn2-ami-hvm-2.0.????????-x86_64-gp2"],
    }],
});

const server = new aws.ec2.Instance("server", {
    instanceType: size,
    ami: ami.id,
    keyName: keyName,
    vpcSecurityGroupIds: [secgrp.id],
}, { replaceOnChanges: ["instanceType"] });

// Now set up a connection to the instance and run some provisioning operations on the instance.

const connection: types.input.remote.ConnectionArgs = {
    host: server.publicIp,
    user: "ec2-user",
    privateKey: privateKey,
};

const hostname = new remote.Command("hostname", {
    connection,
    create: "hostname",
});

new remote.Command("remotePrivateIP", {
    connection,
    create: interpolate`echo ${server.privateIp} > private_ip.txt`,
    delete: `rm private_ip.txt`,
}, { deleteBeforeReplace: true });

new local.Command("localPrivateIP", {
    create: interpolate`echo ${server.privateIp} > private_ip.txt`,
    delete: `rm private_ip.txt`,
}, { deleteBeforeReplace: true });

const sizeFile = new remote.CopyFile("size", {
    connection,
    localPath: "./size.ts",
    remotePath: "size.ts",
});

const catSize = new remote.Command("checkSize", {
    connection,
    create: "cat size.ts",
}, { dependsOn: sizeFile });

export const confirmSize = catSize.stdout;
export const publicIp = server.publicIp;
export const publicHostName = server.publicDns;
export const hostnameStdout = hostname.stdout;
```
There may be cases where it is useful to run some code within an AWS Lambda or other serverless function during the deployment. For example, this may allow running some code from within a VPC, or with a specific role, without needing to have persistent compute available (such as the EC2 example above).
Note that the Lambda function itself can be created within the same Pulumi program, and then invoked after creation.
The example below simply creates some random value within the Lambda, which is a very roundabout way of doing the same thing as the first "random" example above, but this pattern can be used for more complex scenarios where the Lambda does things a local script could not.
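As a side note on the callback's arithmetic (a standalone sketch, not part of the deployment): `randomBytes(len / 2)` yields a hex string of exactly `len` characters, since each byte encodes as two hex digits.

```typescript
import * as crypto from "crypto";

// Each random byte renders as two hex characters, so a hex string of
// length `len` needs len / 2 bytes (assuming an even `len`).
function randomHex(len: number): string {
    return crypto.randomBytes(len / 2).toString("hex");
}

console.log(randomHex(10).length); // -> 10
```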
```typescript
import { local } from "@pulumi/command";
import * as aws from "@pulumi/aws";
import * as crypto from "crypto";

const f = new aws.lambda.CallbackFunction("f", {
    publish: true,
    callback: async (ev: any) => {
        return crypto.randomBytes(ev.len / 2).toString('hex');
    }
});

const rand = new local.Command("execf", {
    create: `aws lambda invoke --function-name "$FN" --payload '{"len": 10}' --cli-binary-format raw-in-base64-out out.txt >/dev/null && cat out.txt | tr -d '"' && rm out.txt`,
    environment: {
        FN: f.qualifiedArn,
        AWS_REGION: aws.config.region!,
        AWS_PAGER: "",
    },
});

export const output = rand.stdout;
```
Using `local.Command` with CURL to manage an external REST API: this example uses `local.Command` to create a simple resource provider for managing GitHub labels, by invoking `curl` commands against the GitHub REST API on `create` and `delete`. A similar approach could be applied to build other simple providers against any REST API directly from within Pulumi programs in any language. This approach is somewhat limited by the fact that `local.Command` does not yet support `diff`/`read`. Support for Read and Diff may be added in the future.
This example also shows how `local.Command` can be used as an implementation detail inside a nicer abstraction, like the `GitHubLabel` component defined below.
```typescript
import * as pulumi from "@pulumi/pulumi";
import * as random from "@pulumi/random";
import { local } from "@pulumi/command";

interface LabelArgs {
    owner: pulumi.Input<string>;
    repo: pulumi.Input<string>;
    name: pulumi.Input<string>;
    githubToken: pulumi.Input<string>;
}

class GitHubLabel extends pulumi.ComponentResource {
    public url: pulumi.Output<string>;

    constructor(name: string, args: LabelArgs, opts?: pulumi.ComponentResourceOptions) {
        super("example:github:Label", name, args, opts);

        const label = new local.Command("label", {
            create: "./create_label.sh",
            delete: "./delete_label.sh",
            environment: {
                OWNER: args.owner,
                REPO: args.repo,
                NAME: args.name,
                GITHUB_TOKEN: args.githubToken,
            },
        }, { parent: this });

        const response = label.stdout.apply(JSON.parse);
        this.url = response.apply((x: any) => x.url as string);
    }
}

const config = new pulumi.Config();
const rand = new random.RandomString("s", { length: 10, special: false });

const label = new GitHubLabel("l", {
    owner: "pulumi",
    repo: "pulumi-command",
    name: rand.result,
    githubToken: config.requireSecret("githubToken"),
});

export const labelUrl = label.url;
```
```shell
# create_label.sh
curl \
  -s \
  -X POST \
  -H "authorization: Bearer $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github.v3+json" \
  https://api.github.com/repos/$OWNER/$REPO/labels \
  -d "{\"name\":\"$NAME\"}"
```
```shell
# delete_label.sh
curl \
  -s \
  -X DELETE \
  -H "authorization: Bearer $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github.v3+json" \
  https://api.github.com/repos/$OWNER/$REPO/labels/$NAME
```
There are cases where it's important to run some cleanup operation before destroying a resource, such as when destroying the resource does not itself perform an orderly cleanup. For example, destroying an EKS cluster will not ensure that all Kubernetes object finalizers are run, which may leak external resources managed by those Kubernetes resources. This example shows how a `delete`-only `Command` can be used to run some cleanup within a cluster before destroying it.
```yaml
resources:
  cluster:
    type: eks:Cluster

  cleanupKubernetesNamespaces:
    # We could also use `RemoteCommand` to run this from
    # within a node in the cluster.
    type: command:local:Command
    properties:
      # This will run before the cluster is destroyed.
      # Everything else will need to depend on this resource
      # to ensure this cleanup doesn't happen too early.
      delete: |
        kubectl --kubeconfig <(echo "$KUBECONFIG_DATA") delete namespace nginx
      # Process substitution "<()" doesn't work in the default interpreter sh.
      interpreter: ["/bin/bash", "-c"]
      environment:
        KUBECONFIG_DATA: "${cluster.kubeconfigJson}"
```
```typescript
import * as pulumi from "@pulumi/pulumi";
import * as command from "@pulumi/command";
import * as eks from "@pulumi/eks";

const cluster = new eks.Cluster("cluster", {});

// We could also use `RemoteCommand` to run this from within a node in the cluster
const cleanupKubernetesNamespaces = new command.local.Command("cleanupKubernetesNamespaces", {
    // This will run before the cluster is destroyed. Everything else will need to
    // depend on this resource to ensure this cleanup doesn't happen too early.
    "delete": "kubectl --kubeconfig <(echo \"$KUBECONFIG_DATA\") delete namespace nginx\n",
    // Process substitution "<()" doesn't work in the default interpreter sh.
    interpreter: [
        "/bin/bash",
        "-c",
    ],
    environment: {
        KUBECONFIG_DATA: cluster.kubeconfigJson,
    },
});
```
When a local command creates assets as part of its execution, these can be captured by specifying `assetPaths` or `archivePaths`.
```typescript
const lambdaBuild = local.runOutput({
    dir: "../my-function",
    command: `yarn && yarn build`,
    archivePaths: ["dist/**"],
});

new aws.lambda.Function("my-function", {
    code: lambdaBuild.archive,
    // ...
});
```
The `assetPaths` and `archivePaths` properties take a list of 'globs':

- Use `/` as the path separator on all platforms, including Windows.
- Globs prefixed with `!` are 'exclude' rules.
- `*` matches anything except `/`.
- `**` matches anything, including `/`.
- Returned paths are relative to the command's directory and don't start with `./`, e.g. `file.txt` or `subfolder/file.txt`.

Given the rules:
```
- "assets/**"
- "src/**.js"
- "!**secret.*"
```
When evaluating against this folder:
```
- assets/
  - logos/
    - logo.svg
- src/
  - index.js
  - secret.js
```
The following paths will be returned:
```
- assets/logos/logo.svg
- src/index.js
```
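To make the matching rules concrete, here is a small standalone sketch of the semantics described above. This is an illustration only, not the provider's actual implementation, and the glob-to-regex translation is simplified:

```typescript
// Translate a glob into a regular expression following the rules above:
// `**` matches anything including `/`, `*` matches anything except `/`.
function globToRegExp(glob: string): RegExp {
    let pattern = "";
    for (let i = 0; i < glob.length; i++) {
        const c = glob[i];
        if (c === "*") {
            if (glob[i + 1] === "*") {
                pattern += ".*"; // `**` crosses directory boundaries
                i++;
            } else {
                pattern += "[^/]*"; // `*` stays within one path segment
            }
        } else if ("\\^$.|?+()[]{}".includes(c)) {
            pattern += "\\" + c; // escape regex metacharacters
        } else {
            pattern += c;
        }
    }
    return new RegExp(`^${pattern}$`);
}

// Evaluate rules in order; a `!`-prefixed rule excludes paths that an
// earlier rule included.
function matchGlobs(paths: string[], rules: string[]): string[] {
    return paths.filter((path) => {
        let included = false;
        for (const rule of rules) {
            const exclude = rule.startsWith("!");
            const re = globToRegExp(exclude ? rule.slice(1) : rule);
            if (re.test(path)) {
                included = !exclude;
            }
        }
        return included;
    });
}

const files = [
    "assets/logos/logo.svg",
    "src/index.js",
    "src/secret.js",
];
console.log(matchGlobs(files, ["assets/**", "src/**.js", "!**secret.*"]));
// -> ["assets/logos/logo.svg", "src/index.js"]
```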
Please refer to Contributing to Pulumi for installation guidance.
Run the following commands to install Go modules, generate all SDKs, and build the provider:
```shell
$ make ensure
$ make build
$ make install
```
Add the `bin` folder to your `$PATH`, or copy the `bin/pulumi-resource-command` file to another location in your `$PATH`.
Navigate to the simple example and run Pulumi:
```shell
$ cd examples/simple
$ yarn link @pulumi/command
$ yarn install
$ pulumi up
```
No vulnerabilities found.

OpenSSF Scorecard results (last scanned on 2025-07-07):

- 30 commit(s) and 6 issue activity found in the last 90 days -- score normalized to 10
- no dangerous workflow patterns detected
- no binaries found in the repo
- license file detected
- security policy file detected
- packaging workflow detected
- dependency not pinned by hash detected -- score normalized to 8
- Found 4/12 approved changesets -- score normalized to 3
- no effort to earn an OpenSSF best practices badge detected
- project is not fuzzed
- Project has not signed or included provenance with any releases.
- detected GitHub workflow tokens with excessive permissions
- SAST tool is not run on all commits -- score normalized to 0
- 56 existing vulnerabilities detected

The Open Source Security Foundation is a cross-industry collaboration to improve the security of open source software (OSS). The Scorecard provides security health metrics for open source projects.