Author with AI
Use your favorite LLM (ChatGPT, Claude, etc.) to help you write Gruntwork Runbooks.
How to use it
- Copy the following prompt into your LLM:
Write me a runbook that [describe what you want the runbook to do]. Use the following context, which is the complete Runbooks documentation, to help you write the runbook.
- Copy the LLM context below and paste it into the same message.
- Send it! The LLM will have full knowledge of all available blocks, props, and conventions.
For best results, be specific about what your runbook should do: which tools it installs, what inputs it collects, what scripts it runs, and what checks it performs.
LLM Context
# Gruntwork Runbooks Documentation
> Gruntwork Runbooks turns your infrastructure expertise into guided
> workflows that any developer can safely execute.
>
> This document contains the full documentation for authoring and using Runbooks.
> Paste it into your LLM for complete context when writing runbooks.
Source: https://runbooks.gruntwork.io
---
## Intro
### Overview
## Meet Runbooks!
Gruntwork Runbooks are interactive markdown documents that enable subject matter experts to capture their knowledge and expertise in a way that is easy for others to understand and use.
Gruntwork Runbooks is [open source](https://github.com/gruntwork-io/runbooks), built by [Gruntwork](https://gruntwork.io), and free to use.
### What can Runbooks do?
The Runbooks tool loads individual "runbook files" that:
- Render markdown text
- Collect user input from dynamically generated web forms
- Propagate user-entered values to:
  - Generate customized files and folders based on templates
  - Run customized scripts
  - Run customized "checks"
This collection of primitives -- inputs, code templates, scripts, and checks -- is a streamlined way to capture expertise and a highly efficient (and enjoyable) way for someone else to consume it.
### How is Runbooks useful?
DevOps and Platform Engineers are often the "expertise bottlenecks" when it comes to enabling their organization to achieve an infrastructure goal.
Historically, there has not been a straightforward way to "capture" this hard-won expertise and make it available to the application engineers and others who depend on it to accomplish infrastructure goals.
In any situation where a "consumer" of expertise is blocked on the "producer" of expertise, Runbooks is an opportunity to free up the expert and empower consumers.
In practice, Runbooks is especially useful for:
- developer self-service
- setting up landing zones
- documenting internal processes or standard operating procedures
- codifying all the ways to use a given IaC pattern
Learn more by reading about [Runbooks use cases](/intro/use_cases).
### What does a runbook look like?
When a user runs `runbooks open /path/to/runbook` (or `runbooks open <remote-url>`), here's a sample of what they'll see:




For a full walkthrough of what's happening here, see the [UI tour](/intro/ui_tour).
### How is a runbook structured?
A typical Runbook directory looks like the file tree below. In this example, the file names are somewhat arbitrary, but the folder structure is conventional.
```
├── runbook.mdx
├── assets/
│   └── architecture.jpg
├── checks/
│   └── preflight_checks.sh
├── scripts/
│   └── install_mise.sh
└── templates/
    └── consume_lambda_module/
        ├── boilerplate.yml
        └── terragrunt.hcl
```
You can learn more about how to write a Runbook in the [Authoring Runbooks](/authoring/overview) section, but for now, the basic idea is that a `runbook.mdx` file contains a mix of markdown and interactive [blocks](/authoring/blocks/). Those blocks optionally reference files in the `assets/`, `checks/`, `scripts/`, and `templates/` directories.
### How do you open a Runbook?
**Runbook consumers** want to apply the expertise of someone else (**Runbooks authors**), so they consume Runbooks by opening them on their local machine, and using them in their web browser.
To open a Runbook, you can use the `runbooks open` command with a local path:
```bash
runbooks open /path/to/runbook
```
Or open a runbook directly from a remote URL — no need to clone the repo first:
```bash
runbooks open https://github.com/org/repo/tree/main/runbooks/launch-rds
```
Runbooks supports GitHub and GitLab browser URLs, as well as OpenTofu/Terraform-style source addresses. See the [runbooks open](/commands/open/) docs for all supported formats and authentication options for private repos.
## Next
To see a Runbook in action, let's take the UI tour!
---
### UI Tour
On this page, we'll walk through the **Runbooks consumer** experience when a user opens the [lambda feature demo](https://github.com/gruntwork-io/runbooks/tree/main/testdata/sample-runbooks/lambda). Our user's goal is to launch an AWS Lambda function in a way that matches their organization's standards.
You can follow either the [written walkthrough](#written-walkthrough) or the [video walkthrough](#video-walkthrough).
## Written walkthrough
First, the user installs Runbooks and downloads the `lambda` feature demo to their local file system.
Now, they open the runbook.
```bash
runbooks open /path/to/lambda
```
Runbooks launches a web browser to access `localhost` on the default port (7825) and renders the runbook.

It looks like this runbook will help us launch an AWS Lambda function.
So far, the runbook is just rendering markdown text. Useful, but not very interesting. Let's see what else this runbook contains.

Here, the user is given some "pre-flight checks" to make sure their local system has the right tools installed (in this case `mise`, a package manager). The user can click "Check" and Runbooks will run the given command (in this case `mise --version && mise self-update`) directly on their local machine.
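Under the hood, a check like this is just a shell command whose exit code determines pass or fail. Here is a minimal sketch of the idea (illustrative only — not the demo's exact script; the `require_tool` helper is a hypothetical name):

```shell
# A check is just a command: exit code 0 means pass, non-zero means fail.
# Generic form: verify a required tool is installed before continuing.
require_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1 found"
  else
    echo "$1 is not installed" >&2
    return 1
  fi
}

require_tool sh    # in the lambda demo, the tool being checked is `mise`
```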

The Runbooks consumer can just use the web UI without knowing anything about how the Runbook is written. For the Runbook author, that first gray box is a [Check block](/authoring/blocks/check/) and is defined like this:
```mdx
```
The key point here is that authors declare what they want to happen, and Runbooks dynamically renders it as an interactive web UI.
Let's scroll a little further down so we can actually generate some of the code we need to launch our Lambda function.

Here the Runbook is dynamically rendering a web form to capture user values by using a [Template block](/authoring/blocks/template/). To collect these specific values from the user, the Runbooks author declared a set of variables in their runbook like this:
```yaml
variables:
  - name: Environment
    type: enum
    description: Target environment for deployment
    options:
      - non-prod
      - prod
    default: non-prod
    x-section: Deployment Settings
  - name: AwsRegion
    type: enum
    options:
      - us-east-1
      - us-east-2
      - us-west-1
      - us-west-2
      - eu-central-1
      - eu-west-1
      - eu-west-2
    description: The AWS region to deploy the Lambda function to
    default: "us-west-2"
    validations:
      - required
    x-section: Deployment Settings
  - name: FunctionName
    type: string
    description: Name for your Lambda function (will be suffixed with environment)
    default: example-lambda
    validations:
      - required
    x-section: Function Settings
  - name: Description
    type: string
    description: Description of the Lambda function (optional)
    x-section: Function Settings
...
```
Back to the user: they click a "Generate" button at the bottom of the form (not shown), and Runbooks generates a set of files based on a code template defined by the Runbook author, all parameterized by the values entered by the user.

As the user changes values in the form, the rendered files will update in real time. This lets the user see exactly how their form values impact the code that's generated.
The generated files are written directly to the user's local computer. That means we can easily create a GitHub Pull Request (or similar) with these files. In this case, the Runbook author included a script to do just that.

Here, the Runbook author is using a [Command block](/authoring/blocks/command/) to create the Pull Request, and notice how the author configured the Command block to ask for additional values (GitHub org name, GitHub repo name). Those values will be used to customize the script that runs.
Finally, the user gets a Check block to validate that the Lambda function deployed successfully.
And now you've seen the Runbook experience!
## Video walkthrough
You can also view the above as a full video walkthrough.
<div style="position: relative; padding-bottom: 56.25%; height: 0;"><iframe src="https://www.loom.com/embed/0848381b1e174670895e3228a69b865a" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;"></iframe></div>
## Next
Now that you understand how Runbooks work, it's time to install the CLI tool on your local machine!
---
### Installation
### Homebrew (macOS & Linux)
The quickest way to install Runbooks is via [Homebrew](https://brew.sh/):
```bash
brew install gruntwork-io/tap/runbooks
```
To install a specific version:
```bash
brew install gruntwork-io/tap/runbooks@0.5.2
```
To upgrade to the latest version:
```bash
brew upgrade gruntwork-io/tap/runbooks
```
### Installing pre-built binaries
Alternatively, you can download pre-built binaries from the [GitHub releases page](https://github.com/gruntwork-io/runbooks/releases).
#### macOS
**Using curl (Apple Silicon M1/M2/M3/etc.):**
```bash
# Download the latest release
curl -Lo runbooks https://github.com/gruntwork-io/runbooks/releases/latest/download/runbooks_darwin_arm64
# Make it executable
chmod +x runbooks
# Move to your PATH
sudo mv runbooks /usr/local/bin/
```
**Using curl (Intel Mac):**
```bash
# Download the latest release
curl -Lo runbooks https://github.com/gruntwork-io/runbooks/releases/latest/download/runbooks_darwin_amd64
# Make it executable
chmod +x runbooks
# Move to your PATH
sudo mv runbooks /usr/local/bin/
```
> **Tip:** Not sure which chip you have? Run `uname -m` in your terminal. If it says `arm64`, you have Apple Silicon. If it says `x86_64`, you have an Intel Mac.
#### Linux
**Using curl (x86_64/AMD64):**
```bash
# Download the latest release
curl -Lo runbooks https://github.com/gruntwork-io/runbooks/releases/latest/download/runbooks_linux_amd64
# Make it executable
chmod +x runbooks
# Move to your PATH
sudo mv runbooks /usr/local/bin/
```
**Using curl (ARM64):**
```bash
# Download the latest release
curl -Lo runbooks https://github.com/gruntwork-io/runbooks/releases/latest/download/runbooks_linux_arm64
# Make it executable
chmod +x runbooks
# Move to your PATH
sudo mv runbooks /usr/local/bin/
```
> **Tip:** Not sure which architecture you have? Run `uname -m` in your terminal. If it says `x86_64`, use the amd64 version. If it says `aarch64` or `arm64`, use the arm64 version.
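The tip above can also be automated: a small script can map `uname -m` to the right Linux release asset (a sketch; the URLs are the same release links shown above, and `asset_for_arch` is a hypothetical helper name):

```shell
# Pick the right Linux release asset based on the machine architecture.
asset_for_arch() {
  case "$1" in
    x86_64)        echo "runbooks_linux_amd64" ;;
    aarch64|arm64) echo "runbooks_linux_arm64" ;;
    *)             echo "unsupported architecture: $1" >&2; return 1 ;;
  esac
}

# Build the download URL for this machine.
asset="$(asset_for_arch "$(uname -m)")"
echo "https://github.com/gruntwork-io/runbooks/releases/latest/download/$asset"
```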
#### Windows
**Using PowerShell:**
```powershell
# Download the latest release
Invoke-WebRequest -Uri "https://github.com/gruntwork-io/runbooks/releases/latest/download/runbooks_windows_amd64.exe" -OutFile "runbooks.exe"
# Move to a directory in your PATH (create if needed)
New-Item -ItemType Directory -Force -Path "$env:LOCALAPPDATA\runbooks"
Move-Item -Force runbooks.exe "$env:LOCALAPPDATA\runbooks\runbooks.exe"
```
Then add `%LOCALAPPDATA%\runbooks` to your PATH:
1. Press `Win + R`, type `sysdm.cpl`, and press Enter
2. Go to the **Advanced** tab and click **Environment Variables**
3. Under "User variables", select **Path** and click **Edit**
4. Click **New** and add `%LOCALAPPDATA%\runbooks`
5. Click **OK** to save
> **Tip:** Alternatively, you can add to PATH via PowerShell (requires reopening your terminal):
```powershell
[Environment]::SetEnvironmentVariable("Path", $env:Path + ";$env:LOCALAPPDATA\runbooks", "User")
```
### Building from source
As an alternative to downloading pre-built binaries, you can build the binaries yourself:
1. Clone the repository:
```bash
git clone https://github.com/gruntwork-io/runbooks.git
cd runbooks
```
2. Build the frontend:
```bash
cd web
bun install
bun run build
```
3. Build the binary:
```bash
go build -o runbooks main.go
```
4. Move the binary to your PATH (optional):
```bash
sudo mv runbooks /usr/local/bin/
```
5. Enable execute permissions for your binary:
```bash
chmod u+x /usr/local/bin/runbooks
```
6. Verify installation:
```bash
runbooks
```
### Verifying installation
Once installed, you can verify that `runbooks` is working by running:
```bash
runbooks version
```
## Next
Now that you have Runbooks installed, let's write your first runbook!
---
### Write Your First Runbook
Let's create a simple runbook from scratch!
## Create the runbook file
1. Create a new folder called `my-first-runbook`.
1. Inside that folder, create a file called `runbook.mdx`.
1. Create subfolders called `scripts` and `templates/project`.
1. Copy/paste the following content into each file:
You can also [download the complete example](/my-first-runbook.zip) as a zip file.
Make the script executable:
```bash
chmod +x my-first-runbook/scripts/setup.sh
```
## Open your runbook
Save the files and open the runbook:
```bash
runbooks open my-first-runbook/
```
Your browser will open showing the runbook interface. Follow the steps to execute each block!
## Next
This Runbook generates some files. Let's learn more about the [files workspace](/intro/files_workspace/) — where generated files and cloned repositories show up in the Runbooks UI.
---
### Files Workspace
When you use a Runbook, you'll often work with files, whether they're freshly generated from templates, or cloned from a git repository. The **files workspace** (labeled just "Files" in the Runbooks UI) is the panel on the right side of the Runbooks UI where all of these files live.
This page explains the two types of files you'll encounter, when each one appears, and how to work with them.
## Two types of files
The files workspace can show two different types of files.
- [Generated files](#generated-files)
- [Repository files](#repository-files)
Let's start by learning about them at a high level.
### Generated files
##### Why use them
You want the runbook to generate new files and expect the user will then perform some _out-of-band action_ with the files, such as manually integrating them into a separate project.
##### Where they come from
A [Template](/authoring/blocks/template/) block generates files when the user clicks "Generate" (the default `target` is `"generated"`). [Command](/authoring/blocks/command/) and [Check](/authoring/blocks/check/) blocks generate files when they run scripts that write files under `$GENERATED_FILES`.
##### Where they're stored
The files are stored in a `generated/` folder on the user's local machine, relative to the [working directory](/commands/open#flags), or whatever folder the user specifies with the [output-path](/commands/open#flags) flag.
##### When you see them
After the user clicks "Generate" on a Template block, or after a Command or Check block writes files to `$GENERATED_FILES`, the files workspace is automatically opened.
### Repository files
##### Why use them
You want the user to `git clone` a repository and then create, modify, or delete files in that repo using subsequent blocks in the runbook. Presumably, after git cloning the files and making changes, the runbook will either `git push` the changes or open a pull request on them.
##### Where they come from
Initially, the files are cloned from a git repository via a [GitClone](/authoring/blocks/gitclone/) block.
Any subsequent changes come from a [Template](/authoring/blocks/template/) block that sets `target="worktree"`, or from [Command](/authoring/blocks/command/) and [Check](/authoring/blocks/check/) blocks that run scripts that write to the `$REPO_FILES` environment variable.
##### Where they're stored
The files are stored in a local path (relative to the [working directory](/commands/open#flags)) specified in the [GitClone](/authoring/blocks/gitclone/) block.
##### When you see them
After a GitClone block successfully clones a repository, the files workspace automatically switches to the **Repository** tab to show the cloned files.
> **Tip:** If a runbook only produces generated files (no git clone), you'll only see the "Generated files" tab. If it only clones a repository, you'll only see the "Repository" tab. When both are present, you can switch between them using the context bar at the top of the workspace.
## Generated files
Generated files are new files that the runbook creates on your local machine. There are two main ways they get created:
### Template blocks
The [Template](/authoring/blocks/template/) and [TemplateInline](/authoring/blocks/templateinline/) blocks generate files from [Boilerplate templates](/authoring/boilerplate/). When you fill in the form fields and click "Generate", the template engine processes your inputs and creates the output files.
```mdx
```
Template blocks automatically re-render when you change input values, so your generated files stay in sync with the form.
### Command and Check blocks with $GENERATED_FILES
[Command](/authoring/blocks/command/) and [Check](/authoring/blocks/check/) blocks can write files under the `$GENERATED_FILES` path to capture them:
```mdx
<Command
  command={`terraform output -json > "$GENERATED_FILES/tf-outputs.json"`}
  title="Export Terraform outputs"
/>
```
Any files written to `$GENERATED_FILES` automatically appear in the workspace after the block completes successfully.
### Where generated files are stored
Generated files are persisted to a folder on your local machine — they're not ephemeral or stored in memory.
By default, generated files are written to a `generated/` folder in the working directory (the directory you run the `runbooks` command from):
- **generated/** ← Created in your working directory
  - main.tf
  - variables.tf
  - outputs.tf
You can customize the output location with CLI flags:
```bash
# Use a specific working directory
runbooks open my-runbook --working-dir /path/to/project
# Write to a custom subfolder within the working directory
runbooks open my-runbook --output-path ./infrastructure
# Use an isolated temporary directory (auto-cleaned on exit)
runbooks open my-runbook --working-dir=::tmp
```
> **Tip:** The `--working-dir=::tmp` keyword is useful for sandboxed execution where you don't want generated files to persist after you close the runbook.
### Subdirectories within templates
If your template or script writes to a subdirectory of `$GENERATED_FILES`, the subdirectory will be created automatically. For example, if you write to `$GENERATED_FILES/config/app.json`, the `config/` subdirectory will be created automatically.
- generated/
  - config/
    - app.json
  - main.tf
  - variables.tf
  - outputs.tf
```bash
# In a command script
mkdir -p "$GENERATED_FILES/config"
echo '{}' > "$GENERATED_FILES/config/app.json"
```
## Repository files
Repository files come from a git repository that was cloned by a [GitClone](/authoring/blocks/gitclone/) block in the runbook. When the clone completes, the workspace automatically switches to the **Repository** tab to show the cloned files.
The Repository tab has two sub-tabs:
### All files
The **All files** tab shows the complete file tree of the cloned repository. Click any file to view its contents with syntax highlighting.
### Changed files
The **Changed files** tab shows a diff view of any files that have been modified since the repository was cloned. This is useful when subsequent blocks in the runbook (like Command or Template blocks with `target="worktree"`) write files into the cloned repository. You'll see:
- Which files were added, modified, or deleted
- Line-by-line additions and deletions (similar to a GitHub pull request diff)
- A summary of total changes at the top
> **Note:** The "Changed files" tab appears automatically the first time a change is detected. If no files have been modified, the tab will show zero changes.
### Multiple repositories
If a runbook clones more than one repository (using multiple GitClone blocks), a dropdown appears in the workspace header letting you switch between them.
## Security: Where files can be read and written
Runbooks enforces rules about where files can be read and written:
### Path restrictions
| Rule | Description |
|------|-------------|
| **Within working directory** | Output paths must resolve to locations within the working directory |
| **No directory traversal** | Paths containing `..` (like `../../../etc/passwd`) are rejected |
| **No system directories** | Protected directories like `/etc`, `/usr`, `C:\Windows` are blocked |
### Workspace API restrictions
The workspace APIs (file tree, file content, and git changes) reject any path containing directory traversal sequences. These APIs are scoped to workspace directories — the paths registered by GitClone blocks and the configured output directory — so they only serve files within the intended workspace.
> **Note:** These restrictions apply to template output paths and workspace APIs. Scripts executed via Command blocks have full filesystem access and can write anywhere the user has permissions.
### Protected directories
Runbooks blocks writes to system-critical directories:
- `/`, `/etc`, `/usr`, `/bin`, `/var`, `/home`, `/root`
- `C:\`, `C:\Windows`, `C:\Program Files`, `C:\Users`
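The path restrictions above can be sketched as a simple check (illustrative only — Runbooks' actual implementation also normalizes paths and handles Windows drive letters; `is_safe_output_path` is a hypothetical helper):

```shell
# Illustrative sketch of the output-path rules (not Runbooks' real code).
# Rejects traversal sequences and anything outside the working directory.
is_safe_output_path() {
  path="$1"; workdir="$2"
  case "$path" in
    *..*)          return 1 ;;  # no directory traversal
    "$workdir"/*)  return 0 ;;  # must resolve within the working directory
    *)             return 1 ;;  # everything else (e.g. /etc) is blocked
  esac
}
```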
## Next
Now that you understand how the files workspace works, explore the [use cases](/intro/use_cases/) where Runbooks shines.
---
### Use Cases
Runbooks enables infrastructure experts to scale their expertise. Here are the most common practical scenarios where infrastructure expertise winds up being a painful bottleneck:
## Developer self-service
It can be painful for app developers to look up how to use an infrastructure pattern, learn enough of the technology to write the code, debug it, and deploy it. Sometimes the pain is too much, so app developers lean on infrastructure experts, which shifts the pain from "learning more than I have capacity for right now" to "waiting for the expert to get back to me."
Runbooks let a subject matter expert capture their knowledge and experience so that app developers can easily launch new infrastructure that follows your organization's best practices. For example:
- Create a new K8s Service
- Scaffold a new app repo
- Launch a new database
- Launch a new static website
> **Tip:**
In the [UI tour](../ui_tour), we walk through how an app developer might deploy a Lambda function.
## Landing zones
At some point, most engineering organizations need a way to systematically create new AWS accounts, GCP projects, or Azure subscriptions, organize them correctly, and install the proper baseline of resources for security and governance. This problem statement is broadly referred to as setting up a "Landing zone."
For a problem statement that can be summarized in one sentence, the solutions available for Landing Zones are surprisingly complex and often treat infrastructure-as-code as a second-class citizen.
One way to think about setting up a landing zone is to solve the following problems:
1. Define your baseline (e.g. as OpenTofu modules)
2. Define a way to create a new AWS account, GCP project, or Azure subscription (e.g. via the cloud API, or as an OpenTofu module)
3. Present a UI that allows users to configure their desired account/project/subscription
There are well-established solutions for [1] (like defining an OpenTofu module), and [2] (like calling a cloud API). Runbooks can serve as an excellent way to combine [1] and [2] into [3], a configurable, powerful UI that asks users to configure their desired new account/project/subscription (e.g. with an email, name, description), calls the relevant API to provision the account/project/subscription, and then generates the necessary code to install the baseline.
Because Runbooks can ask for a user-entered value like an AWS account ID, you can plumb this value through your generated code and scripts to accomplish whatever you need.
You could even have the runbook provision SSO access, teach the user how to use the newly provisioned resources, guide them to review policies, or give them customized links to access their account, all without coding any custom user interface.
## Streamline internal processes
Many internal processes are captured as static documentation. But with Runbooks, you can complete the actual process by the time you're done reading the Runbook! For example:
- Provision a new AWS Account, configure the account baseline and all networking configurations, and validate everything along the way.
- Stand up a new customer with all the infrastructure and all the information they need to get started.
## Document IaC modules
When you create a reusable infrastructure-as-code (IaC) module, the code itself is not enough to be useful. You also need to document how to use the module, show common usage examples, and validate that it works as expected.
Traditional documentation describes your module, but Runbooks can actually collect configuration from the user, generate the module code, and validate that the module was created correctly. For example:
- Create an SNS topic that notifies a Slack channel
- Create an SNS topic that notifies an email address
- Create an SNS topic that notifies a PagerDuty service
You might define one Runbook for each of these, or a single Runbook that can create any of these configurations.
## Next
You're almost done with the intro! As a last step, let's see how [Runbooks compares to alternatives](/intro/runbooks_vs_other/).
---
### Runbooks vs. Other
## vs. Static documentation
Static documentation is often hosted in services like Notion, Confluence, or directly in git repos (e.g. on GitHub or GitLab). It's easy to write, but can quickly get out of date, lacks automated validation, and requires users to manually copy/paste and adapt the code samples to their unique needs.
For consumers, Runbooks can generate the files they need based on custom user inputs entered from a web form, execute arbitrary commands to automate other steps, and give built-in validation checks so users can "do a thing, then check a thing." In short, Runbooks can streamline the "full experience" for consumers, not just a small part of it.
For authors, writing runbooks is just as easy as writing documentation: you author a runbook by writing an MDX file, which is markdown plus a limited set of special components that Runbooks calls [blocks](/authoring/blocks).
Runbooks also gives authors an opportunity for fast feedback loops. When users are frustrated by static documentation, they often suffer silently. But the nature of Runbooks enables users to give specific feedback about a missing check, missing command, or missing input value for templates. Authors can iteratively incorporate this feedback so that a Runbook can gradually grow to reflect the accumulated body of experience of all its consumers, leaving the next Runbook consumer with a surprisingly streamlined experience.
Finally, because Runbooks can generate arbitrary code, Runbook authors can even produce automated tests along with the "consumable" code to validate that the Runbook works as expected.
## vs. Internal developer portals
Internal Developer Portals (IDPs) like [Backstage](https://backstage.io/) and [Port](https://www.getport.io/) provide a unified interface for developers and typically include service catalogs, software templates, API documentation, and dashboards that give visibility into the entire engineering ecosystem.
One popular use case for IDPs is template generation. For example, Backstage's Scaffolder plugin enables templates that use the Nunjucks templating language. Backstage presents users with a nice catalog of templates to choose from; however, the templating experience itself suffers from several shortcomings.
End users cannot preview the code they will generate in real time, cannot easily validate that the generated template performed as expected, and any documentation associated with the template is typically generated as code rather than being part of the experience.
For template authors, the authoring experience can be challenging, requiring repeated runs of the same template and wrestling with Backstage-specific configuration issues. In addition, Backstage itself is non-trivial to set up and maintain.
By contrast, Runbooks offers a self-contained, first-class templating experience for both end users and template authors. Consumers install runbooks from GitHub, run `runbooks open /path/to/runbook` (or `runbooks open https://github.com/org/repo/tree/main/path/to/runbook` for a remote URL), and can instantly read rich documentation, see the files they will generate in real time, run a customized set of commands, and validate that everything is working correctly.
For authors, there is nothing to configure. You download the `runbooks` binary, author a Runbook by writing a `runbook.mdx` file, and see your changes in real time with `runbooks watch /path/to/runbook`. Authors can test template generation locally using the Runbooks tool itself, or for even more control over the feedback loop, they can opt to use the [Gruntwork Boilerplate](https://github.com/gruntwork-io/boilerplate) templating engine directly. As a result, authors have real-time feedback loops on everything they create.
## vs. Jupyter Notebooks
Jupyter notebooks are interactive computational documents that combine live code, visualizations, narrative text, and equations in a single environment. They follow a "literate programming" paradigm where documentation and code coexist, making them ideal for data analysis, scientific computing, education, and reproducible research.
Jupyter Notebooks are oriented heavily around IPython, an extension of standard Python, and maintain a "Python program state" as you work.
Runbooks also combine both code and documentation in a single environment, however there are a few key differences compared to Jupyter Notebooks:
1. **Author-Focused vs Consumer-Focused**: Jupyter Notebooks are optimized for the author, with a special focus on giving authors a useful "canvas" to incrementally evolve program state and produce artifacts. They are especially well suited to enabling notebook authors to "show their work."
By contrast, Runbooks is focused more on the _consumer_ of the Runbook than the author. In the Runbooks way of thinking, authors are not "exploring ideas," but codifying their knowledge and insights around a specific DevOps pattern. Runbook consumers then get a first-class experience learning and applying this pattern for their needs.
Moreover, running a Jupyter Notebook is not straightforward for those who do it only periodically. By contrast, Runbooks can be opened by downloading the runbooks binary and running `runbooks open /path/to/runbook` or pointing it at a remote URL like `runbooks open https://github.com/org/repo/tree/main/runbooks/my-runbook`.
2. **Internal Program State vs External Artifacts**: With each "cell" in a Jupyter Notebook, the notebook author evolves the state of a Python program.
By contrast, with each block in a Runbook, the Runbook consumer is making progress against their use case and generating the artifacts of generated files, updated external state (by running commands), and personal confidence that they are succeeding.
In other words, for Jupyter Notebooks, the "artifact" is updates to an internal program, whereas for Runbooks, the artifact is files, external state changes, and end user confidence.
3. **Optimized for power vs UX**: Jupyter Notebooks are powerful environments that can execute arbitrary code, generate charts, and allow authors to trace back execution history and restart execution.
By contrast, Runbooks offers a less powerful canvas for code execution. For example, Runbooks does not support a concept of "program state" that can be passed down to subsequent blocks. However, Runbooks offers a more streamlined file generation experience, making it simple for Runbook consumers to enter values in a web form to generate custom files, run custom commands, or run custom checks.
In short, Runbooks trades power for a more streamlined UX on a narrower set of highly important capabilities. As a result, any Runbook _could_ be written as a Jupyter Notebook, but the authoring experience would be clumsier, and the end user experience would be more confusing.
---
## CLI
### Overview
The Runbooks CLI provides several commands for working with runbooks. All commands accept a local path or a remote URL as the runbook source.
- [`runbooks open`](/commands/open) — Open a runbook for consumers
- [`runbooks serve`](/commands/serve) — Start the backend server (for developers)
- [`runbooks watch`](/commands/watch) — Watch a runbook for changes (for authors)
### Remote URLs
Every command that accepts a `RUNBOOK_SOURCE` can open a runbook directly from a remote URL, removing the need to manually download any runbook files.
```bash
runbooks open https://github.com/org/repo/tree/main/runbooks/launch-rds
```
See [`runbooks open`](/commands/open/#remote-sources) for supported URL formats and authentication for private repos.
---
### open
# runbooks open
Use `runbooks open` to open a runbook in your browser and use it.
This command is intended for **consumers** of runbooks.
## Usage
```bash
runbooks open RUNBOOK_SOURCE [flags]
```
## Arguments
- `RUNBOOK_SOURCE` - A local path or remote URL pointing to a `runbook.mdx` file, the directory containing a `runbook.mdx` file, or an OpenTofu/Terraform module directory.
### Local paths
```bash
runbooks open ./path/to/runbook
runbooks open /absolute/path/to/runbook
runbooks open ./path/to/runbook.mdx
```
### Remote sources
You can open a runbook directly from a GitHub or GitLab URL without cloning the repo first:
```bash
# GitHub browser URL (copy/paste from your browser)
runbooks open https://github.com/org/repo/tree/main/runbooks/setup-vpc
# GitLab browser URL
runbooks open https://gitlab.com/org/repo/-/tree/main/runbooks/setup-vpc
# OpenTofu/Terraform module source format
runbooks open github.com/org/repo//runbooks/setup-vpc?ref=v1.0
runbooks open "git::https://github.com/org/repo.git//runbooks/setup-vpc?ref=main"
```
| Format | Example |
|--------|---------|
| GitHub browser (dir) | `https://github.com/org/repo/tree/main/runbooks/setup-vpc` |
| GitHub browser (file) | `https://github.com/org/repo/blob/main/runbooks/setup-vpc/runbook.mdx` |
| GitLab browser (dir) | `https://gitlab.com/org/repo/-/tree/main/runbooks/setup-vpc` |
| GitLab browser (file) | `https://gitlab.com/org/repo/-/blob/main/runbooks/setup-vpc/runbook.mdx` |
| OpenTofu GitHub shorthand | `github.com/org/repo//path?ref=v1.0` |
| OpenTofu git::https | `git::https://github.com/org/repo.git//path?ref=main` |
> **Note:** When opening a remote runbook, a temporary working directory is automatically used (equivalent to `--working-dir=::tmp`). You can override this with `--working-dir` if needed.
### OpenTofu/Terraform modules
If `RUNBOOK_SOURCE` points to a directory containing `.tf` files (but no `runbook.mdx`), Runbooks auto-detects it as an OpenTofu/Terraform module. It generates a temporary runbook that parses the module's variables and renders a web form for them.
```bash
# Local module directory
runbooks open ./modules/rds
# Remote module via GitHub URL
runbooks open https://github.com/org/repo/tree/main/modules/rds
# Remote module via OpenTofu shorthand
runbooks open github.com/org/repo//modules/rds?ref=v1.0
```
By default, the auto-generated runbook uses the `::terragrunt` template. Use `--tf-runbook` to choose a different built-in template or provide your own custom runbook. See [Flags](#flags) below.
### Authentication for private repos
For public repositories, no authentication is needed. For private repositories, set one of the following:
- **GitHub:** `GITHUB_TOKEN` or `GH_TOKEN` environment variable, or run `gh auth login`
- **GitLab:** `GITLAB_TOKEN` environment variable, or run `glab auth login`
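For example, before opening a private GitHub runbook you can confirm that credentials are available (a hedged sketch; `GITHUB_TOKEN`/`GH_TOKEN` are the variables named above, and the fallback assumes the GitHub CLI is installed):

```bash
# Check whether GitHub credentials are available before opening a
# private runbook. Falls back to `gh auth status` if no token is set.
if [ -n "${GITHUB_TOKEN:-}${GH_TOKEN:-}" ] || gh auth status >/dev/null 2>&1; then
  auth_status="ok"
else
  auth_status="missing"
fi
echo "GitHub auth: $auth_status"
```

If the check reports `missing`, set a token or run `gh auth login` before retrying.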
## Flags
- `--working-dir <path>` - Base directory for script execution and file generation (default: current directory)
- All relative paths are resolved from this directory
- Can be absolute or relative to current directory
- Use `--working-dir=::tmp` for a temporary directory (automatically cleaned up on exit), useful for isolated testing or sandboxed execution
- `--output-path <path>` - Directory where generated files will be written (default: `generated`)
- Resolved relative to the working directory
- `--tf-runbook <keyword-or-path>` - Select a built-in template or provide a local custom runbook for OpenTofu/Terraform modules. Built-in templates:
- `::terragrunt` (default) — Generates a `terragrunt.hcl` file
- `::terragrunt-github` — Full GitOps workflow: GitHub auth, clone, directory picker, generate `terragrunt.hcl`, and open a pull request
- `::tofu` — Generates a plain `main.tf` file
- Or pass a local path to a custom runbook directory (e.g., `--tf-runbook ./my-runbook/`)
- Remote URLs are not supported — download the template first and pass the local path. This is intentional: a remote `--tf-runbook` could silently pull in an untrusted runbook that executes scripts on your machine.
- `--port <number>` - Port for the backend + frontend server (default: 7825). Useful when running multiple runbooks simultaneously or when the default port is already in use.
- `--no-telemetry` - Disable anonymous telemetry. Can also be set via `RUNBOOKS_TELEMETRY_DISABLE=1` environment variable.
## What it does
When you run `runbooks open`:
1. **Downloads (if remote)** - If given a remote URL, downloads just the runbook directory via sparse git clone
2. **Auto-detects (if TF module)** - If the target is an OpenTofu/Terraform module directory (contains `.tf` files but no `runbook.mdx`), generates a temporary runbook using the selected template
3. **Starts the Backend Server** - Launches a Go-based HTTP server on `localhost` (port 7825 by default, configurable with `--port`)
4. **Launches the Browser** - Opens your default web browser to the server URL
5. **Serves the Frontend** - The web UI connects to the backend API to process the runbook
6. **Keeps Running** - The server continues running until you close the browser or press Ctrl+C
## Troubleshooting
**Port already in use:**
If the default port (7825) is already in use, you'll see an error. Either stop the other process or use `--port` to pick a different port (e.g., `runbooks open --port 8080 ./my-runbook`).
**Browser doesn't open:**
If the browser doesn't open automatically, you can manually navigate to `http://localhost:7825` (or your custom `--port`) after running the command.
**Browser window with the runbook doesn't open:**
Sometimes your browser launches, but you don't see the Runbook. Most likely, the browser opened the Runbooks tab in a window that isn't currently visible. Either look through all your browser windows, or go directly to http://localhost:7825 (or your custom `--port`).
**Runbook not found:**
Make sure the path points to a valid `runbook.mdx` file.
**Authentication errors for remote runbooks:**
If you see "authentication required", make sure you have set the appropriate token for the git host. See the Authentication section above.
---
### serve
# runbooks serve
Use `runbooks serve` to start the backend API server without starting the frontend server or opening the browser.
This command is intended for **developers** of the Runbooks tool itself.
## Usage
```bash
runbooks serve RUNBOOK_SOURCE [flags]
```
## Arguments
- `RUNBOOK_SOURCE` - A local path or remote URL pointing to a `runbook.mdx` file, the directory containing a `runbook.mdx` file, or an OpenTofu/Terraform module directory. See [runbooks open](/commands/open) for supported remote URL formats and OpenTofu/Terraform module auto-detection.
## Flags
- `--working-dir <path>` - Base directory for script execution and file generation (default: current directory)
- All relative paths are resolved from this directory
- Can be absolute or relative to current directory
- Use `--working-dir=::tmp` for a temporary directory (automatically cleaned up on exit), useful for isolated testing or sandboxed execution
- `--output-path <path>` - Directory where generated files will be written (default: `generated`)
- Resolved relative to the working directory
- `--tf-runbook <keyword-or-path>` - Select a built-in template or provide a local custom runbook for OpenTofu/Terraform modules. Remote URLs are not supported. See [runbooks open](/commands/open/#flags) for available templates and details.
- `--port <number>` - Port for the backend server (default: 7825).
- `--no-telemetry` - Disable anonymous telemetry. Can also be set via `RUNBOOKS_TELEMETRY_DISABLE=1` environment variable.
## What it does
When you run `runbooks serve`:
1. **Starts the Backend Server** - Launches a Go-based HTTP server on `localhost` (port 7825 by default, configurable with `--port`)
2. **Serves the API** - Provides REST endpoints for the frontend to call
3. **Does NOT Open Browser** - You must start the frontend dev server separately (see [Development workflow](#development-workflow) below) and manually navigate to `http://localhost:5173` (or other frontend server port)
## Development workflow
Here's a typical development workflow:
1. Start the backend:
```bash
runbooks serve testdata/demo-runbook
```
Or if you want to easily re-compile the backend:
```bash
go run main.go serve testdata/demo-runbook
```
2. In another terminal, start the frontend:
```bash
cd web
bun dev
```
3. Open your browser to `http://localhost:5173` (Vite's port)
4. Make changes to:
- React code in `/web/src` - hot reloads automatically
- Go code - restart the `serve` command
- Runbook files - refresh the browser
---
### watch
# runbooks watch
Use `runbooks watch` to "watch" a `runbook.mdx` file (or its scripts, checks, or template files) for changes and automatically reload the Runbook as needed.
This command is intended for **authors** of runbooks.
> **Caution:** When you update a `command` property, local template, or local script, the runbook will automatically load these changes in the _UI_, but because of the [executable registry](/security/execution-model/#executable-registry), the runbook will continue to execute the _old_ versions of these files. The solution is to always restart your runbook (re-run `runbooks watch`) when you make a template or script change.
## Usage
```bash
runbooks watch RUNBOOK_SOURCE [flags]
```
## Arguments
- `RUNBOOK_SOURCE` - A local path or remote URL pointing to a `runbook.mdx` file, the directory containing a `runbook.mdx` file, or an OpenTofu/Terraform module directory. See [runbooks open](/commands/open) for supported remote URL formats and OpenTofu/Terraform module auto-detection.
> **Note:** When watching a remote runbook, file watching operates on the local downloaded copy. Changes to the remote repository will not be detected automatically.
## Flags
- `--working-dir <path>` - Base directory for script execution and file generation (default: current directory)
- All relative paths are resolved from this directory
- Can be absolute or relative to current directory
- Use `--working-dir=::tmp` for a temporary directory (automatically cleaned up on exit), useful for isolated testing or sandboxed execution
- `--output-path <path>` - Directory where generated files will be written (default: `generated`)
- Resolved relative to the working directory
- `--tf-runbook <keyword-or-path>` - Select a built-in template or provide a local custom runbook for OpenTofu/Terraform modules. Remote URLs are not supported. See [runbooks open](/commands/open/#flags) for available templates and details.
- `--port <number>` - Port for the backend + frontend server (default: 7825). Useful when running multiple runbooks simultaneously or when the default port is already in use.
- `--disable-live-file-reload` - Enable executable registry validation. Scripts will not reload from disk when they are updated; instead, you must re-run the `runbooks` command to apply script changes. This trades lower convenience for higher security. See [Execution Security Model](/security/execution-model) for details.
- `--no-telemetry` - Disable anonymous telemetry. Can also be set via `RUNBOOKS_TELEMETRY_DISABLE=1` environment variable.
## What it does
When you run `runbooks watch`:
1. **Starts the Backend Server** - Launches a Go-based HTTP server on `localhost` (port 7825 by default, configurable with `--port`)
2. **Opens Your Browser** - Automatically navigates to the server URL
3. **Watches for Changes** - Monitors the runbook file for any modifications
4. **Auto-Reloads** - Automatically refreshes the browser when changes are detected (within ~300ms)
### Writing a new runbook
```bash
runbooks watch /path/to/runbook
```
Then in your editor:
1. Make changes to `runbook.mdx`
2. Save the file
3. See your changes instantly in the browser - no manual refresh needed!
## Technical details
### File watching
- Uses `fsnotify` for efficient file system monitoring
- Watches the directory containing your runbook file
- Implements debouncing (300ms) to handle editors that save files multiple times
- Only triggers on Write and Create events for your specific runbook file
### Auto-reload mechanism
- Uses Server-Sent Events (SSE) to push notifications from server to browser
- The browser maintains a persistent connection to `/api/watch/sse`
- When the file changes, the server sends a `file-change` event
- The browser receives the event and automatically reloads the page
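You can peek at this event stream yourself with `curl` (a sketch; assumes a `runbooks watch` session is running on the default port, and `--max-time` just stops listening after a few seconds):

```bash
# Listen briefly on the SSE endpoint that the browser uses for reloads.
# If no watch session is running, fall back to a friendly message.
events=$(curl -sN --max-time 5 http://localhost:7825/api/watch/sse 2>/dev/null || true)
echo "${events:-no watch session detected}"
```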
---
## Authoring Runbooks
### Overview
# Authoring Runbooks
This section covers everything you need to know to write your own Runbooks.
## What's in a Runbook?
A Runbook combines **markdown documentation** with **interactive blocks** that can:
- Validate the user's current state with automated checks
- Execute shell commands and scripts
- Collect user input through forms
- Generate files from templates
All of this runs locally on the user's machine through a web interface.
## Quick start
For an initial walkthrough, see [Write Your First Runbook](/intro/write_your_first_runbook/) for a complete tutorial.
## Section Guide
- **[Runbook Structure.](/authoring/runbook-structure/)** Learn the file format and folder organization for Runbooks.
- **[Markdown.](/authoring/markdown/)** Reference for supported markdown elements.
- **[Inputs & Outputs.](/authoring/inputs-and-outputs/)** How data flows between blocks — collecting user input, wiring it with `inputsId`, and passing runtime outputs downstream.
- **[Boilerplate Templates.](/authoring/boilerplate/)** Guide to template syntax and `boilerplate.yml` files.
- **[Blocks.](/authoring/blocks/)** Reference for all interactive block components.
---
### Runbook Structure
# Runbook Structure
This page explains the file format and folder organization for a runbook.
## File Format
A runbook lives in its own folder and must contain a file named `runbook.mdx`. This file is written in **MDX** (Markdown + JSX), which lets you combine:
- **Standard Markdown** — Headers, paragraphs, lists, code blocks, images, links
- **Runbook Blocks** — Interactive components like `<Inputs>`, `<Command>`, `<Check>`, and `<Template>`
You can learn more about:
- [Supported markdown elements](/authoring/markdown)
- [Available blocks](/authoring/blocks)
### Common structure
While you can organize your `runbook.mdx` file however you like, here's a common pattern:
`````mdx
# Runbook Title
Introduction paragraph explaining what this runbook does.
## Pre-flight Checks
Make sure the user is set up for success.
## Execute Actions
Run commands on behalf of the user to complete the runbook.
## Generate Code
Generate code the user needs to accomplish their task.
## Post-flight Checks
Verify everything worked.
`````
## Folder organization
Blocks often reference scripts and templates stored in folders. While you can store these anywhere, here's the conventional folder structure:
```
my-runbook/
├── runbook.mdx # Main runbook file
├── assets/ # Images and other assets
│ └── diagram.png
├── checks/ # Shell scripts for <Check> blocks
│ ├── prereq1.sh
│ └── prereq2.sh
├── scripts/ # Shell scripts for <Command> blocks
│ ├── deploy.sh
│ └── cleanup.sh
└── templates/ # Boilerplate templates for <Template> blocks
└── my-template/
├── boilerplate.yml
├── main.tf
└── variables.tf
```
### Folder Purposes
| Folder | Purpose | Used By |
|--------|---------|---------|
| `assets/` | Images, diagrams, and other static files | Markdown image syntax |
| `checks/` | Shell scripts that validate prerequisites | `<Check>` blocks |
| `scripts/` | Shell scripts that execute actions | `<Command>` blocks |
| `templates/` | Boilerplate template directories | `<Template>` blocks |
### Relative Paths
Always reference files relative to your runbook:
```mdx
![Architecture diagram](./assets/diagram.png)
```
---
### Markdown Support
# Markdown in Runbooks
Runbooks uses **GitHub-flavored Markdown** (GFM) with full support for common markdown elements. That means you can include any of the following elements in your runbooks.
## Supported Elements
### Headers
```markdown
# Header 1
## Header 2
### Header 3
#### Header 4
##### Header 5
###### Header 6
```
### Text Formatting
```markdown
**Bold text**
*Italic text*
***Bold and italic***
~~Strikethrough~~
`Inline code`
```
### Lists
Unordered lists:
```markdown
- Item 1
- Item 2
- Nested item
- Another nested item
- Item 3
```
Ordered lists:
```markdown
1. First item
2. Second item
3. Third item
1. Nested numbered item
```
Task lists:
```markdown
- [x] Completed task
- [ ] Incomplete task
- [ ] Another task
```
### Links
```markdown
[Link text](https://example.com)
[Link with title](https://example.com "Title text")
```
### Autolinks
URLs and email addresses are automatically converted to clickable links:
```markdown
Visit https://gruntwork.io for more info.
Contact support@example.com for help.
```
### Images
```markdown
![Alt text](./assets/image.png)
![Alt text with title](./assets/image.png "Image title")
```
Images are resolved relative to the runbook file location.
### Code Blocks
Inline code:
```markdown
Use the `npm install` command to install dependencies.
```
Code blocks with syntax highlighting:
````markdown
```bash
echo "Hello, world!"
```
```python
def hello():
    print("Hello, world!")
```
```javascript
console.log("Hello, world!");
```
````
Supported languages include: bash, sh, shell, python, javascript, typescript, go, rust, java, terraform, hcl, yaml, json, and many more.
### Blockquotes
```markdown
> This is a blockquote.
> It can span multiple lines.
>
> And have multiple paragraphs.
```
### Horizontal Rules
```markdown
---
***
___
```
### Tables
```markdown
| Header 1 | Header 2 | Header 3 |
|----------|----------|----------|
| Cell 1 | Cell 2 | Cell 3 |
| Cell 4 | Cell 5 | Cell 6 |
```
With alignment:
```markdown
| Left-aligned | Center-aligned | Right-aligned |
|:-------------|:--------------:|--------------:|
| Left | Center | Right |
```
### Footnotes
```markdown
Here is a sentence with a footnote.[^1]
[^1]: This is the footnote content.
```
Footnotes are collected and rendered at the bottom of the document.
## MDX Features
Because Runbooks supports MDX, you also have access to a few special features beyond standard markdown elements.
### Mix Markdown and JSX
```mdx
# My Runbook
Regular markdown text here.
More markdown text.
```
### Use JavaScript Expressions
```mdx
Today's date: {new Date().toLocaleDateString()}
```
### HTML
You can use HTML directly in markdown:
```markdown
<div style="color: red;">
This text will be red.
</div>
```
### Escaping Special Characters
If you need to display special characters literally, escape them with a backslash:
```markdown
\* This won't be italic
\# This won't be a header
\`This won't be code\`
```
### Code Blocks in Special Blocks
When embedding YAML or other code in special blocks, use proper fencing:
```mdx
```yaml
variables:
  - name: Example
    type: string
\```
```
Note: Use a backslash before the closing triple backticks to escape them within the outer code block.
---
### Inputs & Outputs
Every runbook has a **global context**, which is the shared collection of key-value pairs that blocks use to communicate. The keys are variables. Blocks can write values to the global context, or read values from the global context.
There are two ways values are written to the global context:
- **Inputs:** [Input providers](#input-providers) (like `<Inputs>`, `<Template>`, and `<TfModule>`) collect values from end users via web forms and publish them to the global context. Consumer blocks pull in those values via the [`inputsId` prop](#wiring-blocks-with-inputsid).
- **Outputs:** Some block types (e.g., `<Command>` and `<Check>`) publish results to the global context. Downstream blocks reference them via the [`outputs` namespace](#block-outputs).
## Variables
Variables are the keys in the global context. You reference them in templates, scripts, and block props using `{{ .inputs.VarName }}` syntax for inline rendering blocks (`<Command>`, `<Check>`, and `<TemplateInline>`) or `{{ .VarName }}` for template files used by `<Template>`. For example:
```mdx
<Command
  id="create-account"
  command="aws organizations create-account --email {{ .inputs.Email }} --account-name {{ .inputs.AccountName }}"
/>
```
When this block runs, Runbooks resolves `{{ .inputs.Email }}` and `{{ .inputs.AccountName }}` from the global context, producing a command like:
```bash
aws organizations create-account --email alice@example.com --account-name my-account
```
### Boilerplate variables
Under the hood, Runbooks uses [Gruntwork Boilerplate](/authoring/boilerplate/) as its template engine. This means that variables work exactly like Boilerplate variables:
- The same `{{ .VarName }}` / `{{ .inputs.VarName }}` syntax
- The same types (`string`, `int`, `bool`, `enum`, `list`, `map`)
- The same validation rules (`required`, `email`, `url`, `alpha`, `digit`, `alphanumeric`, `countrycode2`, `semver`, `regex`)
- The same `boilerplate.yml` format for defining them
- The full library of Boilerplate helper functions and pipe modifiers (`upper`, `lower`, `snakeCase`, `camelCase`, `fromJson`, conditionals, loops, and more)
If you've used Boilerplate before, everything you know carries over directly. If you haven't, the [Boilerplate Templates](/authoring/boilerplate/) page covers the full syntax, including variable definitions, template rendering, helper functions, and how to test templates locally with the Boilerplate CLI.
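For example, you might render a template directory locally with the Boilerplate CLI before wiring it into a runbook (a hedged sketch: the flag names are from the Gruntwork Boilerplate CLI and should be verified against your installed version, and the paths and `ProjectName` value are placeholders):

```bash
# Render templates/rds to ./out without prompting, supplying one variable.
# Skips gracefully if boilerplate is not installed.
if command -v boilerplate >/dev/null 2>&1; then
  boilerplate \
    --template-url ./templates/rds \
    --output-folder ./out \
    --var ProjectName=my-project \
    --non-interactive
  result="rendered to ./out"
else
  result="boilerplate not installed"
fi
echo "$result"
```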
#### Template Syntax: New vs. Legacy
Runbooks supports two template syntaxes for referencing variables:
- **New syntax (recommended):** `{{ .inputs.VarName }}` for inputs and `{{ .outputs.blockId.outputName }}` for outputs. This explicit namespace syntax makes it clear where values come from.
- **Legacy syntax (backward compatible):** `{{ .VarName }}` for inputs. This syntax is supported for backward compatibility with existing Boilerplate templates.
Both syntaxes work simultaneously. When variables are passed under the `inputs` namespace, they are automatically duplicated at the root level so existing templates using `{{ .VarName }}` continue to work without modification. This means you can:
- Use existing Boilerplate templates without changes
- Mix both syntaxes in the same template if needed
- Gradually migrate to the new syntax at your own pace
> **Caution:** **Reserved variable names:** `inputs` and `outputs` are reserved keywords and cannot be used as variable names in your `boilerplate.yml` files.
## Input Providers
An **input provider** is a block that collects variable values and makes them available to other blocks. Let's look at the input providers that are available in Runbooks.
### `<Inputs>`
The [`<Inputs>`](/authoring/blocks/inputs/) block collects variable values from end users via a web form that is dynamically generated based on the contents of a `boilerplate.yml` file. For example, this `boilerplate.yml` file is defined inline and declares a single variable called `ProjectName`:
`````mdx
<Inputs id="project-info">
```yaml
variables:
  - name: ProjectName
    type: string
    description: Name for your project
```
</Inputs>
`````
### `<Template>`
The [`<Template>`](/authoring/blocks/template/) block generates files from a Boilerplate template directory. When it references a template that contains a `boilerplate.yml` file, it also acts as an input provider by rendering a web form based on the variables defined in the `boilerplate.yml` file.
```mdx
```
In the above example, there is a local `boilerplate.yml` file at `templates/rds/boilerplate.yml`.
### `<TfModule>`
The [`<TfModule>`](/authoring/blocks/tfmodule/) block parses an OpenTofu/Terraform module's `.tf` files at runtime and auto-generates a web form from the module's variables. It also publishes a `_module` namespace with metadata about the module. See the [TfModule docs](/authoring/blocks/tfmodule/#the-_module-namespace) for the full namespace reference.
```mdx
<TfModule id="rds" source="../modules/rds" />
```
In the above example, there is an OpenTofu or Terraform module at `../modules/rds`. TfModule also supports `source="."` for [colocated runbooks](/authoring/blocks/tfmodule/#colocated-runbooks-source) (runbook lives alongside the `.tf` files) and `source="::cli_runbook_source"` for [generic runbooks](/authoring/blocks/tfmodule/#dynamic-source-from-cli-source) that accept any module URL from the CLI.
## Wiring Blocks with `inputsId`
You can take the values collected from an input provider and use them in other blocks that consume variables. Any block that consumes variables (e.g. `<Command>`, `<Check>`, `<Template>`, `<TemplateInline>`) accepts an `inputsId` prop that references an input provider by its `id`.
For example, you can use the values collected from the `<Inputs>` block in a `<Command>` block:
`````mdx
<Inputs id="project-info">
```yaml
variables:
  - name: ProjectName
    type: string
```
</Inputs>

<Command
  inputsId="project-info"
  command="echo 'Creating project: {{ .inputs.ProjectName }}'"
/>
`````
### Multiple Sources
Pass an array of IDs to `inputsId` to merge variables from more than one input provider. Variables are merged in order, so in the event of a name conflict, later IDs override earlier ones.
In this example, the Command will use the variables collected from the `<Inputs>` blocks with the IDs `global-config` and `local-config`.
```mdx
<Command
  inputsId={["global-config", "local-config"]}
  command="./scripts/deploy.sh"
/>
```
### Embedded Inputs
You can nest an `<Inputs>` block directly inside a `<Command>` or `<Check>`. The variables are automatically available to the parent without needing `inputsId`:
`````mdx
<Command id="greet" command="echo 'Hello, {{ .inputs.Name }}!'">
<Inputs id="greeting">
```yaml
variables:
  - name: Name
    type: string
```
</Inputs>
</Command>
`````
The embedded Inputs can still be referenced by other blocks via `inputsId="greeting"`.
## When Variables Overlap
When you wire blocks together with `inputsId`, the consuming block receives all the variables from the input provider. For `<Command>`, `<Check>`, and `<TemplateInline>`, this is straightforward: every imported variable is available in the template.
It gets more interesting with `<Template>`, because a Template has its own `boilerplate.yml` that can *also* define variables. When a Template imports from an input provider, Runbooks compares the two sets of variables and handles each one based on where it's defined:
- **If only the Template defines it:** the variable appears as an editable field in the Template's form. This is how you collect extra inputs (like an environment name or team owner) that the input provider doesn't know about.
- **If only the input provider defines it:** the variable is passed through to the template engine but doesn't appear in the Template's form. This includes namespaced values like `_module.*` from `<TfModule>`.
- **If both define it:** the variable appears in the Template's form but is read-only, staying live-synced to the input provider's value. This prevents the two forms from getting out of sync.
## Block Outputs
Inputs flow from user forms into blocks. **Outputs** flow in the other direction, from blocks that have already run into downstream blocks. This enables multi-step workflows where each step builds on results from previous steps.
### Producing Outputs
[`<Command>`](/authoring/blocks/command/) and [`<Check>`](/authoring/blocks/check/) blocks produce outputs by writing `key=value` pairs to the file specified by the `$RUNBOOK_OUTPUT` environment variable:
```bash
#!/bin/bash
ACCOUNT_ID=$(aws organizations create-account ...)
echo "account_id=$ACCOUNT_ID" >> "$RUNBOOK_OUTPUT"
echo "region=us-west-2" >> "$RUNBOOK_OUTPUT"
```
For complex data (lists, maps), serialize as JSON:
```bash
echo 'users=["alice","bob","charlie"]' >> "$RUNBOOK_OUTPUT"
```
You don't need to define the `$RUNBOOK_OUTPUT` environment variable in your script. Runbooks will automatically create it for you.
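To exercise an output-producing script outside Runbooks, you can set `$RUNBOOK_OUTPUT` yourself (a local-testing sketch; inside a runbook, Runbooks provides the variable for you):

```bash
# Local-testing only: Runbooks normally creates $RUNBOOK_OUTPUT for you.
RUNBOOK_OUTPUT=$(mktemp)
export RUNBOOK_OUTPUT

# Simple scalar output
echo "region=us-west-2" >> "$RUNBOOK_OUTPUT"

# Build a JSON list from a bash array for complex outputs
users=(alice bob charlie)
printf -v joined '"%s",' "${users[@]}"
echo "users=[${joined%,}]" >> "$RUNBOOK_OUTPUT"

cat "$RUNBOOK_OUTPUT"
# region=us-west-2
# users=["alice","bob","charlie"]
```

Inspecting the file this way lets you confirm the exact `key=value` lines a downstream block would see.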
### Consuming Outputs
Downstream blocks reference outputs using the `outputs` namespace:
```
{{ .outputs.<block-id>.<output-name> }}
```
For example, a Command whose `id` is `create-account` that outputs `account_id` is referenced as:
```
{{ .outputs.create_account.account_id }}
```
This syntax works in `<Command>` and `<Check>` scripts, `<Template>` files, and `<TemplateInline>` content.
> **Note:** **ID naming:** Block IDs can use hyphens or underscores (e.g., `id="create-account"` or `id="create_account"`). In template syntax, always use underscores. Go templates interpret hyphens as subtraction. IDs that normalize to the same value (e.g., `create-account` and `create_account`) cannot coexist in the same runbook.
### Iterating Over Complex Outputs
Use `fromJson` to parse JSON output values, then `range` to iterate:
```
{{- range (fromJson .outputs.list_users.users) }}
- {{ . }}
{{- end }}
```
### Output Dependencies
If a block references outputs from another block that hasn't run yet:
1. A **warning message** shows which blocks need to run first
2. The **Run / Generate button is disabled** until the upstream block executes
3. The template shows the raw syntax until outputs are available
After a block runs, you can click **"View Outputs"** below the log viewer to inspect its outputs in a table.
> **Note:** Block outputs are stored in browser memory and cleared on page refresh.
### Combining Inputs and Outputs
You can use both standard inputs and block outputs in the same template:
```hcl
# From an Inputs block (via inputsId) - new syntax
environment = "{{ .inputs.Environment }}"
# From a Command block's outputs
account_id = "{{ .outputs.create_account.account_id }}"
```
Both the new namespaced syntax (`{{ .inputs.Environment }}`) and legacy syntax (`{{ .Environment }}`) work for inputs.
## Example
Check out the [block-outputs feature demo](https://github.com/gruntwork-io/runbooks/tree/main/testdata/feature-demos/block-outputs) for a complete working demonstration of inputs and outputs flowing between Command, Check, Template, and TemplateInline blocks.
To run it directly:
```bash
runbooks open https://github.com/gruntwork-io/runbooks/tree/main/testdata/feature-demos/block-outputs
```
---
### Opening Runbooks
The `runbooks open` command opens a runbook in the browser. It accepts two kinds of sources:
1. A **runbook** (a directory containing `runbook.mdx`), or
2. An **OpenTofu/Terraform module** (a directory containing `.tf` files).
Either source can be local or remote.
See the [`open` command reference](/commands/open/) for the full list of flags and supported URL formats.
## Opening a Runbook
Point the CLI at a directory containing a `runbook.mdx`:
```bash
# Local
runbooks open ./runbooks/deploy-rds
# Remote (GitHub URL)
runbooks open https://github.com/org/repo/tree/main/runbooks/deploy-rds
```
Runbooks serves the `runbook.mdx` in the browser. For remote sources, it downloads just the runbook directory via sparse git clone.
## Opening an OpenTofu/Terraform Module
Point the CLI at a directory containing `.tf` files:
```bash
# Local
runbooks open ./modules/rds
# Remote (GitHub URL)
runbooks open https://github.com/org/repo/tree/main/modules/rds
# Remote (OpenTofu shorthand)
runbooks open github.com/org/repo//modules/rds?ref=v1.0
```
When Runbooks detects `.tf` files (and no `runbook.mdx`), it uses a **tf-runbook template** to auto-generate a runbook that:
1. Parses the module's variables from the `.tf` files
2. Renders a web form for those variables
3. Generates output (e.g., a `terragrunt.hcl` file) using the template
By default, the `::terragrunt` built-in template is used. Use the `--tf-runbook` flag to select a different [built-in template](#built-in-templates) or [custom template](#custom-templates).
### Built-in Templates
```bash
runbooks open --tf-runbook=::terragrunt ./modules/rds
```
| Template | What it generates | When to use |
|----------|-------------------|-------------|
| `::terragrunt` (default) | `terragrunt.hcl` with non-default inputs | Standard Terragrunt workflows |
| `::terragrunt-github` | `terragrunt.hcl` + GitHub PR | GitOps: clone a repo, pick a directory, generate config, open a PR |
| `::tofu` | `main.tf` with a `module` block | Plain OpenTofu/Terraform (no Terragrunt) |
> **Tip:** All three built-in templates use `hcl_inputs_non_default`, which only includes variables whose values differ from the declared defaults. See [`_module` namespace](/authoring/blocks/tfmodule/#the-_module-namespace) for details on `hcl_inputs_non_default` vs `hcl_inputs`.
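To illustrate the difference, here is a hypothetical sketch. Suppose a module declares these variables (names and defaults are assumed for the example):

```hcl
# variables.tf (hypothetical module)
variable "instance_type" {
  type    = string
  default = "t3.micro"
}

variable "db_name" {
  type = string
}
```

If the user fills in `db_name = "app"` and leaves `instance_type` at its default, `hcl_inputs_non_default` includes only `db_name`, so the generated `terragrunt.hcl` would contain roughly:

```hcl
inputs = {
  db_name = "app"
}
```

With `hcl_inputs`, every variable would be emitted, including `instance_type = "t3.micro"`.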
### Custom Templates
If the built-in templates don't fit your needs, create your own. A custom template is a **local** directory containing a `runbook.mdx` that uses [``](/authoring/blocks/tfmodule/) with `source="::cli_runbook_source"`.
> **Caution:** `--tf-runbook` only accepts local paths — remote URLs are not supported. This is a security measure: a custom template is a full runbook that can execute arbitrary scripts on your machine, and fetching one from a remote URL could silently introduce untrusted code. Download and review any third-party template before using it.
```
my-custom-template/
runbook.mdx
```
Inside `runbook.mdx`, the `::cli_runbook_source` keyword resolves to whatever module URL was passed on the command line:
`````mdx title="my-custom-template/runbook.mdx"
# Configure Module
```hcl
terraform {
source = "{{ .inputs._module.source }}"
}
inputs = {
{{- range $name, $hcl := .inputs._module.hcl_inputs }}
{{ $name }} = {{ $hcl }}
{{- end }}
}
```
`````
Run it with:
```bash
runbooks open --tf-runbook ./my-custom-template/ https://github.com/org/repo/tree/main/modules/rds
```
This example uses `` for simplicity, but custom templates can use `` for multi-file scaffolding and extra variables. See [`` — Template Patterns](/authoring/blocks/tfmodule/#template-patterns) for details.
### What `::cli_runbook_source` Resolves To
The keyword resolves to the `RUNBOOK_SOURCE` argument you pass on the command line:
| Command | `::cli_runbook_source` resolves to |
|---------|------------------------------------|
| `runbooks open --tf-runbook ./tpl/ ./modules/rds` | Absolute path to `./modules/rds` |
| `runbooks open --tf-runbook ./tpl/ https://github.com/org/repo/tree/main/modules/rds` | `https://github.com/org/repo/tree/main/modules/rds` |
| `runbooks open --tf-runbook ./tpl/ github.com/org/repo//modules/rds?ref=v1.0` | `github.com/org/repo//modules/rds?ref=v1.0` |
If the runbook is opened without a module URL (e.g., `runbooks open --tf-runbook ./tpl/`), `` renders a message explaining how to provide one.
### Colocated Runbooks
Module authors can ship a custom runbook alongside their `.tf` files by placing a `runbook.mdx` in the module directory:
```
modules/rds/
main.tf
variables.tf
outputs.tf
runbook.mdx <-- custom runbook
```
When someone runs `runbooks open` on this directory, the colocated `runbook.mdx` is served instead of generating one from a template. Inside, use `source="."` to reference the module in the same directory:
`````mdx title="modules/rds/runbook.mdx"
# Configure RDS
```hcl
terraform {
source = "{{ .inputs._module.source }}"
}
inputs = {
{{- range $name, $hcl := .inputs._module.hcl_inputs }}
{{ $name }} = {{ $hcl }}
{{- end }}
}
```
`````
This works with both local and remote modules:
```bash
# Local
runbooks open ./modules/rds
# Remote — still serves the colocated runbook.mdx
runbooks open https://github.com/org/repo/tree/main/modules/rds
```
---
### Boilerplate Templates
# Gruntwork Boilerplate
[Gruntwork Boilerplate](https://github.com/gruntwork-io/boilerplate) is a tool for generating files and folders from templates. Runbooks uses Boilerplate under the hood for all template rendering—this includes the ``, ``, ``, ``, and `` blocks.
This page covers the aspects of Boilerplate most relevant to Runbook authors. For the complete Boilerplate documentation, see the [official Boilerplate repo](https://github.com/gruntwork-io/boilerplate).
## Boilerplate in a nutshell
Boilerplate is a template engine, similar to [Jinja](https://jinja.palletsprojects.com/en/stable/), [Nunjucks](https://mozilla.github.io/nunjucks/), or [Cookiecutter](https://cookiecutter.readthedocs.io/en/stable/).
### Why boilerplate
Boilerplate is differentiated by being purpose-built for DevOps and infrastructure use cases, which gives it a few key features:
1. **Interactive mode:** When used as a CLI tool, boilerplate interactively prompts the user for a set of variables defined in a `boilerplate.yml` file and makes those variables available to your project templates during copying.
2. **Non-interactive mode:** Variables can also be set non-interactively, via command-line options, so that Boilerplate can be used in automated settings (e.g. during automated tests).
3. **Flexible templating:** Boilerplate uses Go Template for templating, which gives you the ability to do formatting, conditionals, loops, and call out to Go functions. It also includes helpers for common tasks such as loading the contents of another file, executing a shell command and rendering the output in a template, and including partial templates.
4. **Dependencies:** You can "chain" templates together, conditionally including other templates depending on variable values.
5. **Variable types:** Boilerplate variables support types, so you have first-class support for strings, ints, bools, lists, maps, and enums.
6. **Validations:** Boilerplate provides a set of validations for a given variable that user input must satisfy.
7. **Scripting:** Need more power than static templates and variables? Boilerplate includes several hooks that allow you to run arbitrary scripts.
8. **Cross-platform:** Boilerplate is easy to install (it's a standalone binary) and works on all major platforms (Mac, Linux, Windows).
### Quick example
Say you want to generate a README for new projects. Create a template folder:
```
my-template/
├── boilerplate.yml
└── README.md
```
**`boilerplate.yml`** — defines the variables:
```yaml
variables:
- name: ProjectName
type: string
description: Name of the project
- name: Author
type: string
description: Who is the author?
default: Anonymous
```
**`README.md`** — the template file:
```markdown
# {{ .ProjectName }}
Created by {{ .Author }}.
```
**Run boilerplate on the command line:**
```bash
boilerplate \
--template-url ./my-template \
--output-folder ./output \
--var ProjectName="My Cool App" \
--var Author="Jane Doe"
```
**Result** — `output/README.md`:
```markdown
# My Cool App
Created by Jane Doe.
```
That's it! Boilerplate takes your template, substitutes the variables, and writes the output.
## What Boilerplate Does for Runbooks
In Runbooks, Boilerplate provides:
1. **Variable definitions** — A YAML schema (`boilerplate.yml`) that defines what inputs users need to provide, including types, defaults, and validation rules.
2. **Template syntax** — Go template syntax for rendering dynamic content in generated files, scripts, and commands.
3. **File generation** — The ability to generate multiple files from templates with user-provided values.
## The `boilerplate.yml` File
Every template directory needs a `boilerplate.yml` file that defines the variables users will fill in. Runbooks reads this file to render interactive forms in the UI.
### Basic Structure
```yaml
variables:
- name: ProjectName
type: string
description: What would you like to call your project?
default: my-project
validations:
- required
- name: Environment
type: enum
description: Which environment is this for?
options:
- dev
- staging
- prod
default: dev
```
This generates a form with a text input for `ProjectName` and a dropdown for `Environment`.
### Variable Properties
Each variable supports these properties:
| Property | Required | Description |
|----------|----------|-------------|
| `name` | Yes | Variable name (used in templates as `.Name`) |
| `type` | No | Data type: `string`, `int`, `bool`, `enum`, `list`, `map` (defaults to `string`) |
| `description` | No | Help text shown in the form |
| `default` | No | Default value |
| `options` | For `enum` | List of allowed values for dropdowns |
| `validations` | No | Validation rules (see below) |
## Variable Types
### `string`
Text input field.
```yaml
- name: BucketName
type: string
description: Name for the S3 bucket
default: my-bucket
```
### `int`
Numeric input field.
```yaml
- name: InstanceCount
type: int
description: Number of instances to deploy
default: 3
```
### `bool`
Checkbox toggle.
```yaml
- name: EnableLogging
type: bool
description: Enable CloudWatch logging?
default: true
```
### `enum`
Dropdown select from predefined options.
```yaml
- name: Region
type: enum
description: AWS region for deployment
options:
- us-east-1
- us-west-2
- eu-west-1
default: us-east-1
```
### `list`
Dynamic list of values. Users can add/remove items.
```yaml
- name: AllowedIPs
type: list
description: IP addresses to allow
default: []
```
### `map`
Key-value pairs. Users can add/remove entries.
```yaml
- name: Tags
type: map
description: Resource tags
default: {}
```
For more complex structured data, see [x-schema](#x-schema) below.
## Validations
Add validation rules to ensure user input meets requirements:
```yaml
- name: ProjectName
type: string
validations:
- required
- name: ContactEmail
type: string
validations:
- email
- name: WebsiteURL
type: string
validations:
- url
- name: Identifier
type: string
validations:
- alphanumeric
- name: CountryCode
type: string
validations:
- countrycode2
- name: Version
type: string
validations:
- semver
- name: ManyValidations
type: string
validations:
- "required"
- "email"
```
### Custom Error Messages
For custom error messages, use the object format with `type` and `message`:
```yaml
variables:
- name: Email
type: string
validations:
- type: required
message: Email is required
- type: email
message: Must be a valid email address
```
### Available Validation Types
| Validation | Description |
|------------|-------------|
| `required` | Field cannot be empty |
| `email` | Must be a valid email address |
| `url` | Must be a valid URL |
| `alpha` | Only letters allowed (no numbers or special characters) |
| `digit` | Only digits allowed (0-9) |
| `alphanumeric` | Only letters and numbers allowed |
| `countrycode2` | Must be a valid two-letter country code (ISO 3166-1 alpha-2) |
| `semver` | Must be a valid semantic version (e.g., `1.0.0`, `2.1.3-beta`) |
| `length(min, max)` | Value must be between `min` and `max` characters long |
| `regex("pattern")` | Value must match the given regular expression pattern |
#### Parameterized Validation Examples
```yaml
- name: ProjectSlug
type: string
validations:
- regex("^[a-z][a-z0-9-]+$")
- name: Description
type: string
validations:
- length(0, 256)
```
## Template Syntax
Template files use Go template syntax. Variables from `boilerplate.yml` are accessed using dot notation:
### Basic Variable Substitution
```hcl
resource "aws_s3_bucket" "main" {
bucket = "{{ .BucketName }}-{{ .Environment }}"
}
```
### Conditionals
```hcl
{{- if .EnableLogging }}
resource "aws_cloudwatch_log_group" "main" {
name = "/app/{{ .ProjectName }}"
}
{{- end }}
```
### Comparisons
```hcl
instance_type = "{{ if eq .Environment "prod" }}t3.large{{ else }}t3.micro{{ end }}"
```
### Loops
```hcl
{{- range .AllowedIPs }}
- {{ . }}
{{- end }}
```
### Working with Maps
```hcl
tags = {
{{- range $key, $value := .Tags }}
{{ $key }} = "{{ $value }}"
{{- end }}
}
```
### Whitespace Control
Use `-` to trim whitespace around template directives:
- `{{-` trims whitespace before
- `-}}` trims whitespace after
```hcl
{{- if .Description }}
description = "{{ .Description }}"
{{- end }}
```
## Built-in Helper Functions
Boilerplate includes helper functions for common transformations:
```
# String case transformations
{{ .ProjectName | snakeCase }} # my_project
{{ .ProjectName | camelCase }} # myProject
{{ .ProjectName | pascalCase }} # MyProject
{{ .ProjectName | kebabCase }} # my-project
{{ .ProjectName | upper }} # MY-PROJECT
{{ .ProjectName | lower }} # my-project
# String checks
{{ hasPrefix "prod" .Environment }} # true if starts with "prod"
{{ hasSuffix "-dev" .Name }} # true if ends with "-dev"
```
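These helpers are typically combined with other variables when deriving resource names. A brief sketch (the resource and variable names here are assumptions for illustration):

```hcl
resource "aws_s3_bucket" "logs" {
  bucket = "{{ .ProjectName | kebabCase }}-{{ .Environment }}-logs"
}
```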
## Dynamic File Names
Boilerplate supports template syntax in file names. This lets you generate files with dynamic names based on variables:
```
templates/
├── boilerplate.yml
├── {{ .ProjectName }}.tf
└── {{ .Environment }}/
└── config.yaml
```
With `ProjectName: "vpc"` and `Environment: "prod"`, this generates:
```
vpc.tf
prod/
└── config.yaml
```
## Runbooks Extensions
Runbooks extends Boilerplate with additional YAML properties (prefixed with `x-`) for enhanced UI rendering. These are ignored by the standard Boilerplate CLI but enable richer form experiences in Runbooks.
### `x-section`
A form with a large number of fields can overwhelm users. Sections let you _group_ related fields under a named heading, organizing many fields into a manageable number of sections.
In this example, the form renders with two sections: "Basic Settings" and "Advanced Settings":
```yaml
variables:
- name: FunctionName
type: string
x-section: Basic Settings
- name: Runtime
type: enum
options: [python3.12, nodejs20.x]
x-section: Basic Settings
- name: MemorySize
type: int
default: 128
x-section: Advanced Settings
- name: Timeout
type: int
default: 30
x-section: Advanced Settings
```
Variables without `x-section` appear in an unnamed section at the top.
### `x-schema`
Sometimes you want to collect a "map" of key-value pairs from users, where each value is a simple string. In other cases, you want a _collection_ of values for each key. For example, when prompting a user to declare their AWS accounts, each account has an email address, an account ID, and a descriptive name.
In these scenarios, you can define a _schema_ for `map` type variables so that Runbooks will render a structured form instead of free-form key-value inputs:
```yaml
- name: AWSAccounts
type: map
description: AWS account configuration
x-schema:
email: string
name: string
id: string
```
This renders a form where each map entry has three typed fields instead of arbitrary key-value pairs.
### `x-schema-instance-label`
Customize the label for each instance of a key-value pair in a schema-based map:
```yaml
- name: AWSAccounts
type: map
description: AWS account configuration
x-schema-instance-label: AWS Account Name
x-schema:
email: string
environment: string
id: string
```
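In template files, each entry of a schema-based map is itself a map of the declared fields, which you can access with dot notation. A hedged sketch iterating the `AWSAccounts` variable above (the exact rendered output is an assumption):

```hcl
{{- range $name, $account := .AWSAccounts }}
# Account: {{ $name }}
account_id = "{{ $account.id }}"
email      = "{{ $account.email }}"
{{- end }}
```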
## Authoring Templates with the CLI
While Runbooks provides a live preview experience, you can also use the Boilerplate CLI directly to test templates during development.
### Install Boilerplate
```bash
# macOS
brew install gruntwork-io/tap/boilerplate
# Or download from GitHub releases
# https://github.com/gruntwork-io/boilerplate/releases
```
### Generate Files
```bash
boilerplate \
--template-url ./templates/vpc \
--output-folder ./output \
--var VpcName="my-vpc" \
--var Environment="dev"
```
### Interactive Mode
Without `--var` flags, Boilerplate prompts for values interactively:
```bash
boilerplate \
--template-url ./templates/vpc \
--output-folder ./output
```
### Non-Interactive Mode
Use `--non-interactive` with a vars file for CI/CD:
```bash
boilerplate \
--template-url ./templates/vpc \
--output-folder ./output \
--var-file vars.yml \
--non-interactive
```
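The vars file is plain YAML mapping variable names to values. For the `templates/vpc` example above, a `vars.yml` might look like:

```yaml
VpcName: my-vpc
Environment: dev
```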
## Example: Complete Template
Here's a complete example showing a template directory structure:
```
templates/lambda/
├── boilerplate.yml
├── main.tf
├── variables.tf
└── outputs.tf
```
**`boilerplate.yml`:**
```yaml
variables:
- name: FunctionName
type: string
description: Name for the Lambda function
validations:
- required
x-section: Basic Settings
- name: Runtime
type: enum
description: Lambda runtime
options:
- python3.12
- nodejs20.x
default: python3.12
x-section: Basic Settings
- name: MemorySize
type: int
description: Memory in MB (128-10240)
default: 128
x-section: Advanced Settings
- name: EnableLogging
type: bool
description: Create CloudWatch log group?
default: true
x-section: Advanced Settings
- name: Tags
type: map
description: Resource tags
default: {}
x-section: Advanced Settings
```
**`main.tf`:**
```hcl
resource "aws_lambda_function" "main" {
function_name = "{{ .FunctionName }}"
runtime = "{{ .Runtime }}"
memory_size = {{ .MemorySize }}
handler = "index.handler"
role = aws_iam_role.lambda.arn
tags = {
{{- range $key, $value := .Tags }}
{{ $key }} = "{{ $value }}"
{{- end }}
}
}
{{- if .EnableLogging }}
resource "aws_cloudwatch_log_group" "lambda" {
name = "/aws/lambda/{{ .FunctionName }}"
retention_in_days = 14
}
{{- end }}
```
## Using Templates in Runbooks
Once you've created a template, reference it in your runbook:
```mdx
# Deploy a Lambda Function
Configure your Lambda function below:
After generating, review the files in the file tree on the right.
```
For inline templates that don't need a separate directory:
`````mdx
```yaml
variables:
- name: BucketName
type: string
```
```hcl
resource "aws_s3_bucket" "main" {
bucket = "{{ .BucketName }}"
}
```
`````
---
### Testing Runbooks
Runbooks includes a built-in testing framework for validating that your runbooks work correctly.
To run a test, define a YAML test configuration file alongside your runbook, then call `runbooks test /path/to/runbook` or `runbooks test ./...`. Tests are meant to run locally or in CI.
## Quick Start
1. Generate a test configuration for your runbook:
```bash
runbooks test init ./my-runbook
```
This creates `runbook_test.yml` with reasonable defaults based on your runbook's blocks. You can edit the file to customize the tests if needed.
2. Run the tests:
```bash
runbooks test ./my-runbook
```
## Test Configuration
Tests are defined in a `runbook_test.yml` file located in the same directory as your `runbook.mdx` file. Here's an example of a test configuration file:
```yaml
version: 1
settings:
# use_temp_working_dir: true # Default: use isolated temp directory
# working_dir: . # Alternative: use runbook's directory
# output_path: generated # Where to write generated files (relative to working_dir)
timeout: 5m # Test timeout
parallelizable: true # Can run in parallel with other runbooks
tests:
- name: happy-path
description: Standard successful execution
inputs:
project.Name: "test-project"
project.Language: "python"
steps:
- block: check-requirements
expect: success
- block: create-project
expect: success
outputs: [project_id]
assertions:
- type: file_exists
path: generated/README.md
- type: file_contains
path: generated/README.md
contains: "test-project"
cleanup:
- command: rm -rf /tmp/test-resources
```
### Top-Level Structure
```yaml
version: 1 # Required. Must be 1.
settings: { ... } # Optional. Global settings for all tests.
tests: [ ... ] # Required. Array of test cases (minimum 1).
```
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `version` | integer | Yes | Configuration version. Must be `1`. |
| `settings` | object | No | Global settings that apply to all tests. |
| `tests` | array | Yes | Array of runbook tests to run. At least one test case is required. |
### Settings Object
```yaml
settings:
use_temp_working_dir: true # boolean, default: true
working_dir: "." # string, default: current directory
output_path: "generated" # string, default: "generated"
timeout: "5m" # duration string, default: "5m"
parallelizable: true # boolean, default: true
```
| Field | Type | Default | Description |
|-------|------|---------|-------------|
| `use_temp_working_dir` | boolean | `true` | Use an isolated temporary directory for test execution. Overrides `working_dir` when `true`. The temp directory is automatically cleaned up after the test. |
| `working_dir` | string | current directory | Base directory for script execution and file generation. Use `.` for the runbook's directory. Ignored when `use_temp_working_dir` is `true`. |
| `output_path` | string | `"generated"` | Directory where generated files are written, relative to the working directory. |
| `timeout` | string | `"5m"` | Maximum time for each test case. Uses Go duration format (e.g., `30s`, `5m`, `1h`). |
| `parallelizable` | boolean | `true` | Whether this runbook's tests can run in parallel with tests from other runbooks. |
### Test Case Object
```yaml
tests:
- name: "test-name" # string, required
description: "..." # string, optional
env: { ... } # map[string]string, optional
inputs: { ... } # map[string]InputValue, optional
steps: [ ... ] # array of Step objects, optional
assertions: [ ... ] # array of Assertion objects, optional
cleanup: [ ... ] # array of CleanupAction objects, optional
```
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `name` | string | Yes | Unique identifier for the test case. |
| `description` | string | No | Human-readable description of what the test validates. |
| `env` | map | No | Environment variables to set for all blocks in this test. Keys and values are strings. |
| `inputs` | map | No | Variable values for the test. Keys use `inputsID.varName` format. Values can be literals or fuzz configs. |
| `steps` | array | No | Blocks to execute in order. If empty, all blocks run in document order. |
| `assertions` | array | No | Validations to run after all steps complete. |
| `cleanup` | array | No | Actions to run after the test completes (even on failure). |
### Step Object
```yaml
steps:
- block: "block-id" # string, required
expect: success # ExpectedStatus, default: "success"
outputs: ["output1"] # array of strings, optional
missing_outputs: ["..."] # array of strings, optional
error_contains: "..." # string, optional
assertions: [ ... ] # array of Assertion objects, optional
```
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `block` | string | Yes | ID of the block to execute. |
| `expect` | string | No | Expected execution status. Default: `success`. See Expected Status values below. |
| `outputs` | array | No | Output names to capture from the block. |
| `missing_outputs` | array | No | Expected missing outputs (used with `blocked` status to verify why a block is blocked). |
| `error_contains` | string | No | Expected error message substring (used with `config_error` status). |
| `assertions` | array | No | Assertions to run immediately after this step completes. |
#### Expected Status Values
| Status | Description |
|--------|-------------|
| `success` | Block completed successfully (exit code 0). |
| `fail` | Block failed (exit code 1+). |
| `warn` | Block completed with warning (exit code 2). |
| `blocked` | Block should be blocked due to missing dependencies. |
| `skip` | Skip this block entirely (useful for AwsAuth in non-AWS tests). |
| `config_error` | Block has a configuration error (missing props, invalid config, etc.). |
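For example, a hypothetical failure-path test might assert that one block fails and that a misconfigured block reports a config error (the block IDs and error text here are assumptions):

```yaml
tests:
  - name: failure-path
    steps:
      - block: validate-inputs
        expect: fail
      - block: misconfigured-block
        expect: config_error
        error_contains: "missing required prop"
```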
#### Block IDs
Each step references a block by its ID. How a block gets its ID depends on the block type:
| Block Type | ID Source | Example |
|------------|-----------|---------|
| `` | Explicit `id` prop | `` → `verify-install` |
| `` | Explicit `id` prop | `` → `create-account` |
| `` | Explicit `id` prop | `` → `generate-config` |
| `` | Explicit `id` prop | `` → `settings-preview` |
| `` | Explicit `id` prop | `` → `aws-auth` |
| `` | Explicit `id` prop | `` → `github-auth` |
| `` | Explicit `id` prop | `` → `clone-repo` |
All block types use the explicit `id` prop for identification in test steps.
> **Tip:** Run `runbooks test init` to automatically discover all block IDs in your runbook.
### Assertion Object
All assertions have a `type` field that determines which other fields are required.
```yaml
assertions:
- type: "assertion_type" # AssertionType, required
# Additional fields depend on the assertion type
```
| Assertion Type | Required Fields | Optional Fields | Description |
|----------------|-----------------|-----------------|-------------|
| `file_exists` | `path` | — | Check that a file exists at the given path. |
| `file_not_exists` | `path` | — | Check that a file does not exist. |
| `file_contains` | `path`, `contains` | — | Check that a file contains a substring. |
| `file_not_contains` | `path`, `contains` | — | Check that a file does not contain a substring. |
| `file_matches` | `path`, `pattern` | — | Check that file content matches a regex pattern. |
| `file_equals` | `path`, `value` | — | Check that file content exactly equals a value. |
| `dir_exists` | `path` | — | Check that a directory exists. |
| `dir_not_exists` | `path` | — | Check that a directory does not exist. |
| `output_equals` | `block`, `output`, `value` | — | Check that a block output equals a value. |
| `output_matches` | `block`, `output`, `pattern` | — | Check that a block output matches a regex pattern. |
| `output_exists` | `block`, `output` | — | Check that a block output exists (is not empty). |
| `files_generated` | `block` | `min_count` | Check that a Template block generated files. |
| `script` | `command` | — | Run a custom script (exit 0 = pass). |
#### Assertion Field Reference
| Field | Type | Description |
|-------|------|-------------|
| `type` | string | The assertion type. Required for all assertions. |
| `path` | string | File or directory path (relative to working directory). |
| `contains` | string | Substring to search for in file content. |
| `pattern` | string | Regular expression pattern. |
| `value` | string | Expected exact value. |
| `block` | string | Block ID for output assertions or `files_generated`. |
| `output` | string | Output name to check. |
| `min_count` | integer | Minimum number of files generated (for `files_generated`). Default: 1. |
| `command` | string | Shell command to execute (for `script` assertions). |
### Cleanup Action Object
```yaml
cleanup:
- command: "rm -rf /tmp/test" # string, optional (inline command)
path: "cleanup/teardown.sh" # string, optional (script file path)
```
| Field | Type | Description |
|-------|------|-------------|
| `command` | string | Inline shell command to execute. |
| `path` | string | Path to a script file to execute. |
Provide either `command` or `path`, not both.
### Input Value
Input values can be specified as either literal values or fuzz configurations.
#### Literal Values
```yaml
inputs:
project.Name: "my-project" # string literal
config.Port: 8080 # integer literal
config.Enabled: true # boolean literal
```
#### Fuzz Values
Fuzz values generate random inputs for testing. On each test run, the framework generates a new random value that satisfies the given constraints.
```yaml
inputs:
project.Name:
fuzz:
type: string # FuzzType, required
# Additional fields depend on the fuzz type
```
### Fuzz Configuration
The `fuzz` object specifies how to generate random values.
| Field | Type | Applies To | Description |
|-------|------|------------|-------------|
| `type` | string | All | Required. The fuzz type (see Fuzz Types below). |
| `length` | integer | `string` | Exact length of the generated string. |
| `minLength` | integer | `string`, `list` | Minimum length. Default: 8 for strings, 5 for list items. |
| `maxLength` | integer | `string`, `list` | Maximum length. Default: `minLength + 10` for strings. |
| `prefix` | string | `string` | Prefix to prepend to the generated string. |
| `suffix` | string | `string` | Suffix to append to the generated string. |
| `includeSpaces` | boolean | `string` | Include space characters. Default: `false`. |
| `includeSpecialChars` | boolean | `string` | Include special characters. Default: `false`. |
| `min` | integer | `int`, `float` | Minimum numeric value. Default: 0. |
| `max` | integer | `int`, `float` | Maximum numeric value. Default: 100. |
| `options` | array | `enum` | Array of valid enum options. Required for `enum` type. |
| `domain` | string | `email`, `url` | Domain to use. Default: random from `example.com`, `test.org`, etc. |
| `minDate` | string | `date`, `timestamp` | Minimum date (supports RFC3339, YYYY-MM-DD). Default: 365 days ago. |
| `maxDate` | string | `date`, `timestamp` | Maximum date. Default: now. |
| `format` | string | `date`, `timestamp` | Output format. Default: `2006-01-02` for date, RFC3339 for timestamp. |
| `wordCount` | integer | `words` | Exact number of words. |
| `minWordCount` | integer | `words` | Minimum word count. Default: 2. |
| `maxWordCount` | integer | `words` | Maximum word count. Default: `minWordCount + 3`. |
| `count` | integer | `list`, `map` | Exact number of items/entries. |
| `minCount` | integer | `list`, `map` | Minimum items/entries. Default: 2. |
| `maxCount` | integer | `list`, `map` | Maximum items/entries. Default: `minCount + 3`. |
| `schema` | array | `map` | Field names for nested map values (generates `map[string]map[string]string`). |
### Fuzz Types
| Type | Output | Description |
|------|--------|-------------|
| `string` | `string` | Alphanumeric string with optional spaces and special characters. |
| `int` | `integer` | Random integer within min/max range. |
| `float` | `float` | Random float within min/max range. |
| `bool` | `boolean` | Random `true` or `false`. |
| `enum` | `string` | Random selection from `options` array. |
| `email` | `string` | Valid email address (e.g., `abc123@example.com`). |
| `url` | `string` | Valid URL (e.g., `https://example.com/path`). |
| `uuid` | `string` | UUID v4 (e.g., `550e8400-e29b-41d4-a716-446655440000`). |
| `date` | `string` | Date string. Default format: `YYYY-MM-DD`. |
| `timestamp` | `string` | Timestamp string. Default format: RFC3339. |
| `words` | `string` | Space-separated random words. |
| `list` | `string` | JSON array of strings (e.g., `["abc", "def"]`). |
| `map` | `string` or `map` | JSON object or nested map depending on `schema`. |
### Complete Example
```yaml
version: 1
settings:
use_temp_working_dir: true
output_path: generated
timeout: 10m
parallelizable: true
tests:
- name: happy-path
description: Standard successful execution with all features
env:
LOG_LEVEL: debug
RUNBOOK_DRY_RUN: "false"
inputs:
# Literal values
project.Name: "test-project"
config.Port: 8080
# Fuzz values
project.Description:
fuzz: { type: words, minWordCount: 3, maxWordCount: 6 }
user.Email:
fuzz: { type: email, domain: "test.example.com" }
resource.ID:
fuzz: { type: uuid }
config.Region:
fuzz: { type: enum, options: ["us-east-1", "us-west-2", "eu-west-1"] }
steps:
- block: validate-inputs
expect: success
- block: create-resources
expect: success
outputs: [resource_id, resource_arn]
assertions:
- type: output_exists
output: resource_id
- block: generate-config
expect: success
assertions:
- type: file_exists
path: generated/config.json
- type: file_contains
path: generated/config.json
contains: "test-project"
- type: file_matches
path: generated/config.json
pattern: '"version":\s*"\d+\.\d+\.\d+"'
- type: output_matches
block: create-resources
output: resource_id
pattern: "^[a-f0-9]{12}$"
cleanup:
- command: rm -rf /tmp/test-resources
- name: missing-dependency
description: Test that blocks are properly blocked
steps:
- block: create-resources
expect: blocked
missing_outputs:
- outputs.validate_inputs.validated
- block: validate-inputs
expect: success
- block: create-resources
expect: success
```
## Cleanup
Cleanup actions run after the test completes, even if the test fails:
```yaml
cleanup:
# Inline command
- command: rm -rf /tmp/test-resources
# Script file
- path: cleanup/teardown.sh
```
## Testing AwsAuth Blocks
For runbooks with `` blocks, the test framework checks for AWS credentials in environment variables and injects them into the session for dependent blocks to use.
By default, the test framework checks for standard AWS environment variables:
- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- `AWS_SESSION_TOKEN` (optional)
- `AWS_REGION` (optional)
It also checks for `AWS_PROFILE` and `AWS_ROLE_ARN` as alternative authentication methods.
### Using a Test Prefix
To avoid conflicts with your local AWS credentials, use the `env_prefix` option on the test step to specify a prefix for test-only environment variables:
```yaml
# runbook_test.yml
tests:
  - name: happy-path
    steps:
      - block: aws-auth
        env_prefix: RUNBOOKS_TEST_
        expect: success
```
This tells the test executor to check for the following environment variables:
- `RUNBOOKS_TEST_AWS_ACCESS_KEY_ID`
- `RUNBOOKS_TEST_AWS_SECRET_ACCESS_KEY`
- `RUNBOOKS_TEST_AWS_SESSION_TOKEN`
- `RUNBOOKS_TEST_AWS_REGION`
If no prefixed credentials are found, the executor falls back to standard env vars (`AWS_ACCESS_KEY_ID`, etc.).
The `env_prefix` is configured in the test config file, not in the runbook MDX. This keeps test infrastructure concerns separate from the runtime `detectCredentials` behavior that end users experience.
### CI Example
In CI, set the environment variables before running tests. Example for GitHub Actions:
```yaml
# .github/workflows/test.yml
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v4
  with:
    role-to-assume: arn:aws:iam::123456789012:role/GitHubActionsRole
    aws-region: us-west-2
- name: Run runbook tests
  run: runbooks test ./runbooks/...
  env:
    RUNBOOKS_TEST_AWS_ACCESS_KEY_ID: ${{ env.AWS_ACCESS_KEY_ID }}
    RUNBOOKS_TEST_AWS_SECRET_ACCESS_KEY: ${{ env.AWS_SECRET_ACCESS_KEY }}
    RUNBOOKS_TEST_AWS_SESSION_TOKEN: ${{ env.AWS_SESSION_TOKEN }}
    RUNBOOKS_TEST_AWS_REGION: ${{ env.AWS_REGION }}
```
## Testing GitHubAuth Blocks
For runbooks with `<GitHubAuth>` blocks, the test framework checks for GitHub tokens in environment variables.
By default, the test framework checks in this order:
1. `RUNBOOKS_GITHUB_TOKEN` (recommended for tests)
2. `GITHUB_TOKEN`
3. `GH_TOKEN`
### Using a Test Prefix
Similar to AwsAuth, you can use `env_prefix` on the test step to check for custom environment variables:
```yaml
# runbook_test.yml
tests:
  - name: happy-path
    steps:
      - block: gh-auth
        env_prefix: CI_
        expect: success
```
This tells the test executor to check for `CI_GITHUB_TOKEN` and `CI_GH_TOKEN`.
### CI Example
```yaml
# .github/workflows/test.yml
- name: Run runbook tests
  run: runbooks test ./runbooks/...
  env:
    RUNBOOKS_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```
## Running Tests
When you run a runbook test, the test framework will:
- Run the runbook in a temporary directory
- Capture the output of any scripts or generated files
- Validate the output of the runbook against the assertions in your test configuration file, if any
- Return a status code of 0 if the test passed, 1 if the test failed, and 2 if the test was skipped
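These status codes can be branched on directly in a CI wrapper. A minimal sketch — the `run_and_report` helper is hypothetical, not part of the Runbooks CLI:

```shell
# Translate runbooks-style exit codes (0/1/2) into a human-readable status.
run_and_report() {
  "$@"
  case $? in
    0) echo "passed" ;;
    1) echo "failed" ;;
    2) echo "skipped" ;;
    *) echo "unknown" ;;
  esac
}

# In CI you would call: run_and_report runbooks test ./my-runbook
# Simulated here with plain shell commands:
run_and_report true            # prints "passed"
run_and_report sh -c 'exit 2'  # prints "skipped"
```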
> **Script output differences:**
> When scripts run in tests, they execute without a PTY (pseudo-terminal), so you may see less output than when running the runbook interactively. For example, `git clone` shows progress messages ("Receiving objects: 47%...") only when connected to a terminal. In runbook tests, you'll see just the essential output. If you do need verbose output for debugging, many commands have flags to force it (e.g., `git clone --progress`), or just open the runbook and manually run the script.
If a runbook test fails, you will see additional details about which block failed and why.
### Single Runbook
You can run a test on a single runbook.
```bash
# Run all tests for a runbook
runbooks test ./my-runbook
# Run a specific test case
runbooks test ./my-runbook --test happy-path
# Verbose output
runbooks test ./my-runbook -v
```
### Multiple Runbooks
Use the `...` glob pattern to discover and test all runbooks:
```bash
# Test all runbooks in a directory tree
runbooks test ./runbooks/...
# Control parallel execution
runbooks test ./runbooks/... --max-parallel 4
```
### CI Output
Generate JUnit XML for CI integration:
```bash
runbooks test ./runbooks/... --output junit --output-file results.xml
```
## Out-of-Order Testing
In some cases, for testing purposes, you may wish to execute blocks in an order different than how they're defined in the runbook. You can do this by explicitly specifying the order of the blocks in the test configuration file.
Keep in mind, though, that when new blocks are added to the runbook, they will not be included in the test unless you explicitly add them to the test configuration file.
That's why we recommend leaving the `steps` section empty in your test configuration file if possible. This will ensure that all future blocks are included in the test.
```yaml
tests:
  - name: out-of-order
    description: Test dependency enforcement
    steps:
      # This should be blocked because create-account hasn't run
      - block: create-resources
        expect: blocked
        missing_outputs:
          - outputs.create_account.account_id
      # Run the dependency
      - block: create-account
        expect: success
      # Now this should succeed
      - block: create-resources
        expect: success
```
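For comparison, here is a minimal sketch of a configuration that relies on document order by omitting `steps` entirely (the test name is illustrative):

```yaml
# runbook_test.yml
tests:
  - name: happy-path
    description: Run every block in the order it appears in the runbook
    # No `steps` section: all blocks run in document order, including any
    # blocks added to the runbook later.
```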
## Example Workflow
#### GitHub Actions
```yaml
# .github/workflows/test-runbooks.yml
name: Test Runbooks
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: actions/checkout@v4
      - name: Set up runbooks
        run: |
          curl -sL https://github.com/gruntwork-io/runbooks/releases/latest/download/runbooks_linux_amd64 -o runbooks
          chmod +x runbooks
      - name: Configure AWS credentials (OIDC)
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ vars.AWS_ROLE_ARN }}
          aws-region: ${{ vars.AWS_REGION }}
      - name: Run runbook tests
        run: ./runbooks test ./runbooks/... --output junit --output-file results.xml
        env:
          # Map OIDC credentials to the RUNBOOKS_TEST_ prefix
          RUNBOOKS_TEST_AWS_ACCESS_KEY_ID: ${{ env.AWS_ACCESS_KEY_ID }}
          RUNBOOKS_TEST_AWS_SECRET_ACCESS_KEY: ${{ env.AWS_SECRET_ACCESS_KEY }}
          RUNBOOKS_TEST_AWS_SESSION_TOKEN: ${{ env.AWS_SESSION_TOKEN }}
          RUNBOOKS_TEST_AWS_REGION: ${{ vars.AWS_REGION }}
      - name: Upload test results
        uses: actions/upload-artifact@v4
        with:
          name: test-results
          path: results.xml
```
#### GitLab CI
```yaml
# .gitlab-ci.yml
test-runbooks:
  stage: test
  variables:
    RUNBOOKS_TEST_AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
    RUNBOOKS_TEST_AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
    RUNBOOKS_TEST_AWS_REGION: $AWS_REGION
  script:
    - curl -sL https://github.com/gruntwork-io/runbooks/releases/latest/download/runbooks_linux_amd64 -o runbooks
    - chmod +x runbooks
    - ./runbooks test ./runbooks/... --output junit --output-file results.xml
  artifacts:
    reports:
      junit: results.xml
```
## Testing with External Dependencies
Runbooks that interact with external services (GitHub, AWS, Terraform, etc.) require special testing strategies. We recommend a tiered approach.
Each tier adds completeness at the cost of speed and simplicity. This builds on the concepts of the [traditional test pyramid](https://martinfowler.com/articles/practical-test-pyramid.html).
### Tier 1: Template Validation
Template validation tests that templates generate correct files without making any external calls.
```yaml
tests:
  - name: template-validation
    description: Validate template generation
    inputs:
      config.ProjectName: "test-project"
      config.Region: "us-east-1"
    steps:
      - block: generate-config # Template block
        expect: success
      - block: deploy-resources # Skip external calls
        expect: skip
    assertions:
      - type: files_generated
        block: generate-config
        min_count: 1
```
### Tier 2: Dry-Run Mode
Dry-run tests verify that script logic works correctly without making any external calls. They rely on the script being written to support a `RUNBOOK_DRY_RUN` environment variable. Note that `RUNBOOK_DRY_RUN` is not special in any way; it is just a convention, and you can use any environment variable you like to trigger dry-run mode.
**In your Command or Check script:**
```bash
#!/bin/bash
set -e
# Dry-run support
DRY_RUN="${RUNBOOK_DRY_RUN:-false}"
if [[ "$DRY_RUN" == "true" ]]; then
  echo "[DRY-RUN] Would create PR in $GITHUB_ORG/$REPO_NAME"
  echo "[DRY-RUN] gh pr create --title '$TITLE'"
  exit 0
fi
# Real execution
gh pr create --title "$TITLE" --body "$BODY"
```
**In your test configuration file:**
```yaml
tests:
  - name: dry-run-flow
    description: Test full flow without real API calls
    env:
      RUNBOOK_DRY_RUN: "true"
    steps:
      - block: generate-config
        expect: success
      - block: create-pr
        expect: success # Succeeds in dry-run mode
```
### Tier 3: Integration Tests
For full integration testing with real services, set environment variables for credentials in CI and run tests manually or on a schedule:
```yaml
tests:
  - name: integration-test
    description: Full integration (requires credentials)
    # Credentials set in the CI environment are automatically available to the
    # test, so there is no need to set them in the runbook_test.yml file.
    steps:
      - block: deploy-resources
        expect: success
    cleanup:
      - command: ./scripts/cleanup-test-resources.sh
```
## Best Practices
1. **Start with `test init`**: Generate your test configuration automatically, then customize it.
2. **Test happy paths first**: Ensure your runbook works when everything goes right.
3. **Test edge cases**: Add tests for expected failures, missing dependencies, and error handling.
4. **Use fuzz values**: Generate random inputs to catch edge cases you might not think of.
5. **Keep tests fast**: Use `use_temp_working_dir: true` and avoid slow external dependencies when possible.
6. **Clean up resources**: Use the `cleanup` section to remove any resources created during testing.
7. **Use CI**: Run tests on every pull request to catch issues early.
8. **Use tiered testing**: For runbooks with external dependencies, use dry-run mode in CI and save integration tests for manual/scheduled runs.
Finally, note that tests are not free to write or maintain, but the investment is generally worth it for the confidence that your runbooks work as expected. Choose how much to invest in testing accordingly.
---
### Blocks
#### Overview
Blocks are special React components that you can use in your `runbook.mdx` files to add interactive functionality. They're written like HTML/JSX tags within your markdown.
## Available Blocks
- [Admonition](/authoring/blocks/admonition)
- [AwsAuth](/authoring/blocks/awsauth)
- [Check](/authoring/blocks/check)
- [Command](/authoring/blocks/command)
- [DirPicker](/authoring/blocks/dirpicker)
- [GitClone](/authoring/blocks/gitclone)
- [GitHubAuth](/authoring/blocks/githubauth)
- [GitHubPullRequest](/authoring/blocks/githubpullrequest)
- [Inputs](/authoring/blocks/inputs)
- [Template](/authoring/blocks/template)
- [TemplateInline](/authoring/blocks/templateinline)
- [TfModule](/authoring/blocks/tfmodule)
## Advanced Topics
- [Advanced](/authoring/blocks/advanced) - PTY support and other advanced configuration options
---
#### Advanced
This page covers advanced configuration options for Command and Check blocks.
## Pseudo-Terminal (PTY) Support
By default, Runbooks executes scripts using a pseudo-terminal (PTY) on Unix-like systems. This enables full terminal emulation, which means CLI tools behave as if running in a real terminal.
### Why PTY Matters
Many CLI tools (git, npm, docker, terraform, etc.) detect when they're not running in a terminal and change their behavior:
- Progress bars are suppressed
- Colors are disabled
- Some output is hidden entirely
With PTY support enabled (the default), you get the full interactive experience:
```
$ git clone https://github.com/acme/my-repo
Cloning into 'my-repo'...
remote: Enumerating objects: 1234, done.
remote: Counting objects: 100% (1234/1234), done.
Receiving objects: 45% (556/1234)...
```
Without PTY (pipes mode), the same command might only show:
```
$ git clone https://github.com/acme/my-repo
Cloning into 'my-repo'...
```
### Controlling PTY Mode
Both `<Command>` and `<Check>` blocks support the `usePty` prop. The examples below are illustrative (ids, titles, and commands are made up):
```mdx
{/* Default: uses PTY for full terminal emulation */}
<Command id="clone-default" title="Clone the repo" command="git clone https://github.com/acme/my-repo" />

{/* Explicit PTY mode */}
<Command id="clone-pty" title="Clone the repo" command="git clone https://github.com/acme/my-repo" usePty={true} />

{/* Pipes mode: disables PTY */}
<Command id="clone-pipes" title="Clone the repo" command="git clone https://github.com/acme/my-repo" usePty={false} />
```
### When to Disable PTY
While PTY mode is generally preferred, there are cases where you might want to disable it:
- Script output is garbled or corrupted
- Script hangs or behaves unexpectedly
- Script relies on detecting non-TTY mode
- You need raw, unprocessed output
On the other hand, if progress bars and colors work correctly, then PTY mode is likely working well.
### Platform Support
| Platform | Default Execution Method |
|----------|-------------------------|
| macOS | PTY (full support) |
| Linux | PTY (full support) |
| Windows | Pipes (PTY not available) |
On Windows, scripts always run in pipes mode regardless of the `usePty` setting.
---
#### <Admonition>
The `<Admonition>` block creates callout boxes to highlight important information, warnings, notes, or tips. It helps draw the user's attention to critical information in your runbook.
## Basic Usage
```mdx
<Admonition
  type="warning"
  title="Production change"
  description="This runbook modifies production infrastructure."
/>
```
## Props
### Required Props
- `type` (string) - Type of admonition: `"note"`, `"info"`, `"warning"`, or `"danger"`
### Optional Props
- `title` (string) - Title for the callout box (defaults based on type). Supports inline markdown (bold, italic, links, code).
- `description` (string) - Content/message to display. Supports inline markdown.
- `closable` (boolean) - Whether users can close the admonition (default: false)
- `confirmationText` (string) - If provided, shows a checkbox that users must check to dismiss
- `allowPermanentHide` (boolean) - When true with confirmationText, adds "Don't show again" option
- `storageKey` (string) - Unique key for localStorage (required with allowPermanentHide)
## Types
### Note (Gray)
For general information or notes:
```mdx
<Admonition type="note" description="This runbook assumes you already have an AWS account." />
```
### Info (Blue)
For helpful information or tips:
```mdx
<Admonition type="info" description="You can re-run this step safely; it is idempotent." />
```
### Warning (Yellow)
For warnings or cautions:
```mdx
<Admonition type="warning" description="This step modifies IAM policies in your account." />
```
### Danger (Red)
For critical warnings or errors:
```mdx
<Admonition type="danger" description="This step permanently deletes resources and cannot be undone." />
```
## Inline content
Instead of using the `description` prop, you can provide richer content inline:
```mdx
<Admonition type="info" title="Prerequisites">
  Before proceeding, ensure you have:

  - AWS CLI installed
  - Terraform v1.0+
  - Valid AWS credentials configured
</Admonition>
```
The content supports inline markdown:
```mdx
<Admonition type="warning">
  Make sure to review the [deployment guide](https://example.com/guide) before running these commands. **Do not** run this in production without testing first!
</Admonition>
```
## Closable Admonitions
Allow users to dismiss the admonition:
```mdx
<Admonition
  type="info"
  description="This note explains an optional optimization."
  closable={true}
/>
```
## Confirmation Checkbox
Require users to acknowledge before dismissing:
```mdx
<Admonition
  type="warning"
  description="This runbook will modify production DNS records."
  confirmationText="I understand the impact of this change"
/>
```
## Don't Show Again
Allow users to permanently hide the admonition:
```mdx
<Admonition
  type="info"
  description="Tip: you can run this runbook from any directory."
  confirmationText="I understand"
  allowPermanentHide={true}
  storageKey="my-runbook-directory-tip"
/>
```
---
#### <AwsAuth>
The `<AwsAuth>` block provides a streamlined interface for authenticating to AWS. It supports three authentication methods: static credentials, AWS SSO (IAM Identity Center), and local AWS profiles. Once authenticated, credentials are automatically available to subsequent [Command](/authoring/blocks/command/) and [Check](/authoring/blocks/check/) blocks.
By default, AwsAuth automatically detects credentials from environment variables. When credentials are detected, the user is prompted to confirm before proceeding, preventing accidental operations against the wrong AWS account.
## Basic Usage
```mdx
<AwsAuth id="aws-auth" />
```
By default, AwsAuth checks for existing credentials in environment variables (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, etc.). If found, it validates them and prompts the user to confirm before using them.
## Authentication Methods
The AwsAuth block provides three ways to authenticate:
| Method | Description |
|--------|-------------|
| **Static Credentials** | Enter Access Key ID and Secret Access Key directly |
| **AWS SSO** | Use AWS IAM Identity Center (formerly AWS SSO) |
| **Local Profile** | Use a profile from `~/.aws/credentials` |
## Props
| Prop | Type | Default | Description |
|------|------|---------|-------------|
| `id` | string | required | Unique identifier for this component |
| `title` | string | "AWS Authentication" | Display title shown in the UI |
| `description` | string | - | Description of the authentication purpose |
| `defaultRegion` | string | "us-east-1" | Default AWS region for CLI commands. Sets `AWS_REGION` environment variable |
| `detectCredentials` | `false` or `CredentialSource[]` | `['env']` | Whether and how to detect existing credentials. See [Credential Detection](#credential-detection) |
| `ssoStartUrl` | string | - | AWS SSO start URL (e.g., `https://my-company.awsapps.com/start`). Required for SSO |
| `ssoRegion` | string | "us-east-1" | AWS region where IAM Identity Center is configured |
| `ssoAccountId` | string | - | Pre-select a specific AWS account after SSO authentication |
| `ssoRoleName` | string | - | Pre-select a specific IAM role to assume |
## Environment Variables
When authentication succeeds, the following environment variables are automatically set for subsequent Command and Check blocks:
| Variable | Description |
|----------|-------------|
| `AWS_ACCESS_KEY_ID` | The AWS access key |
| `AWS_SECRET_ACCESS_KEY` | The AWS secret key |
| `AWS_SESSION_TOKEN` | Session token (for temporary credentials from SSO or assume role) |
| `AWS_REGION` | The selected default region |
These variables are set in the session environment, so all subsequent blocks have access without needing to explicitly reference `awsAuthId`.
## Using with Commands and Checks
When you authenticate to AWS using the AwsAuth block, the [environment variables above](#environment-variables) are automatically made available to all subsequent Command and Check blocks. This means that Command and Check blocks will use the most recently authenticated AwsAuth block (if any) to get AWS authentication credentials.
Each subsequent AWS authentication updates these environment variables and becomes the new default AWS credentials.
However, in some cases, you may want to use a specific AWS authentication with a block, not just the most recent AWS authentication. To do that, you can reference a specific AwsAuth block from Command or Check blocks using the `awsAuthId` prop. For example:
```mdx
<AwsAuth id="prod-aws" title="Authenticate to Production AWS" />

<Command
  id="list-prod-buckets"
  title="List production S3 buckets"
  command="aws s3 ls"
  awsAuthId="prod-aws"
/>
```
> **Tip:** While you can use `awsAuthId` to explicitly reference the AwsAuth block, the credentials are also set as session environment variables. This means subsequent blocks will have AWS access even without `awsAuthId`, though using it makes the dependency explicit.
## Configuration Examples
### Pre-selected SSO Account and Role
Skip the account/role selection step by pre-configuring them:
```mdx
<AwsAuth
  id="aws-sso"
  ssoStartUrl="https://my-company.awsapps.com/start"
  ssoRegion="us-east-1"
  ssoAccountId="123456789012"
  ssoRoleName="DeveloperAccess"
/>
```
## Multiple AWS Accounts
You can include multiple AwsAuth blocks in a single runbook to authenticate to different AWS accounts:
```mdx
<AwsAuth id="dev-aws" title="Authenticate to Dev" />
<AwsAuth id="prod-aws" title="Authenticate to Prod" />

<Command
  id="list-dev-buckets"
  title="List Dev S3 buckets"
  command="aws s3 ls"
  awsAuthId="dev-aws"
/>
```
> **Caution:** When using multiple AwsAuth blocks, be aware that the session environment variables (`AWS_ACCESS_KEY_ID`, etc.) will reflect the most recently authenticated credentials. Use `awsAuthId` to ensure each command uses the correct credentials.
## Local Profile Support
The Local Profile tab shows profiles from your `~/.aws/credentials` and `~/.aws/config` files. Two profile types are supported:
| Profile Type | Description |
|--------------|-------------|
| **Static Credentials** | Profiles with `aws_access_key_id` and `aws_secret_access_key` |
| **Assume Role** | Profiles that use `role_arn` to assume a role |
> **Note:** SSO profiles configured in `~/.aws/config` are not shown in the Local Profile tab. Use the AWS SSO tab instead to authenticate with SSO.
## Auto-Detection of Credentials
By default, AwsAuth automatically detects existing credentials from environment variables. When credentials are detected, the user is shown the AWS account ID, account name (if available), and identity (e.g. IAM role), and must confirm before the credentials are used.
> **Note:** Unlike GitHubAuth which auto-authenticates silently, AwsAuth requires explicit user confirmation. Until a user explicitly confirms which AWS account to use, no credentials are available to subsequent blocks. This prevents accidental operations against the wrong AWS account, which could have serious consequences in production environments. See [Credential Protection](#credential-protection) for details.
### Default Behavior
With no configuration, AwsAuth checks for credentials in standard AWS environment variables:
```mdx
{/* Default - same as detectCredentials={['env']} */}
<AwsAuth id="aws-auth" />
```
If credentials are found and valid, the user sees a confirmation prompt showing:
- AWS Account ID
- AWS Account Name (if available)
- IAM ARN (identity)
- Default Region
- Whether credentials are temporary or static
The user can then choose to "Use These Credentials" or "Use Different Credentials".
### Credential Sources
| Source | Description |
|--------|-------------|
| `'env'` | Check standard AWS env vars (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, etc.) |
| `{ env: { prefix: 'PREFIX_' } }` | Check prefixed env vars (`PREFIX_AWS_ACCESS_KEY_ID`, etc.) |
| `{ block: 'block-id' }` | Use credentials from a Command block's output |
### Disable Auto-Detection
Force manual authentication only (no credential detection):
```mdx
<AwsAuth id="aws-auth" detectCredentials={false} />
```
### Custom Detection with Prefix
Check for the standard AWS environment variables, but with a custom prefix. For example, the following configuration will check for these environment variables:
- `PROD_AWS_ACCESS_KEY_ID`
- `PROD_AWS_SECRET_ACCESS_KEY`
- `PROD_AWS_SESSION_TOKEN`
- `PROD_AWS_REGION`
```mdx
<AwsAuth id="aws-auth" detectCredentials={[{ env: { prefix: 'PROD_' } }]} />
```
Prefixes must follow these rules:
- Uppercase letters, numbers, and underscores only
- Must start with a letter
- Must end with a trailing underscore (e.g., `PROD_`, `MY_APP_`)
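These rules amount to the regular expression `^[A-Z][A-Z0-9_]*_$`. A quick sketch for validating a candidate prefix (illustrative only; Runbooks' own validation may differ):

```shell
# Return success if the candidate prefix satisfies the rules above:
# uppercase letters/digits/underscores, starts with a letter, ends with "_".
is_valid_prefix() {
  printf '%s\n' "$1" | grep -Eq '^[A-Z][A-Z0-9_]*_$'
}

is_valid_prefix "PROD_"  && echo "PROD_ is valid"
is_valid_prefix "1PROD_" || echo "1PROD_ is invalid (must start with a letter)"
is_valid_prefix "PROD"   || echo "PROD is invalid (missing trailing underscore)"
```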
### Test Prefix
To use a different prefix for automated tests (e.g., to avoid conflicts with local credentials), configure `env_prefix` in your test config file (`runbook_test.yml`) rather than in the MDX:
```yaml
# runbook_test.yml
steps:
  - block: aws-auth
    env_prefix: CI_ # Checks CI_AWS_ACCESS_KEY_ID, CI_AWS_SECRET_ACCESS_KEY, etc.
    expect: success
```
This keeps test configuration separate from runtime behavior. See [Testing AwsAuth Blocks](/authoring/testing/#testing-awsauth-blocks) for details.
### From Command Output
Use credentials generated by a previous Command block:
```mdx
<Command id="assume-role" title="Assume the deployment role" path="scripts/assume-role.sh" />

<AwsAuth id="aws-auth" detectCredentials={[{ block: 'assume-role' }]} />
```
The Command script should output credentials in standard format:
```bash
#!/bin/bash
CREDS=$(aws sts assume-role --role-arn "$ROLE_ARN" --role-session-name runbook)
echo "AWS_ACCESS_KEY_ID=$(echo $CREDS | jq -r '.Credentials.AccessKeyId')" >> "$RUNBOOK_OUTPUT"
echo "AWS_SECRET_ACCESS_KEY=$(echo $CREDS | jq -r '.Credentials.SecretAccessKey')" >> "$RUNBOOK_OUTPUT"
echo "AWS_SESSION_TOKEN=$(echo $CREDS | jq -r '.Credentials.SessionToken')" >> "$RUNBOOK_OUTPUT"
echo "AWS_REGION=us-west-2" >> "$RUNBOOK_OUTPUT" # Optional: override region
```
The `AWS_REGION` output is optional. If not provided, the AwsAuth block's `defaultRegion` prop is used.
### When Users Reject Auto-Detected Credentials
Auto-detected credentials are **never available to scripts until the user confirms them**. Auto-detection is read-only. It validates the credentials and shows the user which account they belong to, but does not register them to the session.
When a user clicks "Use Different Credentials" to reject auto-detected credentials:
1. The manual authentication UI is shown (Static Credentials, SSO, or Local Profile tabs)
2. The user must authenticate manually before any commands can access AWS
This ensures that if a user sees credentials for the wrong account (e.g., production when they expected development), those credentials are never used—they were never registered in the first place.
Note that this behavior only applies to the AwsAuth block. If standard AWS environment variables are available and the runbook does not contain any AwsAuth blocks, environment credentials work normally and these AWS credentials will automatically be available to all scripts.
This is one key benefit of the AwsAuth block. It ensures that users are always aware of which AWS account they're about to operate in before any credentials are registered to the session.
## Security
### Credential Protection
When a runbook contains any `<AwsAuth>` blocks, Runbooks automatically **strips AWS credentials from the session** at startup. This means:
1. **Before confirmation**: Even if you have `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN` set in your terminal, scripts in the runbook cannot access them until you explicitly confirm which account to use
2. **Detection is read-only**: When AwsAuth checks for credentials, it reads from your environment but does NOT register them to the session
3. **Confirmation registers credentials**: Only after you click "Use These Credentials" are the credentials added to the session and available to subsequent scripts
This design prevents a common and dangerous scenario: you have production credentials set in your terminal, open a runbook intending to work in development, and accidentally run scripts against production.
> **Tip:** If your runbook does NOT contain any AwsAuth blocks, environment credentials work normally - they're available to all scripts. The protection only activates when AwsAuth is present, since that signals the runbook author wants explicit credential management.
### Why Confirmation is Required
AWS operations can have significant consequences - creating resources that cost money, modifying production data, or deleting critical infrastructure. Unlike GitHub where most operations are reversible (you can revert commits), AWS operations are often immediate and irreversible.
The confirmation flow ensures users are aware of which AWS account they're about to operate in before any credentials are registered to the session. This is especially important in environments where:
- Users have access to multiple AWS accounts (dev, staging, production)
- Environment variables might be set by tools like aws-vault or granted
- CI/CD pipelines set credentials that persist across sessions
### Credential Handling
- **Credentials stay local**: All authentication happens between your machine and AWS. Credentials are never transmitted to external servers or to Gruntwork (unless your runbook includes scripts that explicitly do this).
- **Session storage**: Credentials are stored only in your local Runbooks session and are never persisted to disk.
- **Validation before use**: Credentials are validated via STS GetCallerIdentity before being registered to the session.
- **Protected until confirmed**: AWS credentials are stripped from the session at startup when AwsAuth blocks are present, and only added back after explicit user confirmation.
## Detailed Example
See the [aws-auth feature demo](https://github.com/gruntwork-io/runbooks/tree/main/testdata/feature-demos/aws-auth) for a good walkthrough of the AwsAuth block.
---
#### <Check>
The `<Check>` block validates a user's system state by running shell commands or scripts. It's used to ensure that users have the right tools installed and their environment is properly configured before proceeding.
## Basic Usage
```mdx
<Check
  id="check-aws-cli"
  title="Check that the AWS CLI is installed"
  command="command -v aws"
/>
```
## vs. Command
Check blocks and [Command](/authoring/blocks/command/) blocks share many features, but each has a distinct purpose: Check blocks focus on _reading_ the state of the world and validating it, while Command blocks focus on _mutating_ that state to bring it to what's needed.
## Props
### Required Props
- `id` (string) - Unique identifier for this check block
- `title` (string) - Display title shown in the UI. Supports inline markdown (bold, italic, links, code).
### Optional Props
- `description` (string) - Longer description of what's being checked. Supports inline markdown.
- `command` (string) - Inline command to execute (alternative to `path`)
- `path` (string) - Path to a shell script file relative to the runbook (alternative to `command`)
- `inputsId` (string | string[]) - ID of an [Inputs](/authoring/blocks/inputs/) block to get template variables from. Can be a single ID or an array of IDs. When multiple IDs are provided, variables are merged in order (later IDs override earlier ones).
- `awsAuthId` (string) - ID of an [AwsAuth](/authoring/blocks/awsauth/) block for AWS credentials. The credentials are passed as environment variables (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN`, `AWS_REGION`). The Check button is disabled until authentication completes.
- `githubAuthId` (string) - ID of a [GitHubAuth](/authoring/blocks/githubauth/) block for GitHub credentials. The credentials are passed as environment variables (`GITHUB_TOKEN`, `GITHUB_USER`). The Check button is disabled until authentication completes.
- `successMessage` (string) - Message shown when check succeeds (default: "Success"). Supports inline markdown.
- `warnMessage` (string) - Message shown on warning (default: "Warning"). Supports inline markdown.
- `failMessage` (string) - Message shown when check fails (default: "Failed"). Supports inline markdown.
- `runningMessage` (string) - Message shown while running (default: "Checking..."). Supports inline markdown.
- `usePty` (boolean) - Whether to use a pseudo-terminal (PTY) for script execution. Defaults to `true`. Set to `false` to use pipes instead, which may be needed for scripts that don't work well with PTY or when simpler output handling is preferred. See [PTY Support](/authoring/blocks/advanced#pseudo-terminal-pty-support) for details.
### Inline content
Instead of referencing an external `<Inputs>` block via `inputsId`, you can nest an `<Inputs>` component directly inside the Check:
```mdx
<Check
  id="check-bucket"
  title="Check that the S3 bucket exists"
  command='aws s3api head-bucket --bucket "{{ .inputs.BucketName }}"'
>
  <Inputs id="check-bucket-inputs">
```yaml
variables:
  - name: BucketName
    type: string
    description: Name of the S3 bucket to check
    validations:
      - required
\```
  </Inputs>
</Check>
```
The embedded `<Inputs>` renders directly within the Check block, allowing users to fill in variables before running the check.
Other blocks can reference this Inputs block using the standard `inputsId` pattern.
## Writing Scripts
Check blocks run shell scripts that validate some aspect of the user's system or environment.
Scripts can be defined inline using the `command` prop or stored in external files using the `path` prop.
When writing scripts for Check blocks:
- **Exit codes matter.** Return `0` for success, `1` for failure, or `2` for warning
- **Use logging helpers.** Standardized functions like `log_info` and `log_error` are available
- **Templatize with variables.** Use `{{ .inputs.VariableName }}` syntax to inject user input
Scripts run in a non-interactive shell environment. See [Execution Context](#execution-context) for details.
### Defining Scripts
You can write scripts either inline or by referencing script files.
#### Inline Scripts
For simple checks, you can define the script directly in the `command` prop:
```mdx
<Check
  id="check-docker"
  title="Check that Docker is running"
  command="docker info"
/>
```
Inline scripts work best for one-liners or short commands. For anything more complex, use an external script file.
#### External Scripts
Instead of inline commands, you can reference external shell scripts:
```mdx
<Check
  id="check-aws-authenticated"
  title="Check AWS authentication"
  path="checks/aws-authenticated.sh"
/>
```
External scripts are plain old bash scripts. The referenced script `checks/aws-authenticated.sh` might look like:
```bash
#!/bin/bash
log_info "Checking AWS authentication..."
if aws sts get-caller-identity &>/dev/null; then
  log_info "AWS credentials are valid"
  exit 0
else
  log_error "Not authenticated to AWS"
  exit 1
fi
```
### Exit Codes
The Check block interprets your script's exit codes as follows:
- **Exit code 0**: Success ✓ (green)
- **Exit code 1**: Failure ✗ (red)
- **Exit code 2**: Warning ⚠ (yellow)
These exit codes will determine how the Runbooks UI renders the result of running a script.
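As a sketch of all three outcomes in one script — the disk-usage thresholds are made-up illustrations, and a real check would compute the value rather than take it as an argument:

```shell
# Map a disk-usage percentage to a Check result: 0 success, 2 warning, 1 failure.
classify_usage() {
  usage="$1"
  if [ "$usage" -ge 90 ]; then
    echo "Disk is nearly full (${usage}%)"
    return 1   # Failure (red)
  elif [ "$usage" -ge 70 ]; then
    echo "Disk is filling up (${usage}%)"
    return 2   # Warning (yellow)
  else
    echo "Disk usage OK (${usage}%)"
    return 0   # Success (green)
  fi
}

# A real check script would end with something like:
#   classify_usage "$(df -P / | awk 'NR==2 {gsub("%","",$5); print $5}')"
#   exit $?
classify_usage 42   # prints "Disk usage OK (42%)"
```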
### Logging
Runbooks provides standardized logging functions for your scripts by automatically importing a [logging.sh file](https://github.com/gruntwork-io/runbooks/blob/main/scripts/logging.sh) that defines a standardized set of Bash logging functions. Using these functions enables consistent output formatting and allows the Runbooks UI to parse log levels for filtering and export.
#### Log Levels
| Function | Output | Description |
|----------|--------|-------------|
| `log_info "msg"` | `[timestamp] [INFO] msg` | General informational messages |
| `log_warn "msg"` | `[timestamp] [WARN] msg` | Warning conditions |
| `log_error "msg"` | `[timestamp] [ERROR] msg` | Error messages |
| `log_debug "msg"` | `[timestamp] [DEBUG] msg` | Debug output (only when `DEBUG=true`) |
#### Usage Example
```bash
#!/bin/bash
log_info "Starting validation..."
log_debug "Checking environment variable: $MY_VAR"
if [ -z "$MY_VAR" ]; then
  log_warn "MY_VAR is not set, using default"
fi
if ! command -v aws &>/dev/null; then
  log_error "AWS CLI is not installed"
  exit 1
fi
log_info "Validation complete"
```
#### Local Development
Runbooks automatically injects these logging functions into every bash script at runtime — no `source` or `import` is needed. To run these scripts locally outside the Runbooks environment, see the [Command block Local Development guide](/authoring/blocks/command/#local-development).
### With Variables
There are several ways to collect variables to customize a check's command or script.
#### Using inputsId
The Check command or script pulls its values from a separate Inputs block.
```mdx
<Inputs id="region-config">
```yaml
variables:
  - name: AwsRegion
    type: string
    description: AWS region to check
    default: us-east-1
\```
</Inputs>

<Check
  id="check-region"
  title="Check region availability"
  command="aws ec2 describe-availability-zones --region {{ .inputs.AwsRegion }}"
  inputsId="region-config"
/>
```
#### Using Inline Inputs
The Check command collects input values directly. These values can be shared with other blocks, just like a standalone Inputs block.
```mdx
<Check
  id="check-kms-key"
  title="Validate KMS key"
  command="aws kms describe-key --key-id {{ .inputs.KmsKeyId }}"
>
<Inputs id="kms-config">
```yaml
variables:
  - name: KmsKeyId
    type: string
    description: KMS Key ID to validate
    validations:
      - required
\```
</Inputs>
</Check>
```
#### Using Multiple inputsIds
You can reference multiple Inputs blocks by passing an array of IDs. Variables are merged in order, with later IDs overriding earlier ones:
```mdx
<Inputs id="lambda-config">
```yaml
variables:
  - name: GithubOrgName
    type: string
\```
</Inputs>

<Inputs id="repo-config">
```yaml
variables:
  - name: GithubRepoName
    type: string
\```
</Inputs>

<Check
  id="check-repo"
  title="Check repository"
  command="gh repo view {{ .inputs.GithubOrgName }}/{{ .inputs.GithubRepoName }}"
  inputsId={["lambda-config", "repo-config"]}
/>
```
In this example, the check has access to all variables from both `lambda-config` and `repo-config`. If both define a variable with the same name, the value from `repo-config` (the later ID) takes precedence.
### Execution Context
Scripts run in a **persistent environment** — environment variable changes (`export`, `unset`) and working directory changes (`cd`) carry forward to subsequent blocks. This lets you structure your runbook like a workflow where earlier steps set up the environment for later steps.
Scripts also run in a **non-interactive shell**, which means shell aliases (like `ll`) and shell functions (like `nvm`, `rvm`) are **not available**.
For full details, see [Shell Execution Context](/security/shell-execution-context/).
### Examples
Let's take a look at some example scripts:
#### Basic Validation Script
```bash
#!/bin/bash
# checks/terraform-installed.sh
log_info "Checking for OpenTofu installation..."
if command -v tofu &> /dev/null; then
  log_info "OpenTofu is installed: $(tofu version | head -1)"
  exit 0
else
  log_error "OpenTofu is not installed"
  exit 1
fi
```
#### Script with Warning
```bash
#!/bin/bash
# checks/disk-space.sh
log_info "Checking available disk space..."
# Use POSIX -Pk output so the value is always an integer (in KiB)
available_kb=$(df -Pk / | awk 'NR==2 {print $4}')
available=$((available_kb / 1024 / 1024))
log_debug "Available space: ${available}GB"
if [ "$available" -lt 1 ]; then
  log_error "Less than 1GB available"
  exit 1
elif [ "$available" -lt 5 ]; then
  log_warn "Less than 5GB available"
  exit 2
else
  log_info "Disk space OK: ${available}GB available"
  exit 0
fi
```
#### Parameterized Script
```bash
#!/bin/bash
# checks/s3-bucket-exists.sh
BUCKET_NAME="{{ .inputs.BucketName }}"
log_info "Checking if S3 bucket exists..."
log_debug "Bucket name: ${BUCKET_NAME}"
if aws s3 ls "s3://${BUCKET_NAME}" &> /dev/null; then
  log_info "Bucket ${BUCKET_NAME} exists"
  exit 0
else
  log_error "Bucket ${BUCKET_NAME} does not exist"
  exit 1
fi
```
## Block Outputs
Check blocks can produce **outputs** that downstream blocks can consume. This enables multi-step workflows where each step builds on the previous one.
### Producing Outputs
Scripts write outputs to a file specified by the `$RUNBOOK_OUTPUT` environment variable. Each output is a `key=value` pair on its own line:
```bash
#!/bin/bash
# Verify an account and output its ID
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
# Write outputs for downstream blocks
echo "account_id=$ACCOUNT_ID" >> "$RUNBOOK_OUTPUT"
exit 0
```
### Consuming Outputs
Reference outputs from other blocks using the `outputs` namespace in templates:
```bash
# Use the account ID from the verify-account block
echo "Using account {{ .outputs.verify_account.account_id }}"
```
The full syntax is: `{{ .outputs.<block-id>.<output-name> }}`
### Dependency Behavior
If a block references outputs from another block that hasn't run yet:
- The **Check button is disabled** until the upstream block executes
- A **warning message** shows which blocks need to run first
- The template shows the raw syntax until outputs are available
### Viewing Outputs
After a block runs, you can view its outputs by clicking **"View Outputs"** below the log viewer. Outputs are displayed in a table and can be copied as JSON.
For more details on block outputs, see the [Command block documentation](/authoring/blocks/command/#block-outputs).
> **See It in Action:**
Check out the [block-outputs feature demo](https://github.com/gruntwork-io/runbooks/tree/main/testdata/feature-demos/block-outputs) for a complete working demonstration of block outputs, including a Check block that consumes outputs from upstream Command blocks.
## Capturing Output Files
Scripts can write files to two different destinations, depending on the workflow.
- [Generated files](/intro/files_workspace/#generated-files)
- [Repository files](/intro/files_workspace/#repository-files)
See the [Files workspace](/intro/files_workspace/) page for more details.
### Writing to Generated Files
Use `$GENERATED_FILES` to capture new files. Any files written to this directory automatically appear in the file panel after the check completes successfully.
#### Basic Example
```bash
#!/bin/bash
# Export current state for review
aws sts get-caller-identity > "$GENERATED_FILES/caller-identity.json"
# Save a diagnostic report
echo "Check ran at $(date)" > "$GENERATED_FILES/diagnostic-report.txt"
```
#### Organizing Files with Subdirectories
You can create subdirectories within `$GENERATED_FILES` to organize your captured files:
```bash
#!/bin/bash
mkdir -p "$GENERATED_FILES/diagnostics"
mkdir -p "$GENERATED_FILES/config"
aws sts get-caller-identity > "$GENERATED_FILES/diagnostics/caller-identity.json"
aws configure list > "$GENERATED_FILES/diagnostics/aws-config.txt"
```
This creates:
```
generated/
├── diagnostics/
│   ├── caller-identity.json
│   └── aws-config.txt
└── config/
    └── ...
```
#### How It Works
1. Before your script runs, Runbooks creates a temporary capture directory
2. The `$GENERATED_FILES` environment variable points to this directory
3. Your script writes files to `$GENERATED_FILES`
4. After successful execution, files are copied to the generated files directory
5. The temporary directory is cleaned up
> **Tip:**
Files are only captured after **successful** execution (exit code 0 or 2). If your script fails, any files written to `$GENERATED_FILES` are discarded.
> **Note:**
If multiple checks run concurrently and write files with the same name, the last check to finish will overwrite the file. Use unique filenames or subdirectories to avoid conflicts.
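One way to guarantee unique paths is to namespace each check's files under its own subdirectory. A minimal sketch (the `disk-space-check` directory name is illustrative; the `/tmp` fallback exists only so the script also runs outside Runbooks):

```shell
#!/bin/bash
# Hypothetical check script: namespace captured files under a per-check
# subdirectory so concurrently running checks never overwrite each other.
# (${GENERATED_FILES:-/tmp} falls back to /tmp when run outside Runbooks.)
OUT_DIR="${GENERATED_FILES:-/tmp}/disk-space-check"
mkdir -p "$OUT_DIR"

# Another check writing report.txt would use its own subdirectory, so the
# identical file name cannot collide.
df -h / > "$OUT_DIR/report.txt"
echo "Captured report to $OUT_DIR/report.txt"
```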
### Writing to Repository Files
If a `<GitClone>` block has cloned a repository, the `$REPO_FILES` environment variable points to the local path of the most recently cloned repository. Scripts can use this to read or validate files inside the cloned repo:
```bash
#!/bin/bash
# Validate a config file in the cloned repo
if [ -f "$REPO_FILES/terragrunt.hcl" ]; then
  log_info "Found terragrunt.hcl"
  exit 0
else
  log_error "Missing terragrunt.hcl"
  exit 1
fi
```
Unlike `$GENERATED_FILES`, writes to `$REPO_FILES` are not captured to a temporary directory; they happen directly on the filesystem. Changes are visible in the **Changed** tab via `git diff`.
> **Note:**
`$REPO_FILES` is only set when a `<GitClone>` block has successfully cloned a repository. If no repo has been cloned, this variable is **unset**. Always check for it in your scripts:
```bash
if [ -z "${REPO_FILES:-}" ]; then
  echo "No git worktree available. Clone a repo first."
  exit 1
fi
```
## Common Use Cases
The `<Check>` block works especially well for:
- Pre-flight checks
- Validating the results of `<Command>` blocks
- Smoke tests that validate a completed Runbook
This might manifest as:
- **Tool Installation Verification**: Check if required CLI tools are installed
- **Authentication Validation**: Verify users are logged into required services
- **Infrastructure State**: Validate that required resources exist
- **Configuration Validation**: Ensure config files are properly formatted
- **Network Connectivity**: Test connectivity to required services
- **Permissions**: Verify users have necessary permissions
---
#### <Command>
The `<Command>` block executes shell commands or scripts with variable substitution. It's used for performing operations like deployments, resource creation, and system configuration.
## Basic Usage
```mdx
<Command
  id="say-hello"
  title="Say hello"
  command="echo 'Hello, world!'"
/>
```
## Command vs. Check
Command blocks and [Check](/authoring/blocks/check/) blocks share many features, but each has a distinct purpose: Check blocks focus on _reading_ the state of the world and validating it, while Command blocks focus on _mutating_ that state to update it to what is needed.
## Props
### Required Props
- `id` (string) - Unique identifier for this command block
### Optional Props
- `title` (string) - Display title shown in the UI. Supports inline markdown (bold, italic, links, code).
- `description` (string) - Longer description of what the command does. Supports inline markdown.
- `command` (string) - Inline command to execute (alternative to `path`)
- `path` (string) - Path to a shell script file relative to the runbook (alternative to `command`)
- `inputsId` (string | string[]) - ID of an [Inputs](/authoring/blocks/inputs/) block to get template variables from. Can be a single ID or an array of IDs. When multiple IDs are provided, variables are merged in order (later IDs override earlier ones).
- `awsAuthId` (string) - ID of an [AwsAuth](/authoring/blocks/awsauth/) block for AWS credentials. The credentials are passed as environment variables (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN`, `AWS_REGION`). The Run button is disabled until authentication completes.
- `githubAuthId` (string) - ID of a [GitHubAuth](/authoring/blocks/githubauth/) block for GitHub credentials. The credentials are passed as environment variables (`GITHUB_TOKEN`, `GITHUB_USER`). The Run button is disabled until authentication completes.
- `successMessage` (string) - Message shown when command succeeds (default: "Success"). Supports inline markdown.
- `failMessage` (string) - Message shown when command fails (default: "Failed"). Supports inline markdown.
- `runningMessage` (string) - Message shown while running (default: "Running..."). Supports inline markdown.
- `usePty` (boolean) - Whether to use a pseudo-terminal (PTY) for script execution. Defaults to `true`. Set to `false` to use pipes instead, which may be needed for scripts that don't work well with PTY or when simpler output handling is preferred. See [PTY Support](/authoring/blocks/advanced#pseudo-terminal-pty-support) for details.
### Inline content
Instead of referencing an external `<Inputs>` block via `inputsId`, you can nest an `<Inputs>` component directly inside the Command:
```mdx
<Command
  id="greet-user"
  title="Greet the user"
  command="echo 'Hello, {{ .inputs.Name }}!'"
>
<Inputs id="greet-config">
```yaml
variables:
  - name: Name
    type: string
    description: Your name
    validations:
      - required
\```
</Inputs>
</Command>
```
The embedded `<Inputs>` renders directly within the Command block, allowing users to fill in variables before running the command.
Other blocks can reference this Inputs block using the standard `inputsId` pattern.
## Writing Scripts
Command blocks run shell scripts to perform operations like deployments, installations, and configuration changes.
Scripts can be defined inline using the `command` prop or stored in external files using the `path` prop.
When writing scripts for Command blocks:
- **Exit codes matter.** Return `0` for success, any other code for failure
- **Use logging helpers.** Standardized functions like `log_info` and `log_error` are available
- **Templatize with variables.** Use `{{ .inputs.VariableName }}` syntax to inject user input
Scripts run in a non-interactive shell environment. See [Execution Context](#execution-context) for details.
### Defining Scripts
You can write scripts either inline or by referencing script files.
#### Inline Scripts
For simple commands, you can define the script directly in the `command` prop:
```mdx
<Command
  id="list-files"
  title="List files"
  command="ls -la"
/>
```
Inline scripts work best for one-liners or short commands. For anything more complex, use an external script file.
#### External Scripts
Instead of inline commands, you can reference external shell scripts:
```mdx
<Command
  id="deploy-app"
  title="Deploy the application"
  path="scripts/deploy.sh"
/>
```
External scripts are plain old bash scripts. The referenced script `scripts/deploy.sh` might look like:
```bash
#!/bin/bash
log_info "Starting deployment..."
kubectl apply -f deployment.yaml
if kubectl rollout status deployment/myapp; then
  log_info "Deployment complete!"
  exit 0
else
  log_error "Deployment failed"
  exit 1
fi
```
### Exit Codes
The Command block interprets your script's exit codes as follows:
- **Exit code 0**: Success ✓ (green)
- **Any other exit code**: Failure ✗ (red)
These exit codes will determine how the Runbooks UI renders the result of running a script.
### Logging
Runbooks provides standardized logging functions for your scripts by automatically injecting a [logging.sh file](https://github.com/gruntwork-io/runbooks/blob/main/scripts/logging.sh) that defines a set of Bash logging helpers. Using these functions ensures consistent output formatting and lets the Runbooks UI parse log levels for filtering and export.
#### Log Levels
| Function | Output | Description |
|----------|--------|-------------|
| `log_info "msg"` | `[timestamp] [INFO] msg` | General informational messages |
| `log_warn "msg"` | `[timestamp] [WARN] msg` | Warning conditions |
| `log_error "msg"` | `[timestamp] [ERROR] msg` | Error messages |
| `log_debug "msg"` | `[timestamp] [DEBUG] msg` | Debug output (only when `DEBUG=true`) |
#### Usage Example
```bash
#!/bin/bash
log_info "Starting deployment..."
log_debug "Target environment: $ENVIRONMENT"
if [ -z "$API_KEY" ]; then
  log_warn "API_KEY not set, some features may be unavailable"
fi
if ! deploy_application; then
  log_error "Deployment failed"
  exit 1
fi
log_info "Deployment complete"
```
#### Local Development
When running scripts locally (outside the Runbooks UI), the logging functions aren't automatically injected. To simulate the Runbooks environment, run this one-liner in your terminal session (bash or zsh):
```bash
mkdir -p ~/.config/runbooks && curl -fsSL https://raw.githubusercontent.com/gruntwork-io/runbooks/main/scripts/logging.sh -o ~/.config/runbooks/logging.sh && export BASH_ENV=~/.config/runbooks/logging.sh
```
This sets [`BASH_ENV`](https://www.gnu.org/software/bash/manual/html_node/Bash-Startup-Files.html), which tells bash to source the logging functions before running any script. It works from bash and zsh parent shells and lasts until the terminal is closed. No changes to your scripts are needed — the same script works in both environments.
> **Fish shell:**
The one-liner above uses `export` and `&&`, which are not fish syntax. Fish also does not read `BASH_ENV` itself, but child `bash` processes will if the variable is exported. Use the fish equivalent:
```fish
mkdir -p ~/.config/runbooks; and curl -fsSL https://raw.githubusercontent.com/gruntwork-io/runbooks/main/scripts/logging.sh -o ~/.config/runbooks/logging.sh; and set -gx BASH_ENV ~/.config/runbooks/logging.sh
```
To make this persistent across sessions, add the following to a fish startup file (e.g., `~/.config/fish/conf.d/runbooks.fish`):
```fish
if test -f ~/.config/runbooks/logging.sh
set -gx BASH_ENV ~/.config/runbooks/logging.sh
end
```
### With Variables
There are several ways to collect variables to customize a command or script.
#### Using inputsId
The Command's command or script pulls its values from a separate Inputs block.
```mdx
<Inputs id="repo-config">
```yaml
variables:
  - name: OrgName
    type: string
    description: GitHub organization name
  - name: RepoName
    type: string
    description: Repository name
\```
</Inputs>

<Command
  id="create-repo"
  title="Create repository"
  command="gh repo create {{ .inputs.OrgName }}/{{ .inputs.RepoName }}"
  inputsId="repo-config"
/>
```
#### Using Inline Inputs
The Command collects input values directly. These values can be shared with other blocks, just like a standalone Inputs block.
```mdx
<Command
  id="greet-user"
  title="Greet the user"
  command="echo 'Hello, {{ .inputs.Name }}!'"
>
<Inputs id="name-config">
```yaml
variables:
  - name: Name
    type: string
    description: Your name
    validations:
      - required
\```
</Inputs>
</Command>
```
#### Using Multiple inputsIds
You can reference multiple Inputs blocks by passing an array of IDs. Variables are merged in order, with later IDs overriding earlier ones:
```mdx
<Inputs id="lambda-config">
```yaml
variables:
  - name: GithubOrgName
    type: string
    description: GitHub organization name
\```
</Inputs>

<Inputs id="repo-config">
```yaml
variables:
  - name: GithubRepoName
    type: string
    description: Repository name
\```
</Inputs>

<Command
  id="configure-repo"
  title="Configure repository"
  command="echo 'Configuring {{ .inputs.GithubOrgName }}/{{ .inputs.GithubRepoName }}'"
  inputsId={["lambda-config", "repo-config"]}
/>
```
In this example, the command has access to all variables from both `lambda-config` and `repo-config`. If both define a variable with the same name, the value from `repo-config` (the later ID) takes precedence.
### Execution Context
Scripts run in a **persistent environment** — environment variable changes (`export`, `unset`) and working directory changes (`cd`) carry forward to subsequent blocks. This lets you structure your runbook like a workflow where earlier steps set up the environment for later steps.
Scripts also run in a **non-interactive shell**, which means shell aliases (like `ll`) and shell functions (like `nvm`, `rvm`) are **not available**.
For full details, see [Shell Execution Context](/security/shell-execution-context/).
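As a sketch of how this plays out, imagine two Command block scripts run back to back; the block ids, variable, and directory below are hypothetical:

```shell
#!/bin/bash
# Script for an earlier Command block (say, id="setup"):
export CLUSTER_NAME="prod-cluster"   # exported vars persist into later blocks
mkdir -p /tmp/runbook-demo
cd /tmp/runbook-demo                 # the working directory persists too

# Script for a later Command block (say, id="deploy"). Because blocks share
# one persistent session, the export and cd above are still in effect:
echo "Deploying to ${CLUSTER_NAME} from $(pwd)"
```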
### Examples
Let's take a look at some example scripts:
#### Simple Deployment Script
```bash
#!/bin/bash
# scripts/deploy.sh
set -e # Exit on error
log_info "Starting deployment..."
kubectl apply -f deployment.yaml
kubectl rollout status deployment/myapp
log_info "Deployment complete!"
```
#### Parameterized Script
```bash
#!/bin/bash
# scripts/create-vpc.sh
REGION="{{ .inputs.AwsRegion }}"
VPC_NAME="{{ .inputs.VpcName }}"
CIDR_BLOCK="{{ .inputs.CidrBlock }}"
log_info "Creating VPC $VPC_NAME in $REGION..."
log_debug "CIDR block: $CIDR_BLOCK"
aws ec2 create-vpc \
  --cidr-block "$CIDR_BLOCK" \
  --tag-specifications "ResourceType=vpc,Tags=[{Key=Name,Value=$VPC_NAME}]" \
  --region "$REGION"
log_info "VPC created successfully!"
```
#### Script with Error Handling
```bash
#!/bin/bash
# scripts/safe-deploy.sh
set -e
function cleanup {
  log_info "Cleaning up..."
  # Cleanup code here
}
trap cleanup EXIT
log_info "Running pre-deployment checks..."
./check-prerequisites.sh || { log_error "Pre-checks failed"; exit 1; }
log_info "Deploying..."
./deploy.sh
log_info "Running post-deployment validation..."
./validate-deployment.sh || { log_error "Validation failed"; exit 1; }
log_info "Deployment successful!"
```
## Capturing Output Files
Scripts can write files to two different destinations, depending on the workflow.
- [Generated files](/intro/files_workspace/#generated-files)
- [Repository files](/intro/files_workspace/#repository-files)
See the [Files workspace](/intro/files_workspace/) page for more details.
### Writing to Generated Files
Use `$GENERATED_FILES` to capture new files. Any files written to this directory automatically appear in the file panel after the command completes successfully.
#### Basic Example
```bash
#!/bin/bash
# Export OpenTofu outputs to generated files
tofu output -json > "$GENERATED_FILES/tf-outputs.json"
# Copy a config file
cp config.yaml "$GENERATED_FILES/"
```
Or as an inline command:
```mdx
<Command
  id="create-greeting"
  title="Create greeting file"
  command={`echo "Hello from Runbooks" > "$GENERATED_FILES/greeting.txt"`}
  successMessage="File created successfully!"
/>
```
#### Organizing Files with Subdirectories
You can create subdirectories within `$GENERATED_FILES` to organize your captured files:
```bash
#!/bin/bash
mkdir -p "$GENERATED_FILES/opentofu"
mkdir -p "$GENERATED_FILES/config"
tofu output -json > "$GENERATED_FILES/opentofu/outputs.json"
echo '{"env": "production"}' > "$GENERATED_FILES/config/settings.json"
```
This creates:
```
generated/
├── opentofu/
│   └── outputs.json
└── config/
    └── settings.json
```
#### How It Works
1. Before your script runs, Runbooks creates a temporary capture directory
2. The `$GENERATED_FILES` environment variable points to this directory
3. Your script writes files to `$GENERATED_FILES`
4. After successful execution, files are copied to the generated files directory
5. The temporary directory is cleaned up
> **Tip:** Files are only captured after **successful** execution (exit code 0 or 2). If your script fails, any files written to `$GENERATED_FILES` are discarded.
> **Note:** If multiple commands run concurrently and write files with the same name, the last command to finish will overwrite the file. Use unique filenames or subdirectories to avoid conflicts.
### Writing to Repository Files
If a `<GitClone>` block has cloned a repository, the `$REPO_FILES` environment variable points to the local path of the most recently cloned repository. Scripts can use this to modify files inside the cloned repo directly:
```bash
#!/bin/bash
# Modify a file in the cloned repository
echo "new setting = true" >> "$REPO_FILES/config.hcl"
```
Unlike `$GENERATED_FILES`, writes to `$REPO_FILES` are not captured to a temporary directory; they happen directly on the filesystem. Changes are visible in the **Changed** tab via `git diff`.
> **Note:** `$REPO_FILES` is only set when a `<GitClone>` block has successfully cloned a repository. If no repo has been cloned, this variable is **unset**. Always check for it in your scripts:
```bash
if [ -z "${REPO_FILES:-}" ]; then
  echo "No git worktree available. Clone a repo first."
  exit 1
fi
```
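As an illustration, the script below appends a setting to a tracked file and prints the resulting diff, which is the same change the Changed tab will show. The file and setting names are made up, and the fallback setup at the top exists only so the sketch runs outside Runbooks:

```shell
#!/bin/bash
# For experimenting outside Runbooks: fall back to a throwaway repo so the
# commands below always have something to act on. Inside Runbooks, a
# successful <GitClone> sets $REPO_FILES for you.
if [ -z "${REPO_FILES:-}" ]; then
  REPO_FILES="$(mktemp -d)"
  git -C "$REPO_FILES" init -q
  echo 'existing_setting = false' > "$REPO_FILES/config.hcl"
  git -C "$REPO_FILES" add config.hcl
  git -C "$REPO_FILES" -c user.email=demo@example.com -c user.name=demo commit -qm init
fi

# Append a setting to a tracked file (the file name is illustrative), then
# print the diff: the same change the Changed tab will surface.
echo 'new_setting = true' >> "$REPO_FILES/config.hcl"
git -C "$REPO_FILES" diff --stat
```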
## Block Outputs
Commands can produce **outputs** that downstream blocks consume. Scripts write `key=value` pairs to the `$RUNBOOK_OUTPUT` file, and downstream blocks reference them via `{{ .outputs.<block-id>.<key> }}`.
```bash
#!/bin/bash
ACCOUNT_ID=$(aws organizations create-account ...)
echo "account_id=$ACCOUNT_ID" >> "$RUNBOOK_OUTPUT"
```
After a block runs, click **"View Outputs"** below the log viewer to inspect its outputs.
For the full guide on producing outputs, consuming them in templates, dependency behavior, and working with complex data types, see [Inputs & Outputs — Block Outputs](/authoring/inputs-and-outputs/#block-outputs).
## Common Use Cases
The `<Command>` block works especially well for mutating the state of the world: the user's local environment, your organization's infrastructure, or external services.
This might manifest as:
- **Installing Tools**: Install tools needed to execute the runbook
- **Configuring the Environment**: Set up the user's local environment
- **Provisioning Resources**: Call an API to provision resources
- **Deployments**: Deploy applications or infrastructure to cloud environments
- **Database Operations**: Run migrations or seed data
- **Build Steps**: Compile code or build Docker images
---
#### <DirPicker>
The `` block provides a cascading set of dropdowns for selecting a path.
For example, if the user cloned a git repository with the following directory structure:
```
.
├── dev
│   ├── region-1
│   └── region-2
└── prod
    ├── region-1
    ├── region-2
    └── region-3
```
The DirPicker block could be configured with the following props:
```mdx
<DirPicker
  id="target-dir"
  gitCloneId="infra-repo"
  dirLabels={['Environment', 'Region']}
/>
```
Users then get a form with one dropdown for each directory level.

The options available in the dropdowns are the immediate subdirectories of the current directory. For example, if the user selects "dev" in the first dropdown, the options in the second dropdown will be "region-1" and "region-2".
Users can also type a path directly into an auto-populated and editable text input below the dropdowns (called "Target Path" in the above screenshot).
The selected path is registered as a block output, making it available to downstream blocks via the `{{ .outputs.<id>.PATH }}` syntax.
### Basic Usage
DirPicker needs a root directory to browse. You can provide one in two ways:
**Option 1: `rootDir`** — pass an explicit directory path:
```mdx
<DirPicker
  id="target-dir"
  rootDir="/home/user/infrastructure"
  dirLabels={['Environment', 'Region']}
/>
```
**Option 2: `gitCloneId`** — reference a `<GitClone>` block. DirPicker waits for the clone to complete, then uses the cloned repository as its root directory:
```mdx
<GitClone id="infra-repo" prefilledUrl="https://github.com/acme/infrastructure.git" />

<DirPicker
  id="target-dir"
  gitCloneId="infra-repo"
  dirLabels={['Environment', 'Region']}
/>
```
At least one of `rootDir` or `gitCloneId` must be provided. If both are set, `rootDir` takes precedence.
### Use cases
We built the DirPicker block specifically for the developer self-service use case.
For example, suppose a developer who doesn't know infrastructure-as-code very well wants to deploy a new application to a specific AWS account. They could use the DirPicker block to get a guided UI that helps them select the account, region, and application name, and then the block generates exactly the right path.
More generally, the DirPicker block can be used to help users select any path when the meaning of the levels of a directory hierarchy is known in advance.
### Props
| Prop | Type | Required | Description |
|------|------|----------|-------------|
| `id` | `string` | Yes | Unique block identifier. Used by downstream blocks to reference outputs. |
| `rootDir` | `string` | No* | Absolute path to the root directory to browse. When set, the block uses this path directly. |
| `gitCloneId` | `string` | No* | ID of a `<GitClone>` block. DirPicker waits for the clone to complete, then uses the cloned repository as its root directory. |
| `title` | `string` | No | Display title. Supports inline markdown. Defaults to "Select Directory". |
| `description` | `string` | No | Description text below the title. Supports inline markdown. Defaults to "Choose a target directory". |
| `dirLabels` | `string[]` | Yes | Ordered labels for each directory level (e.g., `['Account', 'Region']`). Also caps the number of dropdowns to `dirLabels.length` unless `dirLabelsExtra` is true. |
| `dirLabelsExtra` | `boolean` | No | When `true`, allow navigating deeper than `dirLabels.length`. Extra levels are labelled "Level N". Defaults to `false`. |
| `pathLabel` | `string` | No | Label for the editable path text input. Defaults to "Target Path". |
| `pathLabelDescription` | `string` | No | Description text shown below the path label. Supports inline markdown. |
\* At least one of `rootDir` or `gitCloneId` is required. If both are set, `rootDir` takes precedence.
### Outputs
| Output | Description |
|--------|-------------|
| `PATH` | The composed directory path (relative to the workspace root). Updated on every dropdown selection or manual edit. |
### Directory-Level Labels
The required `dirLabels` prop assigns meaningful names to each cascading directory level. Each entry in the array labels the corresponding depth of the directory tree. Each dropdown shows a labelled header (e.g. "Account") and its placeholder reads "Select account...".
If the user drills deeper than the number of labels provided and the block is configured with `dirLabelsExtra={true}`, extra levels fall back to "Level N".
### Using Outputs in Templates
The DirPicker outputs `PATH`, which can be referenced in a `<Template>` block's `outputPath` prop:
```mdx
<Template
  id="app-config"
  outputPath="{{ .outputs.target-dir.PATH }}/app.yaml"
/>
```
The `outputPath` resolves `{{ .outputs.*.* }}` expressions client-side using block outputs, so the file is written to the directory the user selected.
### How It Works
1. DirPicker resolves its root directory from `rootDir` (if set) or by reading the `CLONE_PATH` output of the referenced `<GitClone>` block once the clone completes.
2. It fetches the immediate subdirectories via `GET /api/workspace/dirs`.
3. Each dropdown selection triggers a fetch for the next level's subdirectories, building a cascading drill-down.
4. An editable text input below the dropdowns shows the composed path. Users can edit this directly for full control.
5. On every change (dropdown or manual edit), the block registers `PATH` as an output.
Hidden directories (those starting with `.`) are excluded from the dropdown options.
---
#### <GitClone>
The `` block provides a streamlined way to clone git repositories. It works with any git upstream, and includes optional GitHub integration for browsing organizations, repositories, and branches when a GitHub token is available.
Compared to using the [Command](/authoring/blocks/command/) block to clone a git repository, the GitClone block provides a purpose-built UI for cloning, the ability to search the GitHub API for orgs, repos, branches, and tags, and automatic display of the [file workspace](/authoring/workspace), where users can see the contents of the cloned repository along with any changes to it.
### Basic Usage
At its simplest, the GitClone block provides a single text input for a git URL:
```mdx
<GitClone id="clone-repo" />
```
### With GitHub Authentication
When paired with a `<GitHubAuth>` block, the GitClone block enables a "Browse GitHub repositories" dropdown for discovering repos by organization and name, and a ref selector for choosing a branch or tag to clone. The GitHub token is also automatically injected when cloning GitHub URLs, enabling access to private repositories.
```mdx
<GitHubAuth id="github-auth" />

<GitClone id="clone-repo" gitHubAuthId="github-auth" />
```
### Pre-filled Values
You can pre-populate the URL, ref, sparse checkout path, and local path fields. Users can still edit these values before cloning. This follows the same pattern as the `prefilledVariables` prop on the [Inputs](/authoring/blocks/inputs) block.
```mdx
<GitClone
  id="clone-repo"
  prefilledUrl="https://github.com/acme/infrastructure.git"
  prefilledRef="main"
  prefilledRepoPath="modules/vpc"
  prefilledLocalPath="infrastructure"
/>
```
### Props
| Prop | Type | Required | Description |
|------|------|----------|-------------|
| `id` | `string` | Yes | Unique block identifier. Used by downstream blocks to reference outputs. |
| `title` | `string` | No | Display title for the block header. Supports inline markdown. |
| `description` | `string` | No | Description text below the title. Supports inline markdown. |
| `gitHubAuthId` | `string` | No | Reference to a [``](/authoring/blocks/githubauth) block. When set, the block waits for authentication to complete and uses the token for GitHub API access and clone authentication. |
| `prefilledUrl` | `string` | No | Pre-fills the Git URL input field. The user can edit this value before cloning. |
| `prefilledRef` | `string` | No | Pre-fills the ref (branch or tag) to clone. When set, the specified ref is passed to `git clone --branch`. Defaults to the repository's default branch if empty. |
| `prefilledRepoPath` | `string` | No | Pre-fills the sparse checkout path. When set, only the specified subdirectory of the repository will be cloned (e.g., `modules/vpc`). |
| `prefilledLocalPath` | `string` | No | Pre-fills the local path (relative to the current working directory) where files will be cloned. Defaults to the repository name if empty. |
| `usePty` | `boolean` | No | Whether to use a pseudo-terminal (PTY) for the git clone process. Defaults to `true`. PTY enables rich output like progress bars and colors. Set to `false` if your environment doesn't support PTY. |
| `showFileTree` | `boolean` | No | Whether to show the cloned repository's file tree in the workspace panel after cloning. Defaults to `true`. When enabled, the "All files" and "Changed" tabs display the cloned files and any subsequent modifications. |
### Ref Selection
The "Ref" field lets users specify a branch or tag to clone instead of the default branch. There are two ways to set the ref:
1. **GitHub browser.** When a GitHub token is available and a repo is selected in the GitHub browser, a searchable "Ref" dropdown appears listing all branches and tags in that repo. The default branch is automatically pre-selected and marked with a "default" badge.
2. **Text input.** The "Ref" field below the GitHub browser accepts any branch or tag name directly, independent of GitHub authentication. Use the `prefilledRef` prop to set this in advance.
The ref is passed to `git clone --branch`, which accepts both branch names and tag names.
### File Workspace Integration
When `showFileTree` is `true` (the default), the GitClone block registers the cloned repository with the workspace panel. After a successful clone:
- The **All files** tab shows the full file tree of the cloned repository. Click any file to view its contents.
- The **Changed** tab shows a GitHub PR-style diff view of any files modified after cloning (e.g., by Command or Template blocks).
- If multiple `<GitClone>` blocks are used in a single runbook, a dropdown in the workspace header lets the user switch between repositories.
Set `showFileTree={false}` if you don't want the cloned repository to appear in the workspace panel (e.g., for helper repositories that the user doesn't need to browse).
### Accepted Git URL Formats
The GitClone block accepts the following URL formats:
| Format | Example | Notes |
|--------|---------|-------|
| HTTPS | `https://github.com/org/repo.git` | Recommended. Token auth injected automatically for GitHub URLs. |
| HTTPS (no .git) | `https://github.com/org/repo` | Also works without the `.git` suffix. |
| Any host | `https://gitlab.com/org/repo.git` | Works with any git hosting provider. |
| SSH | `git@github.com:org/repo.git` | Token auth does **not** apply to SSH URLs. Use HTTPS for token-based auth. |
### GitHub Authentication
The GitClone block resolves GitHub credentials in the following order:
1. **`gitHubAuthId`** — If specified, uses the `GITHUB_TOKEN` from the referenced GitHubAuth block's outputs.
2. **Session environment** — Checks the session environment for `GITHUB_TOKEN` or `GH_TOKEN`.
3. **No token** — The "Browse GitHub repositories" section is hidden. The user can still clone public repos or use SSH URLs.
When a GitHub token is available and the user enters a `github.com` HTTPS URL, the token is automatically injected on the server side for authentication. The token is never exposed in the browser.
### Sparse Checkout
Use the `prefilledRepoPath` prop (or the "Repo Path" input field) to clone only a specific subdirectory:
```mdx
{/* Illustrative — clones only the modules/vpc subdirectory */}
<GitClone id="clone-vpc-module" prefilledRepoPath="modules/vpc" />
```
This uses `git sparse-checkout` under the hood, which is efficient even for large repositories because it only downloads the blobs for the specified path.
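The same mechanics can be reproduced by hand. A self-contained sketch of the underlying git commands, using a throwaway local repository (directory names are placeholders, and the exact flags Runbooks uses may differ):

```shell
#!/bin/sh
# Reproduce a blob-filtered, sparse clone of a single subdirectory locally.
set -e
d="$(mktemp -d)"
cd "$d"

# Throwaway "remote" with two module directories.
git init -q src
mkdir -p src/modules/vpc src/modules/eks
echo 'vpc' > src/modules/vpc/main.tf
echo 'eks' > src/modules/eks/main.tf
git -C src add .
git -C src -c user.name=t -c user.email=t@example.com commit -qm "add modules"
git -C src config uploadpack.allowFilter true   # allow --filter over file://

# Sparse clone: no blobs downloaded until a path is requested.
git clone -q --filter=blob:none --sparse "file://$d/src" dst
git -C dst sparse-checkout set modules/vpc
ls dst/modules                                  # prints: vpc
```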
### Custom Local Path
The `prefilledLocalPath` prop (or the "Local Path" input field) controls where files are cloned, relative to the current working directory. If not set, the repository name is used.
### Environment Variables
When a `<GitClone>` block successfully clones a repository, it sets the `$REPO_FILES` environment variable for all subsequent Command and Check blocks. This variable points to the local path of the most recently cloned repository.
```bash
#!/bin/bash
# In a Command or Check script after GitClone
echo "Cloned repo is at: $REPO_FILES"
# Modify files directly in the cloned repo
echo "new_setting = true" >> "$REPO_FILES/config.hcl"
```
> **Note:** `$REPO_FILES` always points to the **most recently** cloned repository. If you have multiple `<GitClone>` blocks, the variable reflects whichever one ran last.
Template and TemplateInline blocks can also write directly into the cloned repository by setting `target="worktree"`:
```mdx
{/* Illustrative — writes generated files into the most recently cloned repo */}
<Template id="scaffold" path="templates/module" target="worktree" />
```
### Block Outputs
After a successful clone, the GitClone block produces outputs that can be referenced by downstream blocks:
| Output | Description | Example |
|--------|-------------|---------|
| `CLONE_PATH` | Absolute path where files were cloned | `/Users/josh/Code/my-repo` |
| `FILE_COUNT` | Number of files downloaded (excluding `.git`) | `127` |
| `REF` | The ref (branch/tag) that was cloned, if specified | `v1.2.0` |
Reference outputs in downstream blocks using template variables:
```mdx
```
> **Note:** Block IDs with hyphens are converted to underscores in template variables. For example, `clone-repo` becomes `clone_repo`.
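A quick way to predict the variable name for a given block ID, assuming the conversion is a plain hyphen-to-underscore mapping as the example above suggests:

```shell
#!/bin/sh
# Map a block ID to its template-variable form: hyphens become underscores.
block_id="clone-repo"                     # illustrative block ID
var_name="$(printf '%s' "$block_id" | tr '-' '_')"
echo "$var_name"                          # prints: clone_repo
```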
### View Logs
The GitClone block includes a collapsible "View Logs" section (identical to the [Command](/authoring/blocks/command/) and [Check](/authoring/blocks/check/) blocks) that shows the full `git clone` output, including real-time streaming during the clone operation. This includes progress indicators, transfer statistics, and any error details.
### Example: Full Workflow
```mdx
```
---
#### <GitHubAuth>
The `<GitHubAuth>` block provides a streamlined interface for authenticating to GitHub.
It automatically detects if the user is already authenticated to GitHub by checking their local environment variables or the GitHub CLI status. It also enables users to authenticate manually via OAuth or by entering a Personal Access Token (PAT). Once authenticated, GitHub credentials are automatically available to subsequent [Command](/authoring/blocks/command/) and [Check](/authoring/blocks/check/) blocks.
You can still authenticate to GitHub without this block by relying on Runbooks' support for environment variables; however, using this block is the recommended way to authenticate to GitHub in Runbooks.
### Basic Usage
```mdx
<GitHubAuth id="gh-auth" />
```
By default, Runbooks will automatically detect and use credentials from:
1. Environment variables (`GITHUB_TOKEN` or `GH_TOKEN`)
2. GitHub CLI (`gh auth token`)
If no credentials are detected, users can authenticate manually via OAuth or PAT.
### Authentication Methods
The GitHubAuth block supports multiple authentication sources:
| Method | Description |
|--------|-------------|
| **Environment Variable** | Auto-detected from `GITHUB_TOKEN` or `GH_TOKEN` |
| **GitHub CLI** | Auto-detected from `gh auth token` |
| **OAuth** | Sign in via GitHub's device authorization flow (recommended) |
| **Personal Access Token** | Enter a PAT (classic or fine-grained) directly in the UI |
### GitHub Enterprise Support
The GitHubAuth block supports GitHub.com, GitHub Enterprise Cloud and GitHub Enterprise Server. However, OAuth is only supported for GitHub.com and GitHub Enterprise Cloud. For GitHub Enterprise Server, you can use PAT authentication or contact Gruntwork for enterprise support options.
## Props
| Prop | Type | Default | Description |
|------|------|---------|-------------|
| `id` | string | required | Unique identifier for this component |
| `title` | string | "GitHub Authentication" | Display title shown in the UI |
| `description` | string | - | Description of the authentication purpose |
| `detectCredentials` | `false` or `CredentialSource[]` | `['env', 'cli']` | Whether and how the block should automatically detect credentials in the user's environment |
| `oauthClientId` | string | Gruntwork default | Custom OAuth App client ID |
| `oauthScopes` | string[] | `["repo"]` | OAuth scopes to request |
## Usage
When authentication succeeds, the following environment variables are set in the session:
| Variable | Description |
|----------|-------------|
| `GITHUB_TOKEN` | The GitHub access token |
| `GITHUB_USER` | The authenticated user's login name |
All subsequent Command and Check blocks automatically have access to these credentials. If you authenticate multiple times, the most recent credentials become the default.
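A minimal sketch of how a later Command or Check script consumes these variables; the exported values below are simulated placeholders, not real credentials:

```shell
#!/bin/sh
# Simulate what a completed GitHubAuth block exports into the session.
export GITHUB_TOKEN="ghp_placeholder_token"
export GITHUB_USER="octocat"

# A later Command/Check script can fail fast if auth hasn't happened yet...
: "${GITHUB_TOKEN:?GitHubAuth block has not run}"
# ...and then use the credentials directly, e.g. with curl or the gh CLI.
echo "Authenticated as ${GITHUB_USER}"
```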
### Explicit Block References
You can explicitly reference a GitHubAuth block using the `githubAuthId` prop. This provides several benefits:
- **Prevents premature execution**: The Run/Check button is disabled until the referenced GitHubAuth block has valid credentials, preventing accidental execution without authentication
- **Clear dependency**: Documents which authentication is required for each command
- **Deterministic credentials**: When multiple GitHubAuth blocks exist, ensures the command uses the exact credentials you specify
```mdx
<GitHubAuth id="gh-auth" />
{/* This block stays disabled until gh-auth has valid credentials */}
<GitHubPullRequest id="create-pr" githubAuthId="gh-auth" />
```
#### How Environment Variables Work with References
When a GitHubAuth block authenticates, it does two things:
1. **Sets session-level environment variables** (`GITHUB_TOKEN`, `GITHUB_USER`) that are available to all subsequent blocks
2. **Stores credentials** that can be passed to specific blocks via `githubAuthId`
When you use `githubAuthId` to reference a GitHubAuth block, the credentials from that specific GitHubAuth block are passed as environment variables that override any session-level values _for that block only_. This ensures the command uses the exact credentials from the referenced block, even if:
- Multiple GitHubAuth blocks exist with different credentials
- Session-level `GITHUB_TOKEN` was set by a different auth block
- The user re-authenticated with different credentials later
Without `githubAuthId`, blocks use whatever `GITHUB_TOKEN` is currently in the session environment (set by the most recent authentication).
> **Tip:** Using `githubAuthId` is recommended even when you only have one GitHubAuth block. It prevents users from accidentally running commands before authenticating, makes the dependency explicit in your runbook, and helps with automated testing.
### Credential Detection
By default, GitHubAuth automatically detects existing credentials from environment variables and GitHub CLI. You can customize this behavior with the `detectCredentials` prop.
#### Default Behavior
With no configuration, GitHubAuth tries these sources in order:
```mdx
{/* Default - same as detectCredentials={['env', 'cli']} */}
<GitHubAuth id="gh-auth" />
```
1. **Environment variables** - `GITHUB_TOKEN` or `GH_TOKEN`
2. **GitHub CLI** - `gh auth token`
If either source provides valid credentials, authentication succeeds immediately. If neither is found, users see the manual OAuth/PAT interface.
#### Detection Sources
| Source | Description |
|--------|-------------|
| `'env'` | Check standard env vars (`GITHUB_TOKEN`, `GH_TOKEN`) |
| `{ env: { prefix: 'PREFIX_' } }` | Check prefixed env vars (`PREFIX_GITHUB_TOKEN`, `PREFIX_GH_TOKEN`) |
| `'cli'` | Check GitHub CLI (`gh auth token`) |
| `{ block: 'block-id' }` | Use token from a Command block's output |
#### Custom Detection Order
Check only environment variables:
```mdx
<GitHubAuth id="gh-auth" detectCredentials={['env']} />
```
Check prefixed environment variables (e.g., `PROD_GITHUB_TOKEN`):
```mdx
<GitHubAuth id="gh-auth" detectCredentials={[{ env: { prefix: 'PROD_' } }]} />
```
**Prefix rules (security):**
- Uppercase letters, numbers, and underscores only
- Must start with a letter
- Optional trailing underscore (recommended for readability)
- Only allows reading `PREFIX_GITHUB_TOKEN` / `PREFIX_GH_TOKEN`
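The rules above can be approximated with a single regular expression. This is a sketch of the documented rules, not Runbooks' actual validation code:

```shell
#!/bin/sh
# Validate an env-var prefix: uppercase letters, digits, and underscores only,
# starting with a letter. (Approximation of the documented rules.)
is_valid_prefix() {
  printf '%s' "$1" | LC_ALL=C grep -Eq '^[A-Z][A-Z0-9_]*$'
}

is_valid_prefix "PROD_" && echo "PROD_ accepted"
is_valid_prefix "_BAD"  || echo "_BAD rejected (must start with a letter)"
is_valid_prefix "prod_" || echo "prod_ rejected (must be uppercase)"
```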
#### Test Prefix
To use a different prefix for automated tests (e.g., to avoid conflicts with local credentials), configure `env_prefix` in your test config file (`runbook_test.yml`) rather than in the MDX:
```yaml
# runbook_test.yml
steps:
- block: gh-auth
env_prefix: CI_ # Checks CI_GITHUB_TOKEN, CI_GH_TOKEN
expect: success
```
Without `env_prefix`, the test framework checks `RUNBOOKS_GITHUB_TOKEN` first, which helps avoid accidentally using credentials from your local environment when running tests. See [Testing GitHubAuth Blocks](/authoring/testing/#testing-githubauth-blocks) for details.
#### Disable Auto-Detection
Force manual authentication only:
```mdx
<GitHubAuth id="gh-auth" detectCredentials={false} />
```
#### From Command Output
Use a token generated by a previous Command block:
```mdx
{/* "fetch-token" is the id of the Command block whose output provides the token */}
<GitHubAuth id="gh-auth" detectCredentials={[{ block: 'fetch-token' }]} />
```
The Command script should output the token:
```bash
#!/bin/bash
# Example: fetch token from a secrets manager
TOKEN=$(vault read -field=token secret/github)
echo "GITHUB_TOKEN=$TOKEN" >> "$RUNBOOK_OUTPUT"
```
## Permissions
Once authenticated to GitHub, Runbooks will update the environment variables `GITHUB_TOKEN` and `GITHUB_USER` with the authenticated user's token and username. GitHub tokens have permissions that control what actions they can perform. The way permissions are configured depends on the authentication method:
| Method | How Permissions Are Set |
|--------|------------------------|
| **Environment Variable** | Permissions are inherited from however the token was originally created |
| **GitHub CLI** | Permissions are inherited from `gh auth login`; Runbooks warns if `repo` scope is missing |
| **OAuth** | Runbook author specifies OAuth scopes via `oauthScopes` prop; user approves on GitHub |
| **Personal Access Token (PAT)** | User configures permissions when creating the token on GitHub |
### OAuth Scopes
When users authenticate via OAuth, the `oauthScopes` prop controls what permissions are requested. Users see these scopes on GitHub's authorization page and must approve them.
The default is `["repo"]`, which grants full read/write access to repositories. Consider requesting fewer permissions if your runbook has limited needs:
| Scope | Description |
|-------|-------------|
| `repo` | Full access to private and public repositories (default) |
| `public_repo` | Access to public repositories only |
| `repo:status` | Read/write commit status (useful for CI) |
| `read:org` | Read organization membership |
For a complete list, see [GitHub's OAuth scopes documentation](https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/scopes-for-oauth-apps).
```mdx
{/* Request read-only access to public repositories instead of the default "repo" */}
<GitHubAuth id="gh-auth" oauthScopes={["public_repo"]} />
```
> **Caution:** OAuth permissions are coarse-grained, and the minimum viable OAuth scope that Runbooks can use is `repo`, which grants full read/write access to all repositories and is still quite permissive. If you want to narrow down the permissions, consider defining a fine-grained token and using it as an environment variable, or manually entering it into the UI.
### Personal Access Tokens
When users authenticate with a PAT, the token's permissions are determined by how the user created it on GitHub. Runbook authors cannot control PAT permissions.
- **Classic PATs** use the same scope system as OAuth (e.g., `repo`, `read:org`)
- **Fine-grained PATs** offer more granular control per-repository
If your runbook requires specific permissions, document them clearly so users know what to configure when creating their PAT.
### Auto-detected credentials
When GitHubAuth detects credentials from environment variables or GitHub CLI, those tokens retain whatever permissions they were created with. If a detected token lacks required permissions, operations may fail at runtime.
> **Note:** When detecting credentials from GitHub CLI, Runbooks checks if the token has the `repo` scope and displays a warning if it's missing, since many common operations require it.
## Security
### Principles
The GitHubAuth block is designed with a local-first security model:
1. **Gruntwork never sees your credentials.** All authentication happens between your machine and GitHub. Tokens are never transmitted to Gruntwork servers.
2. **Credentials stay in memory.** Tokens are stored only in your local Runbooks session and are never persisted to disk.
### How Credentials Stay Local
Each authentication method keeps your credentials on your machine:
| Method | How It Works |
|--------|-------------|
| **Environment Variable** | Runbooks reads `GITHUB_TOKEN` from your local environment. The token never leaves your machine. |
| **GitHub CLI** | Runbooks calls `gh auth token` locally. The token is retrieved from your CLI's secure storage and stays local. |
| **OAuth** | Uses GitHub's [Device Authorization Grant](https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/authorizing-oauth-apps#device-flow). GitHub sends the token directly to your local Runbooks instance—Gruntwork's servers are never involved. |
| **Personal Access Token** | You paste the token into the local UI. It's sent directly to your local Runbooks server (localhost), never to any external server. |
### OAuth Flow Details
When you authenticate via OAuth:
1. Runbooks (running on your machine) requests a device code from GitHub
2. You open github.com/login/device and enter the code
3. You see "Gruntwork Runbooks wants to access your account" and approve
4. GitHub sends the token directly to your local Runbooks instance
The token goes from GitHub to your machine. Gruntwork has no server in this flow and cannot intercept your token.
### Threat Analysis
As part of our internal security practices, we've analyzed the potential risks of using the GitHubAuth block.
#### Malicious Runbook with Custom `oauthClientId`
**Scenario**: An attacker publishes a runbook with `oauthClientId="attacker-app-id"`.
**What happens**:
- You see "Attacker's App" (not "Gruntwork Runbooks") on GitHub's authorization page
- If you approve, the token still goes to YOUR local machine, not the attacker
- The attacker registered an OAuth app but has no server receiving tokens
**Risk**: None. The attacker cannot intercept your token because it goes directly from GitHub to your machine. The only effect is that "Attacker's App" appears in your GitHub authorized apps list, which you can revoke.
**Mitigation**: Despite no risk, this is still a potential indicator of suspicious activity, so Runbooks displays a warning when `oauthClientId` differs from the default. Always verify the app name on GitHub's authorization page before approving.
#### Malicious Runbook with Custom Base URL
**Scenario**: An attacker publishes a runbook that attempts to redirect OAuth to a fake GitHub login page (e.g., `oauthBaseUrl="https://github-login.attacker.com"`).
**What happens**: Nothing. Runbooks does not allow `oauthBaseUrl` as a prop. GitHubAuth only communicates with GitHub.com, so the attack fails before it starts.
**Risk**: None. This attack vector is blocked by design.
#### Scope Escalation
**Scenario**: A runbook requests excessive permissions with `oauthScopes={["repo", "admin:org", "delete_repo"]}`.
**What happens**: Runbooks displays the requested scopes in the UI before you authenticate. GitHub's authorization page also displays all requested scopes. If you approve without reviewing either, you grant broader access than needed.
**Risk**: Low. Both Runbooks and GitHub clearly display requested scopes, giving you two opportunities to review before granting access.
**Mitigation**: Review the scopes shown in Runbooks before clicking "Sign in with GitHub", and verify them again on GitHub's authorization page. Be suspicious of runbooks requesting `admin:*`, `delete_repo`, or `write:*` scopes beyond the default `repo`.
### Managing Your Authorizations
Since Gruntwork never has your tokens, there is nothing to revoke on Gruntwork's side. However, revoking at GitHub invalidates the token itself, which is useful if:
- You want to ensure the token in your local Runbooks session stops working
- You're done with a runbook and want to clean up
- You suspect a token may have been exposed
You can manage authorizations at:
- [github.com/settings/applications](https://github.com/settings/applications) - Revoke OAuth authorizations
- [github.com/settings/tokens](https://github.com/settings/tokens) - Revoke personal access tokens
Alternatively, simply closing your Runbooks session discards any tokens stored locally.
---
#### <GitHubPullRequest>
The `<GitHubPullRequest>` block provides a streamlined way to create GitHub pull requests directly from a runbook. It integrates with the [`<GitClone>`](/authoring/blocks/gitclone/) block to push changes made during the runbook workflow and open a PR.
This block completes the git workflow loop: clone a repository, make changes (via Command, Template, or other blocks), and open a pull request.
### Basic Usage
Pair with `<GitClone>` and `<GitHubAuth>` to create a complete workflow:
```mdx
<GitHubAuth id="gh-auth" />
<GitClone id="clone-repo" />
{/* ...Command or Template blocks that modify the cloned files... */}
<GitHubPullRequest id="create-pr" githubAuthId="gh-auth" />
```
### Pre-filled Values
You can pre-populate the PR title, description, labels, and branch name. Users can still edit these values before creating the PR.
```mdx
{/* Illustrative values — users can edit all of these before creating the PR */}
<GitHubPullRequest
  id="create-pr"
  prefilledPullRequestTitle="Update VPC configuration"
  prefilledPullRequestDescription="Automated update generated by a runbook."
  prefilledPullRequestLabels={["infrastructure"]}
  prefilledBranchName="runbook/update-vpc"
/>
```
### Template Expressions
The `prefilledPullRequestTitle`, `prefilledPullRequestDescription`, and `prefilledBranchName` props support template expressions that reference inputs and outputs from other blocks. Escape sequences like `\n` are converted to real newlines, which is useful for multi-line PR descriptions:
```mdx
```
Template expressions are resolved in real-time as upstream blocks produce outputs. If the block references outputs from blocks that haven't run yet, it displays a warning and disables the "Create Pull Request" button until all dependencies are satisfied.
### Props
| Prop | Type | Required | Description |
|------|------|----------|-------------|
| `id` | `string` | Yes | Unique block identifier. Used by downstream blocks to reference outputs. |
| `title` | `string` | No | Display title for the block header. Supports inline markdown and template expressions. |
| `description` | `string` | No | Description text below the title. Supports inline markdown and template expressions. |
| `inputsId` | `string \| string[]` | No | Reference to one or more [``](/authoring/blocks/inputs) blocks by ID for template variable substitution. When multiple IDs are provided, variables are merged in order (later IDs override earlier ones). |
| `githubAuthId` | `string` | No | Reference to a [``](/authoring/blocks/githubauth) block. When set, the block waits for authentication to complete and uses the token for GitHub API access. |
| `prefilledPullRequestTitle` | `string` | No | Pre-fills the PR title field. Supports template expressions. |
| `prefilledPullRequestDescription` | `string` | No | Pre-fills the PR description field. Supports template expressions, markdown, and `\n` for newlines. |
| `prefilledPullRequestLabels` | `string[]` | No | Pre-selects labels by name. Labels must exist in the repository. |
| `prefilledBranchName` | `string` | No | Pre-fills the branch name. Supports template expressions. Defaults to `runbook/<timestamp>`. |
### Block Outputs
After a successful PR creation, the block produces outputs that can be referenced by downstream blocks:
| Output | Description | Example |
|--------|-------------|---------|
| `PR_ID` | The pull request number | `42` |
| `PR_URL` | The full URL of the created pull request | `https://github.com/org/repo/pull/42` |
Reference outputs in downstream blocks using template variables:
```mdx
```
> **Note:** Block IDs with hyphens are converted to underscores in template variables. For example, `create-pr` becomes `create_pr`.
### Git Push
After creating a PR, the block displays a "Git Push" button. This allows you to push additional changes made after the PR was created (for example, by running more Command or Template blocks). Each push stages all changes, commits them with the message "Additional changes", and pushes to the same branch.
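The button's behavior can be sketched as three git commands; the real implementation may differ in details such as author identity. This sketch uses a throwaway local "remote" so it is runnable end to end:

```shell
#!/bin/sh
# Reproduce the "Git Push" button: stage everything, commit with the fixed
# message, and push to the same branch.
set -e
d="$(mktemp -d)"
cd "$d"
git init -q --bare remote.git
git clone -q "$d/remote.git" work 2>/dev/null
cd work
git -c user.name=t -c user.email=t@example.com \
  commit -q --allow-empty -m "initial commit"
git push -q origin HEAD

# ...later blocks modify files; the button then runs the equivalent of:
echo "new_setting = true" > config.hcl
git add -A
git -c user.name=t -c user.email=t@example.com commit -qm "Additional changes"
git push -q origin HEAD

git -C "$d/remote.git" log -1 --format=%s    # prints: Additional changes
```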
### Prerequisites
The GitHubPullRequest block requires two prerequisites:
1. **GitHub authentication** -- A `<GitHubAuth>` block must be completed to provide the GitHub token for API access.
2. **A cloned repository** -- A `<GitClone>` block must have successfully cloned a repository. The PR is created against this repository.
Both prerequisites are checked automatically, and the block shows amber warnings when they are not met:
- **GitHub authentication**: The block looks for a `GITHUB_TOKEN` output from the `GitHubAuth` block referenced by `githubAuthId`. When the `GitHubAuth` block authenticates via pre-existing credentials (e.g., environment variables or the GitHub CLI) rather than its interactive OAuth flow, there is no explicit `GITHUB_TOKEN` output — instead the block registers an `__AUTHENTICATED` marker to signal success. The prerequisite is satisfied by either. Until one is present, the block displays "Waiting for GitHub authentication" and names the specific block to complete.
- **Cloned repository**: The block checks whether a `GitClone` block has registered an active worktree. Until a repository has been cloned, the block displays "No repository available". The block remains in a `pending` state and the Create PR button is disabled until both prerequisites are satisfied.
### Labels
The block fetches available labels from the repository and presents them in a searchable multi-select dropdown. Labels can also be pre-selected using the `prefilledPullRequestLabels` prop.
### Example: Full Workflow
```mdx
```
---
#### <Inputs>
The `<Inputs>` block dynamically renders a web form to collect user input. The collected values can be used by [Command](/authoring/blocks/command/), [Check](/authoring/blocks/check/), or [Template](/authoring/blocks/template/) blocks for variable substitution.
## Basic Usage
`````mdx
<Inputs id="project-inputs">
```yaml
variables:
  - name: ProjectName
    type: string
    description: Name of your project
    validations:
      - required
```
</Inputs>
`````
## Inputs vs. Template
Inputs and [Template](/authoring/blocks/template/) both render forms to collect variable values, but they have different purposes:
| Feature | `<Inputs>` | `<Template>` |
|---------|------------|--------------|
| Collects variables | ✓ | ✓ |
| Generates files | ✗ | ✓ |
| Button text | "Submit" | "Generate" |
| Use case | Collect values for other blocks | Generate files from templates |
Use `<Inputs>` when you need to collect variable values without generating files. Use `<Template>` when you need to generate files from a Boilerplate template directory.
## Props
### Required Props
- `id` (string) - Unique identifier for this component. Other blocks reference this ID via `inputsId` to access the collected values.
### "Exactly one required" props
You must provide variables using exactly one of the following methods (but not both):
- **Inline YAML** - Write the variable definitions directly inside the `<Inputs>` tags in the style of a `boilerplate.yml` file.
- `path` (string) - Path to a directory containing a `boilerplate.yml` file, relative to the runbook.
### Optional Props
- `prefilledVariables` (object) - An object of variable names to values that pre-populate the form fields. These values override any `default` values defined in the YAML.
## Variable Definitions
Variables are defined using the [Boilerplate variable syntax](/authoring/boilerplate/#the-boilerplateyml-file).
## Declaring variables
There are two ways to declare the variables you want to collect from users. Use exactly one of these, but not both at the same time.
### Inline YAML
Define variables directly in the Inputs block:
`````mdx
<Inputs id="user-inputs">
```yaml
variables:
  - name: Username
    type: string
    description: Your username
    validations:
      - required
  - name: NotifyByEmail
    type: bool
    description: Receive email notifications
    default: true
```
</Inputs>
`````
## Loading from Path
Load variables from an external `boilerplate.yml` file:
```mdx
<Inputs id="vpc-inputs" path="templates/vpc" />
```
This loads variable definitions from `templates/vpc/boilerplate.yml` relative to your runbook file.
## Prefilled Variables
Pre-populate form fields with specific values. These override any `default` values:
`````mdx
{/* Illustrative — these values override any `default`s from the template */}
<Inputs
  id="prefilled-inputs"
  path="templates/vpc"
  prefilledVariables={{ VpcName: "prod-vpc", CidrBlock: "10.0.0.0/16" }}
/>
`````
## Using Variables in Commands and Checks
The primary use case for Inputs is to provide variables to Command and Check blocks. Inputs can also feed other Templates.
### Reference by inputsId
`````mdx
<Inputs id="vpc-config">
```yaml
variables:
  - name: VpcName
    type: string
    description: Name for the VPC
  - name: CidrBlock
    type: string
    description: CIDR block for the VPC
    default: "10.0.0.0/16"
```
</Inputs>
{/* Command and Check blocks reference these values via inputsId="vpc-config" */}
`````
### Reference Multiple Inputs
You can reference multiple Inputs blocks by passing an array of IDs. Variables are merged in order, with later IDs overriding earlier ones:
`````mdx
<Inputs id="env-config">
```yaml
variables:
  - name: Environment
    type: enum
    options: [dev, staging, prod]
    default: dev
```
</Inputs>
<Inputs id="app-config">
```yaml
variables:
  - name: AppName
    type: string
  - name: Port
    type: int
    default: 8080
```
</Inputs>
{/* A Command block references both via inputsId={["env-config", "app-config"]} */}
`````
In this example, the command has access to variables from both Inputs blocks.
### Embedded in Command or Check
You can embed Inputs directly inside a Command or Check block:
`````mdx
```yaml
variables:
- name: Name
type: string
description: Your name
validations:
- required
```
`````
When embedded, the Inputs form renders inline within the parent block without a separate submit button. The variables are automatically available to the parent.
Other blocks can still reference embedded Inputs using the standard `inputsId` pattern.
## Sections (Grouping Variables)
Group related variables under section headers using the `x-section` extension:
```yaml
variables:
- name: FunctionName
type: string
description: Name of the Lambda function
x-section: Basic Configuration
- name: Runtime
type: enum
options: [python3.12, nodejs20.x]
x-section: Basic Configuration
- name: MemorySize
type: int
description: Memory allocation in MB
default: 128
x-section: Advanced Settings
- name: Timeout
type: int
description: Timeout in seconds
default: 30
x-section: Advanced Settings
```
Variables with the same `x-section` value are grouped together in the form under a section header.
## Common Use Cases
The `<Inputs>` block works well for:
- **Collecting configuration values**: Gather settings that Commands and Checks need
- **Reusable variables**: Define values once and reference from multiple blocks
- **User preferences**: Collect environment-specific settings like regions or account names
- **Parameterizing scripts**: Provide dynamic values to shell scripts
## Complete Example
Here's a complete example showing Inputs used with Check and Command:
`````mdx
# Create a GitHub Repository
Enter your repository settings:
```yaml
variables:
- name: OrgName
type: string
description: GitHub organization name
validations:
- required
- name: RepoName
type: string
description: Repository name
validations:
- required
- name: Visibility
type: enum
description: Repository visibility
options:
- private
- public
default: private
```
First, verify you're authenticated to GitHub:
Now create the repository:
Verify the repository was created:
`````
---
#### <Template>
The `<Template>` block generates files from a [Gruntwork Boilerplate](/authoring/boilerplate/) template directory. It renders a form for any variables defined in the template, and saves generated files to the workspace.
## Basic Usage
```mdx
<Template id="vpc-template" path="templates/vpc" />
```
This loads the `boilerplate.yml` file from `templates/vpc/boilerplate.yml` (relative to your runbook file), renders a form for the variables, and generates files when the user clicks "Generate".
## Template vs. TemplateInline
Template and [TemplateInline](/authoring/blocks/templateinline/) both render Boilerplate templates, but they serve different purposes:
| Feature | `<Template>` | `<TemplateInline>` |
|---------|-------------|-------------------|
| Template source | Directory with `boilerplate.yml` | Inline in runbook |
| Form rendered | Yes, from `boilerplate.yml` | No form (uses [Inputs](/authoring/blocks/inputs/)) |
| File generation | Always saves to workspace | Optional (`generateFile={true}`) |
| Use case | Generate files from template directories | Show preview of single files inline |
Use `<Template>` when you have a full Boilerplate template directory with multiple files and a `boilerplate.yml` configuration. Use `<TemplateInline>` when you want to show a quick inline preview or generate a single file without a separate template directory.
## Props
### Required Props
- `id` (string) - Unique identifier for this component. Used to reference this Template's variables from other blocks.
- `path` (string) - Path to the boilerplate template directory, relative to the runbook file. The directory must contain a `boilerplate.yml` file.
### Optional Props
- `inputsId` (string | string[]) - ID of Inputs or Template block(s) to import variable values from. When multiple IDs are provided as an array, variables are merged in order (later IDs override earlier ones).
- `target` (`"generated"` | `"worktree"`) - Where template output is written. Defaults to `"generated"`, which writes to the standard generated files directory (`$GENERATED_FILES`). Set to `"worktree"` to write directly into the active git worktree (the most recently cloned repo via `<GitClone>`). Requires a `<GitClone>` block to have run first.
> **Tip:** Use `target="worktree"` when you want to generate files directly into a cloned repository. For example, to scaffold new modules or configuration files in an existing repo.
## Directory Structure
The `path` prop should point to a directory containing a `boilerplate.yml` file and any template files:
```mdx
<Template id="vpc-template" path="templates/vpc" />
```
Expected directory structure:
```
templates/vpc/
├── boilerplate.yml # Variable definitions (required)
└── arbitrary_template_files.txt # Template file
```
The `boilerplate.yml` file defines the variables that will be collected from the user. Template files use [Boilerplate's Go template syntax](/authoring/boilerplate/#template-syntax) for variable substitution.
## With Variables
There are several ways to provide variables to a Template.
### Standalone Template
The Template displays a form with all variables defined in its `boilerplate.yml`:
```mdx
<Template id="vpc-template" path="templates/vpc" />
```
### Using inputsId
Import variables from a separate Inputs block:
`````mdx
<Inputs id="config">
```yaml
variables:
  - name: Environment
    type: enum
    options: [dev, staging, prod]
    default: dev
  - name: Region
    type: string
    default: us-east-1
```
</Inputs>
<Template id="vpc-template" path="templates/vpc" inputsId="config" />
`````
Variables from `config` are merged with the template's own variables. Shared variables become read-only in the Template form.
### Using Multiple inputsIds
You can reference multiple Inputs blocks by passing an array of IDs. Variables are merged in order, with later IDs overriding earlier ones:
`````mdx
<Inputs id="org-config">
```yaml
variables:
  - name: OrgName
    type: string
```
</Inputs>
<Inputs id="env-config">
```yaml
variables:
  - name: Environment
    type: enum
    options: [dev, prod]
```
</Inputs>
<Template id="vpc-template" path="templates/vpc" inputsId={["org-config", "env-config"]} />
`````
## Variable Categories
Because a Template has its own `boilerplate.yml` *and* can import variables via `inputsId`, Runbooks needs to decide what happens when both sides define a variable. The result depends on where each variable is defined — Template-only variables are editable, provider-only variables pass through invisibly, and variables defined in both places become read-only. See [Inputs & Outputs — Variable Categories](/authoring/inputs-and-outputs/#variable-categories) for the full explanation.
## Template File Syntax
Template files use [Boilerplate's Go template syntax](/authoring/boilerplate/#template-syntax):
```hcl
# main.tf
resource "aws_vpc" "main" {
  cidr_block = "{{ .CidrBlock }}"

  tags = {
    Name        = "{{ .VpcName }}"
    Environment = "{{ .Environment }}"
  }
}

{{- if .EnableFlowLogs }}
resource "aws_flow_log" "main" {
  vpc_id = aws_vpc.main.id
  # ...
}
{{- end }}
```
## Using Block Outputs
Template files can reference outputs from other blocks (e.g., [Command](/authoring/blocks/command/), [Check](/authoring/blocks/check/), [TfModule](/authoring/blocks/tfmodule/), [DirPicker](/authoring/blocks/dirpicker/)) using the `outputs` namespace:
```hcl
account_id = "{{ .outputs.create_account.account_id }}"
```
Runbooks scans template files for output references and disables the Generate button until all upstream blocks have run.
For the full guide on producing outputs, consuming them, dependency behavior, and ID naming rules, see [Inputs & Outputs — Block Outputs](/authoring/inputs-and-outputs/#block-outputs).
## Common Use Cases
The `<Template>` block works especially well for generating files from structured templates:
- **Generate infrastructure code**: Create Terraform, Terragrunt, or other IaC files based on user inputs.
- **Scaffold projects**: Generate project structures with configuration files customized to user preferences.
- **Create configuration files**: Generate CI/CD pipelines, Docker Compose files, Kubernetes manifests, or other config files.
- **Multi-file generation**: Generate multiple related files that need consistent variable values.
## Complete Example
Here's a complete runbook showing Template with imported variables and validation:
`````mdx
# Deploy a VPC

First, configure your environment:

<Inputs id="env_config">
```yaml
variables:
  - name: Environment
    type: enum
    options: [dev, staging, prod]
    default: dev
  - name: Region
    type: string
    default: us-east-1
```
</Inputs>

Now configure your VPC. The Environment and Region will be inherited from above:

<Template id="vpc" path="templates/vpc" inputsId="env_config" />
`````
---
#### <TemplateInline>
The `<TemplateInline>` block renders Boilerplate templates directly in your runbook, displaying the rendered output inline. Unlike `<Template>`, which loads templates from a directory, `<TemplateInline>` lets you write template content directly in your runbook file.
## Basic Usage
`````mdx
<Inputs id="basic">
```yaml
variables:
  - name: Name
    type: string
    default: World
```
</Inputs>

<TemplateInline id="greeting" inputsId="basic">
```txt
Hello, {{ .inputs.Name }}!
```
</TemplateInline>
`````
## vs. Template
TemplateInline and [Template](/authoring/blocks/template/) both render Boilerplate templates, but they serve different purposes:
| Feature | `<TemplateInline>` | `<Template>` |
|---------|-------------------|--------------|
| Template source | Inline in runbook | Directory with `boilerplate.yml` |
| Form rendered | No (uses [Inputs](/authoring/blocks/inputs/)) | Yes, from `boilerplate.yml` |
| File generation | Optional (`generateFile={true}`) | Always saves to workspace |
| Use case | Show preview of single files inline | Generate files from template directories |
Use `<TemplateInline>` when you want to show users what generated content looks like inline, or for quick single-file templates. Use `<Template>` when you have a full Boilerplate template directory with multiple files and a `boilerplate.yml` configuration.
## Props
### Required Props
- `id` (string) - Unique identifier for this block.
- **[Inline template content]** - The template content to render, written as a fenced code block inside the `<TemplateInline>` tags. The code block should include a language hint (e.g., `hcl`, `yaml`, `dockerfile`) for syntax highlighting.
### Optional Props
- `inputsId` (string | string[]) - ID of the Inputs block(s) to get variable values from. When multiple IDs are provided as an array, variables are merged in order (later IDs override earlier ones). If not provided, the template renders without variable substitution.
- `outputPath` (string) - File path to display for the rendered output (e.g., `config.yaml`). This appears as the filename in the code block header.
- `generateFile` (boolean) - Whether to add the rendered file to the file tree. When `false` (the default), the template is preview-only and displays inline. Set to `true` to also save the rendered file to the workspace.
- `target` (`"generated"` | `"worktree"`) - Where template output is written when `generateFile={true}`. Defaults to `"generated"`, which writes to the standard generated files directory (`$GENERATED_FILES`). Set to `"worktree"` to write directly into the active git worktree (the repository most recently cloned by an earlier block). Requires a repository to have been cloned before this block runs.
> **Note:** For security reasons, `outputPath` must be a relative path. Absolute paths and directory traversal attempts (e.g., `../`) are blocked.
## With Variables
There are several ways to provide variables to a TemplateInline.
### Using inputsId
Reference a separate Inputs block to get variable values:
`````mdx
<Inputs id="app_config">
```yaml
variables:
  - name: AppName
    type: string
  - name: NodeVersion
    type: string
    default: "18"
```
</Inputs>

<TemplateInline id="dockerfile" inputsId="app_config">
```dockerfile
FROM node:{{ .inputs.NodeVersion }}-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```
</TemplateInline>
`````
### Using Multiple inputsIds
You can reference multiple Inputs blocks by passing an array of IDs. Variables are merged in order, with later IDs overriding earlier ones:
`````mdx
<Inputs id="env">
```yaml
variables:
  - name: Environment
    type: enum
    options: [dev, staging, prod]
    default: dev
```
</Inputs>

<Inputs id="service">
```yaml
variables:
  - name: ServiceName
    type: string
  - name: Port
    type: int
    default: 8080
```
</Inputs>

<TemplateInline id="ecs" inputsId={["env", "service"]}>
```hcl
resource "aws_ecs_service" "main" {
  name = "{{ .inputs.ServiceName }}-{{ .inputs.Environment }}"

  load_balancer {
    container_port = {{ .inputs.Port }}
  }
}
```
</TemplateInline>
`````
In this example, the template has access to variables from both Inputs blocks.
## Multiple Files
You can have multiple `<TemplateInline>` blocks referencing the same `<Inputs>` block:
`````mdx
<Inputs id="app">
```yaml
variables:
  - name: AppName
    type: string
  - name: Environment
    type: enum
    options: [dev, prod]
```
</Inputs>

<TemplateInline id="app_config" inputsId="app" outputPath="config.yaml">
```yaml
app:
  name: {{ .inputs.AppName }}
  environment: {{ .inputs.Environment }}
```
</TemplateInline>

<TemplateInline id="app_env" inputsId="app" outputPath=".env">
```bash
APP_NAME={{ .inputs.AppName }}
ENVIRONMENT={{ .inputs.Environment }}
```
</TemplateInline>
`````
## With Boilerplate Logic
You can use full Boilerplate template syntax including conditionals and loops:
`````mdx
<Inputs id="instance_config">
```yaml
variables:
  - name: Environment
    type: enum
    options: [dev, prod]
  - name: EnableMonitoring
    type: bool
    default: false
```
</Inputs>

<TemplateInline id="instance" inputsId="instance_config">
```hcl
resource "aws_instance" "app" {
  ami           = "ami-12345678"
  instance_type = "{{ if eq .inputs.Environment "prod" }}t3.large{{ else }}t3.micro{{ end }}"
{{- if .inputs.EnableMonitoring }}
  monitoring = true
{{- end }}
}
```
</TemplateInline>
`````
For full details on template syntax, see [Boilerplate Template Syntax](/authoring/boilerplate/#template-syntax).
## Using Block Outputs
TemplateInline can reference outputs from [Command](/authoring/blocks/command/) or [Check](/authoring/blocks/check/) blocks using the `outputs` namespace:
`````mdx
<TemplateInline id="account_summary">
```txt
Account ID: {{ .outputs.create_account.account_id }}
Region: {{ .outputs.create_account.region }}
```
</TemplateInline>
`````
Runbooks scans the template content for output references and shows a warning until all upstream blocks have run.
For the full guide on producing outputs, consuming them, dependency behavior, and ID naming rules, see [Inputs & Outputs — Block Outputs](/authoring/inputs-and-outputs/#block-outputs).
## Generating Files
By default, `<TemplateInline>` is preview-only—it shows the rendered output but doesn't save files. To also save the rendered content to the generated files workspace, set `generateFile={true}`:
`````mdx
<Inputs id="project">
```yaml
variables:
  - name: ProjectName
    type: string
```
</Inputs>

<TemplateInline id="readme" inputsId="project" generateFile={true} outputPath="README.md">
```markdown
# {{ .inputs.ProjectName }}

Welcome to {{ .inputs.ProjectName }}!
```
</TemplateInline>
`````
> **Tip:** Use `generateFile={true}` when you want users to both see the output inline _and_ have the file saved to their workspace.
## Block IDs
Like all other blocks, `<TemplateInline>` requires an `id` prop. The `id` uniquely identifies the block within the runbook and is used for dependency resolution between blocks.
Learn more about block IDs in the [testing documentation](/authoring/testing/#block-ids).
## Complete Example
Here's a complete example showing an `<Inputs>` block feeding multiple `<TemplateInline>` blocks:
`````mdx
# Configure Your Docker Service

Enter your service configuration:

<Inputs id="service_config">
```yaml
variables:
  - name: ServiceName
    type: string
    description: Name of your service
    validations:
      - required
  - name: Port
    type: int
    description: Port to expose
    default: 8080
  - name: Environment
    type: enum
    description: Deployment environment
    options: [dev, staging, prod]
    default: dev
```
</Inputs>

Here's what your Docker Compose file will look like:

<TemplateInline id="compose" inputsId="service_config" outputPath="docker-compose.yml">
```yaml
version: "3.8"
services:
  {{ .inputs.ServiceName }}:
    build: .
    ports:
      - "{{ .inputs.Port }}:{{ .inputs.Port }}"
    environment:
      - NODE_ENV={{ .inputs.Environment }}
```
</TemplateInline>

And your environment file:

<TemplateInline id="env_file" inputsId="service_config" outputPath=".env">
```bash
SERVICE_NAME={{ .inputs.ServiceName }}
PORT={{ .inputs.Port }}
ENVIRONMENT={{ .inputs.Environment }}
```
</TemplateInline>
`````
---
#### <TfModule>
The `<TfModule>` block parses an OpenTofu/Terraform module, dynamically renders a web form to collect values for all the module's variables, and publishes the collected values as outputs that can be referenced by other blocks.
In other words, it turns this OpenTofu/Terraform file:
```hcl title="variables.tf"
variable "bucket_name" {
  type        = string
  description = "Name of the S3 bucket"

  validation {
    condition     = can(regex("^[a-z0-9][a-z0-9.-]*[a-z0-9]$", var.bucket_name))
    error_message = "Must be lowercase alphanumeric with dots and hyphens."
  }
}

variable "versioning_enabled" {
  type        = bool
  default     = true
  description = "Enable versioning"
}

variable "tags" {
  type        = map(string)
  default     = {}
  description = "Tags to apply"
}

# @runbooks:group "Lifecycle"
variable "expiration_days" {
  type        = number
  default     = 0
  description = "Days before expiration"
}

# @runbooks:group "Lifecycle"
variable "transition_to_glacier_days" {
  type        = number
  default     = 0
  description = "Days before Glacier transition"
}
```
Into this runbook input form:

Note that the OpenTofu/Terraform author can annotate some variables with `# @runbooks:group "Group Name"` comments to group them together under a common heading, in this case "Lifecycle".
### Why use TfModule?
The TfModule makes it possible to dynamically render a runbook based on an OpenTofu/Terraform module. For example:
```bash
runbooks open https://github.com/gruntwork-io/runbooks/tree/main/testdata/test-fixtures/tf-modules/s3-bucket
```
This means that every OpenTofu/Terraform module you've already defined can be used to generate a runbook that will collect values for the module's variables and then generate any file output format, such as a Terragrunt HCL file, Helm chart YAML file, CloudFormation template, or anything else.
The TfModule is a core building block for this use case, but you can also define a custom runbook that will be used to open the module. To learn more about opening runbooks based on OpenTofu/Terraform modules, see the [Opening Runbooks](/authoring/opening-runbooks/) docs.
## Basic Usage
The most common pattern for TfModule is a **generic runbook** that accepts any module URL from the CLI using `::cli_runbook_source`:
`````mdx
<TfModule id="module_vars" source="::cli_runbook_source" />
`````
The `::cli_runbook_source` keyword resolves to whatever module URL was passed to `runbooks open`. This lets a single runbook work with any OpenTofu/Terraform module. See [Opening Runbooks](/authoring/opening-runbooks/) for the full guide on custom templates and built-in templates.
You can also reference a **specific module** by path or URL:
`````mdx
<TfModule id="module_vars" source="../modules/rds" />

<Template id="rds" path="templates/rds" inputsId="module_vars" />
`````
This is useful when you're writing a runbook for a known module. The `<TfModule>` block parses the `.tf` files at `../modules/rds` and renders a web form. The `<Template>` block imports the values via `inputsId` and generates files from the template at `templates/rds`.
> **Tip:** For quick prototyping you can also use [`<TemplateInline>`](/authoring/blocks/templateinline/) to embed template text directly in the runbook. See [Using TemplateInline](#using-templateinline) below.
### vs. Inputs
Both `<TfModule>` and `<Inputs>` collect user values and publish them to context, but they serve different purposes.
- **`<TfModule>`** parses `.tf` files (OpenTofu/Terraform modules) at runtime and auto-generates an input form from the module's variables. It provides a `_module` namespace with source, metadata, `inputs`, and `hcl_inputs`, enabling dynamic iteration over all variables without knowing their names upfront. Use it when you want to generate files from an existing OpenTofu/Terraform module.
- **`<Inputs>`** defines variables explicitly in inline YAML or a `boilerplate.yml`. Each variable must be referenced by name in templates. Use it when you want to collect arbitrary user input that doesn't come from a `.tf` module.
## Props
| Prop | Type | Required | Description |
|------|------|----------|-------------|
| `id` | `string` | Yes | Unique identifier for this component.<br />Other blocks reference this ID via `inputsId` to access the collected values. |
| `source` | `string` | Yes | Path or URL to the OpenTofu/Terraform module directory containing `.tf` files.<br />See [Supported Source Formats](#supported-source-formats) below. |
### Supported Source Formats
The `source` prop accepts the same formats as the `runbooks open` CLI command:
| Format | Example |
|--------|---------|
| Local relative path | `../modules/rds` |
| Colocated (same directory) | `.` |
| Dynamic from CLI | `::cli_runbook_source` |
| GitHub shorthand | `github.com/org/repo//modules/rds?ref=v1.0.0` |
| Git prefix | `git::https://github.com/org/repo.git//modules/rds?ref=v1.0.0` |
| GitHub browser URL | `https://github.com/org/repo/tree/main/modules/rds` |
| GitLab browser URL | `https://gitlab.com/org/repo/-/tree/main/modules/rds` |
> **Tip:** For remote sources, use `?ref=v1.0.0` (or a branch/commit) to pin to a specific version. Without a ref, the default branch is used.
### Colocated Runbooks (`source="."`)
When a module author places a `runbook.mdx` alongside their `.tf` files, they can use `source="."` to reference the module in the same directory. When someone runs `runbooks open` pointing at that directory (locally or via a remote URL), the CLI detects the colocated `runbook.mdx` and serves it instead of auto-generating a generic one.
This lets module authors ship a custom, polished runbook experience alongside their module code:
```
modules/rds/
├── main.tf
├── variables.tf
├── outputs.tf
└── runbook.mdx ← custom runbook
```
Inside `runbook.mdx`:
`````mdx
# Configure RDS

<TfModule id="module_vars" source="." />

<TemplateInline id="terragrunt" inputsId="module_vars" outputPath="terragrunt.hcl">
```hcl
terraform {
  source = "{{ .inputs._module.source }}"
}

inputs = {
{{- range $name, $hcl := .inputs._module.hcl_inputs }}
  {{ $name }} = {{ $hcl }}
{{- end }}
}
```
</TemplateInline>
`````
Anyone can then run this module's custom runbook:
```bash
runbooks open https://github.com/my-org/infra-modules/tree/main/modules/rds
```
### Dynamic Source from CLI
The `::cli_runbook_source` keyword resolves to whatever module URL was passed to the `runbooks open` command (or `runbooks watch` / `runbooks serve`). This enables a **generic runbook** that works with any OpenTofu/Terraform module without hardcoding a specific module path.
`````mdx
<TfModule id="module_vars" source="::cli_runbook_source" />
`````
If the runbook is opened without a module URL, `<TfModule>` renders a message explaining how to provide one.
> **Tip:** For details on built-in templates (`::terragrunt`, `::tofu`, `::terragrunt-github`), custom templates, and the `--tf-runbook` flag, see [Opening Runbooks](/authoring/opening-runbooks/).
## Block Outputs
`<TfModule>` publishes outputs that downstream blocks can consume. The `_module` namespace is accessed via `inputsId` in templates; the uppercase outputs are accessed via `{{ .outputs.<id>.<key> }}`.
| Output | Access | Description |
|--------|--------|-------------|
| `_module` | `inputsId` | Map containing module metadata, user inputs, and HCL-formatted values. See [structure](#structure) below. |
| `MODULE_NAME` | `outputs` | The module's folder name (same as `_module.folder_name`). <br/>Useful in `outputPath` expressions for naming generated directories. |
| `SOURCE` | `outputs` | The resolved module source URL. |
For example, the `::terragrunt-github` template uses `MODULE_NAME` to compose the output path:
```
outputPath="{{ .outputs.target_path.PATH }}/{{ .outputs.module_vars.MODULE_NAME }}/terragrunt.hcl"
```
### The `_module` Namespace
When `<TfModule>` registers its values, it outputs a `_module` value that is a map of key-value pairs. This enables both iteration over all variables and direct access to specific ones.
You can access the `_module` value in your templates just like any other value, using the `{{ .inputs._module }}` syntax.
#### Structure
```
_module:
  source: "github.com/org/repo//modules/rds?ref=v1.0.0"
  folder_name: "rds"
  readme_title: "RDS Module"
  output_names: ["db_endpoint", "db_name", "db_port"]
  resource_names: ["aws_db_instance.this", "aws_db_parameter_group.this", "aws_db_subnet_group.this"]
  inputs:
    instance_class: "db.t3.micro"
    engine_version: "16.3"
    allocated_storage: 20
    multi_az: true
  hcl_inputs:
    instance_class: "\"db.t3.micro\""
    engine_version: "\"16.3\""
    allocated_storage: "20"
    multi_az: "true"
  hcl_inputs_non_default:
    instance_class: "\"db.t3.micro\""
    allocated_storage: "20"
    multi_az: "true"
```
- **`source`:** The module source from the `source` prop. Useful for embedding in generated config.
- **`folder_name`:** Name of the module's containing directory (e.g., `"rds"`).
- **`readme_title`:** The first `# Heading` from the module's README.md, if present. Empty string otherwise.
- **`output_names`:** List of output block names defined in the module (sorted alphabetically).
- **`resource_names`:** List of resource block names as `type.name` (sorted, excludes `data` sources).
- **`inputs`:** Map of all variable names to their raw values as entered by the user. Use when generating non-HCL formats (YAML, JSON, TOML) where you control the formatting.
- **`hcl_inputs`:** Map of all variable names to HCL-formatted string values: strings are quoted, booleans and numbers are raw, lists and maps are JSON-encoded. Use when generating HCL files (Terragrunt, Terraform). Values are pre-formatted with correct HCL quoting.
- **`hcl_inputs_non_default`:** Same as `hcl_inputs`, but only includes variables whose current value differs from the module's declared default. Required variables (no default) are always included.
## What Gets Parsed
`<TfModule>` reads every `.tf` file in the module directory and extracts the following from each `variable` block:
| Property | HCL Attribute | How It's Used |
|----------|--------------|---------------|
| **Name** | `variable "name"` | Becomes the form field name and the key in `_module.inputs`. |
| **Type** | `type` | Mapped to a form widget: <br/>`string` → text input <br/>`number` → numeric input <br/>`bool` → toggle <br/>`list` / `set` → list editor <br/>`map` / `object` → key-value editor <br/>`any` or empty → string <br/>`optional(T)` → unwraps to `T` |
| **Description** | `description` | Displayed as help text below the form field. May be enriched with validation context (see below). |
| **Default** | `default` | Pre-populates the field. Variables without a default are marked **required**. |
| **Sensitive** | `sensitive` | Sensitive variables are masked in the form. |
| **Nullable** | `nullable` | Nullable variables are treated as optional even when they lack a default. |
| **Source file** | *(derived)* | Which `.tf` file the variable was defined in. Used for filename-based grouping. |
| **Group comment** | `# @runbooks:group "Name"` | A comment placed directly above a variable block. Used for explicit grouping (see [Variable Grouping](#variable-grouping)). |
### Validation Mapping
`<TfModule>` also reads `validation` blocks and maps recognized patterns to client-side form validations, so the user gets instant feedback without submitting the form.
| HCL Pattern | Form Behavior |
|-------------|---------------|
| `can(regex("pattern", var.x))` | Validates the input against the regex pattern. |
| `contains(["a", "b", "c"], var.x)` | Renders a dropdown instead of a text input, populated with the listed options. <br/>If no `default`, the first option is auto-selected. |
| `length(var.x) >= N && length(var.x) <= M` | Enforces minimum and maximum character length. |
| `var.x != ""` or `length(var.x) > 0` | Marks the field as required. |
| `var.x >= N && var.x <= M` | Appends "(Must be between N and M)" to the description. |
Validation blocks that don't match any of the patterns above still contribute: if the block has an `error_message`, it is appended to the field's description as a constraint hint.
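For example, a `contains` validation like the one below (a hypothetical variable, shown for illustration) renders as a dropdown with the three listed options; since there is no `default`, the first option is auto-selected:

```hcl
variable "instance_size" {
  type        = string
  description = "EC2 instance size"

  # Maps to a dropdown in the generated form
  validation {
    condition     = contains(["small", "medium", "large"], var.instance_size)
    error_message = "Must be small, medium, or large."
  }
}
```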
## Variable Grouping
When `<TfModule>` renders the input form, it automatically "groups" variables into collapsible sections. Grouping is purely a UI feature that organizes a set of variables together under a common heading to help reduce cognitive load on the end user. This is especially useful when a module has many variables and would otherwise render as an endless list of form fields. Grouping has no effect on the generated output or template behavior.
So how does Runbooks know which variables to group together? The grouping strategy follows a priority order:
1. **`@runbooks:group` comments:** explicit groups defined in the `.tf` source
2. **Filename-based:** variables grouped by which `.tf` file they're defined in
3. **Prefix-based:** variables grouped by shared name prefixes (e.g., `db_*`, `vpc_*`)
4. **Required vs. Optional:** fallback grouping
### `@runbooks:group` Comments
Module authors can explicitly control grouping by adding `# @runbooks:group "Group Name"` comments directly above variable blocks in their `.tf` files:
```hcl
variable "bucket_name" {
  type        = string
  description = "Name of the S3 bucket"
}

# @runbooks:group "Lifecycle"
variable "expiration_days" {
  type        = number
  default     = 0
  description = "Days before expiration"
}

# @runbooks:group "Lifecycle"
variable "transition_to_glacier_days" {
  type        = number
  default     = 0
  description = "Days before Glacier transition"
}
```
In this example, the two lifecycle variables are grouped together under a "Lifecycle" section. The `bucket_name` variable (with no annotation) appears in an unnamed default section.
> **Tip:** The `@runbooks:group` annotation is the recommended way to control variable grouping. It's explicit, portable (lives in the module source), and takes priority over all automatic grouping strategies.
## Template Patterns
### Using `<TemplateInline>`
[`<TemplateInline>`](/authoring/blocks/templateinline/) embeds the template directly in the runbook. It's a good choice for generating a single file because everything is visible and editable in one place with no external files needed.
#### For Terragrunt HCL
One common pattern is generating a `terragrunt.hcl` that "instantiates" an OpenTofu/Terraform module. Here, we iterate over all module variables using `_module.hcl_inputs`:
`````mdx
<TemplateInline id="terragrunt" inputsId="module_vars" outputPath="terragrunt.hcl">
```hcl
terraform {
  source = "{{ .inputs._module.source }}"
}

include "root" {
  path   = find_in_parent_folders("root.hcl")
  expose = true
}

inputs = {
{{- range $name, $hcl := .inputs._module.hcl_inputs }}
  {{ $name }} = {{ $hcl }}
{{- end }}
}
```
</TemplateInline>
`````
> **Note:** The `hcl_inputs` map handles type-aware formatting automatically: strings are quoted (`"value"`), booleans are raw (`true`/`false`), numbers are raw (`42`), and lists/maps are JSON-encoded.
#### Non-Default Inputs
The example above will render every module variable, even those left empty or matching the declared default. This could be quite verbose in some cases. Use `_module.hcl_inputs_non_default` instead to generate a minimal `inputs` block that excludes any variable whose value matches the default defined in the module's `.tf` file:
`````mdx
<TemplateInline id="terragrunt_minimal" inputsId="module_vars" outputPath="terragrunt.hcl">
```hcl
terraform {
  source = "{{ .inputs._module.source }}"
}

include "root" {
  path   = find_in_parent_folders("root.hcl")
  expose = true
}

inputs = {
{{- range $name, $hcl := .inputs._module.hcl_inputs_non_default }}
  {{ $name }} = {{ $hcl }}
{{- end }}
}
```
</TemplateInline>
`````
#### Referencing Individual Variables
You can also reference specific variables by name instead of iterating:
`````mdx
<TemplateInline id="summary" inputsId="module_vars">
```
Instance Class: {{ .inputs.instance_class }}
Engine Version: {{ .inputs.engine_version }}
Module Source: {{ .inputs._module.source }}
```
</TemplateInline>
`````
### Using `<Template>`
[`<Template>`](/authoring/blocks/template/) points to a separate directory containing a `boilerplate.yml` and one or more template files. Use it when you need multi-file scaffolding or extra variables beyond what the module defines.
#### Multiple Output Files
A single `<Template>` block can produce multiple files. To do this, add more template files to the template directory:
`````mdx
<Template id="scaffold" path="templates/rds-scaffold" inputsId="module_vars" />
`````
Inside `templates/rds-scaffold/terragrunt.hcl`:
```hcl
terraform {
  source = "{{ .inputs._module.source }}"
}

inputs = {
{{- range $name, $hcl := .inputs._module.hcl_inputs }}
  {{ $name }} = {{ $hcl }}
{{- end }}
}
```
Inside `templates/rds-scaffold/README.md`:
```markdown
# {{ .inputs._module.readme_title }}

Source: `{{ .inputs._module.source }}`

## Outputs
{{- range .inputs._module.output_names }}
- `{{ . }}`
{{- end }}
```
#### Extra Variables with Template
Because `<Template>` is backed by a `boilerplate.yml`, it renders **its own input form** in addition to the form that `<TfModule>` renders. Any variables defined in `boilerplate.yml` that are *not* provided by TfModule appear as extra editable fields, giving you a way to collect additional user input (e.g., environment name, team owner) that the `.tf` files know nothing about.
> **Note:** Since both TfModule and Template can define variables, Runbooks merges the two sets. Template-only variables stay editable, TfModule-only variables (including `_module.*`) pass through invisibly, and overlapping variables become read-only. See [Inputs & Outputs — When Variables Overlap](/authoring/inputs-and-outputs/#when-variables-overlap) for the full explanation.
---
## Security
### Execution Security Model
## Overview
Runbooks executes commands and shell scripts defined in your Runbook directly on your local computer, with the full set of environment variables present when you launched the Runbooks binary. This makes it essential to take security seriously, and in this section we'll discuss the security measures Runbooks takes to protect users.
## Security measures
Runbooks implements specific techniques to make sure that you only execute "approved" code:
### Warning to only run Runbooks you trust
When Runbooks loads, it immediately shows a warning to users to confirm that they trust the Runbook they just opened. This warning appears on every Runbook you open until you permanently hide it.
### Localhost-Only Binding for the API
The Runbooks backend server (which runs locally on your computer) only accepts connections from `localhost` (127.0.0.1). This prevents remote attacks where a malicious website could send requests to your local Runbooks server.
### Session Token Authentication
Even with localhost-only binding, an additional layer of protection prevents unauthorized script execution. When you open a runbook, the browser receives a cryptographically random session token that must be included with every execution request.
Without token authentication, any process on your machine could send requests to the local server and execute scripts. The token requirement ensures only browser tabs that loaded the Runbooks UI can trigger execution.
If you see "Invalid or expired session token" errors, try refreshing the page to obtain a new token.
**How it works:**
1. When a browser tab connects, it receives a unique token
2. The token is stored in memory only (not in cookies or localStorage)
3. Every `/api/exec` request must include this token in the `Authorization` header
4. Requests without a valid token are rejected with `401 Unauthorized`
**Multi-tab behavior:**
- Multiple browser tabs share the same session (environment state)
- Each tab receives its own token when it connects
- Up to 20 concurrent tokens are supported; older tokens are automatically pruned
- Closing a tab doesn't invalidate other tabs' tokens
### Executable Registry
By default, Runbooks uses an **executable registry**, which is a _registry_ of all _executable_ artifacts, to make sure that the backend server will only allow execution of scripts and commands defined directly in the Runbook you opened (versus running arbitrary scripts).
Here's how it works. When a user runs `runbooks open`, `runbooks watch`, or `runbooks serve`, Runbooks starts the backend server and populates the executable registry with all scripts or commands contained in the Runbook. To populate the executable registry, Runbooks reads your `runbook.mdx` file and scans for all `<Check>` and `<Command>` components. For each component, it extracts the script (either from the `command` prop for inline scripts or by reading the file specified in the `path` prop), assigns it a unique executable ID, and stores it in an in-memory registry. The registry maps each executable ID to its corresponding script content, component ID, and metadata like template variables.
When you click "Run" in the UI, the frontend sends an execution request containing only the executable ID and any template variable values, but _not the actual script content_. The backend validates that this executable ID exists in the registry (which was built from your Runbook at startup), retrieves the pre-approved script content, renders it with the given variables if needed, and executes it. This means even if an attacker could manipulate API requests, they cannot inject arbitrary code because the backend will only execute scripts that were present in your Runbook when the server started. Effectively, the registry acts as a whitelist of approved executables.
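The whitelist behavior can be sketched in a few lines of shell (an illustration of the idea only, not Runbooks' actual implementation; the names are made up):

```shell
# Illustrative sketch of the executable-registry whitelist (not the real
# implementation). At startup, IDs are mapped to pre-approved script content.
declare -A registry
registry["greet"]='echo ok'

run_executable() {
  local id="$1"
  # Reject any ID that was not registered when the server started
  if [ -z "${registry[$id]+x}" ]; then
    echo "rejected: unknown executable ID '$id'" >&2
    return 1
  fi
  bash -c "${registry[$id]}"
}
```

Because the execution request carries only the ID, never the script text, code that was not present in the Runbook at startup can never run.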
## Execution Modes
Runbooks has three execution modes with different security/convenience trade-offs:
1. Open/Serve (Executable Registry)
2. Watch (Live-File-Reload, default)
3. Watch with `--disable-live-file-reload` (Executable Registry)
### Open and Serve Modes
```bash
runbooks open path/to/runbook.mdx
runbooks serve path/to/runbook.mdx
```
**When to use:**
- Use `runbooks open` for Runbook consumers who want to guarantee that they are executing exactly what the Runbook author wrote.
- Use `runbooks serve` for Runbook developers who want to manually run the frontend and don't need hot reloading of executables.
**How it works:**
1. Server starts and scans the runbook file
2. Builds an **Executable Registry** containing all `<Check>` and `<Command>` components
3. Assigns each script a unique ID
4. At execution time, validates the ID exists in the registry
5. Executes only pre-approved scripts
**Security:** High
- All scripts pre-validated at startup
- Cannot execute arbitrary code via API manipulation
- Changes to scripts require server restart
**Convenience:** Medium
- When you make local file changes, the Runbook will not honor them automatically; you'll need to re-open the runbook to "activate" any new file changes.
### Watch (Default: Live-File-Reload)
```bash
runbooks watch path/to/runbook.mdx
```
**When to use:**
- Use `runbooks watch` (the default) for Runbook authors who want to auto-reload their runbook file _and_ all Runbook script files. Since they are actively editing files on their file system, they are presumably ok with having these hot-reloaded.
**How it works:**
1. Server starts _without building an executable registry_
2. Watches the Runbook file for changes and automatically reloads the UI
3. When user clicks "Run" on a script:
- Backend reads the runbook file _from disk at that moment_
- Parses the file to find the requested component
- Extracts and executes the script content _from the current file system state_
4. Essentially, every execution reads fresh from disk
**Security:** Medium
- No pre-validation of scripts at startup
- Scripts read from current file system state
- Still protected by localhost-only binding
- More vulnerable to file system manipulation
**Convenience:** High
- Script changes take effect immediately
- No server restart needed
- Perfect for rapid runbook development
### Watch with `--disable-live-file-reload`
```bash
runbooks watch --disable-live-file-reload path/to/runbook.mdx
```
**When to use:**
- Use `runbooks watch --disable-live-file-reload` if you want the extra security of executable registry validation while authoring runbooks, but be aware of the confusing UX where displayed scripts don't match executed scripts until server restart.
**How it works:**
1. Same as Open/Serve mode (uses Executable Registry)
2. Watches the Runbook file for changes
3. When the Runbook file does change, the frontend UI automatically reloads, but the executable registry does _not_ update.
4. All scripts -- including inline scripts -- are validated against the registry from startup.
For example, if the Runbook content changes to include an updated inline script, the Runbook will reload and display the new script text, but the Executable Registry will not update. Until you restart the server, clicking "Run" will execute the _old_ script!
**Security:** High
- Same security as Open/Serve mode
- File watching doesn't affect execution validation
**Convenience:** Low
- MDX content updates automatically
- Script changes require server restart (confusing UX)
## How Scripts Are Executed
Regardless of mode, the actual execution process is:
1. **Validate request**: Check that execution is authorized (via registry or on-demand parsing)
2. **Render templates**: If script contains template variables like `{{ .VarName }}`, substitute them
3. **Create temp file**: Write script content to a temporary file
4. **Make executable**: Set file permissions (`chmod 0700`)
5. **Detect interpreter**: Read shebang line (e.g., `#!/bin/bash`) or default to `bash`
6. **Execute**: Run script with detected interpreter in a non-interactive shell
7. **Stream output**: Send stdout/stderr back to browser via Server-Sent Events (SSE)
8. **Clean up**: Delete temporary file
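Steps 3 through 8 can be sketched in plain Bash. This is a simplified illustration only, not the actual implementation; the real server also handles template rendering, interpreter detection, and SSE streaming:

```shell
#!/bin/bash
# Sketch of steps 3-8: temp file, permissions, execute, clean up.
tmpfile=$(mktemp)                # step 3: create temp file
cat > "$tmpfile" <<'EOF'
#!/bin/bash
echo "hello from runbook script"
EOF
chmod 0700 "$tmpfile"            # step 4: make executable
output=$("$tmpfile")             # step 6: execute (shebang selects bash)
echo "$output"                   # step 7: emit output (here: just print)
rm -f "$tmpfile"                 # step 8: clean up
```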
**Security note:** Scripts run with your user's full environment variables and permissions. Runbooks is designed for **trusted runbooks only** - it's meant to streamline tasks you would otherwise run manually in your terminal.
For details on interpreter detection, shell limitations, and how environment changes persist across script executions, see [Shell Execution Context](/security/shell-execution-context/).
---
### Shell Execution Context
## Persistent Environment Model
**Think of Runbooks like a persistent terminal session.** When you run scripts in Check or Command blocks, environment changes carry forward to subsequent blocks — just like typing commands in a terminal.
| What persists | Example |
|---------------|---------|
| Environment variables | `export AWS_PROFILE=prod` stays set for later blocks |
| Working directory | `cd /path/to/project` changes where later scripts run |
| Unset variables | `unset DEBUG` removes the variable for later blocks |
This means you can structure your runbook like a workflow:
1. **Block 1**: Set up environment (`export AWS_REGION=us-east-1`)
2. **Block 2**: Run a command that uses `$AWS_REGION`
3. **Block 3**: Clean up (`unset AWS_REGION`)
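As a concrete sketch, the three blocks above might contain the following (shown here as one script for brevity; in a real runbook each step would be its own block):

```shell
#!/bin/bash
# Block 1: set up the environment (persists to later blocks)
export AWS_REGION=us-east-1

# Block 2: a later block reads the persisted variable
echo "Deploying to region: $AWS_REGION"

# Block 3: clean up when finished
unset AWS_REGION
```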
### Bash Scripts Only
> **Environment persistence requires Bash:**
Environment variable changes **only persist for Bash scripts** (`#!/bin/bash` or `#!/bin/sh`). Non-Bash scripts like Python, Ruby, or Node.js can **read** environment variables from the session, but changes they make (e.g., `os.environ["VAR"] = "value"` in Python) will **not** persist to subsequent blocks.
| Script Type | Can read env vars | Can set persistent env vars |
|-------------|-------------------|----------------------------|
| Bash (`#!/bin/bash`) | ✅ Yes | ✅ Yes |
| Sh (`#!/bin/sh`) | ✅ Yes | ✅ Yes |
| Python (`#!/usr/bin/env python3`) | ✅ Yes | ❌ No |
| Ruby (`#!/usr/bin/env ruby`) | ✅ Yes | ❌ No |
| Node.js (`#!/usr/bin/env node`) | ✅ Yes | ❌ No |
| Other interpreters | ✅ Yes | ❌ No |
**Why?** Environment persistence works by wrapping your script in a Bash wrapper that captures environment changes after execution. This wrapper is Bash-specific and can't be applied to other interpreters. Additionally, environment changes in subprocesses (like a Python script) can't propagate back to the parent process — this is a fundamental limitation of how Unix processes work.
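Conceptually, the wrapper behaves something like the following heavily simplified sketch. The real wrapper is internal to Runbooks and also handles traps, exit codes, and working-directory changes; this only illustrates the core idea of capturing the final environment after the script body runs:

```shell
#!/bin/bash
# Run the script body in this shell, then dump the resulting environment
# (NUL-terminated) so the server can apply it to the session state.
state_file=$(mktemp)
export GREETING="hello"     # stands in for the user's script body
env -0 > "$state_file"      # capture the final environment state
```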
### Multiline Environment Variables
Environment variables can contain embedded newlines — RSA keys, JSON configs, multiline strings, etc. These values are correctly preserved across blocks:
```bash
#!/bin/bash
export SSH_KEY="-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA...
-----END RSA PRIVATE KEY-----"
export JSON_CONFIG='{
"database": "postgres",
"settings": { "timeout": 30 }
}'
```
Runbooks uses NUL-terminated output (`env -0`) when capturing environment variables, which correctly handles values containing newlines. This works on Linux, macOS, and Windows with Git Bash.
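You can see the same technique in plain Bash: reading NUL-terminated records keeps a multiline value intact as a single entry. This is a standalone illustration of the `env -0` approach, not Runbooks' internal code:

```shell
#!/bin/bash
export MULTILINE="first line
second line"
# Read NUL-terminated records; embedded newlines no longer split entries.
while IFS= read -r -d '' entry; do
  case "$entry" in
    MULTILINE=*) value="${entry#MULTILINE=}" ;;
  esac
done < <(env -0)
```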
### User Trap Support
Your scripts can optionally use `trap` commands for cleanup.
```bash
#!/bin/bash
TEMP_DIR=$(mktemp -d)
trap "rm -rf $TEMP_DIR" EXIT
# Your script logic...
export RESULT="computed value"
```
Runbooks intercepts EXIT traps so that both your cleanup code **and** environment capture (recording the variables your script set and making them available to later scripts) run correctly. When your script exits:
1. Your trap handler runs first (cleanup happens)
2. Runbooks captures the final environment state
3. The original exit code is preserved
This means you can write scripts with proper cleanup logic and still have environment changes persist to subsequent blocks.
### Multiple Browser Tabs
If you open the same runbook in multiple browser tabs, they all share the same environment. Changes made in one tab are visible in all others — like having multiple terminal windows connected to the same shell session.
### Concurrent Script Execution
> **Environment changes may be lost when scripts run concurrently:**
If you run multiple scripts at the same time (for example, clicking "Run" on two different blocks before the first completes), environment changes from one script may silently overwrite changes from the other.
**Why this happens:** When a script starts, it captures the current environment as a snapshot. When it finishes, it replaces the session environment with whatever the script ended with. If two scripts run concurrently:
1. Script A and Script B both start with environment `{X=1}`
2. Script A sets `X=2`
3. Script B sets `Y=3`
4. Whichever finishes last overwrites the other's changes
For example, if Script B finishes last, the session ends up with `{X=1, Y=3}` — losing Script A's change to `X`.
**Recommendation:** If your scripts depend on environment changes from previous scripts, wait for each script to complete before running the next one. The environment model is designed for sequential, step-by-step execution, similar to typing commands in a terminal one at a time.
### Implementation Notes
The Runbooks server maintains a single session per runbook instance. Each script execution captures environment changes and working directory updates, then applies them to the session state. This happens automatically — you don't need to do anything special in your scripts.
The session resets when you restart the Runbooks server. You can also manually reset the environment to its initial state using the session controls in the UI.
---
## Built-in Environment Variables
Runbooks exposes the following environment variables to all scripts:
| Variable | Description |
|----------|-------------|
| `GENERATED_FILES` | Path to a temporary directory where scripts can write files to be captured. Files written here appear in the **Generated** tab after successful execution. |
| `REPO_FILES` | Path to the active git worktree (set by the most recent `` block). Scripts can modify cloned repo files directly through this path. **Unset** if no repo has been cloned. |
| `RUNBOOK_OUTPUT` | Path to a file where scripts can write `key=value` pairs to produce [block outputs](/authoring/blocks/command/#block-outputs) for downstream blocks. |
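For example, a block could publish values for downstream blocks by appending `key=value` lines to `$RUNBOOK_OUTPUT`. The instance ID below is a made-up example, and the fallback to a temp file exists only so the snippet runs standalone; inside Runbooks the variable is already set:

```shell
#!/bin/bash
# Fall back to a temp file when running outside Runbooks
RUNBOOK_OUTPUT="${RUNBOOK_OUTPUT:-$(mktemp)}"
# Publish block outputs as key=value pairs, one per line
echo "instance_id=i-0abc123example" >> "$RUNBOOK_OUTPUT"
echo "region=us-east-1" >> "$RUNBOOK_OUTPUT"
```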
### Capturing Output Files
To save files to the generated files directory, write them to `$GENERATED_FILES`:
```bash
#!/bin/bash
# Generate a config and capture it
tofu output -json > "$GENERATED_FILES/outputs.json"
# Create subdirectories as needed
mkdir -p "$GENERATED_FILES/config"
echo '{"env": "production"}' > "$GENERATED_FILES/config/settings.json"
```
Files are only captured after successful execution (exit code 0 or 2). If your script fails, any files written to `$GENERATED_FILES` are discarded.
See [Capturing Output Files](/authoring/blocks/command/#capturing-output-files) for more details.
### Modifying Cloned Repositories
If a `` block has cloned a repository, use `$REPO_FILES` to modify files in the cloned repo:
```bash
#!/bin/bash
if [ -n "${REPO_FILES:-}" ]; then
  echo "Modifying files in cloned repo: $REPO_FILES"
  echo "new config" >> "$REPO_FILES/settings.hcl"
else
  echo "No git worktree available"
fi
```
Unlike `$GENERATED_FILES`, writes to `$REPO_FILES` happen directly on the filesystem — they are not captured to a temporary directory. Changes show up in the **Changed** tab via `git diff`.
---
## Non-Interactive Shell
Scripts run in a **non-interactive shell**, which affects what's available:
| Feature | Available? | Notes |
|---------|------------|-------|
| Environment variables | ✅ Yes | Inherited from Runbooks + changes from previous blocks |
| Binaries in `$PATH` | ✅ Yes | `git`, `aws`, `terraform`, etc. |
| Shell aliases | ❌ No | `ll`, `la`, custom aliases |
| Shell functions | ❌ No | `nvm`, `rvm`, `assume`, etc. |
| RC files | ❌ No | `.bashrc`, `.zshrc` are NOT sourced |
### Example: Aliases vs Binaries
```bash
# ❌ Will NOT work - ll is typically a bash alias for "ls -l"
ll

# ✅ Will work - ls is an actual binary
ls -l
```
### Why This Matters
Many developer tools are implemented as **shell functions** rather than standalone binaries. These functions are defined in your shell's RC files (`.bashrc`, `.zshrc`) and only exist in interactive shell sessions.
Common tools that are shell functions (not binaries):
- **nvm** — Node Version Manager
- **rvm** — Ruby Version Manager
- **pyenv** shell integration
- **conda activate**
- **assume** — Shell function from [Granted](https://docs.commonfate.io/granted/introduction)
These tools need to be shell functions because they modify your current shell's environment (e.g., changing `$PATH`), which can't be done from a subprocess.
### Workarounds
For tools that are shell functions, check for the underlying installation instead:
```bash
#!/bin/bash
# Instead of running "nvm --version" (won't work), check if nvm is installed:
if [ -d "$HOME/.nvm" ] && [ -s "$HOME/.nvm/nvm.sh" ]; then
  echo "✅ nvm is installed"
  exit 0
else
  echo "❌ nvm is not installed"
  exit 1
fi
```
If you absolutely need shell functions, source the RC file in your script (use with caution):
```bash
#!/bin/bash
# Source shell config to get functions (not recommended for portability)
source ~/.bashrc 2>/dev/null || source ~/.zshrc 2>/dev/null
# Now nvm should be available
nvm --version
```
---
## Interpreter Detection
Runbooks determines which interpreter to use for your script:
1. **Shebang line** — If your script starts with `#!/bin/bash`, `#!/usr/bin/env python3`, etc., that interpreter is used
2. **Default** — If no shebang is present, `bash` is used
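The detection logic is conceptually simple. The following is a hypothetical Bash sketch of the two rules above, not the actual Go implementation:

```shell
#!/bin/bash
# Sketch: inspect a script's first line and pick the interpreter.
script=$(mktemp)
printf '#!/usr/bin/env python3\nprint("hi")\n' > "$script"
first_line=$(head -n 1 "$script")
case "$first_line" in
  '#!'*) interpreter="${first_line#'#!'}" ;;   # rule 1: use the shebang
  *)     interpreter="bash" ;;                 # rule 2: default to bash
esac
echo "$interpreter"
rm -f "$script"
```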
### Common Shebangs
| Shebang | Interpreter |
|---------|-------------|
| `#!/bin/bash` | Bash shell |
| `#!/bin/zsh` | Zsh shell |
| `#!/usr/bin/env python3` | Python 3 |
| `#!/usr/bin/env node` | Node.js |
### Best Practice
Always include a shebang in your scripts to ensure predictable execution:
```bash
#!/bin/bash
set -e
# Your script here...
```
---
## Demo Runbooks
The Runbooks repository includes demo runbooks that showcase these execution features:
### Persistent Environment Demo
The [`runbook-execution-model`](https://github.com/gruntwork-io/runbooks/tree/main/testdata/feature-demos/runbook-execution-model) demo demonstrates:
- Setting and reading environment variables across blocks
- Working directory persistence
- Multiline environment variables (RSA keys, JSON)
- Non-bash scripts reading (but not setting) persistent env vars
### File Capture Demo
The [`capture-files-from-scripts`](https://github.com/gruntwork-io/runbooks/tree/main/testdata/feature-demos/capture-files-from-scripts) demo demonstrates:
- Using `$GENERATED_FILES` to capture generated files
- Combining environment persistence with file generation
- Creating OpenTofu configs from environment variables set in earlier blocks
### File Workspace Demo
The [`file-workspace`](https://github.com/gruntwork-io/runbooks/tree/main/testdata/feature-demos/file-workspace) demo demonstrates:
- Cloning a repository with `` and browsing its files
- Using `$REPO_FILES` to modify files in a cloned repo
- Writing templates directly into a worktree with `target="worktree"`
- Viewing changes in the "Changed files" diff view
---
### Telemetry
## Overview
Runbooks collects anonymous telemetry data to help us understand how the tool is used and prioritize improvements. We've designed our telemetry with privacy in mind: it's minimal, anonymous, and easy to disable.
**Telemetry is enabled by default**, but you can opt out at any time using the methods described below.
## What We Collect
We collect the following anonymous data:
| Category | Data Points | Purpose |
|----------|-------------|---------|
| Commands | `open`, `watch`, `serve` invocations | Understand which CLI commands are most used |
| Platform | Operating system, architecture | Ensure compatibility across platforms |
| Version | Runbooks version | Track adoption of new versions |
| Blocks | Block types in runbooks (Command, Check, Template, Inputs) | Prioritize feature development |
| Errors | Error types (not messages or content) | Improve reliability |
## What We Do NOT Collect
We take your privacy seriously. We **never** collect:
- **Runbook content** - Your runbook text, scripts, or commands
- **File paths** - The location of your runbooks on disk
- **Variable values** - Any input values you enter
- **Script output** - The results of running commands
- **Personally identifiable information** - No names, emails, or usernames
- **IP addresses** - We configure our analytics provider to discard IPs
## How We Anonymize Data
We generate an anonymous identifier for each user based on a SHA-256 hash of your machine's hostname and username. This means:
- **Stable**: The same ID is used across sessions on your machine
- **Anonymous**: The hash cannot be reversed to identify you
- **Unique**: Different machines/users have different IDs
We cannot determine who you are from this identifier.
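To illustrate the idea, a hash like this is stable per machine, fixed-length, and not reversible. The exact inputs and formatting Runbooks hashes are internal; this sketch only shows the general technique:

```shell
#!/bin/bash
# Hash machine hostname + username into a stable 64-hex-char identifier.
# uname -n and id -un are used as portable stand-ins for hostname/username.
anon_id=$(printf '%s:%s' "$(uname -n)" "$(id -un)" | sha256sum | awk '{print $1}')
echo "$anon_id"
```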
## How to Disable Telemetry
You can disable telemetry using either of these methods:
### Environment Variable (Recommended)
Set the `RUNBOOKS_TELEMETRY_DISABLE` environment variable to `1`:
```bash
# For a single command
RUNBOOKS_TELEMETRY_DISABLE=1 runbooks open my-runbook
# Or add to your shell profile (~/.bashrc, ~/.zshrc, etc.) for permanent opt-out
export RUNBOOKS_TELEMETRY_DISABLE=1
```
### CLI Flag
Use the `--no-telemetry` flag with any command:
```bash
runbooks --no-telemetry open my-runbook
runbooks --no-telemetry watch my-runbook
```
## Telemetry Notice
When telemetry is enabled, Runbooks displays a notice at startup:
```
📊 Telemetry is enabled. Set RUNBOOKS_TELEMETRY_DISABLE=1 to opt out.
Learn more: https://runbooks.gruntwork.io/security/telemetry/
```
This notice appears every time you run a command to ensure transparency. When you disable telemetry, the notice will no longer appear.
## Data Storage and Retention
Telemetry data is sent to [Mixpanel](https://mixpanel.com/), a third-party analytics service. Data is:
- Transmitted securely over HTTPS
- Stored according to Mixpanel's data retention policies
- Accessible only to the Gruntwork team
## Open Source Transparency
Runbooks is open source, and our telemetry implementation is fully visible in the codebase:
- **Backend**: [`api/telemetry/telemetry.go`](https://github.com/gruntwork-io/runbooks/blob/main/api/telemetry/telemetry.go)
- **Frontend**: [`web/src/contexts/TelemetryContext.tsx`](https://github.com/gruntwork-io/runbooks/blob/main/web/src/contexts/TelemetryContext.tsx)
You can review exactly what data is collected and how it's sent.
## Why We Collect Telemetry
As an open source project, telemetry helps us:
1. **Prioritize features** - Understand which capabilities matter most to users
2. **Fix bugs faster** - Identify and address the most impactful issues
3. **Support platforms** - Know which operating systems and architectures to prioritize
4. **Measure adoption** - Track how new versions are being adopted
We're committed to building Runbooks in the open and respecting user privacy. If you have questions or concerns about our telemetry practices, please [open an issue](https://github.com/gruntwork-io/runbooks/issues) on GitHub.
---
## Development
### Development Workflow
If you're developing the Runbooks tool itself (working on the Go backend or React frontend), you'll want to run two separate processes in different terminals:
**Terminal 1 - Backend Server:**
```bash
go run main.go serve testdata/sample-runbooks/demo1/runbook.mdx
```
This starts the Go backend API server on the default port (7825). Use `--port` to pick a different port.
**Terminal 2 - Frontend Dev Server:**
```bash
cd web
bun dev
```
This starts the Vite dev server on port 5173 with hot-reloading.
### Making Changes
**Frontend Changes (React/TypeScript):**
- Edit files in `/web/src`
- Vite automatically hot-reloads the browser
- No restart needed
**Backend Changes (Go):**
- Edit files in `/api`, `/cmd`, etc.
- Restart the `serve` command (Ctrl+C and run again)
- Refresh the browser
**Runbook Changes:**
- Edit the runbook file you're testing with
- Refresh the browser
- No restart needed
### Testing Your Changes
Test different runbook features:
```bash
# Test with different demo runbooks
go run main.go serve testdata/sample-runbooks/demo1/runbook.mdx
go run main.go serve testdata/sample-runbooks/demo2/runbook.mdx
go run main.go serve testdata/sample-runbooks/lambda/runbook.mdx
```
### Building for Production
Build the frontend assets:
```bash
cd web
bun run build
```
This creates optimized files in `/web/dist` that are served by the Go backend in production.
Build the Go binary:
```bash
go build -o runbooks main.go
```
### Running Tests
Run Go tests:
```bash
go test ./...
```
Run frontend tests:
```bash
cd web
bun test
```
## Adding shadcn/ui Components
This project uses [shadcn/ui](https://ui.shadcn.com/) for UI components.
To add a new component:
```bash
cd web
bunx shadcn@latest add <component-name>
```
For example:
```bash
bunx shadcn@latest add dialog
bunx shadcn@latest add dropdown-menu
```
Components are added to `/web/src/components/ui/`.
---
## Runbooks Pro
### Overview
As an open source tool, Runbooks has some limitations. If you're looking for:
- A web-based way to browse your Runbooks
- A secure hosting environment for running Runbooks
- The ability to dynamically populate dropdowns based on data from your own environment (e.g. AWS accounts) or AWS (e.g. latest RDS engine versions)
- Centrally managed scripts you can distribute across your Runbooks
- Audit logs
- Guarantees around which Runbooks you can trust
- Security scanning for Runbooks
- First-class integration with your existing tools
- Something else that Runbooks open source doesn't support
...then [contact Gruntwork sales](https://www.gruntwork.io/contact) and tell us how you'd like to use Runbooks in a commercial or enterprise setting.
---