What is Script Runner?
Script Runner is an evidence collection method for some of Strike Graph's cloud-service integrations. It lets you collect compliance evidence by running Python scripts against your connected cloud environment. This is a powerful tool: it opens up every programmable data surface inside your cloud environment to evidence collection, and gives you fine-grained control over how the data is fetched and displayed.
There are two ways to use Script Runner:
Pre-built scripts — Browse and select from our library of ready-to-use scripts, each designed to collect a specific type of evidence from your cloud environment. Most users can get what they need here without writing a single line of code.
Advanced mode (custom scripts) — Write your own Python script to collect evidence for specific configurations or resources that aren't covered by the pre-built library. This is useful when you have a unique environment or a niche compliance requirement.
Script Runner is available alongside the Terraform data blocks method for the following integrations:
Note: Script Runner must be enabled for your organization before it will appear. If you don't see the Scripts tab in your integration, reach out to your Customer Success Manager or contact support.
Using pre-built scripts
Step 1: Navigate to the evidence item
Go to the evidence item where you want to collect data. Click Attach Directly or Automated Collection (recommended), then select your cloud integration from the list of available integrations.
Step 2: Open the Scripts tab
Once inside the integration, click the Scripts tab. You'll see a search field pre-populated with the name of your evidence item.
Step 3: Find the right script
Use the search field to browse available scripts. Each script card displays:
Name — a short description of what the script collects
Description — more detail about the evidence it produces
Services — which cloud services or resources the script queries
Category — the compliance area the script targets
If you want to inspect a script before running it, click Preview to see the full source code and metadata. When you're ready to proceed, click Run.
Step 4: Select a region (AWS only)
If you're using the AWS integration, you'll also need to select the AWS region where your resources are located before running the script.
Step 5: Run and collect
After clicking Run, Strike Graph executes the script against your connected cloud credentials. This may take up to a few minutes. Once complete, the collected data is automatically attached to your evidence item as a JSON file. You can click on the attachment to review what was collected.
Using advanced mode: writing custom scripts
If the pre-built library doesn't have what you need, or you just want something a little different, you can write your own Python script using Advanced mode.
Opening Advanced mode
From the Scripts tab, click the Advanced button in the upper right of the panel. This opens a code editor where you can write or paste your script.
For AWS, you'll first select a region, then paste or write your script in the editor below.
Script requirements
Custom scripts run against your connected cloud credentials in a read-only sandbox. Because of this, there are some important requirements to keep in mind:
Scripts must be read-only. Write operations — such as creating, deleting, modifying, or tagging resources — are blocked and will fail validation before the script is submitted.
Dangerous Python patterns are not allowed. The following will fail validation: subprocess, os.system, eval(), exec(), and __import__().
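To make the validation rule concrete, here is a minimal sketch of the kind of static check described above. This is an illustration only, not Strike Graph's actual validator, and the function and variable names are ours:

```python
import re

# The dangerous patterns listed above that fail validation
# (illustrative check, not Strike Graph's actual validator).
BLOCKED_PATTERNS = [
    r"\bsubprocess\b",
    r"\bos\.system\b",
    r"\beval\s*\(",
    r"\bexec\s*\(",
    r"\b__import__\s*\(",
]

def find_blocked(script_text: str) -> list:
    """Return the blocked patterns found in a script, if any."""
    return [p for p in BLOCKED_PATTERNS if re.search(p, script_text)]

safe = 'def main(session):\n    return {"ok": True}\n'
risky = 'import subprocess\ndef main(session):\n    return {}\n'
```

Running `find_blocked(safe)` returns an empty list, while `find_blocked(risky)` flags the `subprocess` pattern, so the second script would be rejected before it ever runs.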
Entry point function
The entry point function must be named main() and match the expected signature for your cloud provider.
Strike Graph runs your script by calling main() and automatically passing in pre-authenticated credentials from your connected integration. You don't need to authenticate, create sessions, or manage API keys yourself — just accept what's passed in, use it to query your cloud environment, and return your results as a Python dictionary. That dictionary becomes the JSON evidence attachment in Strike Graph.
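The return-value contract can be seen with plain Python: whatever dictionary `main()` returns is what gets serialized into the JSON attachment. A provider-neutral sketch (the `credential` argument and the data here are stand-ins, not real cloud output):

```python
import json

def main(credential=None):
    # In a real script you would query your cloud environment with the
    # passed-in credential; here we return hard-coded stand-in data.
    # Return plain Python data: dicts, lists, strings, numbers, booleans.
    return {"users": ["alice", "bob"], "count": 2}

# Strike Graph serializes the returned dict into the JSON attachment:
attachment = json.dumps(main(), indent=2)
```

As long as everything in the returned dictionary is JSON-serializable, it will appear verbatim in the evidence attachment.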
The function signature is slightly different per provider:
AWS
```python
import boto3

def main(session: boto3.Session):
    iam = session.client("iam")
    users = iam.get_paginator("list_users").paginate()
    return {
        "users": [u["UserName"] for page in users for u in page["Users"]]
    }
```
Strike Graph passes in a boto3.Session already authenticated with read-only credentials for your AWS account. Use session.client("service_name") to create service clients as you normally would with boto3.
Azure
```python
def main(credential=None):
    from azure.mgmt.resource import SubscriptionClient
    from azure.identity import DefaultAzureCredential

    if credential is None:
        credential = DefaultAzureCredential()
    client = SubscriptionClient(credential)
    subs = [s.display_name for s in client.subscriptions.list()]
    return {"subscriptions": subs}
```
Strike Graph passes in a credential object you can use directly with any azure-mgmt-* SDK client. If credential is None for any reason, fall back to DefaultAzureCredential() as shown above. Note that azure-mgmt-* imports should go inside the function body, not at the top of the file.
GCP
```python
import google.auth
from google.cloud import storage

def main(credentials=None, project_id=None):
    if credentials is None or project_id is None:
        credentials, project_id = google.auth.default()
    client = storage.Client(credentials=credentials, project=project_id)
    buckets = [b.name for b in client.list_buckets()]
    return {"buckets": buckets}
```
Strike Graph passes in both credentials and project_id from your connected GCP integration. If either is None for any reason, fall back to google.auth.default(). GCP imports should go at the top of the file (module level), not inside the function. Your script only runs against this single project — do not attempt to enumerate multiple projects.
Scripts time out after 5 minutes (300 seconds), so make sure to cap large result sets in your code.
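One simple way to stay under the timeout is to cap how many items you collect and flag when you hit the cap. A minimal sketch of that pattern (the constant name and `truncated` flag are our convention, matching the prompt templates below):

```python
# Cap large result sets so the script finishes well under the
# 5-minute timeout. The constant name is our convention.
MAX_RESOURCES = 200

def collect_capped(items):
    """Collect up to MAX_RESOURCES items and flag truncation."""
    collected = []
    truncated = False
    for item in items:
        if len(collected) >= MAX_RESOURCES:
            truncated = True  # tell reviewers the list is incomplete
            break
        collected.append(item)
    return {"items": collected, "truncated": truncated}
```

In a real script, `items` would be a paginator or list call from your cloud SDK; the `truncated` flag in the output tells evidence reviewers the list was cut off rather than complete.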
Running your script
Once you've written or pasted your script, click Run. Strike Graph will validate the script, then execute it against your cloud environment. The results are attached to your evidence item as JSON.
If your script fails validation, Strike Graph will show you the specific patterns that need to be corrected before you can run it.
Generating custom scripts with your own LLM
If you're not sure where to start with writing a custom script, you can use an LLM like Claude, Gemini, or similar to generate one for you. The prompts below are starter templates you can paste into any LLM. Edit the [FILL IN] section to describe the evidence you want to collect, then paste the generated script into the Advanced mode editor in Strike Graph.
If the script your LLM produces doesn't run successfully, paste the error message back into the same chat and ask it to fix the script — most LLMs will iterate to a working version in one or two rounds.
AWS prompt template
You are a software engineer writing a Python integration script that collects compliance evidence from AWS using boto3. The script will be executed in Strike Graph's sandbox with read-only credentials.
Requirements:
- Define a function `main(session)` as the entrypoint. `session` is a `boto3.Session` already authenticated with read-only access. Do not create your own session or credentials.
- Use ONLY read operations (list_*, describe_*, get_*). Never call create_*, put_*, delete_*, update_*, or attach_* — the sandbox will reject the script.
- Use only the Python standard library and boto3.
- Use `client.get_paginator(...)` for any API that returns lists.
- Define a constant `MAX_RESOURCES_PER_SERVICE = 200` and stop iterating once a service hits that count. Set a `truncated: True` flag in the output if so.
- Wrap every API call in try/except and collect errors in a list rather than raising.
- Return a single dict with keys: `evidence_id`, `evidence_name`, `collection_timestamp` (ISO8601), `regions_scanned`, `services` (the actual collected data, grouped by service), and `summary` (counts + an `errors` list).
What I want the script to collect: [FILL IN — describe what evidence you want, what AWS services/resources, and any filters. Example: "List all S3 buckets and report whether each one has public access blocked, server-side encryption enabled, and versioning on."]
Return only the Python script in a single ```python ``` code block.
Azure prompt template
You are a software engineer writing a Python integration script that collects compliance evidence from Azure using the azure-mgmt-* SDK packages. The script will be executed in Strike Graph's sandbox with read-only credentials.
Requirements:
- Define a function `main(credential=None)` as the entrypoint. If `credential` is None, default to `DefaultAzureCredential()`. The caller may pass their own credential.
- Use ONLY read operations (list, get, etc.). Never call create_*, update_*, delete_*, begin_create_*, begin_delete_* — the sandbox will reject the script.
- Use only the Python standard library and `azure-*` SDK packages. `requests` is allowed for Microsoft Graph API calls.
- Import azure-mgmt-* packages INSIDE the functions that use them, not at the top of the file. Only `from datetime import datetime` goes at module level.
- Enumerate all enabled subscriptions via a helper `get_subscriptions(credential)` that returns `(subs, errors)`.
- Group results by subscription display name in a `by_subscription` dict.
- Define `MAX_RESOURCES_PER_SERVICE = 500` and stop iterating once any service hits that count. Set `truncated: True` in the output if so.
- Wrap every SDK call in try/except and collect errors in a list rather than raising.
- Return a single dict with keys: `evidence_id`, `evidence_name`, `collection_timestamp` (ISO8601), `subscriptions_scanned`, `services`, and `summary` (counts + an `errors` list).
What I want the script to collect: [FILL IN — describe what evidence you want, what Azure services/resources, and any filters. Example: "For every Azure SQL Database and Storage Account, report whether transparent data encryption / storage encryption is enabled and what key type (Microsoft-managed vs customer-managed)."]
Return only the Python script in a single ```python ``` code block.
GCP prompt template
You are a software engineer writing a Python integration script that collects compliance evidence from Google Cloud Platform. The script will be executed in Strike Graph's sandbox with read-only credentials.
Requirements:
- Define a function `main(credentials=None, project_id=None)` as the entrypoint. If either is None, default to `google.auth.default()`. The caller may pass their own.
- The script operates on a single GCP project — use `project_id` throughout, do not enumerate projects.
- Use ONLY read operations (list, get, getIamPolicy, etc.). Never call create, insert, update, patch, delete, or setIamPolicy — the sandbox will reject the script.
- Use the `google-cloud-*` client libraries where available (Storage, KMS, Compute, etc.) and `googleapiclient.discovery` for REST APIs that don't have a client library (Cloud Resource Manager, IAM admin). Imports go at module level.
- Define `MAX_RESOURCES_PER_SERVICE = 500` and stop iterating once any service hits that count. Set `truncated: True` in the output if so.
- Wrap `googleapiclient` calls in try/except for `HttpError`. Wrap `google-cloud-*` calls in try/except for generic `Exception`. Collect errors in a list rather than raising.
- Return a single dict with keys: `evidence_id`, `evidence_name`, `collection_timestamp` (ISO8601), `project_id`, `services`, and `summary` (counts + an `errors` list).
What I want the script to collect: [FILL IN — describe what evidence you want, what GCP services/resources, and any filters. Example: "For every Cloud SQL instance, report the database version, public-IP setting, SSL/TLS requirement, automated backup configuration, and whether deletion protection is enabled."]
Return only the Python script in a single ```python ``` code block.
Troubleshooting
We may not be able to troubleshoot every custom script in depth, but the tips below cover the most common issues with both pre-built and custom scripts:
Script fails validation before running
Check the error message — Strike Graph will tell you which pattern triggered the validation failure. Common causes are write-style method names (even in comments that weren't stripped) or dangerous Python patterns. Review the script requirements above and correct any flagged lines.
Script times out
Custom scripts have a 5-minute timeout. If your script is timing out, check that you've defined a MAX_RESOURCES_PER_SERVICE cap and that you're stopping iteration once that limit is reached. Large environments with many resources can take longer than expected.
Script runs but returns unexpected or empty results
If the script completes but the output isn't what you expected, review the returned JSON attachment to check for errors collected during execution. The summary.errors field will contain any API call failures. Verify that your connected integration has the necessary read permissions for the resources you're querying.
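The `summary.errors` list exists because well-behaved scripts catch API failures instead of raising them. A minimal sketch of that error-collection pattern (the helper name and labels are ours, as an illustration):

```python
def safe_call(fn, errors, label):
    """Run one API call; record failures instead of raising."""
    try:
        return fn()
    except Exception as exc:
        # Append a structured error entry so it shows up in
        # the summary.errors list of the JSON attachment.
        errors.append({"call": label, "error": str(exc)})
        return None

errors = []
# A deliberately failing call stands in for a cloud API request.
result = safe_call(lambda: 1 / 0, errors, "demo_call")
```

Here `result` comes back `None` and `errors` holds one entry describing the failure, so the script still completes and the problem is visible in the attachment rather than causing a hard crash.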
LLM-generated script doesn't run
Paste the error message from Strike Graph back into your LLM chat and ask it to fix the issue. Include the full error text — most LLMs can resolve common validation or runtime issues in one or two follow-up rounds.
Need more help?
If you have questions about Script Runner or need help getting a custom script to work, reach out through the in-app messenger. Our Customer Success team is available to help.
