This cookbook walks through deploying Prowler inside a Kubernetes cluster on a recurring schedule and automatically sending findings to Prowler Cloud via Import Findings. By the end, security scan results from the cluster appear in Prowler Cloud without any manual file uploads.
Prerequisites
- A Prowler Cloud account with an active subscription (see Prowler Cloud Pricing)
- A Prowler Cloud API key with the Manage Ingestions permission (see API Keys)
- Access to a Kubernetes cluster with `kubectl` configured
- Permissions to create ServiceAccounts, Roles, RoleBindings, Secrets, and CronJobs in the cluster
Step 1: Create the ServiceAccount and RBAC Resources
Prowler needs a ServiceAccount with read access to cluster resources. Apply the manifests from the `kubernetes` directory of the Prowler repository. They create:

- A `prowler-sa` ServiceAccount in the `prowler-ns` namespace
- A ClusterRole with the read permissions Prowler requires
- A ClusterRoleBinding linking the ServiceAccount to the role
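Assuming the manifests live in the `kubernetes` directory at the root of the Prowler repository (worth verifying against the repo before applying), the apply step can look like:

```shell
# Clone the Prowler repository and apply the RBAC manifests
git clone https://github.com/prowler-cloud/prowler.git
kubectl apply -f prowler/kubernetes/
```

Confirm the resources exist with `kubectl get serviceaccount prowler-sa -n prowler-ns`.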
Step 2: Store the Prowler Cloud API Key as a Secret
Create a Kubernetes Secret to hold the API key securely. Replace `pk_your_api_key_here` with the actual API key from Prowler Cloud.
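A minimal sketch of creating such a Secret; the Secret name `prowler-api-key` and key `api-key` are illustrative choices, not mandated names — they just need to match whatever the CronJob manifest references:

```shell
kubectl create secret generic prowler-api-key \
  --namespace prowler-ns \
  --from-literal=api-key='pk_your_api_key_here'
```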
Step 3: Create the CronJob Manifest
The CronJob runs Prowler on a schedule, scanning the cluster and pushing findings to Prowler Cloud with the `--push-to-cloud` flag.
Create a file named `prowler-cronjob.yaml`:
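A sketch of what such a CronJob can look like. The container image, the environment variable name, and the Secret name (`prowler-api-key`) are assumptions to adapt to your setup; only `--cluster-name` and `--push-to-cloud` come from this guide:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: prowler-scan
  namespace: prowler-ns
spec:
  schedule: "0 2 * * *"                # daily at 02:00 UTC
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: prowler-sa
          restartPolicy: Never
          containers:
            - name: prowler
              image: toniblyx/prowler:latest   # assumed image; pin a specific tag in production
              args:
                - kubernetes
                - --cluster-name
                - my-cluster
                - --push-to-cloud
              env:
                - name: PROWLER_API_KEY        # variable name is an assumption; check the Import Findings docs
                  valueFrom:
                    secretKeyRef:
                      name: prowler-api-key
                      key: api-key
```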
Replace `my-cluster` with a meaningful name for the cluster. This value appears in Prowler Cloud reports and helps identify the source of findings. See the `--cluster-name` flag documentation in Getting Started with Kubernetes for more details.
Customizing the Schedule
The `schedule` field uses standard cron syntax. Common examples:

- `"0 2 * * *"` — daily at 02:00 UTC
- `"0 */6 * * *"` — every 6 hours
- `"0 2 * * 1"` — weekly on Mondays at 02:00 UTC
Scanning Specific Namespaces
To limit the scan to specific namespaces, add the `--namespace` flag to the `args` array:
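For example (this sketch assumes the flag accepts space-separated namespace names; confirm the exact syntax in the Prowler CLI reference for the kubernetes provider):

```yaml
args:
  - kubernetes
  - --cluster-name
  - my-cluster
  - --push-to-cloud
  - --namespace
  - kube-system
  - default
```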
Step 4: Deploy and Verify
Apply the CronJob to the cluster with `kubectl apply -f prowler-cronjob.yaml`.
Step 5: View Findings in Prowler Cloud
Once the job completes and findings are pushed:

- Navigate to Prowler Cloud
- Open the “Scans” section to verify the ingestion job status
- Browse findings under the Kubernetes provider
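Before checking the UI, the job can also be verified from the cluster side (the CronJob name `prowler-scan` and namespace `prowler-ns` are illustrative — use whatever names your manifest declares):

```shell
# Trigger a run immediately instead of waiting for the schedule
kubectl create job --from=cronjob/prowler-scan prowler-scan-manual -n prowler-ns

# Watch the job and read the scan output
kubectl get jobs -n prowler-ns -w
kubectl logs -n prowler-ns job/prowler-scan-manual
```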
Tips and Troubleshooting
- Resource limits: For large clusters, consider setting `resources.requests` and `resources.limits` on the container to prevent the scan from consuming excessive cluster resources.
- Network policies: Ensure the Prowler pod can reach `api.prowler.com` over HTTPS (port 443). Adjust NetworkPolicies or egress rules if needed.
- Job history: Kubernetes retains completed and failed jobs by default. Set `successfulJobsHistoryLimit` and `failedJobsHistoryLimit` in the CronJob spec to control cleanup.
- API key rotation: When rotating the API key, update the Secret and restart any running jobs.
- Failed uploads: If the push to Prowler Cloud fails, the scan still completes and findings are saved locally in the container. Check the Import Findings troubleshooting section for common error messages.
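The job-history limits go at the top level of the CronJob `spec`; for example, to keep the last three successful runs and one failed run (values are illustrative):

```yaml
spec:
  schedule: "0 2 * * *"
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
```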

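One way to rotate the key in place, assuming a Secret named `prowler-api-key` with key `api-key` (names are illustrative):

```shell
# Overwrite the existing Secret with the new key value
kubectl create secret generic prowler-api-key \
  --namespace prowler-ns \
  --from-literal=api-key='pk_new_key_here' \
  --dry-run=client -o yaml | kubectl apply -f -

# Remove in-flight jobs in the dedicated namespace so the
# next scheduled run picks up the new value
kubectl delete jobs -n prowler-ns --all
```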
