The Prowler MCP Server brings the entire Prowler ecosystem to AI assistants through the Model Context Protocol (MCP). It enables seamless integration with AI tools like Claude Desktop, Cursor, and other MCP clients.

The server follows a modular architecture with three independent sub-servers:
| Sub-Server | Auth Required | Description |
| --- | --- | --- |
| Prowler App | Yes | Full access to Prowler Cloud and Self-Managed features |
| Prowler Hub | No | Security checks catalog with over 1000 checks, fixers, and 70+ compliance frameworks |
| Prowler Documentation | No | Full-text search and retrieval of official documentation |
For a complete list of tools and their descriptions, see the Tools Reference.
The MCP Server architecture is illustrated in the Overview documentation. AI assistants connect through the MCP protocol to access Prowler’s three main components.
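As a concrete example, MCP clients such as Claude Desktop typically register servers in a JSON configuration file. The entry below is an illustrative sketch only; the server name, working directory, and API key value are assumptions, not values prescribed by Prowler:

```json
{
  "mcpServers": {
    "prowler": {
      "command": "uv",
      "args": ["run", "prowler-mcp"],
      "env": {
        "PROWLER_APP_API_KEY": "pk_xxx"
      }
    }
  }
}
```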
Create a new file or add to an existing file in `prowler_app/tools/`:
```python
# prowler_app/tools/new_feature.py
from typing import Any

from pydantic import Field

from prowler_mcp_server.prowler_app.models.new_feature import (
    FeatureListResponse,
    DetailedFeature,
)
from prowler_mcp_server.prowler_app.tools.base import BaseTool


class NewFeatureTools(BaseTool):
    """Tools for managing new features."""

    async def list_features(
        self,
        status: str | None = Field(
            default=None, description="Filter by status (active, inactive, pending)"
        ),
        page_size: int = Field(
            default=50, description="Number of results per page (1-100)"
        ),
    ) -> dict[str, Any]:
        """List all features with optional filtering.

        Returns a lightweight list of features optimized for LLM consumption.
        Use get_feature for complete information about a specific feature.
        """
        # Validate parameters
        self.api_client.validate_page_size(page_size)

        # Build query parameters
        params: dict[str, Any] = {"page[size]": page_size}
        if status:
            params["filter[status]"] = status

        # Make API request
        clean_params = self.api_client.build_filter_params(params)
        response = await self.api_client.get("/api/v1/features", params=clean_params)

        # Transform to LLM-friendly format
        return FeatureListResponse.from_api_response(response).model_dump()

    async def get_feature(
        self,
        feature_id: str = Field(description="The UUID of the feature"),
    ) -> dict[str, Any]:
        """Get detailed information about a specific feature.

        Returns complete feature details including configuration and metadata.
        """
        try:
            response = await self.api_client.get(f"/api/v1/features/{feature_id}")
            return DetailedFeature.from_api_response(response["data"]).model_dump()
        except Exception as e:
            self.logger.error(f"Failed to get feature {feature_id}: {e}")
            return {"error": str(e), "status": "failed"}
```
No manual registration is needed. The tool_loader.py automatically discovers and registers all BaseTool subclasses. Verify your tool is loaded by checking the server logs:
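The discovery step can be pictured with a minimal standalone sketch. This is not Prowler's actual `tool_loader.py` code; the function name and structure here are illustrative assumptions showing how `BaseTool` subclasses and their coroutine methods could be enumerated:

```python
# Illustrative sketch of subclass-based tool discovery (not Prowler's code).
import inspect


class BaseTool:
    """Marker base class; subclasses expose async tool methods."""


class NewFeatureTools(BaseTool):
    async def list_features(self):
        ...

    async def get_feature(self, feature_id: str):
        ...


def discover_tools() -> dict[str, list[str]]:
    """Map each BaseTool subclass to its public coroutine method names."""
    registry: dict[str, list[str]] = {}
    for subclass in BaseTool.__subclasses__():
        methods = [
            name
            for name, _ in inspect.getmembers(subclass, inspect.iscoroutinefunction)
            if not name.startswith("_")
        ]
        registry[subclass.__name__] = methods
    return registry


registry = discover_tools()
```

Because registration keys off inheritance, simply importing a module that defines a `BaseTool` subclass is enough for a loader like this to find it.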
```
INFO - Auto-registered 2 tools from NewFeatureTools
INFO - Loaded and registered: NewFeatureTools
```
Always implement from_api_response() for API transformation:
```python
@classmethod
def from_api_response(cls, data: dict[str, Any]) -> "MyModel":
    """Transform API response to model.

    This method handles the JSON:API format used by Prowler API,
    extracting attributes and relationships as needed.
    """
    attributes = data.get("attributes", {})
    return cls(
        id=data["id"],
        name=attributes["name"],
        # ... map other fields
    )
```
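To make the pattern concrete, here is a dependency-free sketch of the same JSON:API transform using a plain dataclass in place of the Pydantic model. `MyModel`, its fields, and the sample payload are illustrative assumptions:

```python
# Standalone sketch of the from_api_response pattern using a dataclass.
from dataclasses import dataclass
from typing import Any


@dataclass
class MyModel:
    id: str
    name: str

    @classmethod
    def from_api_response(cls, data: dict[str, Any]) -> "MyModel":
        # JSON:API puts the resource fields under "attributes".
        attributes = data.get("attributes", {})
        return cls(id=data["id"], name=attributes["name"])


# A minimal JSON:API-style resource object, as returned under "data".
api_data = {"id": "f1", "type": "features", "attributes": {"name": "Example"}}
model = MyModel.from_api_response(api_data)
```

Keeping the transform in a classmethod means the raw API shape stays out of tool code: tools only ever see the flattened, LLM-friendly model.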
Tool docstrings become the tool descriptions read by the LLM. Provide clear usage instructions and common workflows:
```python
async def search_items(self, status: str = Field(...)) -> dict:
    """Search items with advanced filtering.

    Returns a lightweight list optimized for LLM consumption.
    Use get_item for complete details about a specific item.

    Common workflows:
    - Find critical items: status="critical"
    - Find recent items: Use date_from parameter
    """
```
```bash
# Navigate to MCP server directory
cd mcp_server

# Run in STDIO mode (default)
uv run prowler-mcp

# Run in HTTP mode
uv run prowler-mcp --transport http --host 0.0.0.0 --port 8000

# Run with environment variables
PROWLER_APP_API_KEY="pk_xxx" uv run prowler-mcp
```
For complete installation and deployment options, see: