diff --git a/.ruru/modes/agent-mcp-manager/kb/install-vertex-ai.md b/.ruru/modes/agent-mcp-manager/kb/install-vertex-ai.md
index 42e81762..c74a1f95 100644
--- a/.ruru/modes/agent-mcp-manager/kb/install-vertex-ai.md
+++ b/.ruru/modes/agent-mcp-manager/kb/install-vertex-ai.md
@@ -36,15 +36,15 @@ All configuration for the `vertex-ai-mcp-server` is now managed centrally within
 Locate or add the `vertex-ai-mcp-server` entry within the `servers` object in `.roo/mcp.json`. Configure it as follows, ensuring you replace placeholder values with your actual Google Cloud Platform (GCP) project details and credentials path.
 
+**IMPORTANT:** The following JSON structure is **mandatory**. You **MUST** ensure the configuration entry for `vertex-ai-mcp-server` in `.roo/mcp.json` exactly matches this structure, including all specified keys within the `env` object. If an entry already exists, update it to conform to this example, replacing only the placeholder values (like `YOUR_PROJECT_ID`, `YOUR_GCP_REGION`, `/path/to/your/gcp_credentials.json`) with your actual details.
+
 ```json
 // Example within .roo/mcp.json
 {
-  "servers": {
+  "mcpServers": {
     "vertex-ai-mcp-server": {
       "command": "node",
       "args": [
-        // Ensure this path correctly points to the executable within the installed package
-        "node_modules/vertex-ai-mcp-server/build/index.js"
       ],
       "env": {
         // --- Required GCP/Vertex Config ---
@@ -57,19 +57,19 @@ Locate or add the `vertex-ai-mcp-server` entry within the `servers` object in `.
 
         // --- Vertex AI Model Config ---
         // Specify the desired Vertex AI model ID
-        "VERTEX_AI_MODEL_ID": "gemini-2.5-pro-exp-03-25", // Or your preferred model like gemini-1.5-pro-preview-0409 etc.
+        "VERTEX_MODEL_ID": "gemini-2.5-pro-exp-03-25", // Or your preferred model like gemini-1.5-pro-preview-0409 etc.
         // Controls randomness (0.0 = deterministic)
-        "VERTEX_AI_TEMPERATURE": "0.0",
+        "AI_TEMPERATURE": "0.0",
         // Enable/disable streaming responses
-        "VERTEX_AI_USE_STREAMING": "true",
+        "AI_USE_STREAMING": "true",
         // Maximum tokens for the model's response
-        "VERTEX_AI_MAX_OUTPUT_TOKENS": "65535", // Adjust based on model limits/needs
+        "AI_MAX_OUTPUT_TOKENS": "65535", // Adjust based on model limits/needs
 
         // --- Optional Retry Config ---
         // Number of times to retry failed API calls
-        "VERTEX_AI_MAX_RETRIES": "3",
+        "AI_MAX_RETRIES": "3",
         // Initial delay between retries in milliseconds
-        "VERTEX_AI_RETRY_DELAY_MS": "1000"
+        "AI_RETRY_DELAY_MS": "1000"
       },
       // Set to true to temporarily disable this server without removing the config
       "disabled": false,
@@ -115,6 +115,9 @@ Locate or add the `vertex-ai-mcp-server` entry within the `servers` object in `.
 * **`alwaysAllow`**: Carefully consider which tools should bypass per-call user confirmation.
 * **`timeout`**: Adjust if you expect very long-running tool operations.
 
-## 3. Restart Roo Commander
+## 3. Activating
+
+To activate the server:
 
-After modifying `.roo/mcp.json`, restart Roo Commander to ensure it picks up the new configuration and attempts to connect to the Vertex AI MCP server. Check the Roo Commander logs or MCP status indicators for successful connection.
+1. Ensure the configuration in `.roo/mcp.json` is correct and `disabled` is set to `false`.
+2. **Restart or Reload Roo Commander:** If Roo Commander is already running, you may need to restart it or trigger a configuration reload (if available) for it to pick up the changes in `.roo/mcp.json` and launch the server process.