Feat: Add agent worker And Claude Support #66
Closed
Changes from all commits (34 commits):

a1422dc baryhuang - Feat: Add agent-worker
88f6354 baryhuang - update config
786ea96 baryhuang - add shutdown for agent_worker
3db1830 baryhuang - cleanup
bd9a232 baryhuang - add anthropic path
f81653e baryhuang - cleanup
74206e0 baryhuang - cleanup
4636082 baryhuang - cleanup
e9756dc baryhuang - cleanup
e06429e baryhuang - add support for returning images
fd68e9d baryhuang - add
853771e baryhuang - cleanup
72ade83 baryhuang - cleanup
76c4780 baryhuang - fix passing of system prompt
6b75133 baryhuang - cleanup
0424e87 baryhuang - cleanup
a8e6e77 baryhuang - initial cleanup of file structure
531698b baryhuang - initial cleanup of openai vs claude
36fdc10 baryhuang - break handlers into files
8d9296f baryhuang - cleanup anthropic handler to break down long loop
79c88db baryhuang - update to use non-beta anthropic api
3390d01 baryhuang - cleanup anthropic chat completion and cleanup
d47aaf8 baryhuang - enabled thinking support for Claude
20ac56a baryhuang - added basic thinking support for Claude
3617411 baryhuang - add customer logger
59b1c03 baryhuang - Update mcp_bridge/agent_worker/run.py
1d7c771 SecretiveShell - add script entrypoint for agent runner
8df7ab0 baryhuang - add more logs and fix the process tool loop with thinking blocks
62029a6 baryhuang - switch to use Claude beta api with latest computer-use
9669d40 baryhuang - added support for calling AWS Bedrock
99d1dda baryhuang - cleanup logggin
b4ada00 baryhuang - add rate limit handling
a1c6d42 baryhuang - re organized imports and typing
f77af61 baryhuang - Update mcp_bridge/anthropic_clients/genericClient.py
New file (diff hunk @@ -0,0 +1,7 @@):

{
    "task": "Hi Olivia, tell me what's on your computer screen. You should include a response of 'the task has been completed' when the task is complete. Don't ask for what to do next.",
    "model": "claude-3-7-sonnet-20250219",
    "system_prompt": "You should include a response of 'the task has been completed' when the task is complete.\n\n You should include a response of 'the task has been completed' when you found yourself stuck in a loop with no progress.",
    "verbose": true,
    "max_iterations": 10
}
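The config fields above map directly onto the AgentWorker constructor arguments shown later in this diff (task, model, system_prompt, max_iterations). As a minimal sketch of how such a file could be loaded and validated, assuming stdlib json only (the helper name load_task_config and the required/optional key split are illustrative, not part of this PR):

```python
import json

# "task" and "model" are the only fields the constructor has no usable default for
REQUIRED_KEYS = {"task", "model"}
OPTIONAL_DEFAULTS = {"system_prompt": None, "verbose": False, "max_iterations": 10}

def load_task_config(path: str) -> dict:
    """Load an agent-worker task config, checking required keys and filling defaults."""
    with open(path) as f:
        cfg = json.load(f)
    missing = REQUIRED_KEYS - cfg.keys()
    if missing:
        raise ValueError(f"config missing required keys: {sorted(missing)}")
    # Explicit values in the file win over the defaults
    return {**OPTIONAL_DEFAULTS, **cfg}
```

The merged dict can then be splatted into the constructor, e.g. AgentWorker(task=cfg["task"], model=cfg["model"], ...), though the PR's actual entrypoint wiring is not shown in this excerpt.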
New file (diff hunk @@ -0,0 +1,168 @@):

#!/usr/bin/env python
"""Agent worker module that provides standalone command-line agent execution"""

import asyncio
import os
from typing import Dict, List, Optional, Tuple

from loguru import logger

from mcp_bridge.agent_worker.utils import is_anthropic_model
from mcp_bridge.agent_worker.anthropic_handler import process_with_anthropic
from mcp_bridge.agent_worker.openai_handler import process_with_openai
from mcp_bridge.agent_worker.customer_logs import get_logger, CustomerMessageLogger
from mcp_bridge.mcp_clients.McpClientManager import ClientManager
from mcp_bridge.utils import force_exit
from lmos_openai_types import (
    ChatCompletionRequestMessage,
    ChatCompletionRequestSystemMessage,
    ChatCompletionRequestUserMessage,
)


class AgentWorker:
    """A standalone worker that processes tasks using MCP clients and LLM completions"""

    def __init__(
        self,
        task: str,
        model: str = "anthropic.claude-3-haiku-20240307-v1:0",
        system_prompt: Optional[str] = None,
        max_iterations: int = 10,
        session_id: Optional[str] = None,
    ):
        self.task = task
        self.model = model
        self.system_prompt = system_prompt or (
            "You are a helpful assistant that completes tasks using available tools. "
            "Use the tools provided to you to help complete the user's task."
        )
        self.messages: List[ChatCompletionRequestMessage] = []
        self.max_iterations = max_iterations
        self.thinking_blocks: List[Dict[str, object]] = []
        self.session_id = session_id
        # Initialize customer message logger
        self.customer_logger: CustomerMessageLogger = get_logger(
            initialize=True, session_id=self.session_id
        )
        self.customer_logger.log_system_event("initialization", {
            "task": task,
            "model": model,
            "max_iterations": max_iterations,
        })

    async def initialize(self) -> None:
        """Initialize the MCP clients"""
        logger.info("Initializing MCP clients...")
        # Start the ClientManager to load all available MCP clients
        await ClientManager.initialize()

        # Wait a moment for clients to start up
        logger.info("Waiting for MCP clients to initialize...")
        await asyncio.sleep(2)

        # Check that at least one client is ready
        max_attempts = 3
        for attempt in range(max_attempts):
            clients = ClientManager.get_clients()
            ready_clients = [name for name, client in clients if client.session is not None]

            if ready_clients:
                logger.info(f"MCP clients ready: {', '.join(ready_clients)}")
                # Log available clients to customer log
                self.customer_logger.log_system_event("clients_ready", {
                    "clients": ready_clients,
                })
                break

            logger.warning(f"No MCP clients ready yet, waiting (attempt {attempt+1}/{max_attempts})...")
            await asyncio.sleep(2)

        # Initialize the conversation with system and user messages
        self.messages = [
            ChatCompletionRequestSystemMessage(
                role="system",
                content=self.system_prompt,
            ),
            ChatCompletionRequestUserMessage(
                role="user",
                content=self.task,
            ),
        ]

        # Log initial messages to customer log
        self.customer_logger.log_message("system", self.system_prompt)
        self.customer_logger.log_message("user", self.task)

    async def shutdown(self) -> None:
        """Shutdown all MCP clients"""
        logger.info("Shutting down MCP clients...")
        # Log shutdown event
        self.customer_logger.log_system_event("shutdown", {
            "summary": self.customer_logger.get_summary(),
        })
        # Exit the program
        force_exit(0)

    async def run_agent_loop(self) -> List[ChatCompletionRequestMessage]:
        """Run the agent loop to process the task until completion"""
        await self.initialize()
        logger.info("Starting agent loop...")
        self.customer_logger.log_system_event("agent_loop_start", {})

        # Keep running until the task is complete
        for iteration in range(self.max_iterations):
            logger.info(f"Agent iteration {iteration+1}/{self.max_iterations}")
            self.customer_logger.log_system_event("iteration_start", {
                "iteration": iteration + 1,
                "max_iterations": self.max_iterations,
            })

            # Process with either Anthropic or OpenAI API based on model name
            task_complete = False
            if is_anthropic_model(self.model):
                # Use Anthropic processing
                _, updated_messages, thinking_blocks, task_complete = await process_with_anthropic(
                    messages=self.messages,
                    model=self.model,
                    system_prompt=self.system_prompt,
                    thinking_blocks=self.thinking_blocks,
                    customer_logger=self.customer_logger,
                )
                self.messages = updated_messages

                # Check for duplicate thinking blocks before adding.
                # Blocks are stored as plain dicts (see the List[Dict[...]]
                # annotation above), so use .get() for the signature;
                # attribute access would raise AttributeError on a dict.
                existing_signatures = {
                    block.get("signature")
                    for block in self.thinking_blocks
                    if block.get("signature")
                }
                unique_blocks = [
                    block
                    for block in thinking_blocks
                    if not block.get("signature")
                    or block.get("signature") not in existing_signatures
                ]
                self.thinking_blocks.extend(unique_blocks)

                # Log thinking blocks to customer log
                for block in unique_blocks:
                    if block.get("thinking"):
                        self.customer_logger.log_thinking(
                            block["thinking"],
                            block.get("signature"),
                        )
            else:
                # Use OpenAI processing
                updated_messages, task_complete = await process_with_openai(
                    messages=self.messages,
                    model=self.model,
                    customer_logger=self.customer_logger,
                )
                self.messages = updated_messages

            # If task is complete, return the messages
            if task_complete:
                self.customer_logger.log_system_event("task_complete", {
                    "iteration": iteration + 1,
                    "summary": self.customer_logger.get_summary(),
                })
                return self.messages

        # If we reached max iterations without completion
        logger.warning(f"Reached maximum iterations ({self.max_iterations}) without task completion")
        self.customer_logger.log_system_event("max_iterations_reached", {
            "max_iterations": self.max_iterations,
            "summary": self.customer_logger.get_summary(),
        })

        # Return final messages for inspection
        return self.messages
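The thinking-block deduplication inside run_agent_loop can be illustrated in isolation. The sketch below treats blocks as plain dicts, matching the List[Dict[str, object]] annotation on self.thinking_blocks; the helper name merge_thinking_blocks is illustrative and not part of the PR:

```python
from typing import Dict, List

def merge_thinking_blocks(
    existing: List[Dict[str, object]],
    incoming: List[Dict[str, object]],
) -> List[Dict[str, object]]:
    """Append only incoming blocks whose signature has not been seen.

    Blocks without a signature are always kept, mirroring the
    `not block.get("signature") or ...` condition in the agent loop.
    """
    seen = {b.get("signature") for b in existing if b.get("signature")}
    unique = [
        b for b in incoming
        if not b.get("signature") or b.get("signature") not in seen
    ]
    return existing + unique
```

Deduplicating on the signature matters because the accumulated thinking blocks are fed back into process_with_anthropic on every iteration; without it the same block would be re-appended each pass.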
Copilot review comment: Using force_exit here immediately terminates the process, which may bypass important cleanup steps; consider implementing a more graceful shutdown mechanism.
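The concern is that force_exit(0) skips interpreter cleanup such as atexit hooks and finally blocks in still-pending coroutines. One graceful alternative, sketched here as a suggestion rather than the project's actual fix, is to cancel the other outstanding asyncio tasks and let the event loop unwind on its own:

```python
import asyncio

async def graceful_shutdown() -> None:
    """Cancel all other pending tasks so the event loop can exit cleanly."""
    current = asyncio.current_task()
    pending = [t for t in asyncio.all_tasks() if t is not current]
    for task in pending:
        task.cancel()
    # Wait for the cancellations to propagate; return_exceptions=True
    # swallows the resulting CancelledError instances instead of raising
    await asyncio.gather(*pending, return_exceptions=True)
```

AgentWorker.shutdown could await this before returning, leaving asyncio.run() to close the loop normally instead of force-exiting the process.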