refactor: rewrite gemini_source using the official Google SDK #1228
Conversation
Reviewer's Guide by Sourcery

This pull request refactors the Gemini source implementation to use the official Google genai SDK. This change improves the integration, reliability, and maintainability of the Gemini provider. It includes enhancements to error handling, safety settings, and conversation payload preparation. The changes also ensure compatibility with multi-modal outputs and tool calling features of the Gemini API.

Sequence diagram for text chat with Google Gemini API

```mermaid
sequenceDiagram
    participant User
    participant ProviderGoogleGenAI
    participant GoogleGenAIClient
    participant GeminiAPI
    User->>ProviderGoogleGenAI: text_chat(prompt, image_urls, func_tool, contexts, system_prompt, tool_calls_result, **kwargs)
    ProviderGoogleGenAI->>ProviderGoogleGenAI: assemble_context(prompt, image_urls)
    ProviderGoogleGenAI->>ProviderGoogleGenAI: _prepare_query_config(tools, system_instruction, temperature, modalities)
    ProviderGoogleGenAI->>ProviderGoogleGenAI: _prepare_conversation(payloads)
    ProviderGoogleGenAI->>GoogleGenAIClient: models.generate_content(model, contents, config)
    GoogleGenAIClient->>GeminiAPI: generateContent(model, contents, config)
    GeminiAPI-->>GoogleGenAIClient: Response
    GoogleGenAIClient-->>ProviderGoogleGenAI: GenerateContentResponse
    ProviderGoogleGenAI->>ProviderGoogleGenAI: _process_content_parts(result, llm_response)
    ProviderGoogleGenAI-->>User: LLMResponse
```
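The `_prepare_conversation` step in the diagram converts stored OpenAI-style chat messages into the content shape the Gemini SDK expects. A minimal sketch using plain dicts (an assumption for illustration — the real method builds `types.Content` objects; note that Gemini uses the role `"model"` where OpenAI uses `"assistant"`):

```python
# Hypothetical simplification of the payload conversion done by _prepare_conversation.
# Field names follow the Gemini REST shape; the real code builds types.Content objects.

def prepare_conversation(payloads):
    """Map OpenAI-style chat messages to Gemini-style content dicts."""
    role_map = {"user": "user", "assistant": "model"}  # Gemini uses "model", not "assistant"
    contents = []
    for message in payloads.get("messages", []):
        role = message.get("role")
        if role == "system":
            continue  # system prompts go into system_instruction, not the contents list
        contents.append({
            "role": role_map.get(role, "user"),
            "parts": [{"text": message.get("content", "")}],
        })
    return contents
```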
Sequence diagram for streaming text chat with Google Gemini API

```mermaid
sequenceDiagram
    participant User
    participant ProviderGoogleGenAI
    participant GoogleGenAIClient
    participant GeminiAPI
    User->>ProviderGoogleGenAI: text_chat_stream(prompt, image_urls, func_tool, contexts, system_prompt, tool_calls_result, **kwargs)
    ProviderGoogleGenAI->>ProviderGoogleGenAI: assemble_context(prompt, image_urls)
    ProviderGoogleGenAI->>ProviderGoogleGenAI: _prepare_query_config(tools, system_instruction, temperature)
    ProviderGoogleGenAI->>ProviderGoogleGenAI: _prepare_conversation(payloads)
    ProviderGoogleGenAI->>GoogleGenAIClient: models.generate_content_stream(model, contents, config)
    GoogleGenAIClient->>GeminiAPI: generateContentStream(model, contents, config)
    GeminiAPI-->>GoogleGenAIClient: Stream of responses
    loop For each chunk in stream
        GoogleGenAIClient-->>ProviderGoogleGenAI: Chunk
        ProviderGoogleGenAI->>ProviderGoogleGenAI: _process_content_parts(chunk, llm_response)
        ProviderGoogleGenAI-->>User: LLMResponse (chunk)
    end
    ProviderGoogleGenAI-->>User: LLMResponse (final)
```
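The streaming loop above can be sketched as an async generator that yields each chunk as it arrives and then emits the accumulated result as the final response. The chunk values here are plain strings standing in for the SDK's `GenerateContentResponse` chunks (an assumption for illustration):

```python
# Sketch of the streaming loop in the diagram: yield per-chunk responses,
# then a final response carrying the accumulated text.
import asyncio

async def text_chat_stream(chunks):
    buffer = []
    async for chunk in chunks:
        buffer.append(chunk)
        yield {"is_chunk": True, "text": chunk}          # LLMResponse (chunk)
    yield {"is_chunk": False, "text": "".join(buffer)}   # LLMResponse (final)

async def _fake_stream():
    """Stand-in for models.generate_content_stream()."""
    for piece in ["Hel", "lo"]:
        yield piece

async def demo():
    return [r async for r in text_chat_stream(_fake_stream())]
```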
Updated class diagram for ProviderGoogleGenAI

```mermaid
classDiagram
    class ProviderGoogleGenAI {
        -api_keys: List[str]
        -chosen_api_key: str
        -timeout: int
        -api_base: Optional[str]
        -client: genai.Client
        -safety_settings: List[types.SafetySetting]
        +__init__(provider_config: dict, db_helper: BaseDatabase, default_persona: Personality)
        +_init_client() : void
        +_init_safety_settings() : void
        +_handle_api_error(e: APIError, keys: List[str]) : bool
        +_prepare_query_config(tools: Optional[FuncCall], system_instruction: Optional[str], temperature: Optional[float], modalities: Optional[List[str]]) : types.GenerateContentConfig
        +_prepare_conversation(payloads: Dict) : List[types.Content]
        +_process_content_parts(result: types.GenerateContentResponse, llm_response: LLMResponse) : MessageChain
        +_query(payloads: dict, tools: FuncCall, temperature: float) : LLMResponse
        +_query_stream(payloads: dict, tools: FuncCall, temperature: float) : AsyncGenerator[LLMResponse, None]
        +text_chat(prompt: str, session_id: str, image_urls: List[str], func_tool: FuncCall, contexts: List[Dict], system_prompt: str, tool_calls_result: ToolCallsResult, **kwargs) : LLMResponse
        +text_chat_stream(prompt: str, session_id: str, image_urls: List[str], func_tool: FuncCall, contexts: List[Dict], system_prompt: str, tool_calls_result: ToolCallsResult, **kwargs) : AsyncGenerator[LLMResponse, None]
        +get_models() : List[str]
        +get_current_key() : str
        +get_keys() : List[str]
        +set_key(key: str) : void
        +assemble_context(text: str, image_urls: List[str]) : Dict
        +terminate() : void
    }
    note for ProviderGoogleGenAI "Refactored to use Google's official genai SDK"
```
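The multi-key design in the diagram (`api_keys`, `chosen_api_key`, `_handle_api_error`) implies rotating to the next key when one fails. A hypothetical simplification of that behavior, using bare status codes instead of the SDK's `APIError` (the retryable-code set is an assumption, not from the PR):

```python
# Hypothetical key-rotation logic behind _handle_api_error: on a retryable
# error (e.g. quota exhausted), drop the failing key, switch to the next one,
# and signal whether a retry is still possible.

RETRYABLE_CODES = {403, 429}  # assumed set; the real code inspects genai APIError

def handle_api_error(code, keys, chosen_key):
    """Return (should_retry, new_key). Mutates `keys` by removing the bad key."""
    if code not in RETRYABLE_CODES:
        return False, chosen_key
    if chosen_key in keys:
        keys.remove(chosen_key)
    if not keys:
        return False, None  # all keys exhausted
    return True, keys[0]
```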
Hey @Raven95676 - I've reviewed your changes and found some issues that need to be addressed.
Blocking issues:
Overall Comments:
- Consider adding error handling for API calls, especially for network-related issues.
- The code could benefit from more comments explaining the purpose and functionality of different sections, especially the complex logic in `_query`.
Here's what I looked at during the review
- 🟡 General issues: 2 issues found
- 🔴 Security: 2 blocking issues
- 🟢 Testing: all looks good
- 🟡 Complexity: 1 issue found
- 🟢 Documentation: all looks good
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
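The reviewer's point about handling network-related failures could be addressed with a small retry helper around the SDK calls. A sketch, not code from this PR (the exception types and backoff schedule are assumptions):

```python
# Generic retry wrapper with exponential backoff for transient network errors.
import time

def with_retries(fn, attempts=3, base_delay=0.5,
                 retryable=(ConnectionError, TimeoutError)):
    """Call fn(); on a retryable error, back off and try again, up to `attempts`."""
    for attempt in range(attempts):
        try:
            return fn()
        except retryable:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
```

A call site would wrap the API request, e.g. `with_retries(lambda: client.models.generate_content(...))`.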
@sourcery-ai review
Hey @Raven95676 - I've reviewed your changes and found some issues that need to be addressed.
Blocking issues:
Overall Comments:
- Consider adding a method to validate the provider configuration during initialization.
- The error handling and retry logic for API calls is good, but could be refactored into a separate utility function to reduce code duplication.
Here's what I looked at during the review
- 🟢 General issues: all looks good
- 🔴 Security: 2 blocking issues
- 🟢 Testing: all looks good
- 🟡 Complexity: 1 issue found
- 🟢 Documentation: all looks good
```python
llm_response.role = "tool"
llm_response.tools_call_name.append(part.function_call.name)
llm_response.tools_call_args.append(part.function_call.args)
llm_response.tools_call_ids.append(part.function_call.id)
```
The `id` returned here may be `None`, which causes errors during multi-turn function calling:

AstrBot request failed.
Error type: ClientError
Error message: 400 INVALID_ARGUMENT. {'error': {'code': 400, 'message': '* GenerateContentRequest.contents[32].parts[0].function_response.name: Name cannot be empty.\n', 'status': 'INVALID_ARGUMENT'}}
I've changed this so that if the `id` is `None`, the `name` is stored instead:
43ee943#diff-b0a8d0933e85a0e991ee059844c35a11974fe6aed17b856f20330e7cb79343d1L275-L277
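The fix described above amounts to falling back to the function name whenever the SDK returns no call id, so that later `function_response` turns never carry an empty name. Roughly (a sketch; `SimpleNamespace` stands in for the SDK's function-call part):

```python
# Fallback for missing tool-call ids: use the function name so the
# function_response sent back to Gemini always has a non-empty name.
from types import SimpleNamespace

def tool_call_id(function_call):
    return function_call.id if function_call.id is not None else function_call.name
```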
LGTM
Motivation
Improve how native Gemini calls behave in AstrBot.
Modifications
Rewrite gemini_source using the official Google SDK.
Check
Summary by Sourcery
Refactor the Gemini source implementation to use the official Google SDK for improved integration and reliability
Enhancements:
Chores: