name: Feature request - Add Truncation Parameter to OpenAIPromptExecutionSettings for Context Management
about: I would like to request the addition of a truncation parameter to the OpenAIPromptExecutionSettings class in the Semantic Kernel project. The idea is to let developers control how the input conversation history is handled when it exceeds the model's context window, mirroring the OpenAI API, where the truncation parameter can be set to "auto" (automatically truncate the conversation by dropping input items from the middle) or "disabled" (fail the request when the context is too large).
This would be particularly useful in scenarios with long conversation histories, since it would provide a built-in mechanism to keep requests from exceeding the context token limit without manually pre-processing the conversation history. A parameter that controls the truncation strategy would improve usability and flexibility for developers integrating AI functionality via Semantic Kernel.
github-actions bot changed the title from "New Feature: Add Truncation Parameter to OpenAIPromptExecutionSettings for Context Management" to ".Net: New Feature: Add Truncation Parameter to OpenAIPromptExecutionSettings for Context Management" on Apr 11, 2025.
For reference, please see the OpenAI API documentation on the truncation parameter:
OpenAI API Truncation Parameter
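To make the request concrete, here is a minimal sketch of what the setting might look like from a caller's perspective. The Truncation property is hypothetical; it does not exist on OpenAIPromptExecutionSettings today, and the string values shown simply mirror the OpenAI API's documented options:

```csharp
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Hypothetical shape of the proposed setting; "Truncation" is NOT a real
// property of OpenAIPromptExecutionSettings today.
var settings = new OpenAIPromptExecutionSettings
{
    // "auto": the service drops input items from the middle of the
    // conversation when the context window would be exceeded.
    // "disabled" (the OpenAI default): the request fails instead.
    Truncation = "auto"
};

var history = new ChatHistory();
history.AddUserMessage("...a long conversation...");

// The setting would then flow through with the rest of the execution settings:
// await chatService.GetChatMessageContentAsync(history, settings, kernel);
```

Since the value maps directly to a field in the OpenAI request payload, the connector could presumably serialize it alongside the other pass-through settings rather than implementing any client-side trimming.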