Make Apply Streaming Cancelable #5694
@@ -62,61 +61,7 @@ import { LLMLogger } from "./llm/logger";
 import { llmStreamChat } from "./llm/streamChat";
 import type { FromCoreProtocol, ToCoreProtocol } from "./protocol";
 import type { IMessenger, Message } from "./protocol/messenger";

-// This function is used for jetbrains inline edit and apply
Removed streamDiffLinesGenerator and moved its logic inline. Its whole point was to account for aborted message IDs, which wasn't being used; it duplicated roughly half of the inline lines, and it is now defunct.
   input: string;
   language: string | undefined;
   onlyOneInsertion: boolean;
   overridePrompt: ChatMessage[] | undefined;
   rulesToInclude: RuleWithSource[] | undefined;
 }): AsyncGenerator<DiffLine> {
+  const abortManager = StreamAbortManager.getInstance();
Took the singleton approach so that the functional streamDiffLines function (also used in vscode-only code) can have persisted abort controllers that can be cancelled via core messaging.
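For context, a minimal sketch of what a singleton abort manager like this could look like (the StreamAbortManager name and getInstance call come from the diff; everything else below is an assumption):

```ts
// Hypothetical sketch of a singleton that keeps one AbortController per
// stream id, so any part of the codebase (core messaging included) can
// cancel a stream it didn't start.
class StreamAbortManager {
  private static instance: StreamAbortManager;
  private controllers = new Map<string, AbortController>();

  static getInstance(): StreamAbortManager {
    if (!StreamAbortManager.instance) {
      StreamAbortManager.instance = new StreamAbortManager();
    }
    return StreamAbortManager.instance;
  }

  // Create or reuse the controller for a given stream id.
  get(streamId: string): AbortController {
    let controller = this.controllers.get(streamId);
    if (!controller) {
      controller = new AbortController();
      this.controllers.set(streamId, controller);
    }
    return controller;
  }

  // Abort and forget a stream; safe to call for unknown ids.
  abort(streamId: string): void {
    this.controllers.get(streamId)?.abort();
    this.controllers.delete(streamId);
  }
}
```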
 } else {
   const gen = model.streamChat(
     messages,
-    new AbortController().signal,
+    abortController.signal,
This is significant: before, the connection would keep hanging/running in the background until complete, even after we had "aborted" using message IDs.
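To make the before/after concrete, a hedged sketch (the streamChat signature matches the diff above; the streamId lookup and the StreamAbortManager sketch it relies on are assumptions):

```ts
// Before: a throwaway controller whose signal can never fire, so the HTTP
// request ran to completion in the background even after a "cancel":
//   const gen = model.streamChat(messages, new AbortController().signal);

// After: the shared controller's signal reaches fetch, so aborting it via
// StreamAbortManager tears the connection down immediately.
async function* cancellableStreamChat(
  model: {
    streamChat(msgs: unknown[], signal: AbortSignal): AsyncGenerator<unknown>;
  },
  messages: unknown[],
  streamId: string, // hypothetical id used to look up the shared controller
): AsyncGenerator<unknown> {
  const abortController = StreamAbortManager.getInstance().get(streamId);
  yield* model.streamChat(messages, abortController.signal);
}
```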
@@ -323,14 +341,14 @@ class VertexAI extends BaseLLM {
       ? this.geminiInstance.removeSystemMessage(messages)
       : messages;
     if (this.vertexProvider === "gemini") {
-      yield* this.streamChatGemini(convertedMsgs, options);
+      yield* this.streamChatGemini(convertedMsgs, options, signal);
VertexAI fixes: these streams weren't abortable before.
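The underlying fix pattern (a sketch, not the actual VertexAI code): accept the caller's AbortSignal and forward it to fetch instead of dropping it.

```ts
// Hedged sketch: without `signal` in the fetch options, aborting the caller
// never cancelled the underlying Vertex AI request.
async function* streamChatSketch(
  url: string,
  body: unknown,
  signal: AbortSignal,
): AsyncGenerator<string> {
  const response = await fetch(url, {
    method: "POST",
    body: JSON.stringify(body),
    signal, // the line that makes the stream abortable
  });
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    yield decoder.decode(value, { stream: true });
  }
}
```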
Looks good. Nice that we were able to get rid of some of the uglier abort logic. Good tests for streamSse. I tested locally for a bit, but this is a PR to keep our eyes on while in pre-release.
Made the "Apply" streaming process cancelable so users can stop an in-progress code apply operation from the UI or via the reject diffs / cancel stream commands.

Approach:

- Handle the `499` (client cancelled request) error in the `streamResponse` function used by all streaming within Continue, and just cause a premature return (a sketch of this follows the list). This covers the cases where fetch is aborted after streaming starts.
- Handle `499` in the ~20 abortable non-streaming places - mostly odd APIs and `complete` requests throughout, where `response.json()` is used.
- Update the `fetch` (1.0.10) and `openai-adapters` (1.0.25) packages (DONE)
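A minimal sketch of the early-return behavior described in the first bullet (the real `streamResponse` lives in the `fetch` package; the shape below is assumed):

```ts
// Sketch: treat HTTP 499 ("client closed request") and local AbortErrors as
// a clean, premature end of stream instead of surfacing them as failures.
async function* streamResponseSketch(
  url: string,
  signal: AbortSignal,
): AsyncGenerator<string> {
  let response: Response;
  try {
    response = await fetch(url, { signal });
  } catch (e: any) {
    if (e?.name === "AbortError") return; // aborted before/while connecting
    throw e;
  }
  if (response.status === 499) return; // upstream saw the client cancel
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      yield decoder.decode(value, { stream: true });
    }
  } catch (e: any) {
    if (e?.name === "AbortError") return; // aborted mid-stream: just stop
    throw e;
  }
}
```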
Also:

- Show "Applying..." and the cancel keyboard shortcuts while an apply is in progress (same location as "Generating...")
To test:

- Update the package.jsons to use relative paths for the `fetch` package, then rebuild the `fetch` package (see the sketch after this list)
- Use a slower apply model
- Cancel streaming mid-apply
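For the first test step, the package.json change would look something like this (the package name and relative path are assumptions based on this repo's layout):

```json
{
  "dependencies": {
    "@continuedev/fetch": "file:../packages/fetch"
  }
}
```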