When using the deepseek API via the ai-sdk, developers might encounter issues with chunk errors due to incompatibilities in how the response data is streamed and processed on the client side. Specifically, deepseek sends responses in discrete chunks that aren't automatically aggregated by the default ai-sdk implementation, resulting in incomplete or corrupted data processing. This incompatibility necessitates a custom fetch mechanism that properly reads, decodes, and filters the incoming data chunks so that they are correctly combined into a complete response, ensuring compatibility with the ai-sdk and preventing errors during runtime.
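To illustrate the aggregation problem, here is a minimal sketch (the split payload below is hypothetical): a JSON message delivered across two chunks cannot be parsed chunk-by-chunk, only after the pieces are combined.

```typescript
// Hypothetical payload split across two stream chunks
const chunks = ['{"role":"assist', 'ant","content":"hi"}'];

function tryParse(s: string): unknown | null {
  try {
    return JSON.parse(s);
  } catch {
    return null;
  }
}

console.log(tryParse(chunks[0]));       // null — an incomplete chunk is not valid JSON
console.log(tryParse(chunks.join(''))); // parses once the chunks are aggregated
```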
customFetch: This function intercepts the fetch call. It checks if the response header indicates a streaming response (via "x-vercel-ai-data-stream"). If so, it creates a reader to process the response body as a stream. It decodes each chunk, filters out any unwanted prefixes such as "f:", "e:", or "d:" from each line, re-encodes the cleaned text, and then reassembles a complete response stream. This mechanism ensures that erroneous prefixes in chunked data from deepseek are removed before further processing.
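The prefix filtering can be sketched as a pure function (the sample chunk below is hypothetical; the real stream framing may differ):

```typescript
// Strip "f:", "e:", or "d:" prefixes from the start of each line in a chunk
function stripStreamPrefixes(chunk: string): string {
  return chunk
    .split('\n')
    .map((line) => line.replace(/^[fed]:/, ''))
    .join('\n');
}

const raw = 'f:{"messageId":"abc"}\n0:"Hello"\nd:{"finishReason":"stop"}';
console.log(stripStreamPrefixes(raw));
// {"messageId":"abc"}
// 0:"Hello"
// {"finishReason":"stop"}
```

Note that this operates on whatever text arrives in a single chunk; a prefix split across a chunk boundary would escape the filter, so a production version might buffer up to the last newline before filtering.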
adaptMessages: This helper function normalizes messages coming from the API. It ensures that each message has a proper ID, coerces content into a string (joining arrays if needed), and maps message roles correctly. This serves to resolve type incompatibilities and standardize the message format for the UI.
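The normalization can be sketched in isolation like this (the ID generator is stubbed with a counter for illustration, standing in for generateUUID):

```typescript
type ChatMessage = { id: string; role: string; content: string };

// Stub ID generator standing in for generateUUID
let counter = 0;
const fakeUUID = () => `id-${++counter}`;

function normalize(messages: any[]): ChatMessage[] {
  if (!Array.isArray(messages)) return [];
  return messages.map((msg) => {
    // Null entries become empty system messages with a fresh ID
    if (!msg) return { id: fakeUUID(), role: 'system', content: '' };
    return {
      id: msg.id ?? fakeUUID(),
      role: msg.role ?? 'assistant',
      // Join array content (multi-part messages) into a single string
      content: Array.isArray(msg.content) ? msg.content.join('') : String(msg.content ?? ''),
    };
  });
}

console.log(normalize([{ role: 'user', content: ['Hel', 'lo'] }, null]));
// → ID assigned, array content joined to "Hello"; null becomes an empty system message
```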
handleSubmitWithErrorHandling: This function wraps the original handleSubmit provided by the useChat hook. It resets any stream error state before submitting the request. In case an error is detected during streaming, the error state is set (and shown via a toast notification), ensuring that the UI reflects any issues with chunked responses from deepseek.
setMessages Wrapper: A custom wrapper around the raw setMessages function ensures that any updates to messages go through adaptMessages for type normalization. This guarantees that all message updates are compatible with the expected message format.
useChat Hook Integration: Within the Chat component, the useChat hook is configured with custom callbacks (onFinish, onError) and uses the customFetch. The onFinish callback clears any stream errors and triggers an SWR cache update, while onError captures and sets stream errors to reflect them in the state. The hasError state is then passed as a prop to the MultimodalInput component, allowing it to react accordingly.
syptor changed the title from "Deepseek Stream stream from ai-sdk/deepseek to ai-sdk/react in chat.tsx." to "Deepseek Stream error from ai-sdk/deepseek to ai-sdk/react in chat.tsx." on Mar 13, 2025.
```tsx
'use client';

import type { Attachment, Message, ChatRequestOptions } from 'ai';
import { useChat } from '@ai-sdk/react';
import { useState, useEffect, useCallback } from 'react';
import useSWR, { useSWRConfig } from 'swr';
import { ChatHeader } from '@/components/chat-header';
import type { Vote } from '@/lib/db/schema';
import { fetcher, generateUUID } from '@/lib/utils';
import { Artifact } from './artifact';
import { MultimodalInput } from './multimodal-input';
import { Messages } from './messages';
import { VisibilityType } from './visibility-selector';
import { useArtifactSelector } from '@/hooks/use-artifact';
import { toast } from 'sonner';

// Optimized message adapter to handle type incompatibilities
const adaptMessages = (messages: any[]): Message[] => {
  if (!Array.isArray(messages)) return [];
  return messages.map((msg) => {
    if (!msg) return { id: generateUUID(), role: 'system', content: '' } as Message;
    return {
      ...msg,
      id: msg.id ?? generateUUID(),
      // Join array content (multi-part messages) into a single string
      content: Array.isArray(msg.content) ? msg.content.join('') : String(msg.content ?? ''),
    } as Message;
  });
};

// Optimized fetch function that filters problematic "f:", "e:", and "d:"
// line prefixes out of the chunked data stream
const customFetch = async (input: RequestInfo | URL, init?: RequestInit) => {
  const response = await fetch(input, init);
  if (!response.headers.get('x-vercel-ai-data-stream')) return response;

  const reader = response.body?.getReader();
  if (!reader) return response;

  const decoder = new TextDecoder();
  const encoder = new TextEncoder();

  return new Response(
    new ReadableStream({
      async start(controller) {
        try {
          while (true) {
            const { done, value } = await reader.read();
            if (done) break;
            // Decode the chunk, strip the prefixes, and re-encode
            const text = decoder.decode(value, { stream: true });
            const cleaned = text
              .split('\n')
              .map((line) => line.replace(/^[fed]:/, ''))
              .join('\n');
            controller.enqueue(encoder.encode(cleaned));
          }
          controller.close();
        } catch (err) {
          controller.error(err);
        }
      },
    }),
    { status: response.status, headers: response.headers },
  );
};

export function Chat({
  id,
  initialMessages,
  selectedChatModel,
  selectedVisibilityType,
  isReadonly,
  isGuest = false,
}: {
  id: string;
  initialMessages: Array<Message>;
  selectedChatModel: string;
  selectedVisibilityType: VisibilityType;
  isReadonly: boolean;
  isGuest?: boolean;
}) {
  const { mutate } = useSWRConfig();
  const [streamError, setStreamError] = useState<Error | null>(null);
  const [attachments, setAttachments] = useState<Array<Attachment>>([]);
  const isArtifactVisible = useArtifactSelector((state) => state.isVisible);

  const {
    messages: rawMessages,
    setMessages: rawSetMessages,
    handleSubmit,
    input,
    setInput,
    append,
    isLoading,
    stop,
    reload,
    error,
    data: responseData,
  } = useChat({
    id,
    body: { id, selectedChatModel, isGuest },
    initialMessages,
    sendExtraMessageFields: true,
    generateId: generateUUID,
    fetch: customFetch,
    onFinish: () => {
      if (!isGuest) mutate('/api/chats');
      setStreamError(null);
    },
    onError: (err) => {
      setStreamError(err instanceof Error ? err : new Error(String(err)));
      toast.error('An error occurred, please try again');
    },
  });

  // Use adapted messages to handle type incompatibilities
  const messages = adaptMessages(rawMessages || []);

  // Create a wrapper for setMessages that handles type conversion
  const setMessages = useCallback(
    (messagesOrFn: Message[] | ((messages: Message[]) => Message[])) => {
      if (typeof messagesOrFn === 'function') {
        rawSetMessages((prevMessages) => {
          const adaptedPrevMessages = adaptMessages(prevMessages);
          return messagesOrFn(adaptedPrevMessages);
        });
      } else {
        rawSetMessages(messagesOrFn);
      }
    },
    [rawSetMessages],
  );

  // Only fetch votes if not a guest
  const { data: votes } = useSWR<Array<Vote>>(
    !isGuest ? `/api/vote?chatId=${id}` : null,
    fetcher,
  );

  // Submit handler that clears any previous stream error before submitting
  const handleSubmitWithErrorHandling = useCallback(
    (
      event?: { preventDefault?: () => void },
      chatRequestOptions?: ChatRequestOptions,
    ) => {
      setStreamError(null);
      return handleSubmit(event, chatRequestOptions);
    },
    [handleSubmit],
  );

  return (
    <>
      {/* ChatHeader, Messages, MultimodalInput, and Artifact are rendered
          here; the JSX was truncated in the original report */}
    </>
  );
}
```