Feature Description
I am asking users to enter some text and submit it, and I am using the Vercel AI SDK to generate a ChatGPT response to that text. However, I also want to check whether the text contains any profanity. How can I achieve this?
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
// The `openai` export from '@ai-sdk/openai' is a model provider and does not
// expose the Moderation API, so the official `openai` client is used for that.
import OpenAI from 'openai';

const client = new OpenAI();

const checkProfanity = async (text: string) => {
  try {
    const response = await client.moderations.create({
      input: text,
    });
    // In the v4 openai client, results are on the response itself (no `.data`).
    return response.results[0].flagged;
  } catch (error) {
    console.error("Error checking profanity:", error);
    return false;
  }
};

const isProfane = await checkProfanity(text);
if (isProfane) {
  throw new Error("The text contains inappropriate content. Please modify your text and try again.");
}
const aiResponse = await generateText({
  model: openai('gpt-3.5-turbo'),
  prompt: `my prompt comes here....`,
});
Is there any way of achieving this functionality?
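One option, if an extra moderation round-trip per submission is undesirable, is to run a lightweight local pre-filter before calling any API. This is a minimal sketch, not a replacement for the Moderation API; the blocklist entries here are placeholders, and a real deployment would use a maintained word list or a dedicated moderation service:

```typescript
// Minimal local pre-filter: checks submitted text against a small blocklist
// before making any API call. The entries below are hypothetical placeholders.
const BLOCKLIST = ["badword", "slur"];

const containsBlockedWord = (text: string): boolean => {
  // Split on non-word characters so punctuation does not hide matches.
  const words = text.toLowerCase().split(/\W+/);
  return words.some((word) => BLOCKLIST.includes(word));
};
```

For example, `containsBlockedWord("This has a BADWORD!")` returns `true`, while clean text returns `false`; flagged submissions can then be rejected without ever reaching the model.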
Use Case
No response
Additional context
No response