
Commit 552627a

AlexeyMatveev686 authored and committed
[chatGPT] Update translations.
1 parent 17d3e4a commit 552627a


5 files changed (+5 lines, -5 lines)


sdkjs-plugins/content/openai/index.html

+1 -1

@@ -109,7 +109,7 @@
 <div id="modal_error" class="div_row hidden lb_err">
 <label class="i18n">This model can only process maximum of</label>
 <label id="modal_err_len">4000</label>
-<label class="i18n">tokens in a single request, please reduce your prompt or response length.</label>
+<label class="i18n">tokens in a single request, please reduce your prompt or maximum length.</label>
 </div>
 </div>
 <div id="div_tokens" class="form-control div_tokens">

sdkjs-plugins/content/openai/translations/cs-CS.json

+1 -1

@@ -22,5 +22,5 @@
 "Up to" : "",
 "tokens in response." : "tokenů v reakci.",
 "This model can only process maximum of" : "Tento model může zpracovat maximálně",
-"tokens in a single request, please reduce your prompt or response length." : "tokenů v jednom požadavku, zkraťte prosím délku výzvy nebo odpovědi."
+"tokens in a single request, please reduce your prompt or maximum length." : "tokenů v jednom požadavku, prosím snížit výzvu nebo maximální délku."
 }

sdkjs-plugins/content/openai/translations/de-DE.json

+1 -1

@@ -22,5 +22,5 @@
 "Up to" : "Bis zu",
 "tokens in response." : "Token als Antwort.",
 "This model can only process maximum of" : "Dieses Modell kann nur maximal",
-"tokens in a single request, please reduce your prompt or response length." : "Token in einer einzigen Anfrage verarbeiten.Bitte reduzieren Sie Ihre Aufforderungs- oder Antwortlänge."
+"tokens in a single request, please reduce your prompt or maximum length." : "Token in einer einzigen Anfrage verarbeiten.Bitte reduzieren Sie Ihre Aufforderung oder maximale Länge."
 }

sdkjs-plugins/content/openai/translations/es-ES.json

+1 -1

@@ -22,5 +22,5 @@
 "Up to" : "Hasta",
 "tokens in response." : "fichas en respuesta.",
 "This model can only process maximum of" : "Este modelo solo puede procesar un máximo de",
-"tokens in a single request, please reduce your prompt or response length." : "tokens en una sola solicitud, reduzca la duración de su solicitud o respuesta."
+"tokens in a single request, please reduce your prompt or maximum length." : "tokens en una sola solicitud, reduzca su solicitud o la longitud máxima."
 }

sdkjs-plugins/content/openai/translations/fr-FR.json

+1 -1

@@ -22,5 +22,5 @@
 "Up to" : "Jusqu'à",
 "tokens in response." : "jetons en réponse.",
 "This model can only process maximum of" : "Ce modèle ne peut traiter qu'un maximum de",
-"tokens in a single request, please reduce your prompt or response length." : "jetons dans une seule demande, veuillez réduire la longueur de votre invite ou de votre réponse."
+"tokens in a single request, please reduce your prompt or maximum length." : "jetons dans une seule demande, veuillez réduire la longueur de votre invite ou longueur maximale."
 }
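Note why index.html and every translations/*.json file change in lockstep: the JSON keys are the exact English label texts, and the labels carry the class "i18n", which suggests the English string itself is used as the lookup key at runtime. The sketch below is a minimal, hypothetical illustration of that pattern only; applyTranslations and its usage are assumptions for illustration, not the plugin's actual code. It shows why renaming the English string in the markup without renaming the matching key in each locale file would silently leave that label untranslated.

// Hypothetical sketch (assumed names): localize elements whose class is "i18n"
// by treating their English text content as the key into a locale map shaped
// like the translations/<locale>.json files in this commit.
type TranslationMap = Record<string, string>;

function applyTranslations(root: Document, map: TranslationMap): void {
  for (const el of Array.from(root.querySelectorAll<HTMLElement>(".i18n"))) {
    const key = (el.textContent ?? "").trim(); // e.g. "This model can only process maximum of"
    const translated = map[key];
    // A missing key, or an empty value such as "Up to" : "" in cs-CS.json,
    // leaves the English text in place.
    if (translated) {
      el.textContent = translated;
    }
  }
}

// Usage example with the fr-FR entry changed by this commit:
const frFR: TranslationMap = {
  "tokens in a single request, please reduce your prompt or maximum length.":
    "jetons dans une seule demande, veuillez réduire la longueur de votre invite ou longueur maximale.",
};
// applyTranslations(document, frFR);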
