
Commit 0255e90

AlexeyMatveev686 authored and committed
Merge branch 'feature/update-translation' into master
# Conflicts:
#	sdkjs-plugins/content/openai/index.html
#	sdkjs-plugins/content/openai/translations/ru-RU.json
2 parents: e93ec03 + 552627a

6 files changed: +6 -6 lines changed


sdkjs-plugins/content/openai/index.html

+1 -1

@@ -62,7 +62,7 @@
 <div id="modal_error" class="div_row hidden lb_err">
 <label class="i18n">This model can only process maximum of</label>
 <label id="modal_err_len">4000</label>
-<label class="i18n">tokens in a single request, please reduce your prompt or response length.</label>
+<label class="i18n">tokens in a single request, please reduce your prompt or maximum length.</label>
 </div>
 <div id="modal_link" class="div_row hidden">
 <label id="lb_more" class="link i18n">Learn More</label>

sdkjs-plugins/content/openai/translations/cs-CS.json

+1 -1

@@ -24,5 +24,5 @@
 "Up to" : "",
 "tokens in response." : "tokenů v reakci.",
 "This model can only process maximum of" : "Tento model může zpracovat maximálně",
-"tokens in a single request, please reduce your prompt or response length." : "tokenů v jednom požadavku, zkraťte prosím délku výzvy nebo odpovědi."
+"tokens in a single request, please reduce your prompt or maximum length." : "tokenů v jednom požadavku, prosím snížit výzvu nebo maximální délku."
 }

sdkjs-plugins/content/openai/translations/de-DE.json

+1 -1

@@ -24,5 +24,5 @@
 "Up to" : "Bis zu",
 "tokens in response." : "Token als Antwort.",
 "This model can only process maximum of" : "Dieses Modell kann nur maximal",
-"tokens in a single request, please reduce your prompt or response length." : "Token in einer einzigen Anfrage verarbeiten.Bitte reduzieren Sie Ihre Aufforderungs- oder Antwortlänge."
+"tokens in a single request, please reduce your prompt or maximum length." : "Token in einer einzigen Anfrage verarbeiten.Bitte reduzieren Sie Ihre Aufforderung oder maximale Länge."
 }

sdkjs-plugins/content/openai/translations/es-ES.json

+1 -1

@@ -24,5 +24,5 @@
 "Up to" : "Hasta",
 "tokens in response." : "fichas en respuesta.",
 "This model can only process maximum of" : "Este modelo solo puede procesar un máximo de",
-"tokens in a single request, please reduce your prompt or response length." : "tokens en una sola solicitud, reduzca la duración de su solicitud o respuesta."
+"tokens in a single request, please reduce your prompt or maximum length." : "tokens en una sola solicitud, reduzca su solicitud o la longitud máxima."
 }

sdkjs-plugins/content/openai/translations/fr-FR.json

+1 -1

@@ -24,5 +24,5 @@
 "Up to" : "Jusqu'à",
 "tokens in response." : "jetons en réponse.",
 "This model can only process maximum of" : "Ce modèle ne peut traiter qu'un maximum de",
-"tokens in a single request, please reduce your prompt or response length." : "jetons dans une seule demande, veuillez réduire la longueur de votre invite ou de votre réponse."
+"tokens in a single request, please reduce your prompt or maximum length." : "jetons dans une seule demande, veuillez réduire la longueur de votre invite ou longueur maximale."
 }

sdkjs-plugins/content/openai/translations/ru-RU.json

+1 -1

@@ -24,5 +24,5 @@
 "Up to" : "Вплоть до",
 "tokens in response." : "токенов в ответе.",
 "This model can only process maximum of" : "Эта модель может обрабатывать только",
-"tokens in a single request, please reduce your prompt or response length." : "токенов в одном запросе, пожалуйста уменьшите запрос или максимальное количество токенов."
+"tokens in a single request, please reduce your prompt or maximum length." : "токенов в одном запросе, пожалуйста, уменьшите запрос или максимальное количество токенов."
 }
