When I tried to use Groq, I ran into the following error:

Provider List: https://docs.litellm.ai/docs/providers
20:33:02 - LiteLLM:ERROR: main.py:370 - litellm.acompletion(): Exception occurred - Error code: 400 - {'error': {'message': 'The model llama-3.1-70b-versatile has been decommissioned and is no longer supported. Please refer to https://console.groq.com/docs/deprecations for a recommendation on which model to use instead.', 'type': 'invalid_request_error', 'code': 'model_decommissioned'}}
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use litellm.set_verbose=True.
Traceback (most recent call last):
  File "/workspace/.venv/lib/python3.11/site-packages/litellm/llms/openai.py", line 942, in async_streaming
    response = await openai_aclient.chat.completions.create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
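For anyone hitting the same thing, here is a minimal sketch of a call that reproduces the error, with LiteLLM's verbose logging enabled as the error message suggests. The prompt and the asyncio wrapper are only illustrative, and a GROQ_API_KEY must be set in the environment.

```python
import asyncio
import litellm

litellm.set_verbose = True  # verbose debug output, as the LiteLLM error message suggests

async def main():
    # "groq/llama-3.1-70b-versatile" is the decommissioned model from the error above,
    # so this call fails with a 400 and code 'model_decommissioned'.
    response = await litellm.acompletion(
        model="groq/llama-3.1-70b-versatile",
        messages=[{"role": "user", "content": "hello"}],  # illustrative prompt
    )
    print(response)

asyncio.run(main())
```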
Changing the ChatModel.LLAMA_3_70B entry in src/backend/constants.py from "groq/llama-3.1-70b-versatile" to "groq/llama-3.3-70b-versatile" fixed it.
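For context, a rough sketch of what that edit looks like; the enum definition and the name of the mapping dict are assumptions for illustration, and only the ChatModel.LLAMA_3_70B entry and the file path src/backend/constants.py come from the issue.

```python
from enum import Enum

class ChatModel(str, Enum):  # assumed shape of the enum in src/backend/constants.py
    LLAMA_3_70B = "llama-3-70b"

# Assumed mapping name; only the LLAMA_3_70B entry is taken from the issue.
MODEL_MAPPINGS = {
    # ChatModel.LLAMA_3_70B: "groq/llama-3.1-70b-versatile",  # old value, decommissioned by Groq
    ChatModel.LLAMA_3_70B: "groq/llama-3.3-70b-versatile",    # replacement per Groq's deprecation docs
}
```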
The error response from Groq's API provides the solution:
1. Model Decommissioning: The API states that "llama-3.1-70b-versatile" has been decommissioned.
2. Replacement Recommendation: It advises checking Groq's deprecation documentation for recommended alternatives: https://console.groq.com/docs/deprecations.
3. The Fix: As the user andpalme found, changing the model to "groq/llama-3.3-70b-versatile" resolved the issue.