
llama-3.1-70b-versatile has been decommissioned #107

Open
Strandpalme opened this issue Feb 12, 2025 · 1 comment

Comments

@Strandpalme

When I tried to use Groq, I ran into the following error:

Provider List: https://docs.litellm.ai/docs/providers
20:33:02 - LiteLLM:ERROR: main.py:370 - litellm.acompletion(): Exception occured - Error code: 400 - {'error': {'message': 'The model llama-3.1-70b-versatile has been decommissioned and is no longer supported. Please refer to https://console.groq.com/docs/deprecations for a recommendation on which model to use instead.', 'type': 'invalid_request_error', 'code': 'model_decommissioned'}}
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True`.
Traceback (most recent call last):
  File "/workspace/.venv/lib/python3.11/site-packages/litellm/llms/openai.py", line 942, in async_streaming
    response = await openai_aclient.chat.completions.create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
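For reference, the same 400 response can be reproduced outside the app with a bare LiteLLM streaming call against the decommissioned model name. This is only a minimal sketch: it assumes `GROQ_API_KEY` is set in the environment and is not the backend's actual call site.

```python
import asyncio

import litellm

# Verbose logging, as suggested by the LiteLLM error output above.
litellm.set_verbose = True


async def main() -> None:
    # Streaming request against the decommissioned Groq model; this is the
    # call path (litellm.acompletion -> async_streaming) seen in the traceback.
    stream = await litellm.acompletion(
        model="groq/llama-3.1-70b-versatile",
        messages=[{"role": "user", "content": "hello"}],
        stream=True,
    )
    async for chunk in stream:
        print(chunk)


asyncio.run(main())
```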

Changing `ChatModel.LLAMA_3_70B: "groq/llama-3.1-70b-versatile",` to `ChatModel.LLAMA_3_70B: "groq/llama-3.3-70b-versatile",` in `src/backend/constants.py` fixed it.
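For context, a minimal sketch of what that mapping looks like after the change. The exact enum and mapping layout in `constants.py` is assumed here, not copied from the repo; the key point is the Groq model string.

```python
# src/backend/constants.py (sketch; the real enum/mapping layout may differ)
from enum import Enum


class ChatModel(str, Enum):
    LLAMA_3_70B = "llama-3-70b"


# Hypothetical mapping name, for illustration only.
MODEL_MAPPINGS: dict[ChatModel, str] = {
    # Old value, decommissioned by Groq:
    # ChatModel.LLAMA_3_70B: "groq/llama-3.1-70b-versatile",
    # Replacement that resolves the 400 model_decommissioned error:
    ChatModel.LLAMA_3_70B: "groq/llama-3.3-70b-versatile",
}
```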

@mauseoluwasegun

The error response from Groq's API already points to the solution:

1. Model Decommissioning: The API states that `llama-3.1-70b-versatile` has been decommissioned.
2. Replacement Recommendation: It advises checking Groq's deprecation documentation for recommended alternatives: https://console.groq.com/docs/deprecations.
3. The Fix: As @Strandpalme found, changing the model to `groq/llama-3.3-70b-versatile` resolved the issue; a quick verification sketch follows below.
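To check the replacement end to end, a direct LiteLLM call can be used. This assumes a valid `GROQ_API_KEY` in the environment and is independent of the app's own code path.

```python
import litellm

# Enable verbose logging, as the original error output suggests.
litellm.set_verbose = True

# Non-streaming call against the replacement model recommended in
# Groq's deprecation docs; requires GROQ_API_KEY to be exported.
response = litellm.completion(
    model="groq/llama-3.3-70b-versatile",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)
```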
