Unknown error was encountered while running the model #5800

Closed

gurwinderintel opened this issue Sep 20, 2024 · 1 comment

@gurwinderintel

Ubuntu 24.04 LTS, NVIDIA RTX 2070
Ollama running DeepSeek-Coder-V2

Logs:

14:35:44 - LiteLLM Proxy:ERROR: _common.py:120 - Giving up chat_completion(...) after 1 tries (litellm.proxy._types.ProxyException)
INFO: 127.0.0.1:38880 - "POST /chat/completions HTTP/1.1" 500 Internal Server Error
14:36:27 - LiteLLM Proxy:ERROR: proxy_server.py:3313 - litellm.proxy.proxy_server.chat_completion(): Exception occured - litellm.APIConnectionError: OllamaException - {"error":"an unknown error was encountered while running the model "}
Traceback (most recent call last):
  File "/home/gursingh/cuda/lib/python3.12/site-packages/litellm/main.py", line 425, in acompletion
    response = await init_response
               ^^^^^^^^^^^^^^^^^^^
  File "/home/gursingh/cuda/lib/python3.12/site-packages/litellm/llms/ollama.py", line 495, in ollama_acompletion
    raise e # don't use verbose_logger.exception, if exception is raised
    ^^^^^^^
  File "/home/gursingh/cuda/lib/python3.12/site-packages/litellm/llms/ollama.py", line 440, in ollama_acompletion
    raise OllamaError(status_code=resp.status, message=text)
litellm.llms.ollama.OllamaError: {"error":"an unknown error was encountered while running the model "}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/gursingh/cuda/lib/python3.12/site-packages/litellm/proxy/proxy_server.py", line 3202, in chat_completion
    responses = await llm_responses
                ^^^^^^^^^^^^^^^^^^^
  File "/home/gursingh/cuda/lib/python3.12/site-packages/litellm/utils.py", line 1595, in wrapper_async
    raise e
  File "/home/gursingh/cuda/lib/python3.12/site-packages/litellm/utils.py", line 1415, in wrapper_async
    result = await original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/gursingh/cuda/lib/python3.12/site-packages/litellm/main.py", line 447, in acompletion
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File "/home/gursingh/cuda/lib/python3.12/site-packages/litellm/utils.py", line 8196, in exception_type
    raise e
  File "/home/gursingh/cuda/lib/python3.12/site-packages/litellm/utils.py", line 8161, in exception_type
    raise APIConnectionError(
litellm.exceptions.APIConnectionError: litellm.APIConnectionError: OllamaException - {"error":"an unknown error was encountered while running the model "}
14:36:27 - LiteLLM Proxy:ERROR: _common.py:120 - Giving up chat_completion(...) after 1 tries (litellm.proxy._types.ProxyException)
INFO: 127.0.0.1:38880 - "POST /chat/completions HTTP/1.1" 500 Internal Server Error
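
A minimal sketch for reproducing the same call path without the proxy in between (assumptions: Ollama is serving on its default port 11434 and the model tag is deepseek-coder-v2; adjust both to your setup). The "ollama/" prefix routes the request through the same litellm Ollama handler that raises the OllamaError shown above:

import litellm
from litellm.exceptions import APIConnectionError

try:
    # Same code path (acompletion/completion -> ollama handler) seen in the traceback.
    response = litellm.completion(
        model="ollama/deepseek-coder-v2",      # assumed model tag; match `ollama list`
        api_base="http://localhost:11434",     # assumed default Ollama endpoint
        messages=[{"role": "user", "content": "write hello world in python"}],
    )
    print(response.choices[0].message.content)
except APIConnectionError as e:
    # Same exception type the proxy surfaces as a 500 in the log above.
    print("Ollama-side failure:", e)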

@krrishdholakia
Contributor

  File "/home/gursingh/cuda/lib/python3.12/site-packages/litellm/llms/ollama.py", line 440, in ollama_acompletion
    raise OllamaError(status_code=resp.status, message=text)
litellm.llms.ollama.OllamaError: {"error":"an unknown error was encountered while running the model "}

This error is coming from Ollama, not LiteLLM, @gurwinderintel.

Check your Ollama server logs to debug this.
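
A hedged sketch of one way to confirm this, assuming the default Ollama endpoint http://localhost:11434 and a model tag of deepseek-coder-v2 (both assumptions): send a request straight to Ollama's /api/generate endpoint with LiteLLM out of the path. If the same {"error":"an unknown error was encountered while running the model"} body comes back, the failure is inside Ollama itself.

import json
import urllib.error
import urllib.request

# On a typical Linux systemd install, `journalctl -u ollama -f` in another
# terminal usually shows the server-side details while this request runs.
payload = {
    "model": "deepseek-coder-v2",   # assumed model tag; match the output of `ollama list`
    "prompt": "write hello world in python",
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",   # assumed default Ollama endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(req, timeout=300) as resp:
        print(json.loads(resp.read())["response"])
except urllib.error.HTTPError as e:
    # A 500 with the same error body reproduces the problem with no LiteLLM involved.
    print(e.code, e.read().decode())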
