Environment: Ubuntu 24.04 LTS, NVIDIA RTX 2070 (8 GB VRAM)
Model: deepseek-coder-v2 served through Ollama, proxied by LiteLLM
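For context, the failing call is an OpenAI-compatible POST to the proxy's /chat/completions route, as the access log below shows. A minimal client-side reproduction might look like the following sketch; the proxy port (4000, the LiteLLM default) and the model alias are assumptions, since neither appears in this report:

```python
# Minimal sketch of the request that triggers the 500 below.
# Assumptions: the LiteLLM proxy listens on its default port 4000 and the
# model alias matches the Ollama tag; neither is shown in this report.
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:4000",  # LiteLLM proxy, OpenAI-compatible API
    api_key="sk-anything",             # placeholder; the proxy may not check it
)

resp = client.chat.completions.create(
    model="deepseek-coder-v2",
    messages=[{"role": "user", "content": "Write hello world in Python."}],
)
print(resp.choices[0].message.content)
```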
Logs:
14:35:44 - LiteLLM Proxy:ERROR: _common.py:120 - Giving up chat_completion(...) after 1 tries (litellm.proxy._types.ProxyException)
INFO: 127.0.0.1:38880 - "POST /chat/completions HTTP/1.1" 500 Internal Server Error
14:36:27 - LiteLLM Proxy:ERROR: proxy_server.py:3313 - litellm.proxy.proxy_server.chat_completion(): Exception occured - litellm.APIConnectionError: OllamaException - {"error":"an unknown error was encountered while running the model "}
Traceback (most recent call last):
File "/home/gursingh/cuda/lib/python3.12/site-packages/litellm/main.py", line 425, in acompletion
response = await init_response
^^^^^^^^^^^^^^^^^^^
File "/home/gursingh/cuda/lib/python3.12/site-packages/litellm/llms/ollama.py", line 495, in ollama_acompletion
raise e # don't use verbose_logger.exception, if exception is raised
^^^^^^^
File "/home/gursingh/cuda/lib/python3.12/site-packages/litellm/llms/ollama.py", line 440, in ollama_acompletion
raise OllamaError(status_code=resp.status, message=text)
litellm.llms.ollama.OllamaError: {"error":"an unknown error was encountered while running the model "}
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/gursingh/cuda/lib/python3.12/site-packages/litellm/proxy/proxy_server.py", line 3202, in chat_completion
responses = await llm_responses
^^^^^^^^^^^^^^^^^^^
File "/home/gursingh/cuda/lib/python3.12/site-packages/litellm/utils.py", line 1595, in wrapper_async
raise e
File "/home/gursingh/cuda/lib/python3.12/site-packages/litellm/utils.py", line 1415, in wrapper_async
result = await original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/gursingh/cuda/lib/python3.12/site-packages/litellm/main.py", line 447, in acompletion
raise exception_type(
^^^^^^^^^^^^^^^
File "/home/gursingh/cuda/lib/python3.12/site-packages/litellm/utils.py", line 8196, in exception_type
raise e
File "/home/gursingh/cuda/lib/python3.12/site-packages/litellm/utils.py", line 8161, in exception_type
raise APIConnectionError(
litellm.exceptions.APIConnectionError: litellm.APIConnectionError: OllamaException - {"error":"an unknown error was encountered while running the model "}
14:36:27 - LiteLLM Proxy:ERROR: _common.py:120 - Giving up chat_completion(...) after 1 tries (litellm.proxy._types.ProxyException)
INFO: 127.0.0.1:38880 - "POST /chat/completions HTTP/1.1" 500 Internal Server Error
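The traceback shows litellm's acompletion relaying an OllamaError verbatim, so the 500 originates in Ollama rather than in the proxy. That particular Ollama message usually means the model runner crashed, and on an 8 GB card a 16B model is a plausible out-of-memory candidate. One way to confirm is to bypass the proxy and exercise the same litellm-to-Ollama path directly; a sketch, assuming Ollama runs on its default port 11434:

```python
# Bypass the proxy and call Ollama through litellm directly.
# If this raises the same OllamaError, the fault is on the Ollama side
# (e.g. the runner crashing, possibly out-of-memory on an 8 GB GPU),
# not in the proxy layer.
# Assumption: Ollama listens on its default port 11434.
import litellm

try:
    resp = litellm.completion(
        model="ollama/deepseek-coder-v2",   # same provider route the proxy used
        api_base="http://localhost:11434",
        messages=[{"role": "user", "content": "Write hello world in Python."}],
    )
    print(resp.choices[0].message.content)
except litellm.exceptions.APIConnectionError as e:
    print("Ollama-side failure:", e)
```

If this fails the same way, the Ollama server logs at the matching timestamp should show why the runner exited.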