
LiteLLM Minor Fixes and Improvements (11/09/2024) #5634

Merged
merged 16 commits into main from litellm_dev_11_09_2024 on Sep 12, 2024

Conversation

@krrishdholakia (Contributor) commented Sep 11, 2024

Title

  • fix(caching.py): set ttl for async_increment cache
  • fix(router.py): allow setting retry policy via config.yaml s/o @arvinxx @eladsegal
  • fix(router.py): don't cooldown single deployments (prevents cooldown errors when just 1 model set for a 'model_name') s/o @dkondoetsy
  • fix(litellm_pre_call_utils.py): fix dynamic key logging when team id is set s/o @oz-elhassid
  • fix(secret_managers/main.py): load environment variables correctly s/o @oz-elhassid
  • feat(spend_tracking_utils.py): support logging additional usage params - e.g. prompt caching values for deepseek s/o @arvinxx
  • feat(user_api_key_auth.py): support setting allowed email domains on jwt tokens s/o @andrewbolster

Relevant issues

Fixes #5609

Type

🆕 New Feature
🐛 Bug Fix
🧹 Refactoring
📖 Documentation
🚄 Infrastructure
✅ Test

Changes

[REQUIRED] Testing - Attach a screenshot of any new tests passing locally

If UI changes, send a screenshot/GIF of working UI fixes

Fixes an issue where the TTL for the Redis client was not being set on increment_cache.

Fixes #5609
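The TTL fix described above can be sketched with a plain in-memory counter (an illustrative stand-in, not LiteLLM's actual caching.py code): the bug was an increment that never attached an expiry, so per-window counters accumulated forever.

```python
import time

class InMemoryCache:
    """Illustrative counter cache: increments a key AND attaches a TTL,
    which is the behavior the fix adds for the Redis increment path."""

    def __init__(self):
        self.store = {}   # key -> counter value
        self.expiry = {}  # key -> absolute expiry timestamp (seconds)

    def increment(self, key, amount=1, ttl=60):
        now = time.time()
        # Evict the key if its TTL has elapsed.
        if key in self.expiry and self.expiry[key] <= now:
            self.store.pop(key, None)
            self.expiry.pop(key, None)
        new_value = self.store.get(key, 0) + amount
        self.store[key] = new_value
        # The reported bug: without this step, counters such as TPM/RPM
        # trackers never expire and accumulate indefinitely.
        if key not in self.expiry:
            self.expiry[key] = now + ttl
        return new_value
```

With Redis itself, the equivalent pattern is pairing the INCR with an EXPIRE on the same key.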

vercel bot commented Sep 11, 2024

The latest updates on your projects.

Name     Status    Updated (UTC)
litellm  ✅ Ready  Sep 12, 2024 5:36am

No point, as there's no other deployment to load-balance with.
@dkondoetsy

Thank you, Krrish!

@dkondoetsy

Will this patch resolve this error?

litellm.proxy.proxy_server.chat_completion(): Exception occured - No deployments available for selected model, Try again in 60 seconds. Passed model=***. pre-call-checks=False, cooldown_list=['c9d48646f355a0ca27eecefe772512a8f609690dfbe31d4c60f5b591ebe1cd2b']

@krrishdholakia
Contributor Author

Will this patch resolve this error?

yes, precisely

…is set

Fixes issue where key logging would not be set if team metadata was not None
Fixes issue where os.environ/ was not being loaded correctly
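The os.environ/ convention mentioned in the last commit resolves config values from environment variables. A minimal sketch of that lookup pattern (resolve_secret and DEMO_API_KEY are illustrative names, not LiteLLM's actual API):

```python
import os

def resolve_secret(value):
    """Resolve values written as 'os.environ/VAR_NAME' by reading VAR_NAME
    from the process environment; anything else passes through unchanged."""
    prefix = "os.environ/"
    if isinstance(value, str) and value.startswith(prefix):
        return os.environ.get(value[len(prefix):])
    return value
```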
@krrishdholakia krrishdholakia merged commit 98c34a7 into main Sep 12, 2024
2 of 6 checks passed
@krrishdholakia krrishdholakia deleted the litellm_dev_11_09_2024 branch September 12, 2024 05:36
Successfully merging this pull request may close these issues.

[Bug]: TPM and RPM accumulate with Redis Cache as keys don't have TTL
2 participants