Insights: BerriAI/litellm
September 17, 2024 – September 24, 2024
Overview
15 Releases published by 1 person
-
v1.46.1.dev2
published
Sep 17, 2024 -
v1.46.2
published
Sep 18, 2024 -
v1.46.4
published
Sep 18, 2024 -
v1.46.5
published
Sep 18, 2024 -
v1.46.6
published
Sep 19, 2024 -
v1.46.7
published
Sep 20, 2024 -
v1.46.8
published
Sep 20, 2024 -
v1.47.0
published
Sep 21, 2024 -
v1.47.1
published
Sep 22, 2024 -
v1.47.2
published
Sep 23, 2024 -
v1.47.2.dev1
published
Sep 23, 2024 -
v1.47.2.dev4
published
Sep 23, 2024 -
v1.47.2.dev5
published
Sep 24, 2024 -
v1.48.0
published
Sep 24, 2024 -
v1.48.0.dev1
published
Sep 24, 2024
59 Pull requests merged by 9 people
-
[Admin UI - Proxy] Add Deepseek as a provider
#5857 merged
Sep 24, 2024 -
Update the dockerignore file
#5863 merged
Sep 24, 2024 -
LiteLLM Minor Fixes & Improvements (09/23/2024)
#5842 merged
Sep 24, 2024 -
[Docker-Security Fix] - handle debian issue on docker builds
#5752 merged
Sep 24, 2024 -
[Feat] Admin UI - Add Service Accounts
#5855 merged
Sep 24, 2024 -
[Feat UI sso] store 'provider' in user metadata
#5856 merged
Sep 24, 2024 -
[Feat-Proxy] add service accounts backend
#5852 merged
Sep 23, 2024 -
[Feat] SSO - add `provider` in the OpenID field for custom sso
#5849 merged
Sep 23, 2024 -
[UI Fix] List all teams on UI when user is Admin
#5851 merged
Sep 23, 2024 -
[Testing-Proxy] Add E2E Admin UI testing
#5845 merged
Sep 23, 2024 -
feat(vertex): Use correct provider for response_schema support check
#5815 merged
Sep 22, 2024 -
Cost tracking improvements
#5828 merged
Sep 22, 2024 -
LiteLLM Minor Fixes & Improvements (09/21/2024)
#5819 merged
Sep 22, 2024 -
[Fix] virtual key auth checks on vertex ai pass through endpoints
#5827 merged
Sep 22, 2024 -
[fix-sso] Allow internal user viewer to view usage routes
#5825 merged
Sep 21, 2024 -
Fix premium user check on key creation
#5826 merged
Sep 21, 2024 -
[SSO-UI] Set new sso users as internal_view role users
#5824 merged
Sep 21, 2024 -
[Feat] Allow setting custom arize endpoint
#5709 merged
Sep 21, 2024 -
[Feat] Prometheus - show status code and class type on prometheus
#5806 merged
Sep 21, 2024 -
[Feat] Add testing for prometheus failure metrics
#5823 merged
Sep 21, 2024 -
[Feat] Allow setting `supports_vision` for Custom OpenAI endpoints + Added testing
#5821 merged
Sep 21, 2024 -
Litellm disable keys
#5814 merged
Sep 21, 2024 -
Fixed DeepSeek input and output tokens
#5718 merged
Sep 21, 2024 -
Correct casing
#5817 merged
Sep 21, 2024 -
[Feat] Add fireworks AI embedding
#5812 merged
Sep 21, 2024 -
LiteLLM Minor Fixes & Improvements (09/20/2024)
#5807 merged
Sep 21, 2024 -
refactor: cleanup root of repo
#5813 merged
Sep 21, 2024 -
[Feat-Proxy] Allow using custom sso handler
#5809 merged
Sep 21, 2024 -
[Fix] log update_db statement in .debug() mode
#5810 merged
Sep 21, 2024 -
[Fix] Tag Based Routing not work with wildcard routing
#5805 merged
Sep 20, 2024 -
LiteLLM Minor Fixes & Improvements (09/19/2024)
#5793 merged
Sep 20, 2024 -
ui fix correct team not loading
#5804 merged
Sep 20, 2024 -
[Feat] Add Error Handling for /key/list endpoint
#5787 merged
Sep 20, 2024 -
[ Proxy - User Management]: If user assigned to a team don't show Default Team
#5791 merged
Sep 20, 2024 -
[Feat] Add proxy level prometheus metrics
#5789 merged
Sep 20, 2024 -
[Chore-Docs] fix curl on /get team info swagger
#5792 merged
Sep 19, 2024 -
test: replace gpt-3.5-turbo-0613 (deprecated model)
#5794 merged
Sep 19, 2024 -
[Feat] Add Azure gpt-35-turbo-0301 pricing
#5790 merged
Sep 19, 2024 -
LiteLLM Minor Fixes & Improvements (09/18/2024)
#5772 merged
Sep 19, 2024 -
[Feat] add Groq gemma2 9b pricing
#5788 merged
Sep 19, 2024 -
[Fix-Bedrock] use Bedrock converse for "meta.llama3-8b-instruct-v1:0", "meta.llama3-70b-instruct-v1:0"
#5729 merged
Sep 19, 2024 -
feat(prometheus_api.py): support querying prometheus metrics for all-up + key-level spend on UI
#5782 merged
Sep 19, 2024 -
[Feat- prometheus] track input and output tokens
#5780 merged
Sep 19, 2024 -
[Fix-Proxy] Enforce Virtual Key Auth on /vertex-ai/, /bedrock, passthrough endpoints
#5779 merged
Sep 18, 2024 -
[Chore-Proxy] enforce jwt auth as enterprise feature
#5770 merged
Sep 18, 2024 -
[Chore LiteLLM Proxy] enforce prometheus metrics as enterprise feature
#5769 merged
Sep 18, 2024 -
[Feat-Proxy] Add Azure Assistants API - Create Assistant, Delete Assistant Support
#5777 merged
Sep 18, 2024 -
[Prometheus] track requested model
#5774 merged
Sep 18, 2024 -
[Feat - GCS Bucket Logger] Use StandardLoggingPayload
#5771 merged
Sep 18, 2024 -
Additional Fixes (09/17/2024)
#5759 merged
Sep 18, 2024 -
LiteLLM Minor Fixes & Improvements (09/17/2024)
#5742 merged
Sep 18, 2024 -
[Feat] Log Request metadata on gcs bucket logging
#5743 merged
Sep 18, 2024 -
[Feat-Proxy-DataDog] Log Redis, Postgres Failure events on DataDog
#5750 merged
Sep 18, 2024 -
[Fix] o1-mini causes pydantic warnings on `reasoning_tokens`
#5754 merged
Sep 18, 2024 -
Bump next from 14.1.1 to 14.2.10 in /ui/litellm-dashboard
#5753 merged
Sep 18, 2024 -
Litellm fix router testing
#5748 merged
Sep 18, 2024 -
Fix hardcoding of schema in view check
#5749 merged
Sep 17, 2024
16 Pull requests opened by 8 people
-
Log assistants API calls to cloudwatch
#5761 opened
Sep 18, 2024 -
Merge: #5815- feat(vertex): Use correct provider for response_schema support check
#5829 opened
Sep 22, 2024 -
Add REST API examples to Vision documentation
#5844 opened
Sep 23, 2024 -
fix deserialization error when using gpt-4o-mini
#5848 opened
Sep 23, 2024 -
LiteLLM Minor Fixes & Improvements (09/23/2024) (#5842)
#5858 opened
Sep 24, 2024 -
Upgrade dependencies in dockerfile
#5862 opened
Sep 24, 2024 -
Update some of the python dependencies
#5864 opened
Sep 24, 2024 -
Update some of the python dependencies connected to llms
#5865 opened
Sep 24, 2024 -
Upgrade prism lib
#5866 opened
Sep 24, 2024 -
Upgrade python packages
#5867 opened
Sep 24, 2024 -
Upgrade poetry lock file
#5868 opened
Sep 24, 2024 -
Install curl to be used for AWS ECS health check
#5869 opened
Sep 24, 2024 -
Update litellm helm envconfigmap
#5872 opened
Sep 24, 2024 -
Add new Gemini models
#5874 opened
Sep 24, 2024 -
[Fix] OTEL - Don't log messages when callback settings disable message logging
#5875 opened
Sep 24, 2024
41 Issues closed by 8 people
-
[Feature]: Cloudflare AI Gateway support for Google Vertex AI
#3732 closed
Sep 24, 2024 -
[Bug]: Ollama provider missing api_key
#5832 closed
Sep 22, 2024 -
[Feature]: `supports_prompt_caching` property for LMs
#5776 closed
Sep 22, 2024 -
[Feature]: Add enable and disable features for the model and Virtual Keys
#5328 closed
Sep 22, 2024 -
[Bug]: Latest release broke virtual key creation
#5820 closed
Sep 21, 2024 -
[Bug]: Vertex gemini-pro-vision error using LiteLLM SDK
#5768 closed
Sep 21, 2024 -
[Feature]: Add Auto-discovery of "*" models when calling /models and /v1/model/info
#5818 closed
Sep 21, 2024 -
[Feature]: Add Fireworks AI Embedding
#5797 closed
Sep 21, 2024 -
[Bug]: `stream_options` with fake streaming
#5803 closed
Sep 21, 2024 -
Support CONFIG_FILE_PATH for proxy (Easier Azure container deployment)
#5744 closed
Sep 21, 2024 -
[Bug]: incorrect kwarg in Routing Strategies docs
#5808 closed
Sep 21, 2024 -
[Bug]: Break for google gemini
#5798 closed
Sep 21, 2024 -
[Bug]:
#5811 closed
Sep 21, 2024 -
[Bug]: Tag Based Routing not work with wildcard routing
#5801 closed
Sep 20, 2024 -
Unknown error was encountered while running the model
#5800 closed
Sep 20, 2024 -
[Bug]: Visiting "Usage" in the UI causes a server error
#5756 closed
Sep 20, 2024 -
[Proxy-UI]:For Each API Key show how many errors/success API calls
#2455 closed
Sep 20, 2024 -
[Proxy-UI]:For Each API Key show cache hits
#2454 closed
Sep 20, 2024 -
[11/03/2024 - 18/03/2024] New Models/Endpoints/Providers/Improvements
#2449 closed
Sep 20, 2024 -
[Feature]: ability to see how much capacity is remaining before hitting quota
#3323 closed
Sep 20, 2024 -
[Bug]: Prometheus monitors the number of requests collected in Azure OpenAI
#4617 closed
Sep 20, 2024 -
[Feature]: Proxy - User Management: If user assigned to a team don't show Default Team
#5696 closed
Sep 20, 2024 -
[Bug]: Broken `More details` links on main documentation page
#5760 closed
Sep 19, 2024 -
[Bug]: model list api_key leak
#5762 closed
Sep 19, 2024 -
LiteLLM does redis sentinel support?
#4381 closed
Sep 19, 2024 -
[Bug]: azure_ad_token not passed in header when stream=True
#5767 closed
Sep 19, 2024 -
[Feature]: Convert OpenAI list-content into open-source compatible string type
#5755 closed
Sep 19, 2024 -
[Bug]: Can't use image urls that redirects (with Anthropic models)
#5763 closed
Sep 19, 2024 -
[Bug]: Docker Compose docs unclear in terms of config.yaml if NOT using DB
#5739 closed
Sep 19, 2024 -
[Bug]: Fireworks models are missing from `model_prices_and_context_window.json`
#4570 closed
Sep 19, 2024 -
[Bug]: Gemma 2 on Groq missing in pricing
#5785 closed
Sep 19, 2024 -
[Bug]: LlamaIndex call OpenAI Like Api got err : NotFoundError 404
#5784 closed
Sep 19, 2024 -
[Bug]: deepcopy Causes Error with pydantic_core._pydantic_core.SerializationIterator Object
#5684 closed
Sep 19, 2024 -
[Bug]: Docs don't appear to include the /fallback/login endpoint
#5778 closed
Sep 18, 2024 -
[Bug]: The 429 error code hides the real cause.
#5764 closed
Sep 18, 2024 -
LiteLLM Proxy Startup Error: TypeError in check_view_exists()
#5702 closed
Sep 18, 2024 -
[Bug]: Azure_AI not supported for rerank models
#5667 closed
Sep 18, 2024 -
[Feature]: Langchain / Smith Logging Integration doesn't appear to expose errors/metadata
#5738 closed
Sep 18, 2024 -
[Bug]: o1-mini causes pydantic warnings on `reasoning_tokens`
#5669 closed
Sep 18, 2024
29 Issues opened by 22 people
-
[Bug]: LiteLLM ProxyConfig behaves abnormally when using vllm OpenAI-Compatible Endpoints
#5876 opened
Sep 24, 2024 -
Error while using ollama llama3.1
#5871 opened
Sep 24, 2024 -
[Feature]: Azure ai studio support embedding models
#5861 opened
Sep 24, 2024 -
[Feature]: Health Endpoint to Power Latency-Based Routing
#5860 opened
Sep 24, 2024 -
[Feature]: Time To First Token Timeout
#5859 opened
Sep 24, 2024 -
[Bug]: `openai==1.47` breaks CI
#5854 opened
Sep 23, 2024 -
[Bug]: Proxy: Constant "Provider NOT provided" errors in log on invalid model name, no way to stop them
#5853 opened
Sep 23, 2024 -
[Bug]: inconsistent `max_tokens` for Bedrock Anthropic Claude 3.5 Sonnet
#5850 opened
Sep 23, 2024 -
[Bug]: CompletionTokensDetails is not JSON serializable
#5847 opened
Sep 23, 2024 -
[Feature]: Allow creating service account keys
#5846 opened
Sep 23, 2024 -
[Feature]: Allow `configurable_clientside_parameters` for switching b/w finetuning models easily
#5843 opened
Sep 23, 2024 -
[Bug]: AttributeError: 'LangFuseLogger' object has no attribute 'upstream_langfuse_debug'
#5840 opened
Sep 23, 2024 -
[Bug]: Groq - Tool / Function Calling Example throws function_call is not nullable error
#5839 opened
Sep 23, 2024 -
[Bug]: Prisma trying to write to read-only `site-packages/prisma` in non_root image
#5838 opened
Sep 23, 2024 -
[Bug]: Missing pricing for mistral/pixtral-12b-2409
#5837 opened
Sep 23, 2024 -
[Feature]: improved diagnostic logging for invalid requests (litellm-proxy)
#5836 opened
Sep 23, 2024 -
Dual GPU dual models load balancing with liteLLM, but performance can't be doubled
#5835 opened
Sep 23, 2024 -
[Bug]: Helm Chart doesn't work
#5830 opened
Sep 22, 2024 -
[Bug]: Need to reduce parallel requests for health check based on model provider (i.e. ollama)
#5816 opened
Sep 21, 2024 -
[Feature]: Add support for CRUD operations on AzureOpenAI Vector Stores
#5799 opened
Sep 20, 2024 -
[Feature]: Add logging callbacks for assistants API
#5796 opened
Sep 19, 2024 -
[Bug]: Router not respecting TPM limits in concurrent async calls
#5783 opened
Sep 19, 2024 -
[Bug]: wildcard model vendor weight bug
#5781 opened
Sep 19, 2024 -
[Feature]: Support cloud zero cost tracking
#5773 opened
Sep 18, 2024 -
[Feature]: JSON can be passed to the Gemini API
#5766 opened
Sep 18, 2024 -
[Feature]: Allow setting guardrails default on
#5758 opened
Sep 18, 2024 -
[Feature]: Add unit test for Gemini / Google AI Studio + Cloudflare AI Gateway
#5757 opened
Sep 18, 2024 -
[Feature]: Use stream_options on Azure OpenAI
#5751 opened
Sep 17, 2024
115 Unresolved conversations
Sometimes conversations happen on old items that aren’t yet closed. Here is a list of all the Issues and Pull Requests with unresolved conversations.
-
Keywords AI Integration
#5130 commented on
Sep 20, 2024 • 5 new comments -
Upgrade dependencies
#5665 commented on
Sep 24, 2024 • 2 new comments -
Update ollama.py
#4752 commented on
Sep 19, 2024 • 0 new comments -
feat: read chat-template from tokenizer files for vllm
#4737 commented on
Sep 19, 2024 • 0 new comments -
fix(factory.py): Filter out empty messages before making llm api call
#4678 commented on
Sep 19, 2024 • 0 new comments -
Proxy (health endpoints): Add `/health/db endpoint` w/ Prisma metrics
#4660 commented on
Sep 19, 2024 • 0 new comments -
Solving the return value format issue during multiple function calls with the LLaMA 3 model.
#4636 commented on
Sep 19, 2024 • 0 new comments -
fix(azure.py): Allow using Cloudflare AI gateway for embedding
#4629 commented on
Sep 19, 2024 • 0 new comments -
fix(parallel_request_limiter.py): support spend tracking caching across multiple instances
#4396 commented on
Sep 19, 2024 • 0 new comments -
ci(config.yml): add pytest-xdist
#4343 commented on
Sep 19, 2024 • 0 new comments -
Linting Refactor: New `ModelResponseChunk` for streaming
#4219 commented on
Sep 19, 2024 • 0 new comments -
Use presigned urls for S3 cache
#4190 commented on
Sep 19, 2024 • 0 new comments -
Clarifai: Fixed model name error and streaming
#4170 commented on
Sep 19, 2024 • 0 new comments -
Fix black at circle ci
#4161 commented on
Sep 19, 2024 • 0 new comments -
astra-assistants api support
#4118 commented on
Sep 19, 2024 • 0 new comments -
`assistants.md`: Add `user_api_end_user_max_budget` metadata to `litellm_params`
#4113 commented on
Sep 19, 2024 • 0 new comments -
Fix: Trim message break possible infinite loop
#4090 commented on
Sep 19, 2024 • 0 new comments -
Added type hints for model_list parameter in RouterConfig
#4074 commented on
Sep 19, 2024 • 0 new comments -
Improve prediction response method
#4073 commented on
Sep 19, 2024 • 0 new comments -
Fix the workflow to update the price
#4045 commented on
Sep 19, 2024 • 0 new comments -
feat(router.py): set default priority
#3998 commented on
Sep 19, 2024 • 0 new comments -
fix(http_handler.py): fix async client ssl verify
#3985 commented on
Sep 19, 2024 • 0 new comments -
Code duplication in Handling Responses
#3960 commented on
Sep 19, 2024 • 0 new comments -
Fix function call arg
#3917 commented on
Sep 19, 2024 • 0 new comments -
Feature/improved semantic cache
#3907 commented on
Sep 19, 2024 • 0 new comments -
Support DashScope Compatible API For Qwen Series Models
#3758 commented on
Sep 19, 2024 • 0 new comments -
Fixes #542 allow system messages + chat for palm api
#3718 commented on
Sep 19, 2024 • 0 new comments -
Add support for upserting users automatically to a default team based on JWT key
#3717 commented on
Sep 19, 2024 • 0 new comments -
Clean up prod prints - Convert print to log
#3667 commented on
Sep 19, 2024 • 0 new comments -
Fix base_url for replicate's http api
#2434 commented on
Sep 19, 2024 • 0 new comments -
drop imghdr
#5736 commented on
Sep 19, 2024 • 0 new comments -
Add Support for Custom Providers in Vision and Function Call Utils
#5688 commented on
Sep 24, 2024 • 0 new comments -
[Feat] Added Opik integration for logging and evaluation
#5680 commented on
Sep 19, 2024 • 0 new comments -
Bump the github-actions group across 1 directory with 6 updates
#5670 commented on
Sep 23, 2024 • 0 new comments -
Updates and improvements to watsonx provider
#5654 commented on
Sep 19, 2024 • 0 new comments -
Feat: Add Literal AI Integration
#5653 commented on
Sep 23, 2024 • 0 new comments -
[Pricing] Adjust Ollama models to chat instead of completions
#5595 commented on
Sep 19, 2024 • 0 new comments -
Fixed #5559 (asyncio tasks get detroyed while pending sometimes)
#5561 commented on
Sep 19, 2024 • 0 new comments -
Bump cryptography from 42.0.7 to 43.0.1
#5496 commented on
Sep 19, 2024 • 0 new comments -
Solving budget info update if the budget id exists
#5465 commented on
Sep 19, 2024 • 0 new comments -
Litellm azure ad token common helper
#5440 commented on
Sep 24, 2024 • 0 new comments -
Use patch instead of apatch for instructor
#5404 commented on
Sep 19, 2024 • 0 new comments -
Litellm current branch
#5398 commented on
Sep 19, 2024 • 0 new comments -
[Feat] add google ai studio ft models
#5373 commented on
Sep 19, 2024 • 0 new comments -
Fix regression ignoring SSL_VERIFY boolean values being set through e…
#5361 commented on
Sep 19, 2024 • 0 new comments -
Feat: Add Langtrace integration
#5341 commented on
Sep 19, 2024 • 0 new comments -
Update team_endpoints.py
#5269 commented on
Sep 19, 2024 • 0 new comments -
Fixes priority queue comparison to work with Redis cache enabled
#5268 commented on
Sep 19, 2024 • 0 new comments -
Add support for getAssistant endpoint
#5155 commented on
Sep 19, 2024 • 0 new comments -
fix: PII output parsing for multiple entities of same type
#5068 commented on
Sep 19, 2024 • 0 new comments -
Optimize Alpine Dockerfile by removing redundant apk commands
#5016 commented on
Sep 19, 2024 • 0 new comments -
Add `extra_headers` support for Databricks completion requests
#5006 commented on
Sep 19, 2024 • 0 new comments -
fix(spend_tracking): `/spend/logs` with no filter
#4998 commented on
Sep 19, 2024 • 0 new comments -
Integrating Not Diamond with LiteLLM
#4971 commented on
Sep 19, 2024 • 0 new comments -
fix parsing multi tool calls in stream_chunk_builder
#4936 commented on
Sep 19, 2024 • 0 new comments -
Print each model only once on startup
#4867 commented on
Sep 19, 2024 • 0 new comments -
Control running lakera prompt checks - pre api call OR in parallel
#4832 commented on
Sep 19, 2024 • 0 new comments -
Add `--config` arg to k8s Deployment example in docs
#4795 commented on
Sep 19, 2024 • 0 new comments -
chore(helm-chart): use default environment variable for master key
#2432 commented on
Sep 19, 2024 • 0 new comments -
fix(proxy_server.py): more efficient verification_token GET request
#2392 commented on
Sep 19, 2024 • 0 new comments -
[WIP] fix claude alternating messages
#2374 commented on
Sep 19, 2024 • 0 new comments -
fix(proxy_server.py): add better debug logging for sso callbacks
#1965 commented on
Sep 19, 2024 • 0 new comments -
allow users to create their own keys
#1870 commented on
Sep 19, 2024 • 0 new comments -
fix(caching.py): add more debug statements for caching
#1858 commented on
Sep 19, 2024 • 0 new comments -
ci: added typechecking
#1537 commented on
Sep 24, 2024 • 0 new comments -
fix(ollama): metrics handling
#1514 commented on
Sep 19, 2024 • 0 new comments -
feat(proxy_server.py): new /user/export endpoint
#1486 commented on
Sep 19, 2024 • 0 new comments -
Litellm user budget fix
#1479 commented on
Sep 19, 2024 • 0 new comments -
chore: sort imports using isort
#1405 commented on
Sep 19, 2024 • 0 new comments -
Format using black, change black within circleci to --check
#1363 commented on
Sep 19, 2024 • 0 new comments -
fix(utils.py): support complete_response=true for text completion streaming
#1358 commented on
Sep 19, 2024 • 0 new comments -
(feat) added experiemental guidance function calling
#1258 commented on
Sep 19, 2024 • 0 new comments -
feat: add aphrodite support
#1153 commented on
Sep 19, 2024 • 0 new comments -
Added CLOVA studio Hyperclova X API support
#853 commented on
Sep 19, 2024 • 0 new comments -
add function call response parser for non openai models
#768 commented on
Sep 19, 2024 • 0 new comments -
router.py fixes
#721 commented on
Sep 19, 2024 • 0 new comments -
[Bug]: Re-add/fix upstream Langfuse support
#3731 commented on
Sep 24, 2024 • 0 new comments -
[Feature]: Upgrade python & general dependencies
#5630 commented on
Sep 24, 2024 • 0 new comments -
New Models/Endpoints/Providers
#4922 commented on
Sep 24, 2024 • 0 new comments -
[Bug]: Missing mode when adding a model via UI
#5270 commented on
Sep 20, 2024 • 0 new comments -
[Feature]: Add support for reading secrets from Hashicorp vault
#2815 commented on
Sep 20, 2024 • 0 new comments -
[Bug]: In litellm version 1.46.1, AnthropicException thrown when making calls with tools
#5747 commented on
Sep 19, 2024 • 0 new comments -
Add aws knowledgebase support
#4840 commented on
Sep 18, 2024 • 0 new comments -
[Feature-Master List]: O-1 Support
#5672 commented on
Sep 18, 2024 • 0 new comments -
[Feature]: support dbrx client for getting credentials in spark notebooks
#5732 commented on
Sep 17, 2024 • 0 new comments -
🎅 I WISH LITELLM HAD...
#361 commented on
Sep 17, 2024 • 0 new comments -
[Optimize] Optimize the code for remove time complexity in llms bedro…
#3665 commented on
Sep 19, 2024 • 0 new comments -
Adding multiple public keys test
#3649 commented on
Sep 19, 2024 • 0 new comments -
Fixed JWT public key finding
#3648 commented on
Sep 19, 2024 • 0 new comments -
fix: remove --accept_data_loss flag
#3565 commented on
Sep 19, 2024 • 0 new comments -
fix(router.py): fix default cooldown time to be 60s
#3529 commented on
Sep 19, 2024 • 0 new comments -
Fix exception handling gemini
#3493 commented on
Sep 19, 2024 • 0 new comments -
feat(bedrock.py): Add Cloudflare AI Gateway support
#3467 commented on
Sep 19, 2024 • 0 new comments -
fixes #3264 and adds team_alias to /global/spend/teams
#3454 commented on
Sep 19, 2024 • 0 new comments -
Supporting api key from the headers as well
#3418 commented on
Sep 19, 2024 • 0 new comments -
fix(main.py): use model_api_key determined from get_api_key
#3348 commented on
Sep 19, 2024 • 0 new comments -
Support dashscope API for Qwen models
#3344 commented on
Sep 19, 2024 • 0 new comments -
OpenAI chat completion message type annotation
#3284 commented on
Sep 19, 2024 • 0 new comments -
fix(router.py): check cache hits before making router.completion calls
#3227 commented on
Sep 19, 2024 • 0 new comments -
Litellm fix async text completions
#3215 commented on
Sep 19, 2024 • 0 new comments -
fix(main.py): support 'custom_llm_provider' in acompletion
#3121 commented on
Sep 19, 2024 • 0 new comments -
The Spark API supports the completion method from the Litellm
#3058 commented on
Sep 19, 2024 • 0 new comments -
fix: fix embedding response to return pydantic object
#2784 commented on
Sep 19, 2024 • 0 new comments -
Fix bug where 'custom_llm_provider' argument is not passed correctly in 'acompletion'
#2758 commented on
Sep 19, 2024 • 0 new comments -
fix(utils.py): default usage tokens to 0
#2736 commented on
Sep 19, 2024 • 0 new comments -
[NEW-MODEL] Add Solar model
#2717 commented on
Sep 19, 2024 • 0 new comments -
feat(main.py): support calling text completione endpoint for openai compatible providers
#2709 commented on
Sep 19, 2024 • 0 new comments -
(feat) allow users to opt out of message merge - Anthropic
#2671 commented on
Sep 19, 2024 • 0 new comments -
fix(main.py): Correctly route to `/completions` (if supported) when called for openai-compatible endpoints
#2595 commented on
Sep 19, 2024 • 0 new comments -
add Lite llm docker proxy (Gemini ver)
#2574 commented on
Sep 19, 2024 • 0 new comments -
add JP readme & docker minimal
#2569 commented on
Sep 19, 2024 • 0 new comments -
support ZhipuAI models
#2514 commented on
Sep 19, 2024 • 0 new comments -
Integration with Canonical Neural Cache
#2504 commented on
Sep 19, 2024 • 0 new comments -
:wrench: add credentials parameter to completion
#2463 commented on
Sep 19, 2024 • 0 new comments -
Litellm update redisvl
#2444 commented on
Sep 19, 2024 • 0 new comments