feat(frontend): Improve models input UI/UX in settings #3530
Conversation
@neubig Could you confirm the verified providers and models we want to display?
Is OpenRouter still selectable as a provider?
@tobitege Yes, looks like we do. The thing I can't be certain about is whether all the models OpenRouter offers through LiteLLM are under this provider. It could be the case that there are additional models LiteLLM does NOT list/return, or that they are listed but without the provider prefix (e.g., …).
Thanks! Yes, OpenRouter could add new models any day, but LiteLLM is usually pretty fast in adding support for new ones.
Yes, I'm trying not to restrict anything here. Actually, the one thing this PR does remove is the ability to input a custom model under a given provider (e.g., …).
I think it'd be ok. The typing in the box always felt wonky anyway.
So users could still enter anything they want via the Custom option, right? Like for Ollama etc.? |
LGTM!
Just tried it out, works great, thanks!
feat(frontend): Improve models input UI/UX in settings (#3530)

* Create helper functions
* Add map according to litellm docs
* Create ModelSelector
* Extend model selector
* use autocomplete from nextui
* Improve keys without providers
* Handle models without a provider
* Add verified section and some empty handling
* Add support for default or previously set models
* Update tests
* Lint
* Remove modifier
* Fix typescript error
* Functionality for switching to custom model
* Add verified models
* Respond to resetting to default
* Comment
@amanape There is this tricky thing where LiteLLM interprets model names like "openai/something" as a hint for routing the request. I don't know if that has relevance for OpenRouter. I know our users needed it in some cases, like CommandR and/or Cohere, where the prefix worked and its absence did not. My point is that typing in the box was necessary to prepend "openai/" to some model names. Maybe it's fine if it's still possible via custom model, though. Sorry for typing in a closed issue, just a quick thought, and I'll take it elsewhere these days for further thinking/testing.
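A minimal sketch of the prefix point above, assuming a hypothetical helper (not from the PR): LiteLLM reads a leading `provider/` segment as a routing hint, so a bare model name sometimes needs `openai/` prepended by hand.

```ts
// Hypothetical helper (not the PR's actual code): ensure a model string
// carries an explicit provider prefix so LiteLLM routes it correctly.
const withProviderPrefix = (provider: string, model: string): string =>
  model.includes("/") ? model : `${provider}/${model}`;

withProviderPrefix("openai", "command-r"); // => "openai/command-r"
withProviderPrefix("openai", "openai/gpt-4o"); // already prefixed, unchanged
```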
What is the problem that this fixes or functionality that this introduces? Does it fix any open issues?
The models input dumps all the raw LiteLLM model IDs, which causes slight performance issues and overwhelms the user with options.
Give a summary of what the PR does, explaining any non-trivial design decisions
Default behaviour
This PR breaks the input down into a Provider input and a Model input. Given the provider, a filtered model list is available for the user to choose from. The chosen provider and model are combined and saved under the `LLM_MODEL` key, e.g. changed to `openai/gpt-4o` from `gpt-4o`.
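As a rough sketch of the grouping step (the helper name and the fallback key are illustrative, not necessarily the PR's actual helpers), raw LiteLLM IDs can be bucketed by their provider prefix:

```ts
// Illustrative sketch: group raw LiteLLM model IDs by provider prefix.
// IDs like "openai/gpt-4o" carry a provider; bare IDs like "gpt-4o"
// fall back to a catch-all key (name assumed here).
const organizeModelsByProvider = (ids: string[]): Record<string, string[]> => {
  const byProvider: Record<string, string[]> = {};
  for (const id of ids) {
    const sep = id.indexOf("/");
    const provider = sep === -1 ? "other" : id.slice(0, sep);
    const model = sep === -1 ? id : id.slice(sep + 1);
    (byProvider[provider] ??= []).push(model);
  }
  return byProvider;
};

organizeModelsByProvider(["openai/gpt-4o", "anthropic/claude-3-5-sonnet", "gpt-4o"]);
// => { openai: ["gpt-4o"], anthropic: ["claude-3-5-sonnet"], other: ["gpt-4o"] }
```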
The user can see the "Verified" providers and models that we recommend and know work well with OpenHands. These are defined in `utils/verified-models.ts`.
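The exact lists live in `utils/verified-models.ts`; the shape below is only an assumption about what that file might export:

```ts
// utils/verified-models.ts -- plausible shape, not the actual contents.
export const VERIFIED_PROVIDERS: string[] = ["openai", "anthropic"];
export const VERIFIED_MODELS: string[] = ["gpt-4o", "claude-3-5-sonnet-20240620"];
```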
If preferred, the user can set a custom LLM model. This is stored in the settings as `CUSTOM_LLM_MODEL` and `USING_CUSTOM_LLM`. That way, the socket class sends the appropriate LLM with the `LLM_MODEL` key if `settings.USING_CUSTOM_LLM` is `true`.
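A sketch of that resolution logic, assuming the settings fields named above (the helper function is illustrative, not the socket class's actual code):

```ts
// Settings fields are from the PR description; the helper is assumed.
interface Settings {
  LLM_MODEL: string;
  CUSTOM_LLM_MODEL: string;
  USING_CUSTOM_LLM: boolean;
}

// Value the socket class would send under the LLM_MODEL key.
const getEffectiveModel = (settings: Settings): string =>
  settings.USING_CUSTOM_LLM ? settings.CUSTOM_LLM_MODEL : settings.LLM_MODEL;
```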
Tests
Notes
While some models are listed with their provider prefix (e.g., `some-provider/`), most are dumped in another provider. Need to confirm if this is OK, which ones we want to "pull out", and if there are alternative ways to handle this (TBH, considering going to the LiteLLM repo and making the changes for consistency myself).
TODO
* Investigate providers that offer custom models
* Accidentally removed #3514 after re-forking