generated from abrenneke/rivet-plugin-example
The ollama plugin does not work well in the latest version. #2
You have to either run ollama with
Solved.
abrenneke pushed a commit that referenced this issue on Apr 22, 2024: feat: Implement batch processing for Ollama embeddings
Hello,
I would like to ask about a problem I encountered when using Rivet with Ollama.
I recently installed Rivet and Ollama on my Apple Silicon MacBook Pro.
After confirming that `ollama serve` was running at http://127.0.0.1:11434, I entered that address in the plugin settings and restarted Rivet.
I then added an Ollama Chat node and set the model to llama2.
When I entered a simple prompt and ran it, only the message "Error from Ollama: Load failed" appeared, and nothing ran after that.
After confirming the model itself works by running `ollama run llama2` in the terminal, I added a Get Ollama Model node, entered "llama2" as the model name, and ran it.
All I get is the error "TypeError: Load failed".
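As a sanity check for errors like "Load failed", the setup described above can be verified outside Rivet by querying Ollama's REST API directly. The sketch below uses Ollama's documented `/api/tags` endpoint to list pulled models; the function name and base URL constant are illustrative, not part of Rivet or Ollama:

```python
# Minimal connectivity check for a local Ollama server (a sketch).
# /api/tags is Ollama's documented endpoint for listing local models.
import json
import urllib.request
import urllib.error

OLLAMA_URL = "http://127.0.0.1:11434"  # default ollama serve address

def list_models(base_url=OLLAMA_URL):
    """Return model names reported by Ollama, or None if unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return None

if __name__ == "__main__":
    models = list_models()
    if models is None:
        print("Ollama server unreachable at", OLLAMA_URL)
    else:
        print("Available models:", models)
```

If this script reaches the server and lists llama2 while Rivet still reports "Load failed", the problem is between Rivet and Ollama (for example, how the plugin's configured address is resolved) rather than with the model itself.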