
koboldai: init #3

Merged: 4 commits into master from mc/kobold-ai on Feb 25, 2023

Conversation

MatthewCroughan (Member) commented:

kobold-ai is a web interface for interacting with "transformers" models (https://huggingface.co/docs/transformers/index).

The wrapper script will automatically put its state in ~/.koboldai/state when run, and it will also take care of setting LD_LIBRARY_PATH if the program is being run in the Windows Subsystem for Linux, so that it can make use of the GPU in that context.
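
A rough sketch of what such a wrapper might look like (the variable names and the koboldai entry point are illustrative, not copied from the PR; the detection relies on WSL kernels reporting "microsoft" in /proc/version and WSL mounting the host GPU driver libraries under /usr/lib/wsl/lib):

```bash
#!/usr/bin/env bash
# Illustrative sketch only; names are hypothetical, not the PR's exact code.

# Keep mutable state outside the read-only Nix store.
STATE_DIR="$HOME/.koboldai/state"
mkdir -p "$STATE_DIR"

# WSL kernels identify themselves in /proc/version, and WSL exposes the
# host GPU driver libraries (libcuda.so etc.) under /usr/lib/wsl/lib.
if grep -qi microsoft /proc/version 2>/dev/null; then
  export LD_LIBRARY_PATH="/usr/lib/wsl/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
fi

cd "$STATE_DIR"
exec koboldai "$@"  # hypothetical entry point name
fi
```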

I've also updated InvokeAI in this PR to do the same, which removes the invokeai-wsl-nvidia and invokeai-wsl-amd outputs; they are no longer required, since running the program in WSL now just works automatically.
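
For example, assuming the remaining invokeai-nvidia output keeps its current name, the same invocation now works both natively and under WSL:

```
❯ nix run github:nixified-ai/flake#invokeai-nvidia
```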

A GPU is not required; either the NVIDIA or AMD output will fall back to CPU mode when no GPU is detected.

Below is an example of how to launch it, along with a screenshot of the web interface working with a simple GPT-2 model; generation took 3 seconds on a Ryzen 9 5950X. The generated text is highlighted, whereas the input text is not.

```
❯ nix run github:nixified-ai/flake/mc/kobold-ai#koboldai-nvidia -- --host
INIT       | Starting   | Flask
INIT       | OK         | Flask
INIT       | Starting   | Webserver
INIT       | Starting   | LUA bridge
INIT       | OK         | LUA bridge
INIT       | Starting   | LUA Scripts
INIT       | OK         | LUA Scripts
INIT       | OK         | Webserver
MESSAGE    | Webserver has started, you can now connect to this machine at port: 5000
```

[screenshot: KoboldAI web interface, generated text highlighted]

This PR also adds an AUTHORS file which specifies the current maintainers; I'm copying the format from https://github.com/openzfs/zfs/blob/master/AUTHORS.

This removes the need to have multiple output variations like
invokeai-wsl-amd and instead just uses a simple bash wrapper to discover
whether the program is being run in WSL.

max-privatevoid (Member) left a comment:


LGTM on AMD

max-privatevoid merged commit f57ff94 into master on Feb 25, 2023
max-privatevoid deleted the mc/kobold-ai branch on February 25, 2023 at 22:29