LLM
Chinese-Vicuna: A Chinese Instruction-following LLaMA-based Model, a low-resource Chinese LLaMA + LoRA recipe whose structure follows Alpaca
Awesome-LLM: a curated list of Large Language Models
Universal LLM Deployment Engine with ML Compilation
A Chinese-language getting-started tutorial for LangChain
An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries
🦜🔗 Build context-aware reasoning applications
Databricks’ Dolly, a large language model trained on the Databricks Machine Learning Platform
AutoGPT is the vision of accessible AI for everyone, to use and to build on; its mission is to provide the tools so that you can focus on what matters.
Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
A Traditional-Chinese instruction-following model with datasets based on Alpaca.
Code and documentation to train Stanford's Alpaca models, and generate the data.
A Chinese guide to coaching ChatGPT, with usage guides for various scenarios: learn how to make it do what you say.
This repo curates ChatGPT prompts to help you get better results from ChatGPT.
🐙 Guides, papers, lectures, notebooks, and resources for prompt engineering
An Open-Source Framework for Prompt-Learning.
A curated list of Human Preference Datasets for LLM fine-tuning, RLHF, and eval.
LLMs built upon Evol-Instruct: WizardLM, WizardCoder, WizardMath
Instruction Tuning with GPT-4
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
Large Language Model Text Generation Inference
ChatGLM-6B: An Open Bilingual Dialogue Language Model
GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, fine-tuning, evaluating, and serving LLMs in JAX/Flax.
Easily share permanent links to ChatGPT conversations with your friends
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" (see the minimal LoRA sketch after this list)
The first library to let you embed a developer agent in your own app!
Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models"
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning (see the usage sketch after this list)
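
The loralib entry above names a concrete technique: LoRA freezes the pretrained weight matrix W and trains only a low-rank update scaled by alpha/r, so the effective weight becomes W + (alpha/r)·BA. The PyTorch sketch below is a minimal illustration of that idea, not loralib's actual API; the class name `LoRALinear`, the shapes, and the hyperparameters are invented for the example.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA sketch: frozen pretrained weight plus a trainable low-rank delta."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        # The pretrained weight stays frozen; only A and B receive gradients.
        self.weight = nn.Parameter(torch.randn(out_features, in_features), requires_grad=False)
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)  # small random init
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))        # zero init: no change at step 0
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Effective weight is W + (alpha/r) * B @ A, applied without materializing it.
        return x @ self.weight.T + self.scaling * (x @ self.lora_A.T) @ self.lora_B.T

layer = LoRALinear(768, 768)
out = layer(torch.randn(2, 768))  # behaves like a regular linear layer
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # only the low-rank factors train: 2 * 8 * 768 = 12288
```

Because the base weight is frozen, checkpoints only need to store the two small factors, which is what makes LoRA cheap to fine-tune and to share.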
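
The 🤗 PEFT entry packages the same technique behind a library interface. Below is a hedged sketch of wrapping a Hugging Face causal LM with LoRA adapters via PEFT; the base model, rank, and target module names are illustrative choices, and argument names should be checked against the PEFT documentation for the installed version.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# Illustrative base model; any causal LM from the Hub works the same way.
base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                     # rank of the low-rank update, as in the sketch above
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attach adapters to attention projections
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of the base model's parameters
```

The wrapped model trains with a standard Transformers or PyTorch loop; only the adapter weights update, so the result can be saved and distributed as a small add-on to the frozen base model.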