Peking University / WICT, Beijing
Starred repositories
Open-source Windows and Office activator featuring HWID, Ohook, KMS38, and Online KMS activation methods, along with advanced troubleshooting.
A high-throughput and memory-efficient inference and serving engine for LLMs
Image restoration with neural networks but without learning.
Implementation of the paper: "Answering Questions by Meta-Reasoning over Multiple Chains of Thought"
Implementation of the paper: "Making Retrieval-Augmented Language Models Robust to Irrelevant Context"
Repository for Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions, ACL23
Vector (and Scalar) Quantization, in Pytorch
Code repository supporting the paper "Atlas: Few-shot Learning with Retrieval Augmented Language Models" (https://arxiv.org/abs/2208.03299)
This includes the original implementation of SELF-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection, by Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi.
[ACL'24] MC^2: A Multilingual Corpus of Minority Languages in China (Tibetan, Uyghur, Kazakh, and Mongolian)
Panda is an open-source overseas-Chinese large language model project launched in May 2023. It aims to explore the full technology stack in the era of large models and to promote innovation and collaboration in Chinese natural language processing.
Interact with your documents using the power of GPT, 100% privately, no data leaks
The official repo of Pai-Megatron-Patch for LLM & VLM large scale training developed by Alibaba Cloud.
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
A curated collection of open-source Chinese large language models, focusing on smaller models that are low-cost to train and can be privately deployed, covering base models, domain-specific fine-tunes and applications, datasets, and tutorials.
A Gradio web UI for Large Language Models.
Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback"
LLM Zoo collects information on various open- and closed-source LLMs
Aligning pretrained language models with instruction data generated by themselves.
A library for advanced large language model reasoning
Principle walkthroughs and hands-on code showing that recommendation algorithms can be simple 🔥 If you want to study recommendation algorithms systematically, feel free to Star or Fork this repo 🚀 Open an Issue with any questions, or reach out via the contact info at the end!
Code and checkpoints for "Generate rather than Retrieve: Large Language Models are Strong Context Generators" (ICLR 2023).
Awesome Pretrained Chinese NLP Models: a collection of high-quality Chinese pretrained models, large models, multimodal models, and large language models
This repository contains a collection of papers and resources on Reasoning in Large Language Models.
Benchmarking large language models' complex reasoning ability with chain-of-thought prompting