Stars
A one stop repository for generative AI research updates, interview resources, notebooks and much more!
A collection of Vietnamese Natural Language Processing resources.
Human-free quality estimation of document summaries
Open-Domain Question Answering Goes Conversational via Question Rewriting
Underthesea - Vietnamese NLP Toolkit
PKU-DAIR / RAG-Survey
Forked from hymie122/RAG-Survey. Collecting awesome papers of RAG for AIGC. We propose a taxonomy of RAG foundations, enhancements, and applications in the paper "Retrieval-Augmented Generation for AI-Generated Content: A Survey".
Anime Girls Holding Programming Books
To speed up LLM inference and enhance the LLM's perception of key information, compress the prompt and KV-Cache, achieving up to 20x compression with minimal performance loss.
Retrieval and Retrieval-augmented LLMs
A collection of architectural patterns leveraging Large Language Models (LLMs) for efficient Text-to-SQL generation.
Adding guardrails to large language models.
A flexible, free, and unlimited Python tool to translate between different languages in a simple way using multiple translators.
Retrieval Augmented Generation (RAG) chatbot powered by Weaviate
Ingest files for retrieval augmented generation (RAG) with open-source Large Language Models (LLMs), all without 3rd parties or sensitive data leaving your network.
RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding.
Sharing the learnings we have been gathering along the way to enable Azure OpenAI at enterprise scale in a secure manner. GPT-RAG core is a Retrieval-Augmented Generation pattern running in Azure, using …
🤖 Chat with your SQL database 📊. Accurate Text-to-SQL Generation via LLMs using RAG 🔄.
Unsupervised and Semi-Supervised Anomaly Detection / IsolationForest / KernelPCA Detection / ADOA / etc.
Scene Text Recognition with Permuted Autoregressive Sequence Models (ECCV 2022)
MAC-SQL: A Multi-Agent Collaborative Framework for Text-to-SQL
A docker-compose stack for Prometheus monitoring
Notebook for INSA Lyon teams at ACM ICPC 2017. Ideas and sources are mainly from Razvan Stancioiu and the Stanford University ACM team.
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.