Stars
A simple implementation of a text classification task using Qwen2ForSequenceClassification.
《开源大模型食用指南》(an open-source LLM "cookbook"): quickly deploy open-source large models in a Linux environment; a deployment tutorial tailored for Chinese users.
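A minimal sketch of what that looks like with the Hugging Face transformers API; the base checkpoint and the two-label setup are illustrative assumptions, not details taken from the repo:

```python
# Minimal sketch: text classification with Qwen2ForSequenceClassification.
# The checkpoint name and num_labels are illustrative assumptions.
import torch
from transformers import AutoTokenizer, Qwen2ForSequenceClassification

model_name = "Qwen/Qwen2-0.5B"  # hypothetical choice of base checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
# The classification head is randomly initialized until fine-tuned.
model = Qwen2ForSequenceClassification.from_pretrained(model_name, num_labels=2)

inputs = tokenizer("这部电影真好看!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class index
```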
Repository hosting code used to reproduce results in "Actions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations" (https://arxiv.org/abs/2402.17152).
Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)
Pre-trained Chinese ELECTRA models.
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
Library for fast text representation and classification.
Hackable and optimized Transformers building blocks, supporting a composable construction.
TensorFlow code and pre-trained models for BERT
We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-Tuning) for easy use. We welcome open-source enthusiasts…
🦜🔗 Build context-aware reasoning applications
A series of large language models developed by Baichuan Intelligent Technology
ChatGLM-6B: An Open Bilingual Dialogue Language Model.
Phase-2 project for Chinese LLaMA-2 & Alpaca-2 large models, plus 64K ultra-long-context models.
A free guide for learning to create ChatGPT prompts.
An Open-Source Package for Textual Adversarial Attack.
General code to convert a trained Keras model into an inference TensorFlow model.
Convert TensorFlow, Keras, TensorFlow.js, and TFLite models to ONNX.
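For the Keras path, tf2onnx exposes a Python API alongside its CLI; the toy model and output file name below are illustrative assumptions:

```python
# Minimal sketch: converting a Keras model to ONNX with tf2onnx.
import tensorflow as tf
import tf2onnx

# Tiny placeholder model standing in for a real trained network.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(2, activation="softmax", input_shape=(10,)),
])

spec = (tf.TensorSpec((None, 10), tf.float32, name="input"),)
onnx_model, _ = tf2onnx.convert.from_keras(
    model, input_signature=spec, output_path="model.onnx"
)
```

The same conversion can be run from the command line with `python -m tf2onnx.convert --saved-model ./saved_model --output model.onnx`.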
Visualizer for neural network, deep learning and machine learning models
Code for Paper: “Low-Resource” Text Classification: A Parameter-Free Classification Method with Compressors
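The paper's core idea fits in a few lines: compress texts with gzip, use normalized compression distance (NCD) as the similarity measure, and classify with nearest neighbors. A minimal sketch (k=1, with an assumed toy dataset; the paper runs k-NN over real benchmarks):

```python
# Parameter-free classification: gzip + normalized compression distance (NCD).
import gzip

def clen(s: str) -> int:
    """Length of the gzip-compressed string, a proxy for Kolmogorov complexity."""
    return len(gzip.compress(s.encode()))

def ncd(a: str, b: str) -> float:
    """Normalized compression distance between two strings."""
    ca, cb, cab = clen(a), clen(b), clen(a + " " + b)
    return (cab - min(ca, cb)) / max(ca, cb)

# Illustrative toy training set.
train = [("the match ended in a draw", "sports"),
         ("shares fell after the earnings report", "finance")]

def classify(text: str) -> str:
    # 1-nearest-neighbor under NCD decides the label.
    return min(train, key=lambda pair: ncd(text, pair[0]))[1]

print(classify("the stock price dropped sharply"))
```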
A large-scale 7B pretrained language model developed by BaiChuan-Inc.
Chinese text classification with TextCNN, TextRNN, FastText, TextRCNN, BiLSTM_Attention, DPCNN, and Transformer; based on PyTorch and ready to use out of the box.
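As an illustration of the first of those architectures, here is a minimal PyTorch TextCNN sketch; the vocabulary size, filter sizes, and class count are placeholder assumptions:

```python
# Minimal TextCNN sketch: parallel 1-D convolutions over token embeddings,
# max-pooled over time and fed to a linear classifier.
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, num_classes=10,
                 kernel_sizes=(2, 3, 4), num_filters=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes)
        self.fc = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, x):                    # x: (batch, seq_len) token ids
        e = self.embed(x).transpose(1, 2)    # (batch, embed_dim, seq_len)
        pooled = [conv(e).relu().max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))

logits = TextCNN(vocab_size=5000)(torch.randint(0, 5000, (4, 20)))
print(logits.shape)  # torch.Size([4, 10])
```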