Stars
Build resilient language agents as graphs.
Llama Chinese community: real-time aggregation of the latest Llama learning resources, building the best open-source ecosystem for Chinese Llama large models; fully open source and commercially usable
Efficient fine-tuning of ChatGLM-6B with PEFT
Chinese NLP solutions (large models, data, models, training, inference)
A complete BLOOM-based LLM training pipeline, covering pretraining, SFT, LoRA, QLoRA, and PPO
⭐️ NLP algorithms built on the transformers library, supporting text classification, text generation, information extraction, text matching, RLHF, SFT, etc.
JARVIS, a system to connect LLMs with the ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf
A list of efficient attention modules
An NLP toolset with a focus on explainable inference
PyTorch package for the discrete VAE used for DALL·E.
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
Hierarchical unsupervised and semi-supervised topic models for sparse count data with CorEx
A Keras implementation of transformers for humans
EasyTransfer is designed to make the development of transfer learning in NLP applications easier.
Multi-label text classification with BERT and ALBERT
Conversational Toolkit: an open-source toolkit for fast development and fair evaluation of text generation
An Open-Source Package for Information Retrieval.
albert-vi-as-service: a fork of bert-as-service to deploy albert_vi
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators
NLP models and code for the BAAI-JD joint project.