Stars
Code for analysing the leakage of personally identifiable information (PII) from the output of next-word-prediction language models.
ChatGLM-6B fine-tuning and Alpaca fine-tuning
A large-scale 7B pretrained language model developed by BaiChuan-Inc.
An easy-to-use federated learning platform
A plug-and-play library for parameter-efficient-tuning (Delta Tuning)
Must-read papers on prompt-based tuning for pre-trained language models.
A flexible Federated Learning Framework based on PyTorch, simplifying your Federated Learning research.
A natural language interface for computers
A PyTorch Implementation of Federated Learning
Attentive Federated Learning for Private NLM
AGE animation official website URL release page
A curated collection of open-source Chinese large language models, focusing on smaller models that can be privately deployed at low training cost, including base models, vertical-domain fine-tuned models and applications, datasets, and tutorials.
Breaching privacy in federated learning scenarios for vision and text
Code for the book "Learn Deep Learning with PyTorch"
An open source implementation of OpenAI's ChatGPT Code interpreter
Code for "Exploiting Unintended Feature Leakage in Collaborative Learning" (Oakland 2019)
Official repo for the paper "Recovering Private Text in Federated Learning of Language Models" (NeurIPS 2022)
[arXiv:2411.10023] "Model Inversion Attacks: A Survey of Approaches and Countermeasures"
A high-throughput and memory-efficient inference and serving engine for LLMs
Federated learning with text DNNs for DATA 591 at University of Washington.
Using GPT to select among possible Chinese chess moves and play against simple chess engines; GPT's performance at Chinese chess is very poor.