LILAB @CAU_AIGS
- Chung-Ang University, Seoul, South Korea
- https://sites.google.com/view/cau-li/home?authuser=0
- LinkedIn: in/yumin-kim-05a371191
Stars
[NeurIPS'24] "Membership Inference Attacks against Fine-tuned Large Language Models via Self-prompt Calibration"
Official repository of "HARE: Explainable Hate Speech Detection with Step-by-Step Reasoning", Findings of EMNLP 2023
🤫 Code and benchmark for our ICLR 2024 spotlight paper: "Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory"
Awesome Reasoning LLM Tutorial/Survey/Guide
A JavaScript library that brings vector search and RAG to your browser!
A modular graph-based Retrieval-Augmented Generation (RAG) system
Repository for "KocoSa: Korean Context-aware Sarcasm Detection Dataset", accepted at COLING 2024.
A reading list, mainly on adversarial examples (attacks, defenses, etc.), that I try to keep updated regularly.
Papers and resources related to the security and privacy of LLMs
Easily use and train state of the art late-interaction retrieval methods (ColBERT) in any RAG pipeline. Designed for modularity and ease-of-use, backed by research.
Gorilla: Training and Evaluating LLMs for Function Calls (Tool Calls)
Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in PyTorch
0️⃣1️⃣🤗 BitNet-Transformers: Hugging Face Transformers implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in PyTorch with the Llama(2) architecture
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
Ranked 36th/495 in the first round and 13th/34 in the final round of the Anomaly Detection Competition hosted by LG AI Research, South Korea.
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
Code and data for the paper "Exploiting Biased Models to De-bias Text: A Gender-fair Rewriting Model"
Example code for "Living as a Developer in the Era of Hyperscale AI" (초거대 AI 시대 개발자로 살아가기)
QLoRA: Efficient Finetuning of Quantized LLMs
☁️ KULLM (구름): an LLM specialized for Korean, developed at Korea University
Construct a vector database from sentence embeddings and make your LLM respond based on it.