- AI Lab, ByteDance Inc.
- Rochester, NY
- https://www.huaweilin.net/
📖 This is a repository for organizing papers, codes, and other resources related to unified multimodal models.
This repository provides the official implementation of VTBench, a benchmark designed to evaluate the performance of visual tokenizers (VTs) in the context of autoregressive (AR) image generation.
[CVPR 2025 Highlight] OmniManip: Towards General Robotic Manipulation via Object-Centric Interaction Primitives as Spatial Constraints
Anole: An Open, Autoregressive and Native Multimodal Models for Interleaved Image-Text Generation
GPT-ImgEval: Evaluating GPT-4o’s state-of-the-art image generation capabilities
Liquid: Language Models are Scalable and Unified Multi-modal Generators
Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation
The implementation for paper "UniGuardian: A Unified Defense for Detecting Prompt Injection, Backdoor Attacks and Adversarial Attacks in Large Language Models".
PyTorch implementation of MAR+DiffLoss https://arxiv.org/abs/2406.11838
Recommends new arXiv papers matching your interests daily, based on your Zotero library.
Implementation for "DMin: Scalable Training Data Influence Estimation for Diffusion Models". Influence Function, Influence Estimation and Training Data Attribution for Diffusion Models
Code for "Unlearning Traces the Influential Training Data of Language Models"
RapidIn: Scalable Influence Estimation for Large Language Models (LLMs). The implementation for the paper "Token-wise Influential Training Data Retrieval for Large Language Models" (Accepted at ACL 2024).
This repo implements an easy-to-deploy assistant that helps you understand research papers, especially scholarly papers, supporting English, Chinese, and other languages. We provide a web U…
The Open-Source code for My Personal Portfolio! A minimal and ambient portfolio template for Developers! ⚡
⚡ Dynamically generated stats for your github readmes
T-GATE: Temporally Gating Attention to Accelerate Diffusion Model for Free!
[NeurIPS 2024 D&B Track] UnlearnCanvas: A Stylized Image Dataset to Benchmark Machine Unlearning for Diffusion Models by Yihua Zhang, Chongyu Fan, Yimeng Zhang, Yuguang Yao, Jinghan Jia, Jiancheng …
An easy-to-run implementation for finetuning large language models (LLMs) such as Llama and Gemma, supporting full-parameter finetuning, LoRA, and QLoRA.
The implementation for the paper "Machine Unlearning in Gradient Boosting Decision Trees" (Accepted at KDD 2023), supporting training and unlearning.
This is a PyTorch reimplementation of influence functions from the ICML 2017 best paper: "Understanding Black-box Predictions via Influence Functions" by Pang Wei Koh and Percy Liang.
High-speed download of LLaMA, Facebook's 65B-parameter language model
Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.