Starred repositories
Finetune Qwen3, Llama 4, TTS, DeepSeek-R1 & Gemma 3 LLMs 2x faster with 70% less memory! 🦥
EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework based on veRL
Witness the aha moment of VLM with less than $3.
Fully open reproduction of DeepSeek-R1
Collection of papers and repos for multimodal chain-of-thought
Let your Claude think.
A collection of LLM papers, blogs, and projects, with a focus on OpenAI o1 🍓 and reasoning techniques.
An open source implementation of CLIP.
[ICML 2024] Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models
A curated list of trustworthy deep learning papers. Daily updating...
Anthropic's Interactive Prompt Engineering Tutorial
A curated list of awesome prompt/adapter learning methods for vision-language models like CLIP.
A collection of ZSH frameworks, plugins, themes and tutorials.
RichHF-18K dataset contains rich human feedback labels we collected for our CVPR'24 paper: https://arxiv.org/pdf/2312.10240, along with the file name of the associated labeled images (no urls or im…
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
A library for efficient similarity search and clustering of dense vectors.
🦜🔗 Build context-aware reasoning applications
🤖 Real-time type-ahead completion for Zsh. Asynchronous find-as-you-type autocompletion.
tmlr-group / PART
Forked from JiachengZ01/PART
[ICML 2024] "Improving Accuracy-robustness Trade-off via Pixel Reweighted Adversarial Training"
tmlr-group / WCA
Forked from JinhaoLee/WCA
[ICML 2024] "Visual-Text Cross Alignment: Refining the Similarity Score in Vision-Language Models"
tmlr-group / SMM
Forked from caichengyi/SMM
[ICML 2024 Spotlight] "Sample-specific Masks for Visual Reprogramming-based Prompting"
[ICML 2024] Visual-Text Cross Alignment: Refining the Similarity Score in Vision-Language Models
A curated list of papers in Test-time Adaptation, Test-time Training and Source-free Domain Adaptation
[NeurIPS 2023] Text data, code and pre-trained models for paper "Improving CLIP Training with Language Rewrites"
🤗 The largest hub of ready-to-use datasets for ML models with fast, easy-to-use and efficient data manipulation tools
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.