Starred repositories
[ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning
tiktoken is a fast BPE tokeniser for use with OpenAI's models.
Diffusers-Interpret 🤗🧨🕵️♀️: Model explainability for 🤗 Diffusers. Get explanations for your generated images.
The standard data-centric AI package for data quality and machine learning with messy, real-world data and labels.
The Cradle framework is a first attempt at General Computer Control (GCC). Cradle supports agents to ace any computer task by enabling strong reasoning abilities, self-improvement, and skill curation.
A curated list of Large Language Model (LLM) Interpretability resources.
MiniCPM-o 2.6: A GPT-4o Level MLLM for Vision, Speech and Multimodal Live Streaming on Your Phone
Understand Human Behavior to Align True Needs
llama3 implementation one matrix multiplication at a time
An MIT-licensed, deployable starter kit for building and customizing your own version of AI town - a virtual town where AI characters live, chat and socialize.
A large-scale face dataset for face parsing, recognition, generation and editing.
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Build and share delightful machine learning apps, all in Python. 🌟 Star to support our work!
The paper list of the 86-page paper "The Rise and Potential of Large Language Model Based Agents: A Survey" by Zhiheng Xi et al.
Code and documentation to train Stanford's Alpaca models, and generate the data.
Representation Engineering: A Top-Down Approach to AI Transparency
An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm.
Humanoid Agents: Platform for Simulating Human-like Generative Agents
A programming framework for agentic AI 🤖 PyPi: autogen-agentchat Discord: https://aka.ms/autogen-discord Office Hour: https://aka.ms/autogen-officehour
Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
🦜🔗 Build context-aware reasoning applications
Welcome to the Llama Cookbook! This is your go-to guide for building with Llama: getting started with inference, fine-tuning, and RAG. We also show you how to solve end-to-end problems using Llama models.
BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.)
Attributing predictions made by the Inception network using the Integrated Gradients method
A game theoretic approach to explain the output of any machine learning model.
Open-sourced codes for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/)
High-performance In-browser LLM Inference Engine