llava
Here are 159 public repositories matching this topic...
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Updated Aug 12, 2024 - Python
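LLaVA-family models splice image embeddings into the language model's token stream at a placeholder token. As a minimal illustration (the exact chat template varies by checkpoint, and in practice you would use the model processor's own template), the common LLaVA-1.5 prompt format can be sketched like this — the function name and signature here are hypothetical:

```python
# Minimal sketch of a LLaVA-1.5-style prompt builder (illustrative only;
# real checkpoints ship their own chat template). The "<image>" placeholder
# marks where the vision encoder's patch embeddings are inserted into the
# token sequence before generation.
def build_llava_prompt(question: str, num_images: int = 1) -> str:
    image_tokens = "\n".join("<image>" for _ in range(num_images))
    return f"USER: {image_tokens}\n{question} ASSISTANT:"

prompt = build_llava_prompt("What is shown in this image?")
print(prompt)
# → USER: <image>
#   What is shown in this image? ASSISTANT:
```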
Use PEFT or Full-parameter to finetune 400+ LLMs or 100+ MLLMs. (LLM: Qwen2.5, Llama3.2, GLM4, Internlm2.5, Yi1.5, Mistral, Baichuan2, DeepSeek, Gemma2, ...; MLLM: Qwen2-VL, Qwen2-Audio, Llama3.2-Vision, Llava, InternVL2, MiniCPM-V-2.6, GLM4v, Xcomposer2.5, Yi-VL, DeepSeek-VL, Phi3.5-Vision, ...)
Updated Dec 2, 2024 - Python
SUPIR aims to develop practical algorithms for photo-realistic image restoration in the wild. A new online demo is also available at suppixel.ai.
Updated Jul 30, 2024 - Python
An efficient, flexible and full-featured toolkit for fine-tuning LLM (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)
Updated Nov 8, 2024 - Python
Chinese NLP solutions (large language models, data, models, training, inference)
Updated Oct 29, 2024 - Jupyter Notebook
Making data higher-quality, juicier, and more digestible for foundation models! 🍎 🍋 🌽 ➡️ ➡️ 🍸 🍹 🍷
Updated Dec 2, 2024 - Python
Open-source evaluation toolkit for large vision-language models (LVLMs), supporting 160+ VLMs and 50+ benchmarks
Updated Dec 2, 2024 - Python
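Benchmarking a VLM on multiple-choice questions usually boils down to extracting the predicted option letter from free-form model output and comparing it to the gold answer. An illustrative sketch of that scoring step (not any specific toolkit's actual API; the function names here are hypothetical):

```python
# Illustrative VLM benchmark scoring sketch: pull the first standalone
# option letter (A-D) out of a free-form response, then compute accuracy
# against gold answers.
import re
from typing import Optional


def extract_choice(response: str) -> Optional[str]:
    # Matches a standalone option letter, e.g. "B", "(B)", or "B."
    match = re.search(r"\b([A-D])\b", response)
    return match.group(1) if match else None


def accuracy(predictions: list, answers: list) -> float:
    correct = sum(extract_choice(p) == a for p, a in zip(predictions, answers))
    return correct / len(answers)


preds = ["The answer is B.", "(C)", "I think A", "D."]
gold = ["B", "C", "A", "B"]
print(accuracy(preds, gold))  # → 0.75 (3 of 4 correct)
```

Real toolkits add fallbacks (e.g. fuzzy-matching the option text, or asking a judge model) for responses that never name a letter.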
[ACL 2024 🔥] Video-ChatGPT is a video conversation model capable of generating meaningful conversation about videos. It combines the capabilities of LLMs with a pretrained visual encoder adapted for spatiotemporal video representation. We also introduce a rigorous quantitative evaluation benchmark for video-based conversational models.
Updated Aug 27, 2024 - Python
Pocket-Sized Multimodal AI for content understanding and generation across multilingual texts, images, and 🔜 video, up to 5x faster than OpenAI CLIP and LLaVA 🖼️ & 🖋️
Updated Oct 1, 2024 - Python
🔥🔥 LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3)
Updated Jul 10, 2024 - Python
Tag manager and captioner for image datasets
Updated Nov 1, 2024 - Python
A Framework of Small-scale Large Multimodal Models
Updated Dec 1, 2024 - Python
👁️ + 💬 + 🎧 = 🤖 Curated list of top foundation and multimodal models! [Paper + Code + Examples + Tutorials]
Updated Feb 29, 2024 - Python
MLX-VLM is a package for running Vision LLMs locally on your Mac using MLX.
Updated Nov 28, 2024 - Python