Efficient Triton Kernels for LLM Training
Updated Jun 24, 2025 - Python
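The first entry above covers Triton-based training kernels. As a point of reference, the sketch below is a generic Triton element-wise kernel (the standard vector-add pattern), not code from that repository, illustrating the block-and-mask structure such kernels share.

```python
# Generic Triton vector-add kernel: a minimal illustration of the block-and-mask
# pattern used in Triton training kernels. Not taken from any repository above.
# Requires a CUDA-capable GPU with triton and torch installed.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                       # one program per block
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                       # guard the ragged tail
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

if __name__ == "__main__":
    a = torch.randn(10_000, device="cuda")
    b = torch.randn(10_000, device="cuda")
    assert torch.allclose(add(a, b), a + b)
```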
An efficient, flexible, and full-featured toolkit for fine-tuning LLMs (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)
🔥🔥 LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3)
[ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data generation pipeline!
An open-source implementation for fine-tuning Phi3-Vision and Phi3.5-Vision by Microsoft.
LLM Interview Preparation Assistant using RAG, Elasticsearch, and Ollama/ChatGPT
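For a RAG-based assistant like the one above, the retrieval-then-generation loop typically looks like the sketch below. The index name, field name, and model tag are illustrative assumptions, not details taken from that project.

```python
# Minimal RAG loop: retrieve snippets from Elasticsearch, then answer with a
# local Ollama model. Index, field, and model names are placeholders.
from elasticsearch import Elasticsearch
import ollama

es = Elasticsearch("http://localhost:9200")

def answer(question: str) -> str:
    # Full-text retrieval over a hypothetical "interview_qa" index.
    hits = es.search(
        index="interview_qa",
        query={"match": {"text": question}},
        size=3,
    )["hits"]["hits"]
    context = "\n".join(hit["_source"]["text"] for hit in hits)

    # Ask the model to answer strictly from the retrieved context.
    response = ollama.chat(
        model="phi3",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response["message"]["content"]

if __name__ == "__main__":
    print(answer("How does attention scale with sequence length?"))
```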
ArchNetAI is a Python library that leverages the Ollama API for generating AI-powered content.
An AI-powered debate platform that integrates large language models with genetic algorithms and adversarial search to adapt its arguments over the course of a debate.
A Retrieval-Augmented Generation application that uses the Hugging Face Inference API for embeddings and a LangChain ChromaDB vector store for data storage.
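A minimal embed-store-retrieve flow in that spirit is sketched below, calling the Hugging Face Inference API feature-extraction endpoint and a local Chroma collection directly (rather than through LangChain); the endpoint, model name, and token variable are assumptions, not details from the project.

```python
# Sketch of embedding via the Hugging Face Inference API and storing/querying
# the vectors in an in-memory Chroma collection. Endpoint, model, and token
# variable are assumptions, not taken from the repository above.
import os
import requests
import chromadb

EMBED_URL = ("https://api-inference.huggingface.co/pipeline/feature-extraction/"
             "sentence-transformers/all-MiniLM-L6-v2")
HEADERS = {"Authorization": f"Bearer {os.environ['HF_API_TOKEN']}"}

def embed(texts: list[str]) -> list[list[float]]:
    # Returns one embedding vector per input string.
    resp = requests.post(EMBED_URL, headers=HEADERS, json={"inputs": texts})
    resp.raise_for_status()
    return resp.json()

client = chromadb.Client()  # in-memory; use chromadb.PersistentClient for disk
collection = client.get_or_create_collection("docs")

docs = ["Phi-3 is a small language model from Microsoft.",
        "RAG augments prompts with retrieved documents."]
collection.add(ids=["d1", "d2"], documents=docs, embeddings=embed(docs))

# Retrieve the most relevant document for a question.
result = collection.query(query_embeddings=embed(["What is RAG?"]), n_results=1)
print(result["documents"][0][0])
```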
Generates AI-driven responses using the Ollama API with Microsoft's Phi-3 mini model. A lightweight, memoryless implementation for language generation.
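For a memoryless Ollama call like the one described above, a minimal sketch (assuming the phi3:mini model has already been pulled into a local Ollama server) looks like this:

```python
# Stateless text generation against a local Ollama server with Phi-3 mini.
# No chat history is kept between calls. Assumes `ollama pull phi3:mini` was run.
import ollama

def generate(prompt: str) -> str:
    result = ollama.generate(model="phi3:mini", prompt=prompt)
    return result["response"]

if __name__ == "__main__":
    print(generate("Summarize retrieval-augmented generation in one sentence."))
```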