LLM (Large Language Model): 5 repositories
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, DeepSeek, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discr…
Code and documentation to train Stanford's Alpaca models, and generate the data.
[ICML'24] Magicoder: Empowering Code Generation with OSS-Instruct