ClibrAIn
Murcia, Spain
mrm8488.github.io
@mrm8488
Starred repositories
From the Transistor to the Web Browser, a rough outline for a 12-week course
Train a Language Model with GRPO to create a schedule from a list of events and priorities
A repository consisting of paper/architecture replications of classic/SOTA AI/ML papers in PyTorch
Revisiting Mid-training in the Era of RL Scaling
What are the principles we can use to build LLM-powered software that is actually good enough to put in the hands of production customers?
Lightweight coding agent that runs in your terminal
MCP server for using Hugging Face Spaces, with easy configuration and a Claude Desktop mode.
Learn how to use CUA (our Computer Using Agent) via the API on multiple computer environments.
Autonomously train research-agent LLMs on custom data using reinforcement learning and self-verification.
Simple and readable code for training and sampling from diffusion models
No fortress, purely open ground. OpenManus is Coming.
FULL v0, Cursor, Manus, Same.dev, Lovable, Devin, Replit Agent, Windsurf Agent, VSCode Agent, Dia Browser & Trae AI (and other open-sourced) system prompts, tools & AI models.
Fully open reproduction of DeepSeek-R1
The official repo of MiniMax-Text-01 and MiniMax-VL-01, a large language model and vision-language model based on linear attention
Fine-tune ModernBERT on a large dataset with custom tokenizer training
Large Concept Models: Language modeling in a sentence representation space
Official code for "F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching"
Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends
Meta Lingua: a lean, efficient, and easy-to-hack codebase to research LLMs.
Codebase for Instruction Following without Instruction Tuning
AdalFlow: The library to build & auto-optimize LLM applications.
data-to-paper: Backward-traceable AI-driven scientific research
A 4-hour coding workshop to understand how LLMs are implemented and used
Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; COLM 2024)