Stars
No fortress, purely open ground. OpenManus is Coming.
A lightweight, powerful framework for multi-agent workflows
Fully open reproduction of DeepSeek-R1
This is the official code for the NAACL 2021 paper: "MelBERT: Metaphor Detection via Contextualized Late Interaction using Metaphorical Identification Theories".
Qwen3 is the large language model series developed by the Qwen team at Alibaba Cloud.
Clash nodes, free Clash nodes, free nodes, free circumvention proxies, bypassing censorship with Clash, Clash subscription links, Clash for Windows, Clash tutorials, free community nodes, the latest free Clash node subscription addresses, free Clash nodes updated daily.
Sample code and notebooks for Generative AI on Google Cloud, with Gemini on Vertex AI
Instruct-tune LLaMA on consumer hardware
An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.
Making text a first-class citizen in TensorFlow.
DirectML PluggableDevice plugin for TensorFlow 2
Fork of TensorFlow accelerated by DirectML
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
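To make the low-rank adaptation idea concrete, here is a minimal NumPy sketch: the frozen pretrained weight W is augmented with a trainable product B @ A scaled by alpha / r. The shapes, the alpha scaling, and the lora_forward helper are illustrative assumptions, not loralib's actual API.

```python
import numpy as np

# Frozen pretrained weight: (d_out, d_in). Hypothetical sizes for illustration.
d_out, d_in, r, alpha = 768, 768, 8, 16
W = np.random.randn(d_out, d_in) * 0.02

# Trainable low-rank factors. B starts at zero so training begins exactly at W.
A = np.random.randn(r, d_in) * 0.02
B = np.zeros((d_out, r))

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A; only A and B would receive gradients.
    return x @ (W + (alpha / r) * (B @ A)).T

x = np.random.randn(4, d_in)
y = lora_forward(x)  # shape (4, d_out)
```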
Code the components of a Transformer neural network piece by piece.
Implement Transformers (and Deep Learning) from scratch in NumPy
Machine Learning From Scratch. Bare-bones NumPy implementations of machine learning models and algorithms with a focus on accessibility. Aims to cover everything from linear regression to deep learning.
Well-documented, unit-tested, type-checked, and formatted implementation of a vanilla transformer, for educational purposes.
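As a taste of what these from-scratch implementations cover, here is a minimal NumPy sketch of scaled dot-product attention, the core transformer operation; the function names and toy shapes are my own, not drawn from any of the repos above.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)
    return softmax(scores) @ V

# Toy example: 5 tokens, head dimension 8.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(5, 8)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)  # shape (5, 8)
```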
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal domains, for both inference and training.
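As a quick illustration of the library's high-level pipeline API (the task string is real; the default model is downloaded on first use, and the example sentence and printed score are illustrative):

```python
from transformers import pipeline

# A pipeline bundles tokenizer, model, and post-processing into one callable.
classifier = pipeline("sentiment-analysis")
result = classifier("Transformers makes state-of-the-art models easy to use.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```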
JAX - A curated list of resources https://github.com/google/jax
The purpose of this repo is to make it easy to get started with JAX, Flax, and Haiku. It contains my "Machine Learning with JAX" series of tutorials (YouTube videos and Jupyter Notebooks) as well as…
Flax is a neural network library for JAX that is designed for flexibility.
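For a sense of Flax's Linen API, a minimal module sketch; the MLP class, feature sizes, and input shape are illustrative assumptions.

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class MLP(nn.Module):
    features: int

    @nn.compact
    def __call__(self, x):
        # Submodules and parameters are created lazily on the first init().
        x = nn.relu(nn.Dense(self.features)(x))
        return nn.Dense(1)(x)

model = MLP(features=16)
x = jnp.ones((4, 8))
params = model.init(jax.random.PRNGKey(0), x)  # initialize parameters
y = model.apply(params, x)                     # forward pass, shape (4, 1)
```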
OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset