Stars
🪐 Markdown with superpowers — from ideas to presentations, articles and books.
A native macOS app that allows users to chat with a local LLM that can respond with information from files, folders and websites on your Mac without installing any other software. Powered by llama.…
Your in-browser anime experience, brought to the terminal
User-friendly AI Interface (Supports Ollama, OpenAI API, ...)
Core contracts for the Doppler Protocol
Code for "In-Context Former: Lightning-fast Compressing Context for Large Language Model" (Findings of EMNLP 2024)
Elo rating calculation as a Solidity library ("algorithm of 400" / chess Elo); see the formula sketch after this list
Sample Artemis bot to fill UniswapX orders using on-chain liquidity
TimesFM (Time Series Foundation Model) is a pretrained foundation model developed by Google Research for time-series forecasting.
Official Code for Stable Cascade
Implementation of a conjoined ERC20 and ERC721 pair.
A control center for world computer operators.
WasmEdge is a lightweight, high-performance, and extensible WebAssembly runtime for cloud native, edge, and decentralized applications. It powers serverless apps, embedded functions, microservices,…
[ICLR 2024] Efficient Streaming Language Models with Attention Sinks
Fork of Foundry tailored for the zkSync environment
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
antimatter15 / alpaca.cpp
Forked from ggml-org/llama.cpp
Locally run an Instruction-Tuned Chat-Style LLM
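
For context on the Elo library entry above: the "algorithm of 400" refers to the scale factor 400 in the standard Elo formulas. The sketch below shows only the textbook math, not the library's actual fixed-point Solidity implementation.

```latex
% Standard Elo expected score and rating update, with 400 as the scale
% factor ("algorithm of 400"). E_A is player A's expected score vs. B.
\[
E_A = \frac{1}{1 + 10^{(R_B - R_A)/400}}, \qquad
R_A' = R_A + K\,(S_A - E_A)
\]
% R_A, R_B: current ratings; S_A: actual score (1 win, 0.5 draw, 0 loss);
% K: update factor, commonly 20 or 32.
```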