- The University of Queensland, Brisbane
- https://wshuai190.github.io
- @dylan_wangs
- in/shuai-wang-33125b196
Stars
Starbucks: Improved Training for 2D Matryoshka Embeddings
DSPy: The framework for programming—not prompting—language models
A large-scale information-rich web dataset, featuring millions of real clicked query-document labels
The official repo for "LLoCo: Learning Long Contexts Offline"
Unified Learned Sparse Retrieval Framework
A high-throughput and memory-efficient inference and serving engine for LLMs
This repo contains information about FeB4RAG collection
This includes the original implementation of SELF-RAG: Learning to Retrieve, Generate and Critique through self-reflection by Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi.
Exploring Instruction-tuning Large Language Models for Systematic Review Automation
Forward-Looking Active REtrieval-augmented generation (FLARE)
The original implementation of Min et al. "Nonparametric Masked Language Modeling" (paper: https://arxiv.org/abs/2212.01349)
Shopping Queries Dataset: A Large-Scale ESCI Benchmark for Improving Product Search
A huggingface transformers implementation of "Transformer Memory as a Differentiable Search Index"
LLM-based autonomous agent that conducts deep local and web research on any topic and generates a long report with citations.
HyDE: Precise Zero-Shot Dense Retrieval without Relevance Labels
RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable). We are at RWKV-7 "Goose". So it's combining the best of RN…
🦜🔗 Build context-aware reasoning applications
An Open-Source Framework for Prompt-Learning.