Stars
Agent Laboratory is an end-to-end autonomous research workflow meant to assist you, the human researcher, in implementing your research ideas.
A framework for evaluating Machine Translation models.
Critical Error Detection in Machine Translation - Project for Computational Semantics for NLP, ETH Zurich, 2021.
HunyuanVideo: A Systematic Framework For Large Video Generation Model
MetaMetrics is a calibrated meta-metric designed to evaluate generation tasks across different modalities in alignment with human preferences.
explainable-machine-translation-metrics
BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.)
🎁[ChatGPT4MTevaluation] ErrorAnalysis Prompt for MT Evaluation in ChatGPT
xCOMET-lite: Bridging the Gap Between Efficiency and Quality in Learned MT Evaluation Metrics
Must-read Papers on Knowledge Editing for Large Language Models.
Awesome coreset/core-set/subset/sample selection works.
TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models
This repository collects awesome surveys, resources, and papers for Lifelong Learning with Large Language Models. (Updated Regularly)
Code for experiments of the paper "Self-generated Replay Memories for Continual Neural Machine Translation"
[ICLR 2022] Towards Continual Knowledge Learning of Language Models
Continual Learning of Large Language Models: A Comprehensive Survey
Must-read Papers on Large Language Model (LLM) Continual Learning
My implementation of the original transformer model (Vaswani et al.). I've additionally included the playground.py file for visualizing otherwise seemingly hard concepts. Currently includes IWSLT p…
A Large-Scale Gender Bias Dataset for Coreference Resolution and Machine Translation, Levy et al., Findings of EMNLP 2021
This dataset is intended as an evaluation benchmark for gender issues in Machine Translation. We consider the challenges in modeling and handling gendered language in the context of machine transla…