- Shenzhen, China
Stars
Formatron empowers everyone to control the format of language models' output with minimal overhead.
Repository of the ACL 2025 Main paper "Quantification of Large Language Model Distillation"
[ACL'25 Main] Can MLLMs Understand the Deep Implication Behind Chinese Images?
Official implementation of "Training on the Benchmark Is Not All You Need".
Chinese safety prompts for evaluating and improving the safety of LLMs.
Mental Health LLM (LLM x Mental Health): pre- & post-training, dataset, evaluation, deployment, and RAG, with InternLM / Qwen / Baichuan / DeepSeek / Mixtral / LLaMA / GLM series models
Official GitHub repo for E-Eval, a Chinese K-12 education evaluation benchmark for LLMs.
Measuring Massive Multitask Language Understanding | ICLR 2021
An open-source Chinese-English educational dialogue model from ICALK, East China Normal University (general-purpose base models, GPU deployment, data cleaning). Tribute to: LLaMA, MOSS, BELLE, Ziya, vLLM
Source code and data in paper "MDFEND: Multi-domain Fake News Detection (CIKM'21)"
A series of large language models trained from scratch by developers @01-ai
Official GitHub repo for C-Eval, a Chinese evaluation suite for foundation models [NeurIPS 2023]
Benchmarking Legal Knowledge of Large Language Models
UeCore, an open-source World of Warcraft game server in C++. http://uecore.org