Stars
🍭 Sequence Tagging Models for Information Extraction such as NER
Paper notes, including personal comments, introductions, code, etc.
Uses Google's BERT for named entity recognition, with CoNLL-2003 as the dataset.
Repository for Publicly Available Clinical BERT Embeddings.
Code for the 2018 Tencent College Algorithm Contest; the online result ranked 7th.
Code for the 2019 Tencent College Algorithm Contest; the online result ranked 1st in the preliminary round.
An Open-source Neural Hierarchical Multi-label Text Classification Toolkit
Simple and Efficient Tensorflow implementations of NER models with tf.estimator and tf.data
"Dive into Deep Learning" (《动手学深度学习》): aimed at Chinese readers, runnable, and open for discussion. The Chinese and English editions are used for teaching at over 500 universities in more than 70 countries.
This toolkit was designed for the fast and efficient development of modern machine comprehension models, including both published models and original prototypes.
A high performance and generic framework for distributed DNN training
MASS: Masked Sequence to Sequence Pre-training for Language Generation
2019 Baidu Machine Reading Comprehension Competition!
A large annotated semantic parsing corpus for developing natural language interfaces.
XLNet: Generalized Autoregressive Pretraining for Language Understanding
Pre-Training with Whole Word Masking for Chinese BERT (the Chinese BERT-wwm model series).
Source code and dataset for ACL 2019 paper "Cognitive Graph for Multi-Hop Reading Comprehension at Scale"
E2E NLG Challenge Evaluation metrics (tuetschek/e2e-metrics, forked from tylin/coco-caption).
Code for CEDR: Contextualized Embeddings for Document Ranking, accepted at SIGIR 2019.
PyTorch implementation of the Accelerated SGD algorithm.
An optimizer that trains as fast as Adam and generalizes as well as SGD (a brief usage sketch follows this list).
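
A minimal sketch of how the last entry might be used, assuming it refers to the AdaBound optimizer (the tagline matches Luolc/AdaBound); the `adabound` package name and `AdaBound(..., lr, final_lr)` call reflect that repo's documented usage, but treat them as assumptions rather than a verified API.

```python
import torch
import torch.nn as nn
import adabound  # assumption: the pip-installable `adabound` package

# Toy model and data just to show the optimizer in a training step.
model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()

# AdaBound behaves like Adam early in training and smoothly
# transitions toward SGD with learning rate `final_lr`.
optimizer = adabound.AdaBound(model.parameters(), lr=1e-3, final_lr=0.1)

x = torch.randn(32, 10)
y = torch.randint(0, 2, (32,))

optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```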