- Beijing, China
Stars
Automate the process of making money online.
The source code of the NeurIPS 2020 paper "CogLTX: Applying BERT to Long Texts"
DeepIE: Deep Learning for Information Extraction
ACL2020 Tutorial: Open-Domain Question Answering
XBM: Cross-Batch Memory for Embedding Learning
A library for efficient similarity search and clustering of dense vectors.
Code for the paper "Are Sixteen Heads Really Better than One?"
tensorboard for pytorch (and chainer, mxnet, numpy, ...)
EMNLP'19: Bridging the Gap between Relevance Matching and Semantic Matching for Short Text Similarity Modeling
an Open Course Platform for Stanford CS224n (2020 Winter)
💥 Fast State-of-the-Art Tokenizers optimized for Research and Production
Classic and cutting-edge industry papers and resources on recommendation and advertising / Must-read Papers on Recommendation System and CTR Prediction
ALBERT model Pretraining and Fine Tuning using TF2.0
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models across text, vision, audio, and multimodal domains, for both inference and training.
Take neural networks as APIs for human-like AI.
Easy-to-use Speech Toolkit including Self-Supervised Learning model, SOTA/Streaming ASR with punctuation, Streaming TTS with text frontend, Speaker Verification System, End-to-End Speech Translatio…
The commitizen command line utility. #BlackLivesMatter
The Schema-Guided Dialogue Dataset
PaddlePaddle large-model development kit, providing full-pipeline development toolchains for large language models, cross-modal large models, biocomputing large models, and other domains.
Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"
Facilitating the design, comparison and sharing of deep text matching models.
PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (the core framework of PaddlePaddle: high-performance single-machine and distributed training and cross-platform deployment for deep learning and machine learning)
A fast, flexible, extensible and easy-to-use NLP large-scale pretraining and multi-task learning framework.
XLNet: Generalized Autoregressive Pretraining for Language Understanding