- Seattle, Washington
- http://nik0spapp.github.io
Stars
🦜🔗 Build context-aware reasoning applications
Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable.
This repository contains demos I made with the Transformers library by HuggingFace.
An annotated implementation of the Transformer paper.
Rich-cli is a command-line toolbox for fancy output in the terminal.
🐦 Quickly annotate data from the comfort of your Jupyter notebook
A collection of modern/faster/saner alternatives to common unix commands.
Lightweight Experiment & Resource Monitoring 📺
Model interpretability and understanding for PyTorch
⚡ A Fast, Extensible Progress Bar for Python and CLI
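As a quick illustration of the progress-bar library above: wrapping any iterable in `tqdm(...)` is all it takes to get a live terminal progress bar. This is a minimal sketch; the loop body (squaring numbers) is just a placeholder, and the `desc` label is arbitrary.

```python
from tqdm import tqdm

squares = []
# tqdm wraps the iterable and renders a progress bar as the loop advances.
for i in tqdm(range(5), desc="processing"):
    squares.append(i * i)

print(squares)  # → [0, 1, 4, 9, 16]
```

The same pattern works for file reads, batch inference, or any loop whose length is known; for unknown lengths, `tqdm` falls back to a running counter.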
A PyTorch implementation of the Transformer model in "Attention is All You Need".
A simple tool to update bib entries with their official information (e.g., DBLP or the ACL anthology).
Reference BLEU implementation that auto-downloads test sets and reports a version string to facilitate cross-lab comparisons
Implementation of the paper 'Plug and Play Autoencoders for Conditional Text Generation'
Document-Level Neural Machine Translation with Hierarchical Attention Networks
Pretrain, finetune ANY AI model of ANY size on multiple GPUs, TPUs with zero code changes.
A comprehensive reference for all topics related to Natural Language Processing
A library to generate LaTeX expressions from Python code.
Fast, general, and tested differentiable structured prediction in PyTorch
PyTorch library for fast transformer implementations.
Papers & presentation materials from Hugging Face's internal science day
A Greek edition of the BERT pre-trained language model.
BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.)
A library for efficient similarity search and clustering of dense vectors.