GloVe: Global Vectors for Word Representation

Nearest neighbors of **frog**:

  • frogs
  • toad
  • litoria
  • leptodactylidae
  • rana
  • lizard
  • eleutherodactylus

![litoria](http://nlp.stanford.edu/projects/glove/images/litoria.jpg) ![leptodactylidae](http://nlp.stanford.edu/projects/glove/images/leptodactylidae.jpg) ![rana](http://nlp.stanford.edu/projects/glove/images/rana.jpg) ![eleutherodactylus](http://nlp.stanford.edu/projects/glove/images/eleutherodactylus.jpg)

    We provide an implementation of the GloVe model for learning word representations. Please see the project page for more information.

    Vector differences capture analogies such as man -> woman, city -> zip, and comparative -> superlative (visualizations on the project page).

    Download pre-trained word vectors

    Pre-trained word vectors are made available under the Public Domain Dedication and License.
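The pre-trained files are plain text: each line is a word followed by its space-separated vector components. A minimal sketch of parsing that format and ranking words by cosine similarity (the toy vectors below are illustrative, not from an actual download):

```python
import math

def load_glove(lines):
    """Parse GloVe text format: each line is a word followed by its floats."""
    vectors = {}
    for line in lines:
        parts = line.rstrip().split(" ")
        vectors[parts[0]] = [float(x) for x in parts[1:]]
    return vectors

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nearest(word, vectors, k=3):
    """Rank all other words by cosine similarity to `word`."""
    target = vectors[word]
    ranked = sorted(
        ((cosine(target, vec), w) for w, vec in vectors.items() if w != word),
        reverse=True,
    )
    return [w for _, w in ranked[:k]]

# Toy 2-d vectors for illustration only; real files have 50-300 dimensions.
toy = load_glove([
    "frog 0.9 0.1",
    "toad 0.8 0.2",
    "lizard 0.6 0.4",
    "car -0.9 0.3",
])
print(nearest("frog", toy))  # most similar first
```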

    Train word vectors on a new corpus

    $ git clone http://github.com/stanfordnlp/glove
    $ cd glove && make
    $ ./demo.sh
    

    The demo.sh script downloads a small corpus, consisting of the first 100M characters of Wikipedia. It collects unigram counts, constructs and shuffles cooccurrence data, and trains a simple version of the GloVe model. It also runs a word analogy evaluation script in Python. Continue reading for further usage details and instructions for how to run on your own corpus.

    Package Contents

    This package includes four main tools:

    1) vocab_count

    Constructs unigram counts from a corpus, and optionally thresholds the resulting vocabulary based on total vocabulary size or minimum frequency count. The corpus file should already consist of whitespace-separated tokens. Use something like the Stanford Tokenizer (http://nlp.stanford.edu/software/tokenizer.shtml) first on raw text.
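What this step computes can be sketched in a few lines of Python (a simplified illustration, not the actual C implementation; the two thresholds here stand in for the tool's size and minimum-count options):

```python
from collections import Counter

def vocab_count(corpus_text, min_count=1, max_vocab=None):
    """Count whitespace-separated tokens, then threshold the vocabulary."""
    counts = Counter(corpus_text.split())
    # Sort by descending frequency (ties broken alphabetically for determinism).
    ranked = sorted(counts.items(), key=lambda kv: (-kv[1], kv[0]))
    if max_vocab is not None:
        ranked = ranked[:max_vocab]      # cap total vocabulary size
    return [(w, c) for w, c in ranked if c >= min_count]

corpus = "the cat sat on the mat the cat"
print(vocab_count(corpus, min_count=2))  # [('the', 3), ('cat', 2)]
```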

    2) cooccur

    Constructs word-word cooccurrence statistics from a corpus. The user should supply a vocabulary file, as produced by 'vocab_count', and may specify a variety of parameters, as described by running './build/cooccur'.
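Conceptually, this step slides a window over the corpus and accumulates weighted pair counts; as in the GloVe paper, a context word at distance d from the center word contributes 1/d. A simplified in-memory sketch (the real tool streams counts to a binary file and respects a memory limit):

```python
from collections import defaultdict

def cooccurrence(tokens, vocab, window=5):
    """Accumulate symmetric word-word counts, weighting a pair at distance d by 1/d."""
    index = {w: i for i, w in enumerate(vocab)}
    counts = defaultdict(float)
    for j, word in enumerate(tokens):
        if word not in index:
            continue
        for k in range(max(0, j - window), j):
            context = tokens[k]
            if context not in index:
                continue
            weight = 1.0 / (j - k)           # closer context words count more
            counts[(index[word], index[context])] += weight
            counts[(index[context], index[word])] += weight  # symmetric window
    return dict(counts)

pairs = cooccurrence("a b a".split(), vocab=["a", "b"], window=2)
```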

    3) shuffle

    Shuffles the binary file of cooccurrence statistics produced by 'cooccur'. For large files, the file is automatically split into chunks, each of which is shuffled and stored on disk before being merged and shuffled together. The user may specify a number of parameters, as described by running './build/shuffle'.
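The chunked approach can be sketched as: shuffle each chunk independently, then merge by repeatedly drawing a record from a randomly chosen chunk (an illustrative in-memory version; the real tool works on temporary files on disk):

```python
import random

def chunked_shuffle(records, chunk_size, rng):
    """Shuffle a long sequence without ever shuffling it all at once:
    shuffle fixed-size chunks, then interleave them by random draws."""
    chunks = [records[i:i + chunk_size] for i in range(0, len(records), chunk_size)]
    for chunk in chunks:
        rng.shuffle(chunk)                 # pass 1: local shuffle within each chunk
    merged = []
    while chunks:
        chunk = rng.choice(chunks)         # pass 2: draw from a random chunk
        merged.append(chunk.pop())
        if not chunk:
            chunks.remove(chunk)           # drop a chunk once it is exhausted
    return merged

rng = random.Random(0)
out = chunked_shuffle(list(range(10)), chunk_size=4, rng=rng)
```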

    4) glove

    Trains the GloVe model on the specified cooccurrence data, which typically will be the output of the 'shuffle' tool. The user should supply a vocabulary file, as given by 'vocab_count', and may specify a number of other parameters, which are described by running './build/glove'.
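The model minimizes a weighted least-squares objective: for each pair (i, j) with count X_ij, the cost is f(X_ij)(w_i·w̃_j + b_i + b̃_j − log X_ij)², where f caps the influence of very frequent pairs. A minimal single-thread SGD sketch of that objective (the actual tool uses AdaGrad and multiple threads; dimensions and rates here are illustrative):

```python
import math
import random

def weight(x, x_max=100.0, alpha=0.75):
    """GloVe weighting function f(x): caps very frequent co-occurrences."""
    return (x / x_max) ** alpha if x < x_max else 1.0

def train_glove(cooccur, vocab_size, dim=5, lr=0.05, epochs=200, seed=0):
    """Plain SGD on the GloVe objective; cooccur maps (i, j) -> X_ij.
    Returns main vectors, context vectors, and the last epoch's total cost."""
    rng = random.Random(seed)
    W  = [[rng.uniform(-0.5, 0.5) / dim for _ in range(dim)] for _ in range(vocab_size)]
    Wc = [[rng.uniform(-0.5, 0.5) / dim for _ in range(dim)] for _ in range(vocab_size)]
    b, bc = [0.0] * vocab_size, [0.0] * vocab_size
    total = 0.0
    for _ in range(epochs):
        total = 0.0
        for (i, j), x in cooccur.items():
            dot = sum(W[i][d] * Wc[j][d] for d in range(dim))
            diff = dot + b[i] + bc[j] - math.log(x)
            f = weight(x)
            total += 0.5 * f * diff * diff
            g = f * diff                     # shared gradient scale
            for d in range(dim):
                wi, wj = W[i][d], Wc[j][d]
                W[i][d]  -= lr * g * wj
                Wc[j][d] -= lr * g * wi
            b[i]  -= lr * g
            bc[j] -= lr * g
    return W, Wc, total

_, _, final_cost = train_glove({(0, 1): 10.0, (1, 0): 10.0, (0, 0): 2.0}, vocab_size=2)
```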

    License

    All work contained in this package is licensed under the Apache License, Version 2.0. See the included LICENSE file.
