spacy-span-analyzer

A simple tool to analyze the Spans in your dataset. It's tightly integrated with spaCy, so you can easily incorporate it into existing NLP pipelines. This is also a reproduction of Papay et al.'s work on Dissecting Span Identification Tasks with Performance Prediction (EMNLP 2020).

⏳ Install

Using pip:

pip install spacy-span-analyzer

Directly from source (I highly recommend running this within a virtual environment):

git clone git@github.com:ljvmiranda921/spacy-span-analyzer.git
cd spacy-span-analyzer
pip install .

⏯ Usage

You can use the Span Analyzer as a command-line tool:

spacy-span-analyzer ./path/to/dataset.spacy

Or as an imported library:

import spacy
from spacy.tokens import DocBin
from spacy_span_analyzer import SpanAnalyzer

nlp = spacy.blank("en")  # or any Language model

# Ensure that your dataset is a DocBin
doc_bin = DocBin().from_disk("./path/to/data.spacy")
docs = list(doc_bin.get_docs(nlp.vocab))

# Run SpanAnalyzer and get span characteristics
analyze = SpanAnalyzer(docs)
analyze.frequency                 # how often each span label occurs
analyze.length                    # span lengths
analyze.span_distinctiveness      # how distinct span tokens are from the overall text
analyze.boundary_distinctiveness  # how distinct span-boundary tokens are from the overall text

Inputs are expected to be a list of spaCy Docs or a DocBin (if you're using the command-line tool).
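As a rough illustration of what one of these characteristics captures, span frequency can be sketched by hand with plain spaCy, counting labels over `doc.spans` (a minimal sketch built on an in-memory example, not the analyzer's internals):

```python
from collections import Counter

import spacy
from spacy.tokens import Span

nlp = spacy.blank("en")

# Build a tiny in-memory example instead of loading a DocBin from disk
doc = nlp("Alice met Bob in Berlin")
doc.spans["sc"] = [
    Span(doc, 0, 1, label="PERSON"),
    Span(doc, 2, 3, label="PERSON"),
    Span(doc, 4, 5, label="LOC"),
]

# Count how often each span label occurs under the spans_key
frequency = Counter(span.label_ for span in doc.spans["sc"])
print(frequency)  # Counter({'PERSON': 2, 'LOC': 1})
```

The analyzer computes this (and the other characteristics) across all Docs at once; this snippet just shows the kind of information behind `analyze.frequency`.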

Working with Spans

In spaCy, you'd want to store your Spans in the doc.spans property, under a particular spans_key (sc by default). Unlike the doc.ents property, doc.spans allows overlapping spans. This is especially useful for downstream tasks like Span Categorization.

A common way to do this is to use char_span to define a slice from your Doc:

doc = nlp(text)
spans = []
for annotation in annotations:
    span = doc.char_span(
        annotation["start"],
        annotation["end"],
        label=annotation["label"],
    )
    # char_span returns None if the character offsets don't align
    # to token boundaries, so filter those out
    if span is not None:
        spans.append(span)

# Put all spans under a spans_key
doc.spans["sc"] = spans

You can also achieve the same thing by using set_ents or by creating a SpanGroup.
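For the SpanGroup route, the group can be constructed explicitly and assigned to the spans_key, which is equivalent to assigning a plain list of Spans (a minimal sketch using spaCy's SpanGroup API):

```python
import spacy
from spacy.tokens import Span, SpanGroup

nlp = spacy.blank("en")
doc = nlp("Alice met Bob")

spans = [
    Span(doc, 0, 1, label="PERSON"),
    Span(doc, 2, 3, label="PERSON"),
]

# Equivalent to doc.spans["sc"] = spans, but built as an explicit SpanGroup
doc.spans["sc"] = SpanGroup(doc, name="sc", spans=spans)

print(len(doc.spans["sc"]))  # 2
```

Assigning a list of Spans to doc.spans["sc"] creates a SpanGroup under the hood, so both forms produce the same structure.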
