
Scorer class

Compute evaluation scores

The Scorer computes evaluation scores. It’s typically created by Language.evaluate. In addition, the Scorer provides a number of evaluation methods for evaluating Token and Doc attributes.

Scorer.__init__ method

Create a new Scorer.

| Name | Description |
| --- | --- |
| nlp | The pipeline to use for scoring, where each pipeline component may provide a scoring method. If none is provided, then a default pipeline is constructed using the default_lang and default_pipeline settings. Optional[Language] |
| default_lang | The language to use for a default pipeline if nlp is not provided. Defaults to xx. str |
| default_pipeline | The pipeline components to use for a default pipeline if nlp is not provided. Defaults to ("senter", "tagger", "morphologizer", "parser", "ner", "textcat"). Iterable[str] |
| keyword-only | |
| **kwargs | Any additional settings to pass on to the individual scoring methods. Any |
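A short construction sketch; the en_core_web_sm model name is illustrative and assumes that trained pipeline is installed:

```python
import spacy
from spacy.scorer import Scorer

# Default scoring pipeline built from default_lang/default_pipeline
scorer = Scorer()

# Scoring pipeline based on an existing Language object
nlp = spacy.load("en_core_web_sm")  # illustrative model name
scorer = Scorer(nlp)
```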

Scorer.score method

Calculate the scores for a list of Example objects using the scoring methods provided by the components in the pipeline.

The returned Dict contains the scores provided by the individual pipeline components. For the scoring methods provided by the Scorer and used by the core pipeline components, the individual score names start with the Token or Doc attribute being scored:

  • token_acc, token_p, token_r, token_f
  • sents_p, sents_r, sents_f
  • tag_acc
  • pos_acc
  • morph_acc, morph_micro_p, morph_micro_r, morph_micro_f, morph_per_feat
  • lemma_acc
  • dep_uas, dep_las, dep_las_per_type
  • ents_p, ents_r, ents_f, ents_per_type
  • spans_sc_p, spans_sc_r, spans_sc_f
  • cats_score (depends on config, description provided in cats_score_desc), cats_micro_p, cats_micro_r, cats_micro_f, cats_macro_p, cats_macro_r, cats_macro_f, cats_macro_auc, cats_f_per_type, cats_auc_per_type
| Name | Description |
| --- | --- |
| examples | The Example objects holding both the predictions and the correct gold-standard annotations. Iterable[Example] |
| keyword-only | |
| per_component v3.6 | Whether to return the scores keyed by component name. Defaults to False. bool |
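A short usage sketch; the model name, example text and gold annotations are illustrative assumptions:

```python
import spacy
from spacy.scorer import Scorer
from spacy.training import Example

nlp = spacy.load("en_core_web_sm")  # illustrative model name
# Each Example pairs a predicted doc with gold-standard annotations
examples = [
    Example.from_dict(nlp("Berlin is a city."), {"entities": [(0, 6, "GPE")]}),
]
scorer = Scorer(nlp)
scores = scorer.score(examples)
print(scores["ents_f"])
```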

Scorer.score_tokenization staticmethod v3.0

Scores the tokenization:

  • token_acc: number of correct tokens / number of predicted tokens
  • token_p, token_r, token_f: precision, recall and F-score for token character spans

Docs with has_unknown_spaces are skipped during scoring.

| Name | Description |
| --- | --- |
| examples | The Example objects holding both the predictions and the correct gold-standard annotations. Iterable[Example] |
| RETURNS | A dictionary containing the scores token_acc, token_p, token_r, token_f. Dict[str, float] |
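A short usage sketch, assuming examples is an iterable of Example objects as described above:

```python
from spacy.scorer import Scorer

scores = Scorer.score_tokenization(examples)
print(scores["token_acc"], scores["token_f"])
```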

Scorer.score_token_attr staticmethod v3.0

Scores a single token attribute. Tokens with missing values in the reference doc are skipped during scoring.

| Name | Description |
| --- | --- |
| examples | The Example objects holding both the predictions and the correct gold-standard annotations. Iterable[Example] |
| attr | The attribute to score. str |
| keyword-only | |
| getter | Defaults to getattr. If provided, getter(token, attr) should return the value of the attribute for an individual Token. Callable[[Token, str], Any] |
| missing_values | Attribute values to treat as missing annotation in the reference annotation. Defaults to {0, None, ""}. Set[Any] |
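A short usage sketch for scoring the coarse-grained part-of-speech tag, assuming examples holds aligned predictions and gold annotations:

```python
from spacy.scorer import Scorer

# Score keys are prefixed with the attribute name, e.g. "pos" -> "pos_acc"
scores = Scorer.score_token_attr(examples, "pos")
print(scores["pos_acc"])
```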

Scorer.score_token_attr_per_feat staticmethod v3.0

Scores each feature of a single token attribute in the Universal Dependencies FEATS format, such as morph. Tokens with missing values in the reference doc are skipped during scoring.

| Name | Description |
| --- | --- |
| examples | The Example objects holding both the predictions and the correct gold-standard annotations. Iterable[Example] |
| attr | The attribute to score. str |
| keyword-only | |
| getter | Defaults to getattr. If provided, getter(token, attr) should return the value of the attribute for an individual Token. Callable[[Token, str], Any] |
| missing_values | Attribute values to treat as missing annotation in the reference annotation. Defaults to {0, None, ""}. Set[Any] |
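A short usage sketch for scoring per-feature morphology, assuming examples holds aligned predictions and gold annotations:

```python
from spacy.scorer import Scorer

scores = Scorer.score_token_attr_per_feat(examples, "morph")
print(scores["morph_per_feat"])
```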

Scorer.score_spans staticmethod v3.0

Returns PRF scores for labeled or unlabeled spans.

| Name | Description |
| --- | --- |
| examples | The Example objects holding both the predictions and the correct gold-standard annotations. Iterable[Example] |
| attr | The attribute to score. str |
| keyword-only | |
| getter | Defaults to getattr. If provided, getter(doc, attr) should return the Span objects for an individual Doc. Callable[[Doc, str], Iterable[Span]] |
| has_annotation | Defaults to None. If provided, has_annotation(doc) should return whether a Doc has annotation for this attr. Docs without annotation are skipped for scoring purposes. Optional[Callable[[Doc], bool]] |
| labeled | Defaults to True. If set to False, two spans will be considered equal if their start and end match, irrespective of their label. bool |
| allow_overlap | Defaults to False. Whether or not to allow overlapping spans. If set to False, the alignment will automatically resolve conflicts. bool |
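A short usage sketch for scoring named entity spans, assuming examples holds aligned predictions and gold annotations:

```python
from spacy.scorer import Scorer

# Score the doc.ents spans; result keys are prefixed with the attribute name
scores = Scorer.score_spans(examples, "ents")
print(scores["ents_f"], scores["ents_per_type"])
```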

Scorer.score_deps staticmethod v3.0

Calculate the UAS, LAS, and LAS per type scores for dependency parses. Tokens with missing values for the attr (typically dep) are skipped during scoring.

| Name | Description |
| --- | --- |
| examples | The Example objects holding both the predictions and the correct gold-standard annotations. Iterable[Example] |
| attr | The attribute to score. str |
| keyword-only | |
| getter | Defaults to getattr. If provided, getter(token, attr) should return the value of the attribute for an individual Token. Callable[[Token, str], Any] |
| head_attr | The attribute containing the head token. str |
| head_getter | Defaults to getattr. If provided, head_getter(token, attr) should return the head for an individual Token. Callable[[Token, str], Token] |
| ignore_labels | Labels to ignore while scoring (e.g. "punct"). Iterable[str] |
| missing_values | Attribute values to treat as missing annotation in the reference annotation. Defaults to {0, None, ""}. Set[Any] |
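A short usage sketch with a custom getter that lowercases dependency labels; it assumes examples holds aligned predictions and gold annotations:

```python
from spacy.scorer import Scorer

def dep_getter(token, attr):
    # token.dep is an integer hash; convert it to its lowercased string label
    dep = getattr(token, attr)
    return token.vocab.strings.as_string(dep).lower()

scores = Scorer.score_deps(
    examples,
    "dep",
    getter=dep_getter,
    ignore_labels=("p", "punct"),
)
print(scores["dep_uas"], scores["dep_las"])
```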

Scorer.score_cats staticmethod v3.0

Calculate PRF and ROC AUC scores for a doc-level attribute that is a dict containing scores for each label like Doc.cats. The returned dictionary contains the following scores:

  • {attr}_micro_p, {attr}_micro_r and {attr}_micro_f: each instance across each label is weighted equally
  • {attr}_macro_p, {attr}_macro_r and {attr}_macro_f: the average values across evaluations per label
  • {attr}_f_per_type and {attr}_auc_per_type: each contains a dictionary of scores, keyed by label
  • A final {attr}_score and corresponding {attr}_score_desc (text description)

The reported {attr}_score depends on the classification properties:

  • binary exclusive with positive label: {attr}_score is set to the F-score of the positive label
  • 3+ exclusive classes, macro-averaged F-score: {attr}_score = {attr}_macro_f
  • multilabel, macro-averaged AUC: {attr}_score = {attr}_macro_auc
| Name | Description |
| --- | --- |
| examples | The Example objects holding both the predictions and the correct gold-standard annotations. Iterable[Example] |
| attr | The attribute to score. str |
| keyword-only | |
| getter | Defaults to getattr. If provided, getter(doc, attr) should return the cats for an individual Doc. Callable[[Doc, str], Dict[str, float]] |
| labels | The set of possible labels. Defaults to []. Iterable[str] |
| multi_label | Whether the attribute allows multiple labels. Defaults to True. When set to False (exclusive labels), missing gold labels are interpreted as 0.0 and the threshold is set to 0.0. bool |
| positive_label | The positive label for a binary task with exclusive classes. Defaults to None. Optional[str] |
| threshold | Cutoff to consider a prediction “positive”. Defaults to 0.5 for multi-label, and 0.0 (i.e. whatever’s highest scoring) otherwise. float |
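A short usage sketch for a multilabel setup; the label names are illustrative placeholders and examples is assumed to hold aligned predictions and gold annotations:

```python
from spacy.scorer import Scorer

labels = ["LABEL_A", "LABEL_B", "LABEL_C"]  # placeholder label names
scores = Scorer.score_cats(examples, "cats", labels=labels, multi_label=True)
print(scores["cats_macro_auc"], scores["cats_f_per_type"])
```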

get_ner_prf v3.0

Compute micro-PRF and per-entity PRF scores.

| Name | Description |
| --- | --- |
| examples | The Example objects holding both the predictions and the correct gold-standard annotations. Iterable[Example] |
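A short usage sketch, assuming examples holds aligned predictions and gold annotations:

```python
from spacy.scorer import get_ner_prf

scores = get_ner_prf(examples)
print(scores["ents_p"], scores["ents_r"], scores["ents_f"])
```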

score_coref_clusters experimental

Returns LEA (Moosavi and Strube, 2016) PRF scores for coreference clusters.

| Name | Description |
| --- | --- |
| examples | The Example objects holding both the predictions and the correct gold-standard annotations. Iterable[Example] |
| keyword-only | |
| span_cluster_prefix | The prefix used for spans representing coreference clusters. str |
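A minimal call sketch; since this helper ships with the experimental coreference code, the import path and the coref_clusters prefix shown here are assumptions that may differ between versions of spacy-experimental:

```python
# Assumption: the exact import path within the spacy-experimental package may differ
from spacy_experimental.coref.coref_scorer import score_coref_clusters

scores = score_coref_clusters(examples, span_cluster_prefix="coref_clusters")
print(scores)  # LEA precision/recall/F-score
```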

score_span_predictions experimental

Return accuracy for reconstructions of spans from single tokens. Only exactly correct predictions are counted as correct; there is no partial credit for near misses. Used by the SpanResolver.

| Name | Description |
| --- | --- |
| examples | The Example objects holding both the predictions and the correct gold-standard annotations. Iterable[Example] |
| keyword-only | |
| output_prefix | The prefix used for spans representing the final predicted spans. str |
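A minimal call sketch; as with the coreference scorer above, the import path and the coref_clusters prefix are assumptions about the experimental package layout:

```python
# Assumption: the exact import path within the spacy-experimental package may differ
from spacy_experimental.coref.span_resolver_component import score_span_predictions

scores = score_span_predictions(examples, output_prefix="coref_clusters")
print(scores)  # accuracy of exactly reconstructed spans
```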