Lemmatizer
Component for assigning base forms to tokens using rules based on part-of-speech tags, or lookup tables. Different Language subclasses can implement their own lemmatizer components via language-specific factories. The default data used is provided by the spacy-lookups-data extension package. For a trainable lemmatizer, see EditTreeLemmatizer.
Assigned Attributes
Lemmas generated by rules or looked up in tables are saved to Token.lemma.
Location | Value |
---|---|
Token.lemma | The lemma (hash). int |
Token.lemma_ | The lemma. str |
Config and implementation
The default config is defined by the pipeline component factory and describes how the component should be configured. You can override its settings via the config argument on nlp.add_pipe or in your config.cfg for training. For examples of the lookups data format used by the lookup and rule-based lemmatizers, see spacy-lookups-data.
Setting | Description |
---|---|
mode | The lemmatizer mode, e.g. "lookup" or "rule". Defaults to "lookup" if no language-specific lemmatizer is available (see the following table). str |
overwrite | Whether to overwrite existing lemmas. Defaults to False. bool |
model | Not yet implemented: the model to use. Model |
keyword-only | |
scorer | The scoring method. Defaults to Scorer.score_token_attr for the attribute "lemma". Optional[Callable] |
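For example, a minimal sketch of overriding these settings when adding the component to a blank pipeline (assumes the spacy-lookups-data package is installed so the default "lookup" tables can be loaded):

```python
import spacy

nlp = spacy.blank("en")
# Override the default settings via the config argument
config = {"mode": "lookup", "overwrite": False}
lemmatizer = nlp.add_pipe("lemmatizer", config=config)
```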
Many languages specify a default lemmatizer mode other than lookup if a better lemmatizer is available. The lemmatizer modes rule and pos_lookup require token.pos from a previous pipeline component (see the example pipeline configurations in the pretrained pipeline design details) or rely on third-party libraries (pymorphy3).
Language | Default Mode |
---|---|
bn | rule |
ca | pos_lookup |
el | rule |
en | rule |
es | rule |
fa | rule |
fr | rule |
it | pos_lookup |
mk | rule |
nb | rule |
nl | rule |
pl | pos_lookup |
ru | pymorphy3 |
sv | rule |
uk | pymorphy3 |
Source: explosion/spaCy/master/spacy/pipeline/lemmatizer.py
Lemmatizer.__init__ method
Create a new pipeline instance. In your application, you would normally use a shortcut for this and instantiate the component using its string name and nlp.add_pipe.
Name | Description |
---|---|
vocab | The shared vocabulary. Vocab |
model | Not yet implemented: The model to use. Model |
name | String name of the component instance. Used to add entries to the losses during training. str |
keyword-only | |
mode | The lemmatizer mode, e.g. "lookup" or "rule". Defaults to "lookup". str |
overwrite | Whether to overwrite existing lemmas. bool |
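A short sketch of both construction routes, assuming an nlp pipeline is already in scope (the two add_pipe calls are alternatives, not meant to run in the same pipeline):

```python
# Construction via add_pipe with default settings
lemmatizer = nlp.add_pipe("lemmatizer")

# Alternatively, construction via add_pipe with custom settings
config = {"mode": "rule", "overwrite": True}
lemmatizer = nlp.add_pipe("lemmatizer", config=config)
```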
Lemmatizer.__call__ method
Apply the pipe to one document. The document is modified in place and returned. This usually happens under the hood when the nlp object is called on a text and all pipeline components are applied to the Doc in order.
Name | Description |
---|---|
doc | The document to process. Doc |
RETURNS | The processed document. Doc |
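For illustration, a hedged sketch of applying the component directly (normally you never call it yourself):

```python
doc = nlp("This is a sentence.")
lemmatizer = nlp.get_pipe("lemmatizer")
# Modifies the Doc in place and returns the same object
processed = lemmatizer(doc)
```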
Lemmatizer.pipe method
Apply the pipe to a stream of documents. This usually happens under the hood when the nlp object is called on a text and all pipeline components are applied to the Doc in order.
Name | Description |
---|---|
stream | A stream of documents. Iterable[Doc] |
keyword-only | |
batch_size | The number of documents to buffer. Defaults to 128. int |
YIELDS | The processed documents in order. Doc |
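A minimal sketch, assuming docs is an iterable of Doc objects:

```python
lemmatizer = nlp.get_pipe("lemmatizer")
for doc in lemmatizer.pipe(docs, batch_size=50):
    # Each document comes back with lemmas assigned
    pass
```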
Lemmatizer.initialize method
Initialize the lemmatizer and load any data resources. This method is typically called by Language.initialize and lets you customize arguments it receives via the [initialize.components] block in the config. The loading only happens during initialization, typically before training. At runtime, all data is loaded from disk.
Name | Description |
---|---|
get_examples | Function that returns gold-standard annotations in the form of Example objects. Defaults to None. Optional[Callable[[], Iterable[Example]]] |
keyword-only | |
nlp | The current nlp object. Defaults to None. Optional[Language] |
lookups | The lookups object containing the tables such as "lemma_rules", "lemma_index", "lemma_exc" and "lemma_lookup". If None, default tables are loaded from spacy-lookups-data. Defaults to None. Optional[Lookups] |
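As a sketch, you can initialize the component with a custom Lookups object; the table contents here are made-up examples, and real tables normally come from spacy-lookups-data:

```python
from spacy.lookups import Lookups

# Hypothetical miniature lookup table for illustration only
lookups = Lookups()
lookups.add_table("lemma_lookup", {"went": "go", "better": "good"})

lemmatizer = nlp.add_pipe("lemmatizer")
lemmatizer.initialize(lookups=lookups)
```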
Lemmatizer.lookup_lemmatize method
Lemmatize a token using a lookup-based approach. If no lemma is found, the original string is returned.
Name | Description |
---|---|
token | The token to lemmatize. Token |
RETURNS | A list containing one or more lemmas. List[str] |
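A small sketch; the actual return values depend on the loaded lookup tables:

```python
lemmatizer = nlp.get_pipe("lemmatizer")
doc = nlp("better")
# e.g. ["good"] with English lookup data, or ["better"] if no entry exists
lemmas = lemmatizer.lookup_lemmatize(doc[0])
```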
Lemmatizer.rule_lemmatize method
Lemmatize a token using a rule-based approach. Typically relies on POS tags.
Name | Description |
---|---|
token | The token to lemmatize. Token |
RETURNS | A list containing one or more lemmas. List[str] |
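Analogously, a sketch for the rule-based path (assumes a tagger earlier in the pipeline has already set token.pos):

```python
doc = nlp("She was walking home")
# Rules are selected based on the token's POS tag, e.g. VERB for "walking"
lemmas = lemmatizer.rule_lemmatize(doc[2])
```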
Lemmatizer.is_base_form method
Check whether we’re dealing with an uninflected paradigm, so we can avoid lemmatization entirely.
Name | Description |
---|---|
token | The token to analyze. Token |
RETURNS | Whether the token’s attributes (e.g., part-of-speech tag, morphological features) describe a base form. bool |
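A brief sketch; the result depends on the token's POS tag and morphological features:

```python
token = nlp("walks")[0]
# Only apply the rules if the token isn't already a base form
if not lemmatizer.is_base_form(token):
    lemmas = lemmatizer.rule_lemmatize(token)
```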
Lemmatizer.get_lookups_config classmethod
Returns the lookups configuration settings for a given mode for use in Lemmatizer.load_lookups.
Name | Description |
---|---|
mode | The lemmatizer mode. str |
RETURNS | The required table names and the optional table names. Tuple[List[str], List[str]] |
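For example, a sketch of inspecting the tables a mode needs:

```python
from spacy.pipeline import Lemmatizer

# Returns (required_tables, optional_tables) for the given mode
required, optional = Lemmatizer.get_lookups_config("rule")
```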
Lemmatizer.to_disk method
Serialize the pipe to disk.
Name | Description |
---|---|
path | A path to a directory, which will be created if it doesn't exist. Paths may be either strings or Path-like objects. Union[str, Path] |
keyword-only | |
exclude | String names of serialization fields to exclude. Iterable[str] |
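A minimal sketch with a placeholder path:

```python
lemmatizer = nlp.get_pipe("lemmatizer")
lemmatizer.to_disk("/path/to/lemmatizer")
```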
Lemmatizer.from_disk method
Load the pipe from disk. Modifies the object in place and returns it.
Name | Description |
---|---|
path | A path to a directory. Paths may be either strings or Path-like objects. Union[str, Path] |
keyword-only | |
exclude | String names of serialization fields to exclude. Iterable[str] |
RETURNS | The modified Lemmatizer object. Lemmatizer |
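And the matching load, again with a placeholder path:

```python
lemmatizer = nlp.get_pipe("lemmatizer")
lemmatizer.from_disk("/path/to/lemmatizer")
```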
Lemmatizer.to_bytes method
Serialize the pipe to a bytestring.
Name | Description |
---|---|
keyword-only | |
exclude | String names of serialization fields to exclude. Iterable[str] |
RETURNS | The serialized form of the Lemmatizer object. bytes |
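A one-line sketch:

```python
lemmatizer = nlp.get_pipe("lemmatizer")
lemmatizer_bytes = lemmatizer.to_bytes()
```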
Lemmatizer.from_bytes method
Load the pipe from a bytestring. Modifies the object in place and returns it.
Name | Description |
---|---|
bytes_data | The data to load from. bytes |
keyword-only | |
exclude | String names of serialization fields to exclude. Iterable[str] |
RETURNS | The Lemmatizer object. Lemmatizer |
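A round-trip sketch pairing to_bytes with from_bytes, assuming a fresh pipeline without an existing lemmatizer:

```python
lemmatizer_bytes = lemmatizer.to_bytes()
lemmatizer = nlp.add_pipe("lemmatizer")
lemmatizer.from_bytes(lemmatizer_bytes)
```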
Attributes
Name | Description |
---|---|
vocab | The shared Vocab. Vocab |
lookups | The lookups object. Lookups |
mode | The lemmatizer mode. str |
Serialization fields
During serialization, spaCy will export several data fields used to restore different aspects of the object. If needed, you can exclude them from serialization by passing in the string names via the exclude argument.
Name | Description |
---|---|
vocab | The shared Vocab. |
lookups | The lookups. You usually don’t want to exclude this. |
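For example, a sketch of excluding the vocab from the serialized output:

```python
data = lemmatizer.to_bytes(exclude=["vocab"])
```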