This is a work in progress; expect things to break while using this stage.

This stage reviews tokens using the Elasticsearch suggestions functionality and creates a new token with a "suggestion" for each word it does not recognize. The process takes all the available tokens for the stage (usually already tokenized by the "WhitespaceTokenizerStage"), using the highest-confidence route. Flags such as "STOP_WORD" or "ALL_UPPER_CASE" can be used as filters by including them in the "Skip Flags" list.
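
As a rough illustration of the Elasticsearch functionality this stage relies on, a term-suggester request for an unrecognized token can be assembled as sketched below. The index name, field name ("term"), and connection details are illustrative assumptions, not the stage's actual internals:

```python
import json

# Hedged sketch: building an Elasticsearch term-suggester request of the
# kind this stage relies on. The "term" field name is an assumption.
def build_suggest_request(token, index="spellcheck_dictionary",
                          schema="http", host="localhost", port=9200):
    url = f"{schema}://{host}:{port}/{index}/_search"
    body = {
        "suggest": {
            "spellcheck": {
                "text": token,             # the unrecognized token, e.g. "makaroni"
                "term": {"field": "term"}  # dictionary field holding known words
            }
        }
    }
    return url, json.dumps(body)

url, body = build_suggest_request("makaroni")
print(url)  # http://localhost:9200/spellcheck_dictionary/_search
```

The response's suggest section would then contain candidate corrections (e.g. "macaroni") from which the suggestion token is created.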

Operates On:  Lexical Items with TOKEN and possibly other flags as specified below.

This stage is a Recognizer for Saga Solution, and can also be used as part of a manual pipeline or a base pipeline.


This recognizer requires a dictionary to work, so one must be loaded from either a dataset or a file before using the stage.  Validate your Elasticsearch version to ensure this stage is compatible.

Generic Configuration Parameters

  • boundaryFlags ( type=string array | optional ) - List of vertex flags that indicate the beginning and end of a text block.
    Tokens to process must be inside two vertices marked with this flag (e.g. ["TEXT_BLOCK_SPLIT"])
  • skipFlags ( type=string array | optional ) - Flags to be skipped by this stage.
    Tokens marked with any of these flags will be ignored by this stage, and no processing will be performed.
  • requiredFlags ( type=string array | optional ) - Lex item flags required by every token to be processed.
    Tokens need to have all of the specified flags in order to be processed.
  • atLeastOneFlag ( type=string array | optional ) - Lex item flags, at least one of which is needed for a token to be processed.
    Tokens will need at least one of the flags specified in this array.
  • confidenceAdjustment ( type=double | default=1 | required ) - Adjustment factor, from 0.0 to 2.0, applied to the confidence value (applies for every pattern match).
    • 0.0 to < 1.0  decreases confidence value
    • 1.0 confidence value remains the same
    • > 1.0 to  2.0 increases confidence value
  • debug ( type=boolean | default=false | optional ) - Enable all debug log functionality for the stage, if any.
  • enable ( type=boolean | default=true | optional ) - Indicates if the current stage should be considered by the Pipeline Manager
    • Only applies for automatic pipeline building
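
To make the confidenceAdjustment ranges concrete, a multiplicative factor behaves as sketched below (a simplified illustration, not the stage's actual code):

```python
# Simplified illustration of a multiplicative confidence adjustment.
def adjust_confidence(confidence, factor):
    # factor must fall within the documented 0.0 to 2.0 range
    if not 0.0 <= factor <= 2.0:
        raise ValueError("confidenceAdjustment must be between 0.0 and 2.0")
    return confidence * factor

print(adjust_confidence(0.8, 0.5))  # 0.4 (factor < 1 decreases confidence)
print(adjust_confidence(0.8, 1.0))  # 0.8 (factor = 1 leaves it unchanged)
print(adjust_confidence(0.8, 1.5))  # factor > 1 increases confidence
```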

Configuration Parameters

  • index ( type=string | default=spellcheck_dictionary | optional ) - Index used by the stage to store dictionary data.
    • This is an Elasticsearch index.
  • schema ( type=string | default=http | optional ) - URI scheme used for the Elasticsearch connection (e.g. http or https)
  • host ( type=string | default=localhost | optional ) - Hostname
  • port ( type=integer | default=9200 | optional ) - Port


Example Configuration
{
	"index": "saga_spellchecker_dictionary",
	"schema": "http",
	"host": "localhost",
	"port": "9200"
}
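
Under this configuration, the stage would reach its dictionary index at a URL assembled roughly as follows (a sketch of how the parameters combine, not the stage's internals):

```python
# Sketch: how the configuration parameters combine into the index URL.
config = {
    "index": "saga_spellchecker_dictionary",
    "schema": "http",
    "host": "localhost",
    "port": 9200,
}
index_url = "{schema}://{host}:{port}/{index}".format(**config)
print(index_url)  # http://localhost:9200/saga_spellchecker_dictionary
```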


Example Output


V--------------[abraham lincoln likes makaroni and cheese]--------------------V
^--[abraham]--V--[lincoln]--V--[likes]--V--[makaroni]--V--[and]--V--[cheese]--^
                                        ^--[macaroni]--^


Output Flags

Lex-Item Flags:

  • MISSPELL - Identifies a token as a potential misspelling.
  • SUGGESTION - Added to the newly created token to identify it as a generated token coming from the dictionary.

Vertex Flags:

No vertices are created in this stage.

Resource Data

The data used by the dictionary may come from two sources:

  • Dataset
  • Plain text file

Both options are accessed through Saga Server or the endpoints of the service.  To create a dictionary from a dataset, select the dataset you are interested in and the pipeline that will process it; remember that the pipeline must end with a Spellchecker Stage.  To create a dictionary from a file, you only need a plain text file with one term per line.

Resource Format

Dictionary Plain Text File
abraham
lincoln
likes
macaroni
and
cheese
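
For illustration, a plain text file like the one above could be turned into an Elasticsearch _bulk payload as sketched below. The "term" field name and index name are assumptions; Saga Server normally performs this load through its own endpoints:

```python
import json

# Hedged sketch: building an Elasticsearch _bulk payload from a plain text
# dictionary (one term per line). Field and index names are assumptions.
terms = ["abraham", "lincoln", "likes", "macaroni", "and", "cheese"]

lines = []
for term in terms:
    lines.append(json.dumps({"index": {"_index": "spellcheck_dictionary"}}))
    lines.append(json.dumps({"term": term}))
payload = "\n".join(lines) + "\n"  # the _bulk API requires a trailing newline

# The payload would be POSTed to http://localhost:9200/_bulk with the
# Content-Type: application/x-ndjson header.
print(payload.count("\n"))  # 12 (two NDJSON lines per term)
```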