This stage reviews tokens using the Elasticsearch suggestions functionality and creates a new token with a "suggestion" for each word it does not recognize. The process takes all the available tokens (usually already tokenized by the "WhitespaceTokenizerStage") along the highest-confidence route; flags such as "STOP_WORD" or "ALL_UPPER_CASE" can be used as filters by including them in the "Skip Flags" list.
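Under the hood, the stage relies on Elasticsearch's suggest functionality. As a rough sketch of the kind of request involved (assuming the standard term suggester against a `text` field; the stage's actual query is not documented here, and the function name is illustrative):

```python
def build_suggest_body(text: str, field: str = "text") -> dict:
    """Build an Elasticsearch term-suggester request body for the given text.

    Illustrative sketch only: the field name and suggester name are
    assumptions, not the stage's documented internals.
    """
    return {
        "suggest": {
            "spellcheck": {
                "text": text,
                "term": {"field": field},
            }
        }
    }

# The resulting body would be sent to the index's _search endpoint;
# unrecognized words come back with candidate corrections.
body = build_suggest_body("camputer")
```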
Operates On: Lexical items with the TOKEN flag and possibly other flags as specified below.
Generic Configuration Parameters
- boundaryFlags ( type=string array | optional )
  - List of vertex flags that indicate the beginning and end of a text block. Tokens to process must be inside two vertices marked with this flag (e.g. ["TEXT_BLOCK_SPLIT"]).
- skipFlags ( type=string array | optional )
  - Flags to be skipped by this stage. Tokens marked with any of these flags will be ignored, and no processing will be performed on them.
- requiredFlags ( type=string array | optional )
  - Lex-item flags required by every token to be processed. Tokens need to have all of the specified flags in order to be processed.
- atLeastOneFlag ( type=string array | optional )
  - Lex-item flags checked against every token to be processed. Tokens will need at least one of the flags specified in this array.
- confidenceAdjustment ( type=double | default=1 | required )
  - Adjustment factor, between 0.0 and 2.0, applied to the confidence value (applied on every pattern match):
    - 0.0 to < 1.0 decreases the confidence value
    - 1.0 leaves the confidence value unchanged
    - > 1.0 to 2.0 increases the confidence value
- debug ( type=boolean | default=false | optional )
  - Enable all debug log functionality for the stage, if any.
- enable ( type=boolean | default=true | optional )
  - Indicates whether the current stage should be considered by the Pipeline Manager.
  - Only applies to automatic pipeline building.
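The effect of confidenceAdjustment can be pictured as a simple multiplier on a token's confidence value. This is a minimal illustration of the documented behavior, not the stage's actual implementation:

```python
def adjust_confidence(confidence: float, adjustment: float) -> float:
    """Apply a confidenceAdjustment factor (0.0 to 2.0) to a confidence value.

    Illustrative sketch: factors below 1.0 decrease confidence, 1.0 leaves
    it unchanged, and factors above 1.0 increase it.
    """
    if not 0.0 <= adjustment <= 2.0:
        raise ValueError("adjustment must be between 0.0 and 2.0")
    # Clamp so the adjusted confidence never exceeds 1.0.
    return min(1.0, confidence * adjustment)

print(adjust_confidence(0.8, 1.0))  # unchanged
print(adjust_confidence(0.8, 0.5))  # decreased
print(adjust_confidence(0.8, 2.0))  # increased, clamped at 1.0
```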
Configuration Parameters
- index ( type=string | default=spellcheck_dictionary | optional )
  - Elasticsearch index used by the stage to store dictionary data.
- schema ( type=string | default=http | optional )
  - Scheme used by the Elasticsearch connection.
- host ( type=string | default=localhost | optional )
  - Hostname of the Elasticsearch server.
- port ( type=integer | default=9200 | optional )
  - Port of the Elasticsearch server.
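A configuration for this stage might look like the following. This is an illustrative sketch built only from the parameters above; the exact wrapper format of a stage definition is not shown in this document:

```
{
  "index": "spellcheck_dictionary",
  "schema": "http",
  "host": "localhost",
  "port": 9200,
  "confidenceAdjustment": 1,
  "skipFlags": ["STOP_WORD", "ALL_UPPER_CASE"]
}
```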
Example Output
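As an illustration only (not captured system output), a token the dictionary does not recognize is flagged MISSPELL, and a new token carrying the dictionary's suggestion is created with the SUGGESTION flag:

```
[camputer]  TOKEN, MISSPELL
[computer]  TOKEN, SUGGESTION   (new token generated from the dictionary)
```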
Output Flags
Lex-Item Flags:
- MISSPELL - Identifies a token as a potential misspelling.
- SUGGESTION - Added to the newly created token to identify it as a generated token and coming from the dictionary.
Vertex Flags:
Resource Data
The data used by the dictionary may come from 2 sources:
- A dataset, processed through a pipeline that ends with a Spellchecker Stage.
- A plain text file with one term per line.
Both options are accessed through the Saga Server or the endpoints of the service. To create a dictionary from a dataset, select the dataset you are interested in and the pipeline that will process it; remember that the pipeline must end with a Spellchecker Stage. To create a dictionary from a file, you only need a plain text file with terms separated by new lines.
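For reference, a dictionary source file is simply one term per line; the terms below are placeholders:

```
hello
world
computer
spelling
```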