This stage uses Apache Lucene™ to create a custom Lucene analysis pipeline. It offers a wide range of tokenizers and filters to adapt to the user's needs.

Operates On: Lexical Items with the TOKEN flag and possibly other flags, as specified below.

This stage is a Recognizer for the Saga Solution, and can also be used as part of a manual pipeline or a base pipeline.


A Lucene Custom Analyzer is composed of two components: a Tokenizer and one or more Filters (filters can be stacked to apply several in sequence), as sketched below.
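As an illustration of the underlying Lucene API (not this stage's internal code), a minimal sketch of assembling a Lucene CustomAnalyzer looks like this; "whitespace", "lowercase", and "asciifolding" are standard Lucene SPI names chosen here for illustration:

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.custom.CustomAnalyzer;

public class BuildAnalyzer {
    public static void main(String[] args) throws Exception {
        // Exactly one tokenizer; filters are stacked and run in the order they are added.
        Analyzer analyzer = CustomAnalyzer.builder()
                .withTokenizer("whitespace")
                .addTokenFilter("lowercase")
                .addTokenFilter("asciifolding")
                .build();
        System.out.println(analyzer);
    }
}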

Generic Configuration Parameters

  • boundaryFlags ( type=string array | optional ) - List of vertex flags that indicate the beginning and end of a text block.
    Tokens to be processed must lie between two vertices marked with one of these flags (e.g., ["TEXT_BLOCK_SPLIT"]).
  • skipFlags ( type=string array | optional ) - Flags to be skipped by this stage.
    Tokens marked with any of these flags will be ignored by this stage, and no processing will be performed on them.
  • requiredFlags ( type=string array | optional ) - Lexical item flags required for a token to be processed.
    Tokens must have all of the specified flags in order to be processed.
  • atLeastOneFlag ( type=string array | optional ) - Lexical item flags of which a token needs at least one to be processed.
    Tokens will need at least one of the flags specified in this array.
  • confidenceAdjustment ( type=double | default=1 | required ) - Adjustment factor, from 0.0 to 2.0, applied to the confidence value of every pattern match.
    • 0.0 to < 1.0 decreases the confidence value
    • 1.0 leaves the confidence value unchanged
    • > 1.0 to 2.0 increases the confidence value
  • debug ( type=boolean | default=false | optional ) - Enable all debug log functionality for the stage, if any.
  • enable ( type=boolean | default=true | optional ) - Indicates whether the current stage should be considered by the Pipeline Manager.
    • Only applies for automatic pipeline building

Configuration Parameters

  • tokenizer ( type=string | default=None | required ) - Tokenizer to use for the pipeline (only one can be used at a time).
    • More than 10 tokenizers are available, ranging from n-gram to Japanese tokenizers; see the sketch after this list for enumerating them.
  • filter ( type=string | default=None | optional ) - Filter to use for the pipeline (filters can be stacked).
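The available names depend on the Lucene version and the analysis modules on the classpath. As a sketch, Lucene's factory SPI can enumerate them (in Lucene 8.x these factories live in org.apache.lucene.analysis.util; in 9.x they moved to org.apache.lucene.analysis):

import org.apache.lucene.analysis.util.TokenFilterFactory;
import org.apache.lucene.analysis.util.TokenizerFactory;

public class ListComponents {
    public static void main(String[] args) {
        // SPI names usable as the "tokenizer" value, e.g. "whitespace", "standard", "japanese"
        System.out.println("Tokenizers: " + TokenizerFactory.availableTokenizers());
        // SPI names usable as "filter" values; several filters can be stacked
        System.out.println("Filters: " + TokenFilterFactory.availableTokenFilters());
    }
}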


"atLeastOneFlag": []
"boundaryFlags": []
"confidenceAdjustment": 1
"debug": false
"requiredFlags": []
"skipFlags": []
"tokenizer": "whitespace",
"filter": None

Example Output

Using Whitespace Tokenizer alone

V-------------[Hey there! I am using Lucene Pipeline]-------------V 
^-[Hey]-V-[there!]-V-[I]-V-[am]-V-[using]-V-[Lucene]-V-[Pipeline]-^
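A minimal sketch that reproduces this tokenization with the plain Lucene API (the field name "text" is arbitrary, and this shows only the tokenization, not the Saga token-graph construction):

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.custom.CustomAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class WhitespaceDemo {
    public static void main(String[] args) throws Exception {
        // Whitespace tokenizer only, no filters: the input is split on whitespace.
        Analyzer analyzer = CustomAnalyzer.builder()
                .withTokenizer("whitespace")
                .build();
        try (TokenStream ts = analyzer.tokenStream("text", "Hey there! I am using Lucene Pipeline")) {
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
                // Prints: Hey, there!, I, am, using, Lucene, Pipeline
                System.out.println(term.toString());
            }
            ts.end();
        }
    }
}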

Output Flags

Lex-Item Flags:

  • ALL_LETTERS - All of the characters in the token are letters.
  • ALL_PUNCTUATION - All of the characters in the token are punctuation characters.
  • ALL_DIGITS - All of the characters in the token are digits (0-9).
  • TOKEN - All tokens produced are tagged as TOKEN.
  • HAS_LETTER - Tokens produced with at least one letter character are tagged as HAS_LETTER.
  • HAS_DIGIT - Tokens produced with at least one digit character are tagged as HAS_DIGIT.
  • HAS_PUNCTUATION - Tokens produced with at least one punctuation character are tagged as HAS_PUNCTUATION. (Tokens tagged ALL_PUNCTUATION are not also tagged HAS_PUNCTUATION.)
  • LUCENE_STAGE - All tokens produced by this stage are marked as LUCENE_STAGE.
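A hedged sketch of the character-class checks these flags imply; the class and method names here are hypothetical, and treating every non-letter, non-digit character as punctuation is a simplification of whatever character classes the stage actually uses:

import java.util.EnumSet;
import java.util.Set;

public class TokenFlagger {
    enum Flag { TOKEN, LUCENE_STAGE, ALL_LETTERS, ALL_DIGITS, ALL_PUNCTUATION,
                HAS_LETTER, HAS_DIGIT, HAS_PUNCTUATION }

    // Hypothetical helper: derives the flag set described above for one token.
    static Set<Flag> flagsFor(String token) {
        boolean hasLetter = false, hasDigit = false, hasPunct = false;
        boolean allLetters = true, allDigits = true, allPunct = true;
        for (char c : token.toCharArray()) {
            if (Character.isLetter(c))     { hasLetter = true; allDigits = false; allPunct = false; }
            else if (Character.isDigit(c)) { hasDigit = true;  allLetters = false; allPunct = false; }
            else                           { hasPunct = true;  allLetters = false; allDigits = false; }
        }
        Set<Flag> flags = EnumSet.of(Flag.TOKEN, Flag.LUCENE_STAGE); // every token gets these
        if (hasLetter && allLetters) flags.add(Flag.ALL_LETTERS);
        if (hasDigit && allDigits)   flags.add(Flag.ALL_DIGITS);
        if (hasPunct && allPunct)    flags.add(Flag.ALL_PUNCTUATION);
        if (hasLetter) flags.add(Flag.HAS_LETTER);
        if (hasDigit)  flags.add(Flag.HAS_DIGIT);
        if (hasPunct && !allPunct)   flags.add(Flag.HAS_PUNCTUATION); // ALL_PUNCTUATION excludes HAS_PUNCTUATION
        return flags;
    }

    public static void main(String[] args) {
        // "there!" -> [TOKEN, LUCENE_STAGE, HAS_LETTER, HAS_PUNCTUATION]
        System.out.println(flagsFor("there!"));
    }
}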

Vertex Flags:

No vertices are created by this stage.