This stage uses Apache Lucene™ to create custom pipelines beyond the default selection of pipelines. It offers a large number of customization options and filters to adapt to the user's needs.
Operates On: Lexical Items with TOKEN and possibly other flags as specified below.
Generic Configuration Parameters
- boundaryFlags ( type=string array | optional )
  - List of vertex flags that indicate the beginning and end of a text block. Tokens to process must be inside two vertices marked with one of these flags (e.g. ["TEXT_BLOCK_SPLIT"]).
- skipFlags ( type=string array | optional )
  - Flags to be skipped by this stage. Tokens marked with any of these flags will be ignored, and no processing will be performed on them.
- requiredFlags ( type=string array | optional )
  - Lexical item flags required for a token to be processed. Tokens need to have all of the specified flags in order to be processed.
- atLeastOneFlag ( type=string array | optional )
  - Lexical item flags of which a token needs at least one in order to be processed.
- confidenceAdjustment ( type=double | default=1 | required )
  - Adjustment factor, from 0.0 to 2.0, applied to the confidence value of every pattern match:
    - 0.0 to < 1.0 decreases the confidence value
    - 1.0 leaves the confidence value unchanged
    - > 1.0 to 2.0 increases the confidence value
- debug ( type=boolean | default=false | optional )
  - Enables all debug log functionality for the stage, if any.
- enable ( type=boolean | default=true | optional )
  - Indicates whether the current stage should be considered by the Pipeline Manager.
  - Only applies to automatic pipeline building.
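The effect of the confidenceAdjustment factor can be sketched as follows. This is an illustrative assumption (multiplicative scaling of the match confidence), not the stage's actual implementation:

```python
def adjust_confidence(confidence: float, factor: float) -> float:
    """Scale a pattern match's confidence by the configured
    confidenceAdjustment factor.

    factor is expected in [0.0, 2.0]: values below 1.0 decrease the
    confidence, 1.0 leaves it unchanged, and values above 1.0
    increase it. (Multiplicative scaling is assumed here.)
    """
    if not 0.0 <= factor <= 2.0:
        raise ValueError("confidenceAdjustment must be between 0.0 and 2.0")
    return confidence * factor
```

For example, a match with confidence 0.5 and a factor of 2.0 would be boosted to 1.0, while a factor of 0.5 would lower it to 0.25.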
Configuration Parameters
- Tokenizer ( type=string | default=None | required )
  - Tokenizer to use for the pipeline (only one can be used at a time).
  - More than 10 different tokenizers are available, from n-grams to Japanese tokenizers.
- Filter ( type=string | default=None | optional )
  - Filter to use for the pipeline (filters can be stacked).
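A configuration using these parameters might look like the following. The field names come from the tables above, but the overall JSON shape and the specific tokenizer and filter names are assumptions for illustration only:

```json
{
  "boundaryFlags": ["TEXT_BLOCK_SPLIT"],
  "requiredFlags": ["TOKEN"],
  "confidenceAdjustment": 1.0,
  "debug": false,
  "enable": true,
  "Tokenizer": "whitespace",
  "Filter": "lowercase"
}
```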
Example Output
Using Whitespace Tokenizer alone
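As an illustrative sketch (not the stage's actual output format), a whitespace tokenizer splits the input on runs of whitespace, leaving punctuation attached to the adjacent token:

```python
def whitespace_tokenize(text: str) -> list:
    # Split on runs of whitespace; each remaining piece becomes one token.
    # Note that punctuation stays attached to its neighboring characters,
    # which is why flags like HAS_PUNCTUATION are useful downstream.
    return text.split()

print(whitespace_tokenize("Mary had a little lamb."))
# ['Mary', 'had', 'a', 'little', 'lamb.']
```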
Output Flags
Lex-Item Flags:
- ALL_LETTERS - All of the characters in the token are letters.
- ALL_PUNCTUATION - All of the characters in the token are punctuation characters.
- ALL_DIGITS - All of the characters in the token are digits (0-9)
- TOKEN - All tokens produced are tagged as TOKEN
- HAS_LETTER - Tokens produced with at least one letter character are tagged as HAS_LETTER
- HAS_DIGIT - Tokens produced with at least one digit character are tagged as HAS_DIGIT
- HAS_PUNCTUATION - Tokens produced with at least one punctuation character are tagged as HAS_PUNCTUATION. (ALL_PUNCTUATION will not be tagged as HAS_PUNCTUATION)
- LUCENE_STAGE - All words retrieved will be marked as LUCENE_STAGE
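The flag rules above can be sketched as follows. The flag names match the list, but the classification logic (in particular, treating any non-alphanumeric character as punctuation) is an illustrative assumption, not the stage's actual implementation:

```python
def token_flags(token: str) -> set:
    """Derive the lex-item flags described above for a single token.

    Illustrative sketch: non-alphanumeric characters stand in for
    "punctuation" here.
    """
    flags = {"TOKEN", "LUCENE_STAGE"}  # every produced token gets these
    letters = [c.isalpha() for c in token]
    digits = [c.isdigit() for c in token]
    punct = [not c.isalnum() for c in token]
    if all(letters):
        flags.add("ALL_LETTERS")
    if all(digits):
        flags.add("ALL_DIGITS")
    if all(punct):
        flags.add("ALL_PUNCTUATION")
    if any(letters):
        flags.add("HAS_LETTER")
    if any(digits):
        flags.add("HAS_DIGIT")
    # Per the list above, an ALL_PUNCTUATION token is not also
    # tagged as HAS_PUNCTUATION.
    if any(punct) and not all(punct):
        flags.add("HAS_PUNCTUATION")
    return flags
```

For example, the token "lamb." would be tagged HAS_LETTER and HAS_PUNCTUATION but not ALL_LETTERS, while "123" would be tagged ALL_DIGITS and HAS_DIGIT.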
Vertex Flags: