
This stage reviews tokens using the Elasticsearch suggestions functionality and creates a new token with a "suggestion" for each word it does not recognize. The process takes all the available tokens for the stage (usually already tokenized by the "WhitespaceTokenizerStage"), using the highest-confidence route. Flags such as "STOP_WORD" or "ALL_UPPER_CASE" can be used as filters by including them in the "Skip Flags" list.
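
The suggestions themselves come from Elasticsearch's suggest API. The exact request Saga issues is not documented on this page; the sketch below is a generic Elasticsearch term-suggester query for a misspelled word (the index name "saga-tokens" and the field name "content" are assumptions for illustration):

    POST /saga-tokens/_search
    {
      "suggest": {
        "token-suggestion": {
          "text": "wrld",
          "term": { "field": "content" }
        }
      }
    }

The response lists candidate corrections (e.g., "world") with scores; the stage turns the best candidate for an unrecognized word into a new "suggestion" token.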

Operates On: Lexical Items with TOKEN and possibly other flags as specified below.

This stage is a Recognizer for the Saga Solution, and can also be used as part of a manual pipeline or a base pipeline.


This recognizer requires a dictionary to work, so one must be loaded, either from a dataset or a file, before the stage is used.

Generic Configuration Parameters

  • boundaryFlags ( type=string array | optional ) - List of vertex flags that indicate the beginning and end of a text block.
    Tokens to process must be inside two vertices marked with this flag (e.g., ["TEXT_BLOCK_SPLIT"]).
  • skipFlags ( type=string array | optional ) - Flags to be skipped by this stage.
    Tokens marked with any of these flags will be ignored by this stage, and no processing will be performed on them.
  • requiredFlags ( type=string array | optional ) - Lex-item flags required by every token to be processed.
    Tokens need to have all of the specified flags in order to be processed.
  • atLeastOneFlag ( type=string array | optional ) - Lex-item flags needed by tokens to be processed.
    Tokens need at least one of the flags specified in this array.
  • confidenceAdjustment ( type=double | default=1 | required ) - Adjustment factor, from 0.0 to 2.0, applied to the confidence value of every pattern match.
    • 0.0 to < 1.0 decreases the confidence value
    • 1.0 leaves the confidence value unchanged
    • > 1.0 to 2.0 increases the confidence value
  • debug ( type=boolean | default=false | optional ) - Enables all debug log functionality for the stage, if any.
  • enable ( type=boolean | default=true | optional ) - Indicates whether the current stage should be considered by the Pipeline Manager.
    • Only applies to automatic pipeline building

Configuration Parameters



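A minimal configuration sketch for this stage, using only the generic parameters documented above (the "type" value and the overall JSON shape are assumptions for illustration; any stage-specific parameters are omitted here):

    {
      "type": "ElasticsearchSuggestions",
      "boundaryFlags": [ "TEXT_BLOCK_SPLIT" ],
      "skipFlags": [ "STOP_WORD", "ALL_UPPER_CASE" ],
      "requiredFlags": [ "TOKEN" ],
      "confidenceAdjustment": 1.0,
      "debug": false,
      "enable": true
    }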

Example Output

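As a hypothetical illustration (not verbatim Saga output), suppose the text contains the misspelled word "wrld". The stage keeps the original token and adds a new token carrying the suggestion:

    Input: "the wrld is big"

    the      TOKEN, ALL_LOWER_CASE, STOP_WORD
    wrld     TOKEN, ALL_LOWER_CASE
    world    TOKEN, ALL_LOWER_CASE      (new token: suggestion for "wrld")
    is       TOKEN, ALL_LOWER_CASE, STOP_WORD
    big      TOKEN, ALL_LOWER_CASE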

Output Flags

Lex-Item Flags:

  • SEMANTIC_TAG - Identifies all lexical items which are semantic tags.
  • PROCESSED - Placed on all the tokens which composed the semantic tag.
  • ALL_LOWER_CASE - All of the characters in the token are lower-case characters.
  • ALL_UPPER_CASE - All of the characters in the token are upper-case characters (for example, acronyms).
  • ALL_DIGITS - All of the characters in the token are digits (0-9)
  • TITLE_CASE - The first character is upper case, all of the other characters are lower case.
  • MIXED_CASE - Handles any mixed upper & lower case scenario not covered above.
  • TOKEN - All tokens produced are tagged as TOKEN.
  • CHAR_CHANGE - Identifies the vertex as a change between character formats.
  • HAS_DIGIT - Tokens produced with at least one digit character are tagged as HAS_DIGIT.
  • HAS_PUNCTUATION - Tokens produced with at least one punctuation character are tagged as HAS_PUNCTUATION. (Tokens flagged ALL_PUNCTUATION will not be tagged as HAS_PUNCTUATION.)
  • LEMMATIZE - All words retrieved will be marked as LEMMATIZE.
  • NUMBER - Flagged on all tokens which are numbers according to the rules above.
  • TEXT_BLOCK - Flags all text blocks.
  • STOP_WORD - All matched stop words will be marked as STOP_WORD.
  • WEIGHT_VECTOR - Identifies the tag as a weight vector representation of a sentence.
  • BANK - Identifies a bank account number.
  • ABA - Account number with ABA format.
  • BIC - Account number with BIC format.
  • IBAN - Account number with IBAN format.
  • ORIGINAL - Identifies that the Lex-Items produced by this stage are the original, as written, representation of every token (e.g. before normalization)
  • SSN - Identifies a Federal ID number
  • GEONAME - Identifies a geographical location name

Vertex Flags:

No new vertices are created in this stage; the flags below may already be present on vertices produced by earlier stages.

  • ALL_PUNCTUATION - Identifies that the characters spanned by the vertex are all punctuation.
    • The default flag if no "splitFlag" is present.
  • <splitFlag> - Defines an alternative flag to ALL_PUNCTUATION, if desired (see above)
  • CHAR_CHANGE -  Identifies the vertex as a change between character formats
  • TEXT_BLOCK_SPLIT - Identifies the vertex as a split between text blocks.
  • OVERFLOW_SPLIT - Identifies that an entire buffer was read without finding a split between text blocks.
    • The current maximum size of a text block is 64K characters.
    • Text blocks larger than this will be arbitrarily split, and the vertex will be marked with "OVERFLOW_SPLIT".
  • ALL_WHITESPACE - Identifies that the characters spanned by the vertex are all whitespace characters (spaces, tabs, new-lines, carriage returns, etc.)

Resource Data

The resource for this stage is a dictionary: a set of entries, each supplying a pattern to match and the tag to apply, with the fields described below.

Resource Format

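A sketch of a single dictionary entry, using the fields documented below (all values are hypothetical):

    {
      "_id": "1",
      "display": "New York",
      "tag": "city",
      "pattern": "new york",
      "confAdjust": 0.95,
      "updatedAt": 1556047373000,
      "createdAt": 1556047373000
    }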

  • Multiple entries can have the same pattern. If the pattern is matched, then it will be tagged with multiple (ambiguous) entry IDs.
  • Additional fielded data can be added to the record as needed by downstream processes.

Fields

  • display ( type=string | required ) - What to show the user when browsing this entity
  • tag ( type=string | required ) - Tag which will identify any match in the graph, as an interpretation
    • These will all be added to the interpretation graph with the SEMANTIC_TAG flag.

      Tags are hierarchical representations of the same intent. For example, {city} → {administrative-area} → {geographical-area}

  • pattern ( type=string | required ) - Pattern to match in the content

  • _id ( type=string | required ) - Identifies the entity by unique ID. This identifier must be unique across all entries (across all dictionaries).

  • confAdjust ( type=double | required ) - Adjustment factor, from 0.0 to 2.0, to apply to the confidence value.

    • This is the confidence of the entry, in comparison to all of the other entries. (Essentially, the likelihood that this entity will be randomly encountered.)
    • 0.0 to < 1.0 decreases the confidence value
    • 1.0 leaves the confidence value unchanged
    • > 1.0 to 2.0 increases the confidence value
  • updatedAt ( type=date epoch | required ) - Date in milliseconds of the last time the entry was updated
  • createdAt ( type=date epoch | required ) - Date in milliseconds of the creation time of the entry


