Connects directly to the Python Bridge to send text or sections of the interpretation graph for processing by ML algorithms in Python.

Operates On:  Lexical items flagged with TEXT_BLOCK, EOF, and others.


The Python Classification Watcher Stage is an implementation of the abstract Watcher Stage class. It is used to classify large blocks of text, such as complete paragraphs or whole documents. The Watcher saves the classification paths for every text block and checks the vertices for trigger flags that mark the end of the saving cycle. When a trigger is found, it processes the saved data and returns a weight vector representing the classification of the text block.


This stage is a Recognizer for the Saga Solution, and can also be used as part of a manual pipeline or a base pipeline.


The difference between this and the Python Model Recognizer Stage is that this stage requires a trigger flag to start processing the text.
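The trigger-driven cycle can be sketched as follows. This is a minimal illustration only; the token structure, the flag names, and the classify placeholder are assumptions, not the stage's actual implementation:

```python
def classify(block):
    # Placeholder for the call into the Python bridge; a real model would
    # return a weight vector for the buffered text block.
    return [0.0, 0.0, 0.0]

def watch(tokens, trigger_flags=frozenset({"EOF", "TEXT_BLOCK_SPLIT"})):
    """Buffer tokens until a vertex carrying a trigger flag is seen,
    then classify the buffered block as a whole."""
    buffer, results = [], []
    for token in tokens:
        if token["flags"] & trigger_flags:  # trigger ends the saving cycle
            if buffer:
                results.append(classify(buffer))
            buffer = []
        else:
            buffer.append(token["text"])
    return results
```

Until a trigger flag arrives, nothing is sent to Python; this is the behavioral difference from the Python Model Recognizer Stage noted above.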

Generic Configuration Parameters

  • boundaryFlags ( type=string array | optional ) - List of vertex flags that indicate the beginning and end of a text block.
    Tokens to process must be between two vertices marked with one of these flags (e.g. ["TEXT_BLOCK_SPLIT"]).
  • skipFlags ( type=string array | optional ) - Flags to be skipped by this stage.
    Tokens marked with any of these flags will be ignored, and no processing will be performed on them.
  • requiredFlags ( type=string array | optional ) - Lex item flags required on every token to be processed.
    Tokens must have all of the specified flags in order to be processed.
  • atLeastOneFlag ( type=string array | optional ) - Lex item flags of which each token needs at least one to be processed.
    Tokens with none of the flags specified in this array will not be processed.
  • confidenceAdjustment ( type=double | default=1 | required ) - Adjustment factor, from 0.0 to 2.0, applied to the confidence value of every pattern match.
    • 0.0 to < 1.0  decreases confidence value
    • 1.0 confidence value remains the same
    • > 1.0 to  2.0 increases confidence value
  • debug ( type=boolean | default=false | optional ) - Enable all debug log functionality for the stage, if any.
  • enable ( type=boolean | default=true | optional ) - Indicates whether the current stage should be considered by the Pipeline Manager.
    • Only applies for automatic pipeline building
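The effect of confidenceAdjustment can be shown with a small arithmetic sketch. Only the multiplicative scaling is described above; the clamp at 1.0 is an assumption for illustration:

```python
def adjust_confidence(confidence, factor=1.0):
    """Rescale a pattern-match confidence by the confidenceAdjustment
    factor; values below 1.0 decrease it, values above 1.0 increase it."""
    if not 0.0 <= factor <= 2.0:
        raise ValueError("confidenceAdjustment must be in [0.0, 2.0]")
    # Assumed clamp so the adjusted confidence never exceeds 1.0.
    return min(confidence * factor, 1.0)
```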

Configuration Parameters

  • modelName ( type=string | required ) - Model name registered in the python bridge
  • modelVersion ( type=string | default=latest | optional ) - Model version registered in the python wrapper to query
  • modelMethod ( type=string | required ) - Model method to call for the model
  • hostname ( type=string | default=localhost | optional ) - Python server communication hostname
  • port ( type=string | default=5000 | optional ) - Python server communication port
  • sendTokens ( type=boolean | default=false | optional ) - If True, a collection of tokens is sent, otherwise a string is sent
  • sendTextBlockList ( type=boolean | default=false | optional ) - If True, sends the content as a list of the text blocks that were detected
  • sendOriginalText ( type=boolean | default=false | optional ) - If True, sends a node called "original" with the original text that was provided
  • includeVertexText ( type=boolean | default=false | optional ) - Include text of tokens flagged as vertices


Example Configuration

{
    "dependencyTags": [],
    "modelName": "bert-base-nli-stsb-mean-tokens",
    "modelVersion": "1",
    "modelMethod": "predict",
    "normalizeTags": false,
    "hostname": "localhost",
    "port": 5000,
    "sendTextBlockList": true,
    "sendTokens": true,
    "sendOriginalText": true
}


The settings "sendTextBlockList", "sendTokens" and "sendOriginalText" are independent of each other.  This means that if the original input is:

"This is a good test

This is a bad test"


And we have a normalization tag called "sentiment" that matches "good" and "bad", the payloads will look as follows:


sendTextBlockList=True, sendTokens=True, sendOriginalText=True:
{
    "sendTextBlocks": true,
    "sendTokens": true,
    "sendOriginalText": true,
    "normalized": [
        ["this", "is", "a", "{sentiment}", "test"],
        ["this", "is", "a", "{sentiment}", "test"]
    ],
    "original": [
        ["this", "is", "a", "good", "test"],
        ["this", "is", "a", "bad", "test"]
    ]
}
sendTextBlockList=True, sendTokens=True, sendOriginalText=False:
{
    "sendTextBlocks": true,
    "sendTokens": true,
    "sendOriginalText": false,
    "normalized": [
        ["this", "is", "a", "{sentiment}", "test"],
        ["this", "is", "a", "{sentiment}", "test"]
    ]
}
sendTextBlockList=True, sendTokens=False, sendOriginalText=True:
{
    "sendTextBlocks": true,
    "sendTokens": false,
    "sendOriginalText": true,
    "normalized": [
        "this is a {sentiment} test",
        "this is a {sentiment} test"
    ],
    "original": [
        "this is a good test",
        "this is a bad test"
    ]
}
sendTextBlockList=True, sendTokens=False, sendOriginalText=False:
{
    "sendTextBlocks": true,
    "sendTokens": false,
    "sendOriginalText": false,
    "normalized": [
        "this is a {sentiment} test",
        "this is a {sentiment} test"
    ]
}
sendTextBlockList=False, sendTokens=True, sendOriginalText=True:
{
    "sendTextBlocks": false,
    "sendTokens": true,
    "sendOriginalText": true,
    "normalized": [
        ["this", "is", "a", "{sentiment}", "test", "this", "is", "a", "{sentiment}", "test"]
    ],
    "original": [
        ["this", "is", "a", "good", "test", "this", "is", "a", "bad", "test"]
    ]
}
sendTextBlockList=False, sendTokens=True, sendOriginalText=False:
{
    "sendTextBlocks": false,
    "sendTokens": true,
    "sendOriginalText": false,
    "normalized": [
        ["this", "is", "a", "{sentiment}", "test", "this", "is", "a", "{sentiment}", "test"]
    ]
}
sendTextBlockList=False, sendTokens=False, sendOriginalText=True:
{
    "sendTextBlocks": false,
    "sendTokens": false,
    "sendOriginalText": true,
    "normalized": [
        "this is a {sentiment} test this is a {sentiment} test"
    ],
    "original": [
        "this is a good test this is a bad test"
    ]
}
sendTextBlockList=False, sendTokens=False, sendOriginalText=False:
{
    "sendTextBlocks": false,
    "sendTokens": false,
    "sendOriginalText": false,
    "normalized": [
        "this is a {sentiment} test this is a {sentiment} test"
    ]
}
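The eight payload shapes above follow from the three independent switches, which can be reproduced with a short sketch of the flag logic (reconstructed from the examples, not taken from the stage's source):

```python
def build_payload(normalized_blocks, original_blocks,
                  send_text_block_list, send_tokens, send_original_text):
    """Shape the request payload from per-block token lists according to
    the three independent send* flags."""
    def shape(blocks):
        if not send_text_block_list:
            # Merge all detected text blocks into a single block.
            blocks = [[tok for block in blocks for tok in block]]
        if send_tokens:
            return blocks                     # token lists
        return [" ".join(b) for b in blocks]  # joined strings

    payload = {
        "sendTextBlocks": send_text_block_list,
        "sendTokens": send_tokens,
        "sendOriginalText": send_original_text,
        "normalized": shape(normalized_blocks),
    }
    if send_original_text:
        payload["original"] = shape(original_blocks)
    return payload
```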

Example Output

The output of the Watcher Stage is stored in the metadata of the vertex flagged as the trigger. In this example the trigger is EOF, but the stage could be configured to work with TEXT_BLOCK_SPLIT or any other flag.

V--------------[abraham lincoln likes macaroni and cheese]--------------------V <=== EOF; this vertex's metadata holds the embedded vector
^--[abraham]--V--[lincoln]--V--[likes]--V--[macaroni]--V--[and]--V--[cheese]--^
              ^---{place}---^           ^----{food}----^         ^---{food}---^
^----------{person}---------^           ^-----------------{food}--------------^

Output Flags

Lex-Item Flags:

  • SEMANTIC_TAG - Identifies all lexical items that are semantic tags.
  • ML_PREDICT - Result from a machine learning algorithm for prediction
  • ML_CLASSIFY - Result from a machine learning algorithm for classification
  • ML_REGRESS - Result from a machine learning algorithm for regression
  • TEXT_BLOCK - Flags all text blocks.

Vertex Flags:

  • WEIGHT_VECTOR - Identifies the text block related to this vertex as a weight vector representation