
Connects directly to the Python Bridge to send text or sections of the interpretation graph to be processed by ML algorithms in Python.

Operates On: Lexical items flagged with TOKEN and TEXT_BLOCK.

This stage is a Recognizer for the Saga Solution, and can also be used as part of a manual pipeline or a base pipeline.


The difference between this stage and the Python Model Recognizer Stage is that this stage requires a trigger flag to start processing the text.

Generic Configuration Parameters

  • boundaryFlags ( type=string array | optional ) - List of vertex flags that indicate the beginning and end of a text block.
    Tokens to process must be between two vertices marked with this flag (e.g. ["TEXT_BLOCK_SPLIT"]).
  • skipFlags ( type=string array | optional ) - Flags to be skipped by this stage.
    Tokens marked with any of these flags will be ignored by this stage, and no processing will be performed on them.
  • requiredFlags ( type=string array | optional ) - Lexical item flags required on every token to be processed.
    Tokens need all of the specified flags in order to be processed.
  • atLeastOneFlag ( type=string array | optional ) - Lexical item flags of which every token to be processed needs at least one.
    Tokens need at least one of the flags specified in this array in order to be processed.
  • confidenceAdjustment ( type=double | default=1.0 | required ) - Adjustment factor, in the range 0.0 to 2.0, applied to the confidence value of every pattern match:
    • 0.0 to < 1.0 decreases the confidence value
    • 1.0 leaves the confidence value unchanged
    • > 1.0 to 2.0 increases the confidence value
  • debug ( type=boolean | default=false | optional ) - Enables all debug log functionality for the stage, if any.
  • enable ( type=boolean | default=true | optional ) - Indicates whether the current stage should be considered by the Pipeline Manager.
    • Only applies to automatic pipeline building.

Configuration Parameters

  • modelName ( type=string | required ) - Model name registered in the Python Bridge.
  • modelVersion ( type=string | default=latest | optional ) - Version of the model, as registered in the Python Bridge, to query.
  • modelMethod ( type=string | required ) - Method of the model to call.
  • hostname ( type=string | default=localhost | optional ) - Hostname for communication with the Python server.
  • port ( type=string | default=5000 | optional ) - Port for communication with the Python server.
  • sendTokens ( type=boolean | default=false | optional ) - If true, tokens are the expected content sent to this model.
  • includeVertexText ( type=boolean | default=false | optional ) - If true, the text of tokens flagged as vertices is included.
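To make the modelName / modelVersion / modelMethod triple concrete, here is a minimal, purely illustrative sketch of how a Python-side registry could look up a model and dispatch a named method. None of these names or signatures come from the actual Python Bridge; they are assumptions for illustration only.

```python
# Illustrative only: a toy registry keyed by (modelName, modelVersion),
# dispatching modelMethod on the registered model object.
# The real Python Bridge API is not shown here.

class EchoModel:
    """Toy model exposing a 'predict' method."""
    def predict(self, text):
        # Pretend classification: label by text length.
        return {"label": "long" if len(text) > 20 else "short"}

registry = {}

def register(name, version, model):
    """Register a model instance under a name and version."""
    registry[(name, version)] = model

def call(name, method, text, version="latest"):
    """Look up the model and invoke the requested method on the text."""
    model = registry[(name, version)]
    return getattr(model, method)(text)

register("echo", "latest", EchoModel())
result = call("echo", "predict", "hello world")
```

The key point the sketch illustrates is that modelName and modelVersion select which registered model answers the request, while modelMethod names the callable invoked on it.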


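A configuration sketch for this stage, using only the parameters documented above; all values are illustrative assumptions, not defaults taken from a real deployment.

```json
{
  "boundaryFlags": ["TEXT_BLOCK_SPLIT"],
  "modelName": "sentiment",
  "modelVersion": "latest",
  "modelMethod": "predict",
  "hostname": "localhost",
  "port": "5000",
  "sendTokens": false,
  "includeVertexText": false
}
```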

Example Output

The output of the Watcher Stage is stored in the metadata of the vertex flagged as the trigger. The trigger is typically EOF, but the stage can be configured to work with TEXT_BLOCK_SPLIT or any other flag.
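A purely hypothetical sketch of what the trigger vertex metadata could contain; the field names and values are illustrative assumptions, not actual product output.

```json
{
  "flags": ["EOF", "ML_CLASSIFY"],
  "metadata": {
    "modelName": "sentiment",
    "modelVersion": "latest",
    "result": "positive",
    "confidence": 0.92
  }
}
```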

Output Flags

Lex-Item Flags:

  • SEMANTIC_TAG - Identifies all lexical items that are semantic tags.
  • ML_PREDICT - Result from a machine learning algorithm for prediction.
  • ML_CLASSIFY - Result from a machine learning algorithm for classification.
  • ML_REGRESS - Result from a machine learning algorithm for regression.
  • TEXT_BLOCK - Flags all text blocks.

Vertex Flags:

  • WEIGHT_VECTOR - Identifies the text block related to this vertex as a weight vector representation.


