Connects directly to the Python Bridge to send tokens without SEMANTIC_TAGs for processing by ML algorithms in Python.
Operates On: lexical items with TOKEN, not with SEMANTIC_TAG.
Generic Configuration Parameters
- boundaryFlags ( type=string array | optional )
  - List of vertex flags that indicate the beginning and end of a text block. Tokens to process must lie between two vertices marked with this flag (e.g. ["TEXT_BLOCK_SPLIT"]).
- skipFlags ( type=string array | optional )
  - Flags to be skipped by this stage. Tokens marked with any of these flags will be ignored by this stage, and no processing will be performed on them.
- requiredFlags ( type=string array | optional )
  - Lex-item flags required on every token to be processed. Tokens must have all of the specified flags in order to be processed.
- atLeastOneFlag ( type=string array | optional )
  - Lex-item flags of which every token to be processed needs at least one. Tokens must have at least one of the flags specified in this array.
- confidenceAdjustment ( type=double | default=1 | required )
  - Adjustment factor, in the range 0.0 to 2.0, applied to the confidence value of every pattern match:
    - 0.0 to < 1.0 decreases the confidence value
    - 1.0 leaves the confidence value unchanged
    - > 1.0 to 2.0 increases the confidence value
- debug ( type=boolean | default=false | optional )
  - Enables all debug log functionality for the stage, if any.
- enable ( type=boolean | default=true | optional )
  - Indicates whether the current stage should be considered by the Pipeline Manager.
  - Only applies to automatic pipeline building.
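The effect of confidenceAdjustment can be sketched in a few lines of Python. This is an illustration, not the stage's actual implementation: it assumes the adjustment is a simple multiplication of the match confidence by the factor, with the result clamped to the usual [0.0, 1.0] confidence range.

```python
def adjust_confidence(confidence: float, factor: float) -> float:
    """Apply a confidenceAdjustment factor to a pattern-match confidence.

    Assumption (not confirmed by the stage docs): the adjustment is
    multiplicative, and the adjusted value is clamped to [0.0, 1.0].
    """
    if not 0.0 <= factor <= 2.0:
        raise ValueError("confidenceAdjustment must be in the range 0.0 to 2.0")
    return max(0.0, min(confidence * factor, 1.0))

# factor 1.0 leaves the value unchanged; < 1.0 decreases it; > 1.0 increases it
unchanged = adjust_confidence(0.6, 1.0)
decreased = adjust_confidence(0.6, 0.5)
increased = adjust_confidence(0.8, 1.5)  # clamped at 1.0
```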
Configuration Parameters
- modelName ( type=string | required )
  - Model name registered in the Python bridge.
- modelVersion ( type=string | default=latest | optional )
  - Model version registered in the Python wrapper to query.
- modelMethod ( type=string | required )
  - Method of the model to call.
- hostname ( type=string | default=localhost | optional )
  - Hostname for communication with the Python server.
- port ( type=string | default=5000 | optional )
  - Port for communication with the Python server.
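A stage configuration combining the generic and stage-specific parameters might look like the following. All values are illustrative (the flag names other than "TEXT_BLOCK_SPLIT" and the surrounding wrapper are assumptions; the exact shape depends on your pipeline definition format):

```json
{
  "boundaryFlags": ["TEXT_BLOCK_SPLIT"],
  "skipFlags": ["STOP_WORD"],
  "requiredFlags": ["TOKEN"],
  "confidenceAdjustment": 1.0,
  "debug": false,
  "enable": true,
  "modelName": "sentimentClassifier",
  "modelVersion": "latest",
  "modelMethod": "predict",
  "hostname": "localhost",
  "port": "5000"
}
```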
Output Flags
Lex-Item Flags:
- WEIGHT_VECTOR - Identifies the tag as a weight vector representation of a sentence
- ML_PREDICT - Result from a machine learning algorithm for prediction
- ML_CLASSIFY - Result from a machine learning algorithm for classification
- ML_REGRESS - Result from a machine learning algorithm for regression
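The stage's call to the Python server can be sketched as assembling a request naming the registered model, its version, the method to invoke, and the tokens to process. The payload shape below is an assumption for illustration only; the actual wire protocol is defined by the Python Bridge.

```python
def build_bridge_request(model_name: str, model_method: str,
                         tokens: list, model_version: str = "latest") -> dict:
    """Assemble a request payload for the Python bridge (hypothetical shape).

    Assumption: the bridge accepts a JSON body with the documented
    modelName / modelVersion / modelMethod parameters plus the token texts.
    """
    return {
        "modelName": model_name,
        "modelVersion": model_version,
        "modelMethod": model_method,
        "tokens": tokens,
    }

# modelVersion defaults to "latest", mirroring the parameter's default above
payload = build_bridge_request("sentimentClassifier", "predict", ["great", "product"])
```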