The FAQ stage performs a semantic comparison of a sentence against a set of questions and their respective answers (using TensorFlow). If the confidence value meets the threshold, it creates a tag holding the matching question and answer.
This recognizer uses a frozen Universal Sentence Encoder TensorFlow model to encode a list of Frequently Asked Questions as sentence embedding vectors. It then tags sentences that match a question/answer pair, given a specified accuracy threshold, with the question/answer from the FAQ. The FAQ stage can also use the Saga Python Bridge to encode the sentence with different algorithms.
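Conceptually, the matching works by comparing the sentence's embedding vector against each question's embedding vector and keeping the best match above the threshold. The following is an illustrative sketch only (not the stage's actual code); the vectors stand in for the encoder's output:

```python
# Sketch: matching a sentence embedding against FAQ question embeddings.
# The 2-dimensional vectors are placeholders for real encoder output.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_faq_match(sentence_vector, faq_entries, threshold=0.6):
    """Return the FAQ entry most similar to the sentence,
    or None if no entry reaches the threshold."""
    best = None
    best_score = threshold
    for entry in faq_entries:
        score = cosine_similarity(sentence_vector, entry["questionVector"])
        if score >= best_score:
            best = entry
            best_score = score
    return best
```

An entry is only tagged when its similarity clears the threshold; sentences with no sufficiently similar question produce no tag.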
Operates On: Lexical Items with TOKEN and possibly other flags as specified below.
Library: saga-faq-stage
Generic Configuration Parameters
- boundaryFlags ( type=string array | optional ) - List of vertex flags that indicate the beginning and end of a text block. Tokens to process must be inside two vertices marked with this flag (e.g. ["TEXT_BLOCK_SPLIT"]).
- skipFlags ( type=string array | optional ) - Flags to be skipped by this stage. Tokens marked with any of these flags will be ignored by this stage, and no processing will be performed on them.
- requiredFlags ( type=string array | optional ) - Lex-item flags required by every token to be processed. Tokens need all of the specified flags in order to be processed.
- atLeastOneFlag ( type=string array | optional ) - Lex-item flags of which every token to be processed needs at least one. Tokens need at least one of the flags specified in this array.
- confidenceAdjustment ( type=double | default=1 | required ) - Adjustment factor, in the range 0.0 to 2.0, applied to the confidence value of every pattern match.
  - 0.0 to < 1.0 decreases the confidence value
  - 1.0 leaves the confidence value unchanged
  - > 1.0 to 2.0 increases the confidence value
- debug ( type=boolean | default=false | optional ) - Enable all debug log functionality for the stage, if any.
- enable ( type=boolean | default=true | optional ) - Indicates whether the current stage should be considered by the Pipeline Manager. Only applies to automatic pipeline building.
Configuration Parameters
- questions ( type=string | required ) - Name of the resource containing the questions of the FAQ.
- tagWith ( type=string | default=questionTag | optional ) - Indicates the tag that will be used for tagging matches.
- modelPath ( type=string | required ) - Path to the TensorFlow model, in case the TensorFlow model is going to be used.
- threshold ( type=double | default=0.6 | optional ) - Threshold indicating when a prediction is acceptable.
- useTensorFlow ( type=boolean | default=true | optional ) - Indicates whether the FAQ uses TensorFlow or the Saga Python Bridge.
- evalAnswers ( type=boolean | optional ) - Indicates whether the sentence must also be compared against the answer.
- modelName ( type=string | default=bert-base-uncased | required ) - Model name registered in the Python bridge.
- version ( type=string | default=1 | optional ) - Model version registered in the Python wrapper to query.
- hostname ( type=string | default=localhost | optional ) - Python server communication hostname.
- port ( type=integer | default=5000 | optional ) - Python server communication port.
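The interaction of threshold, evalAnswers, and confidenceAdjustment can be sketched as follows. This is illustrative only, not the stage's actual implementation; the similarity function and the clamping of the adjusted score are assumptions:

```python
# Sketch: how evalAnswers and confidenceAdjustment might combine with the
# threshold. The similarity callable stands in for the encoder comparison.

def score_entry(sentence_vec, entry, similarity, eval_answers=False):
    """Score an FAQ entry against a sentence; with eval_answers enabled,
    the answer vector is compared as well and the best score wins."""
    score = similarity(sentence_vec, entry["questionVector"])
    if eval_answers:
        score = max(score, similarity(sentence_vec, entry["answerVector"]))
    return score

def tag_if_match(score, threshold=0.6, confidence_adjustment=1.0):
    """Apply the per-match confidence adjustment (0.0 to 2.0) and decide
    whether the entry should produce a tag. Clamping to 1.0 is assumed."""
    if score < threshold:
        return None
    return min(score * confidence_adjustment, 1.0)
```

With evalAnswers enabled, a sentence phrased like an answer rather than a question can still match its FAQ entry.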
Example Configuration
{
  "questions": "saga_provider:saga_faq",
  "tagWith": "question",
  "modelPath": "./tf-model",
  "threshold": 0.6,
  "useTensorFlow": true,
  "evalAnswers": true,
  "modelName": "bert-base-uncased",
  "version": "1",
  "hostname": "localhost",
  "port": 5000
}
Example Output
V---------------------[Any analytics available for Internal Sites? Why are dogs always hungry?]---------------------V
^----------[Any analytics available for Internal Sites? ]-----------V---------[Why are dogs always hungry?]---------^
^-[Any]-V-[analytics]-V-[available]-V-[for]-V-[Internal]-V-[Sites?]-^-[Why]-V-[are]-V-[dogs]-V-[always]-V-[hungry?]-^
^----------------------------[{question}]---------------------------^
Output Flags
Lex-Item Flags:
- SEMANTIC_TAG - Identifies all lexical items which are semantic tags.
- ANSWER - Identifies a lexical item holding an answer from the FAQ.
Vertex Flags:
Resource Data
The FAQ resource contains the question/answer entries used by this stage; each entry has the fields described below.
Fields
- question ( type=string | required ) - Question to compare against the sentence.
- answer ( type=string | required ) - Answer to the question, which can also be compared against the sentence.
- fields ( type=json | required ) - Metadata of the FAQ entry.
- questionVector ( type=double array | required ) - Context vector of the question; its size can change depending on the algorithm used to build it.
- answerVector ( type=double array | required ) - Context vector of the answer; its size can change depending on the algorithm used to build it.
- url ( type=string | optional ) - URL for use in external applications.
- tag ( type=string | required ) - Tag which will identify any match in the graph, as an interpretation.
- _id ( type=string | required ) - Identifies the entity by unique ID. This identifier must be unique across all entries (across all dictionaries).
- confAdjust ( type=double | required ) - Adjustment factor, in the range 0.0 to 2.0, applied to the confidence value.
  - This is the confidence of the entry relative to all other entries (essentially, the likelihood that this entity will be randomly encountered).
  - 0.0 to < 1.0 decreases the confidence value
  - 1.0 leaves the confidence value unchanged
  - > 1.0 to 2.0 increases the confidence value
- updatedAt ( type=date epoch | required ) - Date in milliseconds of the last time the entry was updated.
- createdAt ( type=date epoch | required ) - Date in milliseconds of the creation time of the entry.
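An entry with these fields might look like the following. The values are illustrative only (the answer text, URL, ID, and metadata are invented, and the vectors are truncated to two dimensions for readability; real context vectors have the dimensionality of the chosen encoder):

```json
{
  "question": "Any analytics available for Internal Sites?",
  "answer": "Yes, analytics are available through the reporting dashboard.",
  "fields": { "category": "analytics" },
  "questionVector": [0.12, -0.58],
  "answerVector": [0.10, -0.61],
  "url": "https://example.com/faq/analytics",
  "tag": "question",
  "_id": "faq-0001",
  "confAdjust": 1.0,
  "updatedAt": 1672531200000,
  "createdAt": 1640995200000
}
```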