Operates On: Lexical Items with TOKEN or SEMANTIC_TAG and other possible flags as specified below.
Generic Configuration Parameters
- boundaryFlags (type=string array | optional) - List of vertex flags that indicate the beginning and end of a text block.
  - Tokens to process must be inside two vertices marked with this flag (e.g., ["TEXT_BLOCK_SPLIT"]).
- skipFlags (type=string array | optional) - Flags to be skipped by this stage.
  - Tokens marked with any of these flags will be ignored by this stage, and no processing will be performed on them.
- requiredFlags (type=string array | optional) - Lex-item flags required on every token to be processed.
  - Tokens need all of the specified flags in order to be processed.
- atLeastOneFlag (type=string array | optional) - Lex-item flags of which at least one is required on every token to be processed.
  - Tokens need at least one of the flags specified in this array.
- confidenceAdjustment (type=double | default=1 | required) - Adjustment factor, from 0.0 to 2.0, applied to the confidence value (applies for every pattern match).
  - 0.0 to < 1.0 decreases the confidence value
  - 1.0 leaves the confidence value unchanged
  - > 1.0 to 2.0 increases the confidence value
- debug (type=boolean | default=false | optional) - Enables all debug logging for the stage, if any.
- enable (type=boolean | default=true | optional) - Indicates if the current stage should be considered by the Pipeline Manager.
  - Only applies to automatic pipeline building.
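The confidenceAdjustment parameter is described as a factor applied to each match's confidence. A minimal sketch of that behavior, assuming a simple multiplication capped at 1.0 (the actual stage may combine values differently); `adjust_confidence` is a hypothetical helper name, not part of the toolkit:

```python
def adjust_confidence(confidence: float, adjustment: float = 1.0) -> float:
    """Apply a confidenceAdjustment factor in the range [0.0, 2.0].

    Values below 1.0 decrease the confidence, 1.0 leaves it unchanged,
    and values above 1.0 increase it.
    """
    if not 0.0 <= adjustment <= 2.0:
        raise ValueError("confidenceAdjustment must be between 0.0 and 2.0")
    # Cap the result at 1.0 so a boosted confidence stays a valid probability.
    return min(confidence * adjustment, 1.0)

print(adjust_confidence(0.95, 0.5))  # decreased
print(adjust_confidence(0.95, 1.0))  # unchanged
print(adjust_confidence(0.6, 1.5))   # increased
```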
Configuration Parameters
- patterns (string, required) - The resource that contains the pattern database.
- See below for the format.
{
"type":"Fragmentation",
"patterns":"fragmented-provider:patterns"
}
Example Output
Description
V--------------[abraham lincoln likes macaroni and cheese]--------------------V
^--[abraham]--V--[lincoln]--V--[likes]--V--[macaroni]--V--[and]--V--[cheese]--^
^---{place}---^ ^----{food}----^ ^---{food}---^
^----------{person}---------^ ^-----------------{food}--------------^
Output Flags
Lex-Item Flags
- SEMANTIC_TAG - Identifies all lexical items that are semantic tags.
- FRAGMENT - Identifies all lexical items that were created from a fragmentation pattern.
- PROCESSED - Placed on all the tokens that compose the semantic tag.
Resource Data
The resource data is a database of fragmented patterns, and the resulting semantic tags they produce.
The only required file is the entity dictionary. It is a series of JSON records, typically indexed by entity ID.
Description of entity
{
"id":"Q28260",
"tags":["{city}", "{administrative-area}", "{geography}"],
"patterns":[
"Lincoln", "Lincoln, Nebraska", "Lincoln, NE"
],
"options": {
"minTokens": 3,
"maxTokens": 6,
"combination": true
},
"confidence":0.95
. . . additional fields as needed go here . . .
}
Notes
- Multiple entities can have the same pattern.
- If the pattern is matched, then it will be tagged with multiple (ambiguous) entity IDs.
- Additional fielded data can be added to the record.
- As needed by downstream processes.
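The note above (multiple entities sharing a pattern, producing ambiguous entity IDs) can be sketched as a pattern index that maps each lowercased pattern to every matching entity ID. The structure below is illustrative, not the toolkit's actual implementation, and the second entity record is hypothetical:

```python
from collections import defaultdict

# Two entity records that share the pattern "Lincoln".
entities = [
    {"id": "Q28260", "tags": ["{city}"], "patterns": ["Lincoln", "Lincoln, NE"]},
    {"id": "Q91", "tags": ["{person}"], "patterns": ["Lincoln", "Abraham Lincoln"]},
]

# Index patterns case-insensitively; a matched pattern yields every entity ID.
index = defaultdict(list)
for entity in entities:
    for pattern in entity["patterns"]:
        index[pattern.lower()].append(entity["id"])

print(index["lincoln"])  # ambiguous: both entity IDs are attached to the match
```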
Fields
- id (required, string) - Identifies the entity by unique ID. This identifier must be unique across all entities (across all dictionaries).
- Typically, this is an identifier with meaning to the larger application that uses the Language Processing Toolkit.
- tags (required, array of string) - The list of semantic tags to be added to the interpretation graph whenever any of the patterns are matched.
- These will be added to the interpretation graph with the SEMANTIC_TAG flag.
- Typically, multiple tags are hierarchical representations of the same intent. For example, {city} → {administrative-area} → {geographical-area}
- patterns (required, array of string) - A list of patterns to match in the content.
- Patterns will be tokenized and multiple variations may match.
NOTE: Currently, tokens are separated on simple white-space and punctuation, and then reduced to lowercase.
TODO: This will need to be improved in the future, perhaps by specifying a pipeline to perform the tokenization and to allow for multiple variations.
- options (optional, JSON Object) - Object with options applicable for this entity
- minTokens (optional, int) - Minimum number of tokens the match must contain to be valid. The default is the number of tokens contained in each pattern.
- maxTokens (optional, int) - Maximum number of tokens the match must contain to be valid. The default is the number of tokens contained in each pattern.
- combination (optional, boolean) - Indicates if the given tokens can be matched in any order as long as all appear in the match. If false, the tokens must be in the order provided.
- confidence (optional, float) - Specifies the confidence level of the entity, independent of any patterns matched.
- This is the confidence of the entity, in comparison to all of the other entities. Essentially, the likelihood that this entity will be randomly encountered.
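The tokenization described in the NOTE above (split on white-space and punctuation, then lowercase) and the combination option can be sketched together as follows. `pattern_matches` is a hypothetical helper, not the toolkit's API, and the combination check is approximated with set equality:

```python
import re

def tokenize(text: str) -> list[str]:
    # Split on white-space and punctuation, then lowercase (per the NOTE above).
    return [t.lower() for t in re.split(r"[^\w]+", text) if t]

def pattern_matches(pattern: str, candidate: str, combination: bool = False) -> bool:
    """Check a candidate span against one pattern.

    With combination=True the pattern tokens may appear in any order, as long
    as all of them appear; otherwise the token order must be preserved.
    """
    p, c = tokenize(pattern), tokenize(candidate)
    if combination:
        return set(p) == set(c)
    return p == c

print(pattern_matches("Lincoln, NE", "lincoln ne"))                   # True
print(pattern_matches("macaroni and cheese", "cheese and macaroni"))  # False
print(pattern_matches("macaroni and cheese", "cheese and macaroni",
                      combination=True))                              # True
```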
Other, Optional Fields
- display (optional, string) - What to show the user when browsing this entity.