The JSON Producer stage, as its name suggests, produces a JSON array representation of TEXT_BLOCK or SENTENCE items. The output can be filtered to only entities or to all tokens. Access to the produced output is done programmatically (see below).
Operates On: Every lexical item in the graph.
Generic Configuration Parameters
- boundaryFlags (string array, optional) - List of vertex flags that indicate the beginning and end of a text block. Tokens to process must be inside two vertices marked with one of these flags (e.g. ["TEXT_BLOCK_SPLIT"]).
- skipFlags (string array, optional) - Flags to be skipped by this stage. Tokens marked with any of these flags will be ignored, and no processing will be performed on them.
- requiredFlags (string array, optional) - Lex-item flags required on every token to be processed. Tokens need to have all of the specified flags in order to be processed.
- atLeastOneFlag (string array, optional) - Lex-item flags of which every token to be processed needs at least one. Tokens will need at least one of the flags specified in this array.
- confidenceAdjustment (double, default=1, required) - Adjustment factor, in the range 0.0 to 2.0, applied to the confidence value of every pattern match (a worked sketch follows this list).
  - 0.0 to < 1.0 decreases the confidence value
  - 1.0 leaves the confidence value unchanged
  - > 1.0 to 2.0 increases the confidence value
- debug (boolean, default=false, optional) - Enables all debug log functionality for the stage, if any.
- enable (boolean, default=true, optional) - Indicates whether the current stage should be considered by the Pipeline Manager.
  - Only applies to automatic pipeline building.
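The exact formula behind confidenceAdjustment is not stated on this page; the sketch below assumes a simple multiplicative adjustment clamped to a valid confidence range, which is consistent with the ranges described above but is an illustration only.

public final class ConfidenceAdjustmentExample {

    // Hypothetical helper; the toolkit's real calculation is not documented here.
    static double adjustConfidence(double matchConfidence, double confidenceAdjustment) {
        if (confidenceAdjustment < 0.0 || confidenceAdjustment > 2.0) {
            throw new IllegalArgumentException("confidenceAdjustment must be in the range 0.0 to 2.0");
        }
        double adjusted = matchConfidence * confidenceAdjustment;  // assumed multiplicative adjustment
        return Math.min(1.0, Math.max(0.0, adjusted));             // assumed clamp to a valid confidence
    }

    public static void main(String[] args) {
        System.out.println(adjustConfidence(0.8, 0.5)); // 0.4 -> factor below 1.0 decreases confidence
        System.out.println(adjustConfidence(0.8, 1.0)); // 0.8 -> factor of 1.0 leaves it unchanged
        System.out.println(adjustConfidence(0.8, 1.5)); // 1.0 -> factor above 1.0 increases it (1.2, clamped)
    }
}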
Configuration Parameters
- name (string, required) - Unique name to identify the stage in the pipeline. It is used programmatically to retrieve the stage and consume the produced output (see the sketch after the example configuration below).
- onlyEntities (boolean, optional) - If true, only entities (semantic tags) are included in the produced JSON; otherwise all tokens are included.
Example Configuration
{
"type": "JsonProducerStage",
"name": "JsonProducer",
"boundaryFlags": [
"TEXT_BLOCK_SPLIT"
],
"onlyEntities": true,
"queueTimeout": 10,
"queueRetries": 1
}
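The produced output is consumed programmatically via the stage's configured name. Below is a minimal sketch of that consumption, assuming the output is exposed as a JSON string; the placeholder string, and the idea of a pipeline accessor such as getStageByName("JsonProducer"), are illustrative only and not confirmed by this page. The parsing itself uses the standard Jackson API.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ConsumeJsonProducerOutput {
    public static void main(String[] args) throws Exception {
        // In a real pipeline this string would come from the stage configured above,
        // retrieved by its "name" (e.g. something like pipeline.getStageByName("JsonProducer"),
        // a hypothetical accessor -- this page does not document the exact call).
        String producedJson = "[{\"text\":\"abraham lincoln likes macaroni and cheese\"}]"; // placeholder output only

        ObjectMapper mapper = new ObjectMapper();        // standard Jackson parser
        JsonNode items = mapper.readTree(producedJson);  // the stage produces a JSON array
        for (JsonNode item : items) {
            System.out.println(item);                    // one TEXT_BLOCK / SENTENCE item per element
        }
    }
}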
Example Output
V--------------[abraham lincoln likes macaroni and cheese]--------------------V
^--[abraham]--V--[lincoln]--V--[likes]--V--[macaroni]--V--[and]--V--[cheese]--^
^---{place}---^ ^----{food}----^ ^---{food}---^
^----------{person}---------^ ^-----------------{food}--------------^
Output Flags
Lex-Item Flags:
- SEMANTIC_TAG - Identifies all lexical items which are semantic tags.
- PROCESSED - Placed on all the tokens which composed the semantic tag.
- ALL_LOWER_CASE - All of the characters in the token are lower-case characters.
- ALL_UPPER_CASE - All of the characters in the token are upper-case characters (for example, acronyms).
- TITLE_CASE - The first character is upper case, all of the other characters are lower case.
- MIXED_CASE - Handles any mixed upper & lower case scenario not covered above.
- TOKEN - All tokens produced are tagged as TOKEN
- CHAR_CHANGE - Identifies the vertex as a change between character formats
- HAS_DIGIT - Tokens produced with at least one digit character are tagged as HAS_DIGIT
- HAS_PUNCTUATION - Tokens produced with at least one punctuation character are tagged as HAS_PUNCTUATION. (ALL_PUNCTUATION will not be tagged as HAS_PUNCTUATION)
- LEMMATIZE - All words retrieved will be marked as LEMMATIZE
- NUMBER - Flagged on all tokens which are numbers according to the rules above.
- TEXT_BLOCK - Flags all text blocks produced by the SimpleReader
- SKIP - All matched stop words will be marked as SKIP
- ORIGINAL - Identifies that the Lex-Items produced by this stage are the original, as written, representation of every token (e.g. before normalization)
Vertex Flags:
No vertex is created in this stage
- ALL_PUNCTUATION - Identifies the vertex as an all-punctuation token
- The default flag if no "splitFlag" is present.
- <splitFlag> - Defines an alternative flag to ALL_PUNCTUATION, if desired (see above)
- CHAR_CHANGE - Identifies the vertex as a change between character formats
- TEXT_BLOCK_SPLIT - Identifies the vertex as a split between text blocks.
- OVERFLOW_SPLIT - Identifies that an entire buffer was read without finding a split between text blocks.
- The current maximum size of a text block is 64K characters.
- Text blocks larger than this will be arbitrarily split, and the vertex will be marked with "OVERFLOW_SPLIT"
- ALL_WHITESPACE - Identifies that the characters spanned by the vertex are all whitespace characters (spaces, tabs, new-lines, carriage returns, etc.)
Resource Data
The only file which is absolutely required is the entity dictionary. It is a series of JSON records, typically indexed by entity ID.
Description of entity:
Entity JSON Format
{
"id":"Q28260",
"tags":["{city}", "{administrative-area}", "{geography}"],
"patterns":[
"Lincoln", "Lincoln, Nebraska", "Lincoln, NE"
],
"confidence":0.95
. . . additional fields as needed go here . . .
}
Notes
- Multiple entities can have the same pattern (a grouping sketch follows these notes).
- If the pattern is matched, then it will be tagged with multiple (ambiguous) entity IDs.
- Additional fielded data can be added to the record
- As needed by downstream processes.
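The ambiguity described above can be made concrete with a small sketch that loads an entity dictionary and reports any pattern shared by more than one entity ID. The file name entities.json and the one-JSON-record-per-line layout are assumptions for illustration; this page does not specify the exact file layout.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class FindAmbiguousPatterns {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        Map<String, List<String>> entityIdsByPattern = new HashMap<>();

        // Assumes one entity record per line (layout not specified on this page).
        for (String line : Files.readAllLines(Path.of("entities.json"))) {
            if (line.isBlank()) continue;
            JsonNode entity = mapper.readTree(line);
            String id = entity.get("id").asText();
            for (JsonNode pattern : entity.get("patterns")) {
                // Patterns are lower-cased (see the patterns field below), so group on the lower-cased form.
                entityIdsByPattern
                        .computeIfAbsent(pattern.asText().toLowerCase(), k -> new ArrayList<>())
                        .add(id);
            }
        }

        // A pattern mapped to more than one id will be tagged with multiple (ambiguous) entity IDs.
        entityIdsByPattern.forEach((pattern, ids) -> {
            if (ids.size() > 1) System.out.println(pattern + " -> " + ids);
        });
    }
}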
Fields
- id (required, string) - Identifies the entity by unique ID. This identifier must be unique across all entities (across all dictionaries).
- Typically this is an identifier with meaning to the larger application which is using the Language Processing Toolkit.
- tags (required, array of string) - The list of semantic tags which will be added to the interpretation graph whenever any of the patterns are matched.
- These will all be added to the interpretation graph with the SEMANTIC_TAG flag.
- Typically, multiple tags are hierarchical representations of the same intent. For example, {city} → {administrative-area} → {geographical-area}
- patterns (required, array of string) - A list of patterns to match in the content.
- Patterns will be tokenized and there may be multiple variations which can match.
- NOTE: Currently, tokens are separated on simple white-space and punctuation, and then reduced to lowercase (a small sketch of this follows the list of fields below).
- TODO: This will need to be improved in the future, perhaps by specifying a pipeline to perform the tokenization and to allow for multiple variations.
- confidence (optional, float) - Specifies the confidence level of the entity, independent of any patterns matched.
- This is the confidence of the entity, in comparison to all of the other entities. Essentially, the likelihood that this entity will be randomly encountered.
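The white-space/punctuation split and lower-casing described for the patterns field can be approximated as below. This is only an illustration of the described behaviour, not the toolkit's actual tokenizer.

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class PatternTokenizationSketch {
    // Approximates the behaviour described above: split on white-space and punctuation,
    // then lower-case the tokens. The real tokenization may differ in detail.
    static List<String> tokenize(String pattern) {
        return Arrays.stream(pattern.toLowerCase().split("[\\s\\p{Punct}]+"))
                .filter(token -> !token.isEmpty())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(tokenize("Lincoln, Nebraska")); // [lincoln, nebraska]
        System.out.println(tokenize("Lincoln, NE"));       // [lincoln, ne]
    }
}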
Other, Optional Fields
- display (optional, string) - What to show the user when browsing this entity.
- context (optional, object) - A context vector which can help disambiguate this entity from others with the same pattern.
- Format TBD, but probably a list of weighted words, phrases and tags.