The JSON Producer stage, as its name suggests, produces a JSON array representation of TEXT_BLOCK or SENTENCE items. The output can be filtered to only entities, or it can include all tokens. The produced output is accessed programmatically (see below).
Operates On: Every lexical item in the graph.
Generic Configuration Parameters
Configuration Parameters
- name (string, required) - Unique name to identify the stage in the pipeline. It is used programmatically to retrieve the stage and consume the produced output.
- onlyEntities (boolean, optional) - When true, only entities (semantic tags) are included in the output; otherwise all tokens are included.
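The effect of onlyEntities can be sketched as follows. This is an illustrative assumption, not the toolkit's actual API: the item structure and the produce_json helper are invented here to show the filtering behavior described above.

```python
import json

def produce_json(items, only_entities=False):
    """Return a JSON array of lexical items; if only_entities is set,
    keep just the items flagged SEMANTIC_TAG (i.e. entities)."""
    if only_entities:
        items = [i for i in items if "SEMANTIC_TAG" in i["flags"]]
    return json.dumps(items)

# Hypothetical lexical items as they might appear in the graph.
items = [
    {"token": "lincoln", "flags": ["TOKEN", "ALL_LOWER_CASE"]},
    {"token": "{place}", "flags": ["SEMANTIC_TAG"]},
]
print(produce_json(items, only_entities=True))  # keeps only the {place} item
```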
Example Configuration
{
"type": "JsonProducerStage",
"name": "JsonProducer",
"boundaryFlags": [
"TEXT_BLOCK_SPLIT"
],
"onlyEntities": true,
"queueTimeout": 10,
"queueRetries": 1
}
Example Output
V--------------[abraham lincoln likes macaroni and cheese]--------------------V
^--[abraham]--V--[lincoln]--V--[likes]--V--[macaroni]--V--[and]--V--[cheese]--^
              ^---{place}---^           ^----{food}----^         ^---{food}---^
^----------{person}---------^           ^-----------------{food}--------------^
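For the graph above, a producer configured with "onlyEntities": true might emit an array along these lines. The field names here are purely an assumed illustration; the real schema is whatever the stage produces:

```python
import json

# Assumed (not actual) output shape: one record per semantic tag,
# with the text span the tag covers in the example graph above.
entities = [
    {"tag": "{person}", "text": "abraham lincoln"},
    {"tag": "{place}",  "text": "lincoln"},
    {"tag": "{food}",   "text": "macaroni"},
    {"tag": "{food}",   "text": "cheese"},
    {"tag": "{food}",   "text": "macaroni and cheese"},
]
print(json.dumps(entities, indent=2))
```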
Output Flags
Lex-Item Flags:
- SEMANTIC_TAG - Identifies all lexical items which are semantic tags.
- PROCESSED - Placed on all the tokens which composed the semantic tag.
- ALL_LOWER_CASE - All of the characters in the token are lower-case characters.
- ALL_UPPER_CASE - All of the characters in the token are upper-case characters (for example, acronyms).
- TITLE_CASE - The first character is upper case, all of the other characters are lower case.
- MIXED_CASE - Handles any mixed upper & lower case scenario not covered above.
- TOKEN - All tokens produced are tagged as TOKEN.
- CHAR_CHANGE - Identifies the vertex as a change between character formats.
- HAS_DIGIT - Tokens produced with at least one digit character are tagged as HAS_DIGIT.
- HAS_PUNCTUATION - Tokens produced with at least one punctuation character are tagged as HAS_PUNCTUATION. (ALL_PUNCTUATION tokens will not be tagged as HAS_PUNCTUATION.)
- LEMMATIZE - All words retrieved will be marked as LEMMATIZE.
- NUMBER - Flagged on all tokens which are numbers according to the rules above.
- TEXT_BLOCK - Flags all text blocks produced by the SimpleReader.
- SKIP - All matched stop words will be marked as SKIP.
- ORIGINAL - Identifies that the Lex-Items produced by this stage are the original, as-written representation of every token (e.g. before normalization).
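The four case flags above are mutually exclusive. A minimal sketch of the classification they describe (illustrative only, not toolkit code):

```python
def case_flag(token: str) -> str:
    """Classify a token into one of the four case flags described above.
    Tokens with no letters fall through to MIXED_CASE in this sketch;
    the real stage covers those with other flags (e.g. HAS_DIGIT)."""
    letters = [c for c in token if c.isalpha()]
    if letters and all(c.islower() for c in letters):
        return "ALL_LOWER_CASE"
    if letters and all(c.isupper() for c in letters):
        return "ALL_UPPER_CASE"
    if letters and letters[0].isupper() and all(c.islower() for c in letters[1:]):
        return "TITLE_CASE"
    return "MIXED_CASE"

print(case_flag("NASA"))     # ALL_UPPER_CASE
print(case_flag("Lincoln"))  # TITLE_CASE
```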
Vertex Flags:
No vertices are created in this stage; the flags below may already be present on vertices in the graph.
- ALL_PUNCTUATION - Identifies the vertex as all punctuation.
- The default flag if no "splitFlag" is present.
- <splitFlag> - Defines an alternative flag to ALL_PUNCTUATION, if desired (see above)
- CHAR_CHANGE - Identifies the vertex as a change between character formats
- TEXT_BLOCK_SPLIT - Identifies the vertex as a split between text blocks.
- OVERFLOW_SPLIT - Identifies that an entire buffer was read without finding a split between text blocks.
- The current maximum size of a text block is 64K characters.
- Text blocks larger than this will be arbitrarily split, and the vertex will be marked with "OVERFLOW_SPLIT".
- ALL_WHITESPACE - Identifies that the characters spanned by the vertex are all whitespace characters (spaces, tabs, new-lines, carriage returns, etc.)
Resource Data
The only file which is absolutely required is the entity dictionary. It is a series of JSON records, typically indexed by entity ID.
Description of entity:
Entity JSON Format
{
"id":"Q28260",
"tags":["{city}", "{administrative-area}", "{geography}"],
"patterns":[
"Lincoln", "Lincoln, Nebraska", "Lincoln, NE"
],
"confidence":0.95
. . . additional fields as needed go here . . .
}
Notes
- Multiple entities can have the same pattern.
- If the pattern is matched, then it will be tagged with multiple (ambiguous) entity IDs.
- Additional fielded data can be added to the record
- As needed by downstream processes.
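The notes above can be sketched as a pattern index: when two entities share a pattern, a match on that pattern carries both (ambiguous) entity IDs. The second record's ID is hypothetical; only the structures, not the toolkit's internals, are shown here.

```python
from collections import defaultdict

entities = [
    {"id": "Q28260", "patterns": ["Lincoln", "Lincoln, Nebraska"]},  # the city (from the example above)
    {"id": "Q91",    "patterns": ["Lincoln", "Abraham Lincoln"]},    # hypothetical second entity (a person)
]

# Build a lowercase pattern -> [entity IDs] index.
index = defaultdict(list)
for entity in entities:
    for pattern in entity["patterns"]:
        index[pattern.lower()].append(entity["id"])

print(index["lincoln"])  # ['Q28260', 'Q91'] -- an ambiguous match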
Fields
- id (required, string) - Identifies the entity by unique ID. This identifier must be unique across all entities (across all dictionaries).
- Typically this is an identifier with meaning to the larger application which is using the Language Processing Toolkit.
- tags (required, array of string) - The list of semantic tags which will be added to the interpretation graph whenever any of the patterns are matched.
- These will all be added to the interpretation graph with the SEMANTIC_TAG flag.
- Typically, multiple tags are hierarchical representations of the same intent. For example, {city} → {administrative-area} → {geographical-area}
- patterns (required, array of string) - A list of patterns to match in the content.
- Patterns will be tokenized and there may be multiple variations which can match.
- NOTE: Currently, tokens are separated on simple white-space and punctuation, and then reduced to lowercase.
- TODO: This will need to be improved in the future, perhaps by specifying a pipeline to perform the tokenization and to allow for multiple variations.
- confidence (optional, float) - Specifies the confidence level of the entity, independent of any patterns matched.
- This is the confidence of the entity, in comparison to all of the other entities. Essentially, the likelihood that this entity will be randomly encountered.
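The current pattern tokenization rule (split on simple white-space and punctuation, then lowercase) can be sketched as follows; the function name and regex are an assumed illustration of the rule, not the toolkit's implementation:

```python
import re

def tokenize_pattern(pattern: str) -> list[str]:
    """Split on runs of white-space and punctuation, then lowercase."""
    return [t.lower() for t in re.split(r"[\s\W]+", pattern) if t]

print(tokenize_pattern("Lincoln, NE"))  # ['lincoln', 'ne']
```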
Other, Optional Fields
- display (optional, string) - What to show the user when browsing this entity.
- context (optional, object) - A context vector which can help disambiguate this entity from others with the same pattern.
- Format TBD, but probably a list of weighted words, phrases and tags.