
The JSON Producer Stage will, as its name suggests, produce a JSON array representation of TEXT_BLOCK items. Output can be filtered to entities only or to all tokens. Access to the produced output is done programmatically.

Operates On: Every lexical item in the graph.

Include Page: Generic Configuration Parameters

Include Page: Generic Producer Configuration Parameters

Configuration Parameters

  • name (string, required) - Unique name to identify the stage in the pipeline. It is used programmatically to retrieve the stage and consume the produced output.
  • onlyEntities (boolean, optional, default: false) - If true, only tagged entities are included in the output; otherwise, all tokens are included.
  • whitelist (string array, optional, default: empty) - If non-empty, only entities with the given names in the whitelist are added to the JSON output (see the whitelist example after the example configuration below).
  • blacklist (string array, optional, default: empty) - If non-empty, any entity is added to the JSON output except those named in the blacklist.


Code Block
languagejs
themeEclipse
titleExample Configuration
"name": "JsonProducer",
"onlyEntities": true,
"queueTimeout": 10,
"queueRetries": 1
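In addition to the configuration above, the whitelist parameter can restrict which entities are emitted. The following snippet is illustrative only: it assumes the measurement and unit entity names from the example output below, and the source does not specify whether entity names are written with or without their surrounding braces (they are shown here without them).

Code Block
languagejs
themeEclipse
titleExample Configuration with a Whitelist (illustrative)
"name": "JsonProducer",
"onlyEntities": true,
"whitelist": ["measurement", "unit"]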

Example Output


If you have a text block like the following:

Code Block
languagetext
themeFadeToGrey
V----------[300 ml of water]----------V
^----------[300 ml of water]----------^
^-[300]-V---[ml]---V--[of]--V-[water]-^
^-[{#}]-^-[{unit}]-^-[have]-^ 
^-[{measurement}]--^ 

the stage will produce the following JSON (if onlyEntities = true):

Code Block
languagejs
"entities":[{
  "text":"300 ml",
  "value":[
    {
      "value":"300",
     

...

 "entity":"#"
    },
    

...

{

...


      "value":"mililiters",
      

...

"entity":"unit"
    }
  ],
  "entity":"measurement",
  "startPos":0,
  "endPos":6
}]

or the following (if onlyEntities = false):

Code Block
languagejs
"tokens":[
  {
    "text":"300 ml",
    "value":[
      {
        "value":"300",
        "entity":"#"
      },
      {
        "value":"mililiters",
        "entity":"unit"
      }
    ],
    "entity":"measurement",
    "startPos":0,
    "endPos":6
  },
  {
    "text":"of",
    "startPos":7,
    "endPos":9
  },
  {
    "text":"water",
    "startPos":10,
    "endPos":15
  }
]
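Downstream code can consume this output with any standard JSON parser. The following is a minimal sketch in plain JavaScript; it assumes the host application has already retrieved the produced JSON as a string (the retrieval call itself is not shown), and the surrounding object wrapper is added only so the fragment parses on its own.

Code Block
languagejs
titleConsuming the Output (illustrative)
// The "tokens" example from above, wrapped in an object so it parses as JSON.
const raw = '{"tokens":[{"text":"300 ml","value":[{"value":"300","entity":"#"},{"value":"mililiters","entity":"unit"}],"entity":"measurement","startPos":0,"endPos":6},{"text":"of","startPos":7,"endPos":9},{"text":"water","startPos":10,"endPos":15}]}';

const output = JSON.parse(raw);
for (const token of output.tokens) {
  if (token.entity) {
    // A tagged entity: report its type, character span, and nested values.
    console.log(`${token.entity} [${token.startPos}..${token.endPos}]: ${token.text}`);
    for (const v of token.value || []) {
      console.log(`  ${v.entity} = ${v.value}`);
    }
  } else {
    // A plain token (present only when onlyEntities = false).
    console.log(`token [${token.startPos}..${token.endPos}]: ${token.text}`);
  }
}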

Output Flags

Lex-Item Flags:

  • SEMANTIC_TAG - Identifies all lexical items which are semantic tags.
  • PROCESSED - Placed on all the tokens which composed the semantic tag.
  • ALL_LOWER_CASE - All of the characters in the token are lower-case characters.
  • ALL_UPPER_CASE - All of the characters in the token are upper-case characters (for example, acronyms).
  • TITLE_CASE - The first character is upper case, all of the other characters are lower case.
  • MIXED_CASE - Handles any mixed upper & lower case scenario not covered above.
  • TOKEN - All tokens produced are tagged as TOKEN.
  • CHAR_CHANGE - Identifies the vertex as a change between character formats.
  • HAS_DIGIT - Tokens produced with at least one digit character are tagged as HAS_DIGIT.
  • HAS_PUNCTUATION - Tokens produced with at least one punctuation character are tagged as HAS_PUNCTUATION. (ALL_PUNCTUATION will not be tagged as HAS_PUNCTUATION.)
  • LEMMATIZE - All words retrieved will be marked as LEMMATIZE.
  • NUMBER - Flagged on all tokens which are numbers according to the rules above.
  • TEXT_BLOCK - Flags all text blocks produced by the SimpleReader.
  • SKIP - All matched stop words will be marked as SKIP.
  • ORIGINAL - Identifies that the Lex-Items produced by this stage are the original, as-written representation of every token (e.g. before normalization).

Vertex Flags:

No vertices are created in this stage.

  • ALL_PUNCTUATION - Identifies the vertex as all punctuation.
    • The default flag if no "splitFlag" is present.
  • <splitFlag> - Defines an alternative flag to ALL_PUNCTUATION, if desired (see above)
  • CHAR_CHANGE - Identifies the vertex as a change between character formats.
  • TEXT_BLOCK_SPLIT - Identifies the vertex as a split between text blocks.
  • OVERFLOW_SPLIT - Identifies that an entire buffer was read without finding a split between text blocks.
    • The current maximum size of a text block is 64K characters.
    • Text blocks larger than this will be arbitrarily split, and the vertex will be marked with "OVERFLOW_SPLIT".
  • ALL_WHITESPACE - Identifies that the characters spanned by the vertex are all whitespace characters (spaces, tabs, new-lines, carriage returns, etc.)

Resource Data


Resource Format

The only file which is absolutely required is the entity dictionary. It is a series of JSON records, typically indexed by entity ID.

Description of entity:

Code Block
languagejs
themeEclipse
titleEntity JSON Format
{
  "id":"Q28260",
  "tags":["{city}", "{administrative-area}", "{geography}"],
  "patterns":[
    "Lincoln", "Lincoln, Nebraska", "Lincoln, NE"
  ],
  "confidence":0.95
  
  . . . additional fields as needed go here . . . 
}

Notes

  1. Multiple entities can have the same pattern (see the example after these notes).
    1. If the pattern is matched, it will be tagged with multiple (ambiguous) entity IDs.
  2. Additional fielded data can be added to the record, as needed by downstream processes.
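For example, two dictionary records could both list "Lincoln" as a pattern; when "Lincoln" is matched, it is tagged with both entity IDs. The second record below is hypothetical and is shown only to illustrate the ambiguity.

Code Block
languagejs
titleAmbiguous Pattern Example (second record hypothetical)
[
  {
    "id":"Q28260",
    "tags":["{city}", "{administrative-area}", "{geography}"],
    "patterns":["Lincoln", "Lincoln, Nebraska", "Lincoln, NE"]
  },
  {
    "id":"EXAMPLE-PERSON-1",
    "tags":["{person}"],
    "patterns":["Lincoln", "Abraham Lincoln"]
  }
]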

Fields

  • id (required, string) - Identifies the entity by unique ID. This identifier must be unique across all entities (across all dictionaries).
    • Typically this is an identifier with meaning to the larger application which is using the Language Processing Toolkit.
  • tags (required, array of string) - The list of semantic tags which will be added to the interpretation graph whenever any of the patterns are matched.
    • These will all be added to the interpretation graph with the SEMANTIC_TAG flag.
    • Typically, multiple tags are hierarchical representations of the same intent. For example, {city} → {administrative-area} → {geographical-area}
  • patterns (required, array of string) - A list of patterns to match in the content.
    • Patterns will be tokenized and there may be multiple variations which can match.
      • NOTE: Currently, tokens are separated on simple white-space and punctuation, and then reduced to lowercase.
      • TODO:  This will need to be improved in the future, perhaps by specifying a pipeline to perform the tokenization and to allow for multiple variations.
  • confidence (optional, float) - Specifies the confidence level of the entity, independent of any patterns matched.
    • This is the confidence of the entity, in comparison to all of the other entities. Essentially, the likelihood that this entity will be randomly encountered.
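As a quick sanity check while building dictionaries, records can be validated against the required fields above before loading. This is a minimal illustrative sketch, not part of the toolkit; the function name and the treatment of empty arrays are assumptions.

Code Block
languagejs
titleValidating an Entity Record (illustrative)
// Checks the fields documented above: id, tags, and patterns are required;
// confidence is optional and, when present, must be a number.
function validateEntityRecord(record) {
  const errors = [];
  if (typeof record.id !== 'string' || record.id.length === 0) {
    errors.push('id must be a non-empty string');
  }
  if (!Array.isArray(record.tags) || record.tags.length === 0) {
    errors.push('tags must be a non-empty array');     // assumed: an empty array is invalid
  }
  if (!Array.isArray(record.patterns) || record.patterns.length === 0) {
    errors.push('patterns must be a non-empty array'); // assumed: an empty array is invalid
  }
  if (record.confidence !== undefined && typeof record.confidence !== 'number') {
    errors.push('confidence must be a number when present');
  }
  return errors;
}

// The Q28260 record from above passes with no errors.
console.log(validateEntityRecord({
  "id": "Q28260",
  "tags": ["{city}", "{administrative-area}", "{geography}"],
  "patterns": ["Lincoln", "Lincoln, Nebraska", "Lincoln, NE"],
  "confidence": 0.95
}));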

Other, Optional Fields
