
A key innovation of the Saga Library is that the output of language processing is a graph of alternative representations.

What is an 'interpretation graph'?

Every token in a piece of text could have multiple interpretations. An "interpretation graph" is an efficient method for showing all possible interpretations of a piece of text.

As an example, the interpretation graph of "Abe Lincoln likes the iPhone-8" might look like this:

In this example, we see that:

  • An interpretation of the entire sentence is {person-product-preference}.
    • In other words, there's a person who likes a product
  • The {person} is made up of two tokens:  "abe" → "lincoln"
  • The token "lincoln" has a title-case alternative: "Lincoln"
  • The token "likes" has a lemmatized alternative:  "like"

What's not shown in the above diagram are confidence factors, which are tagged on every interpretation.

Interpretation Graphs are made from Vertexes and Lexical Items

  • Lexical Items - Can be a text block, token, or semantic tag
    • Typically important carriers of semantic information
  • Vertexes - Are the junction points between interpretations
    • Typically the white-space or punctuation between lexical items

It is this "node and edge" structure which makes this an interpretation graph.
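As a rough illustration of this node-and-edge structure, the pieces above could be modeled as follows. All class and field names here are hypothetical sketches, not the actual Saga Library API:

```python
# Minimal sketch of an interpretation graph (hypothetical names, not the Saga API).
# Vertexes are junction points; each holds a list of outgoing lexical items,
# so parallel entries represent alternative interpretations of the same span.

class LexicalItem:
    def __init__(self, text, item_type):
        self.text = text            # character buffer for this item
        self.item_type = item_type  # e.g. "TOKEN", "SEMANTIC_TAG"
        self.target = None          # vertex this item leads to

class Vertex:
    def __init__(self):
        self.out = []               # alternative lexical items starting here

# Build the "lincoln" span with its title-case alternative:
v1, v2 = Vertex(), Vertex()
original = LexicalItem("lincoln", "TOKEN")
title_cased = LexicalItem("Lincoln", "TOKEN")
for item in (original, title_cased):
    item.target = v2
    v1.out.append(item)             # both paths run from v1 to v2

print([item.text for item in v1.out])   # both alternatives survive
```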

Interpretation Graphs are "Add Only"

Information can only be added to an interpretation graph. It can never be removed or changed. By this we mean:

  • More alternative paths can be added
  • More lexical variations can be added
  • Flags (and possibly additional metadata - TBD) can be added to lexical items and vertexes
    • Flags, once set, can never be un-set.
  • Tokens, text blocks, semantic tags, once added, can never be removed
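The "add only" rules above can be sketched as an API whose only mutation is an append. This is an illustrative shape, not the real interface; note the absence of any remove or replace method:

```python
# Sketch of add-only behavior (hypothetical API): alternatives accumulate,
# and nothing exposes a way to delete or replace an interpretation.

class Vertex:
    def __init__(self):
        self._out = []                  # grows monotonically

    def add_alternative(self, item):
        self._out.append(item)          # the only mutation: append

    @property
    def alternatives(self):
        return tuple(self._out)         # read-only view; no removal path

v = Vertex()
v.add_alternative("likes")
v.add_alternative("like")               # lemmatized alternative added alongside
print(v.alternatives)                   # ('likes', 'like')
```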

This comes from hard experience, through which we have discovered that, ultimately, "all interpretations are possible". When we implemented these toolkits previously, we had to make hard choices: Which punctuation splits a token? Is upper-case important? Do we need to save the original variation, or is the root word enough? In almost all cases the answer is "sometimes" or, occasionally, "almost always".

And so, we never actually remove any interpretations from the graph. Instead, all interpretations are kept at all times, and disambiguation is used to choose the interpretation that is most likely to be correct for the application.

Everything is Saved

Along with the "add only" approach, we endeavor to save everything. For example:

  • Lexical items contain character buffers of the text for the item.
  • Vertexes contain character buffers of the characters which they cover (e.g. the spaces, punctuation, etc.).

Further, every vertex and lexical item identifies the start and end character position (from the original content stream) which it covers.
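The character-position bookkeeping can be pictured with a small sketch (field names are hypothetical): every item's span indexes back into the original content stream, so the exact source text is always recoverable.

```python
# Sketch (hypothetical names): every lexical item and vertex records the
# character span it covers in the original content stream, plus a saved
# character buffer for that span.

text = "Abe Lincoln likes the iPhone-8"

class Span:
    def __init__(self, start, end):
        self.start, self.end = start, end
        self.buffer = text[start:end]   # saved character buffer

token = Span(4, 11)        # a lexical item covering "Lincoln"
gap = Span(11, 12)         # the vertex covering the following space
assert token.buffer == "Lincoln"
assert gap.buffer == " "
```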

Flags

Flags are bits which can be turned on (i.e. 'set') for lexical items and vertexes. Flags are typically used for unambiguous, processing-related functions. Their purpose is often to control downstream processing to make the pipelines more efficient.

Once they are set, they should never be un-set (strictly speaking, you can change the bits at any time, so this is more of an honor system).

Flags typically identify obvious and unambiguous characteristics of the lexical item and/or vertex. For example lexical item type (TEXT_BLOCK, TOKEN, SEMANTIC_TAG), case (ALL_UPPER_CASE, TITLE_CASE, MIXED_CASE), vertex characters (WHITESPACE, PUNCTUATION), etc.
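A sketch of such flag bits, using the flag names from the examples above (the bit values and the `IntFlag` representation are illustrative assumptions, not the actual Saga definitions):

```python
# Sketch of flag bits (names taken from the examples above; values hypothetical).
from enum import IntFlag

class Flag(IntFlag):
    TEXT_BLOCK     = 1 << 0
    TOKEN          = 1 << 1
    SEMANTIC_TAG   = 1 << 2
    ALL_UPPER_CASE = 1 << 3
    TITLE_CASE     = 1 << 4
    MIXED_CASE     = 1 << 5
    WHITESPACE     = 1 << 6   # vertex characters
    PUNCTUATION    = 1 << 7

# A title-case token like "Lincoln" carries two flags at once:
flags = Flag.TOKEN | Flag.TITLE_CASE
assert Flag.TOKEN in flags
assert Flag.PUNCTUATION not in flags
```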

Flags Only Describe the Lexical Item Itself

It may seem obvious, but flags describe the Lexical Item itself, and do not describe any items from which it was derived.


For example if you have the following graph:

V----[President]----V


And then you apply the Case Analysis Stage to this graph, you will get:

V----[President]----V
 ^---[president]----^


In this example, the first "President" token will have the TITLE_CASE flag, and the second (normalized) "president" token will have the ALL_LOWER_CASE flag. There is no flag which says "I was derived from some other token which was TITLE_CASE".

Note that you can traverse the component links from the derived item ("president") to the original item ("President") to determine whether a token was originally TITLE_CASE.
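That traversal can be sketched as follows (field names like `components` are hypothetical stand-ins for the component links described above):

```python
# Sketch (hypothetical fields): a derived item keeps "component" links back to
# the items it was derived from, so provenance questions are answered by
# traversal rather than by extra flags on the derived item.

TITLE_CASE = 1 << 0
ALL_LOWER_CASE = 1 << 1

class LexicalItem:
    def __init__(self, text, flags, components=()):
        self.text = text
        self.flags = flags
        self.components = list(components)  # items this one was derived from

original = LexicalItem("President", TITLE_CASE)
derived = LexicalItem("president", ALL_LOWER_CASE, components=[original])

def derived_from_title_case(item):
    # Walk one level of component links, checking the source items' flags.
    return any(c.flags & TITLE_CASE for c in item.components)

print(derived_from_title_case(derived))  # True: provenance found via traversal
```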

Semantic Tags

Semantic tags identify (typically) semantic interpretations of sections of the content. This can include anything from entities (like {person}, {place}, etc.) to full sentence interpretation (as in {person-fact-request}, {restrictive-covenant-term}, {language-fluency-statement}, etc.) or possibly more.

Unlike flags (see above), the Language Processing Toolkit does not pre-define any semantic tags. Instead, semantic tags are determined based on the requirements of the text to be processed.

Specifically:

  • Taggers will add semantic tags for entities
    • For example, to look up names from a dictionary and to tag those names where they occur in the document
  • Advanced pattern recognizers will identify combinations of tags and literal text and create new tags
    • They are called "advanced" because they allow for patterns which have nested and recursive tagging
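A toy version of the dictionary-lookup tagging described in the first bullet might look like this. It is far simpler than a real tagger stage (it matches raw strings rather than graph paths), and the dictionary contents are invented for the example:

```python
# Toy dictionary tagger (illustrative only, much simpler than a real stage):
# scan the text and emit a semantic tag over any dictionary match.

import re

PEOPLE = {"abe lincoln": "{person}"}    # hypothetical dictionary entry

def tag_people(text):
    tags = []
    for name, tag in PEOPLE.items():
        # Case-insensitive match; record the tag with its character span.
        for m in re.finditer(re.escape(name), text.lower()):
            tags.append((tag, m.start(), m.end()))
    return tags

print(tag_people("Abe Lincoln likes the iPhone-8"))
# one {person} tag covering characters 0..11
```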

Semantic Tags will be Ambiguous

A key philosophy of this toolkit is that ambiguity is embraced rather than dreaded. To this end, the system will generate all possible semantic tags, including many and various ambiguous alternatives.

Confidence Values

All lexical items will have a confidence value, which describes the confidence of the interpretation. This is key for semantic tags, where the confidence value can initially come from external sources (e.g. the likelihood of an entity occurring randomly) and then will build up based on context and how the entity participates in larger patterns.

In addition, patterns can be generated by statistical techniques and then entered into the system. Systems which generate patterns in this way are encouraged to include a confidence value, which is then combined with the confidence of the supporting parts to generate a confidence value for every interpretation.
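One simple way such a combination could work, for illustration only (the source does not specify the actual formula), is to multiply the pattern's own confidence by the confidences of its supporting parts:

```python
# Illustrative only: the document does not specify the combination formula.
# Here a tag's confidence is its pattern confidence times the product of the
# confidences of the parts that support it.

from math import prod

def combined_confidence(pattern_confidence, part_confidences):
    return pattern_confidence * prod(part_confidences)

# A {person-product-preference} pattern (0.9) over {person}=0.8, {product}=0.95:
print(combined_confidence(0.9, [0.8, 0.95]))  # 0.684
```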

Confidence can be Strengthened with Context

Finally, it is the intention that confidence can be further strengthened with external confidence models. This allows semantic tags to include, or be linked to, contextual clues which, when found in nearby text, will help provide the needed context.

Using the Output

The output of the processing engine will be an interpretation graph with confidence values. It is expected that the application will:

  • Scan through the output
  • Decide (using business rules and confidence factors) which interpretation to accept
  • Use the "components" links to identify all of the text which went into the interpretation
  • Do something with the output

(we hope to make this considerably easier with some helpful post-processing stages - TBD)
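The application-side steps above can be sketched as follows. The data shape, the threshold, and the field names are all hypothetical assumptions about what the output might look like:

```python
# Sketch of the application-side steps (hypothetical data shape):
# scan alternatives, apply a business-rule threshold, accept the best one,
# then use the component links to recover the contributing text.

interpretations = [
    {"tag": "{person-product-preference}", "confidence": 0.82,
     "components": ["Abe", "Lincoln", "likes", "iPhone-8"]},
    {"tag": "{place-statement}", "confidence": 0.31,
     "components": ["Lincoln"]},
]

MIN_CONFIDENCE = 0.5                      # example business rule

def accept(interps):
    viable = [i for i in interps if i["confidence"] >= MIN_CONFIDENCE]
    return max(viable, key=lambda i: i["confidence"], default=None)

chosen = accept(interpretations)
print(chosen["tag"])                      # '{person-product-preference}'
print(chosen["components"])               # text that went into the interpretation
```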


