A key innovation of the Saga Library is that the output of language processing is a graph of alternative representations.
Every token in a piece of text can have multiple interpretations. An "interpretation graph" is an efficient way to represent all possible interpretations of a piece of text.
As an example, the interpretation graph of "Abe Lincoln likes the iPhone-8" might look like this:
In this example, we see that:
What's not shown in the above diagram are confidence factors, which are tagged on every interpretation.
It is this "node and edge" structure which makes this an interpretation graph.
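The "node and edge" structure can be sketched as a small data model. This is an illustrative sketch only, not the actual Saga Library API: vertices mark positions between tokens, each lexical item is an edge spanning two vertices, and parallel edges between the same pair of vertices are alternative interpretations.

```python
from dataclasses import dataclass, field

@dataclass
class LexicalItem:
    text: str          # surface or derived form
    start: int         # index of the starting vertex
    end: int           # index of the ending vertex
    confidence: float  # confidence of this interpretation

@dataclass
class InterpretationGraph:
    items: list = field(default_factory=list)

    def add(self, item):
        # Information can only be added, never removed or changed.
        self.items.append(item)

    def alternatives(self, start, end):
        """All interpretations spanning the same pair of vertices."""
        return [i for i in self.items if i.start == start and i.end == end]

g = InterpretationGraph()
g.add(LexicalItem("Abe Lincoln", 0, 2, 0.9))  # one edge: a multi-word span
g.add(LexicalItem("Abe", 0, 1, 0.5))          # alternative: two separate tokens
g.add(LexicalItem("Lincoln", 1, 2, 0.5))
print(len(g.alternatives(0, 2)))  # 1
```

Note how "Abe Lincoln" and the pair "Abe" / "Lincoln" coexist in the graph as competing interpretations of the same span.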
Information can only be added to an interpretation graph. It can never be removed or changed. By this we mean:
This comes from hard experience: we have discovered that, ultimately, "all interpretations are possible". When we implemented these toolkits previously, we had to make hard choices. For example: which punctuation splits a token? Is upper case important? Do we need to save the original variation, or is the root word enough? In almost all cases the answer is "sometimes" or, occasionally, "almost always".
And so, we never actually remove any interpretations from the graph. Instead, all interpretations are kept at all times, and disambiguation is used to choose the interpretation most likely to be correct for the application.
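The add-only approach means disambiguation is a selection step, not a deletion step. A minimal sketch (the dictionary shape here is hypothetical, not the toolkit's actual representation):

```python
# All interpretations of a span are kept; disambiguation simply picks
# the alternative with the highest confidence. Nothing is removed.
interpretations = [
    {"text": "iPhone-8", "tag": "product", "confidence": 0.85},
    {"text": "iPhone-8", "tag": "token",   "confidence": 0.40},
]

best = max(interpretations, key=lambda i: i["confidence"])
print(best["tag"])  # product
```

The lower-confidence alternative remains in the list, so a later stage with better context can still revisit it.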
Along with the "add only" approach, we endeavor to save everything. For example:
Further, every vertex and lexical item identifies the start and end character position (from the original content stream) which it covers.
Flags are bits which can be turned on (i.e. 'set') for lexical items and vertexes. Flags are typically used for unambiguous, processing-related functions; they often control downstream processing to make the pipelines more efficient.
Once set, they should never be un-set (strictly speaking, you can change the bits at any time, so this is more of an honor system).
Flags typically identify obvious and unambiguous characteristics of the lexical item and/or vertex. For example: lexical item type (TEXT_BLOCK, TOKEN, SEMANTIC_TAG), case (ALL_UPPER_CASE, TITLE_CASE, MIXED_CASE), vertex characters (WHITESPACE, PUNCTUATION), etc.
It may seem obvious, but flags describe the Lexical Item itself, and do not describe any items from which it was derived.
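Flags-as-bits can be sketched with Python's `IntFlag`. The flag names below mirror the examples in the text; the actual toolkit's flag set and bit layout may differ.

```python
from enum import IntFlag, auto

# Hypothetical flag bits, modeled on the examples above.
class Flags(IntFlag):
    TEXT_BLOCK = auto()
    TOKEN = auto()
    SEMANTIC_TAG = auto()
    ALL_UPPER_CASE = auto()
    TITLE_CASE = auto()
    MIXED_CASE = auto()
    WHITESPACE = auto()
    PUNCTUATION = auto()

item_flags = Flags(0)
item_flags |= Flags.TOKEN | Flags.TITLE_CASE  # flags are only ever turned on

print(Flags.TITLE_CASE in item_flags)         # True
print(bool(item_flags & Flags.PUNCTUATION))   # False
```

Because flags are simple bits, testing them is a cheap bitwise operation, which is what makes them suitable for controlling downstream pipeline stages.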
For example if you have the following graph:
V----[President]----V
And then you apply the Case Analysis Stage to this graph, you will get:
V----[President]----V
^---[president]----^
In this example, the first "President" token will have the TITLE_CASE flag, and the second (normalized) "president" token will have the ALL_LOWER_CASE flag. There is no flag which says "I was derived from some other token which was TITLE_CASE".
Note that you can traverse the component links from the derived item ("president") to the original item ("President") to determine whether a token was originally TITLE_CASE.
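That traversal can be sketched as follows. The `Item` class and `components` attribute are illustrative assumptions standing in for the toolkit's actual component links:

```python
# Hypothetical sketch: each derived item keeps links to the items it was
# derived from, so provenance questions are answered by traversing those
# links rather than by adding extra flags.
class Item:
    def __init__(self, text, flags, components=()):
        self.text = text
        self.flags = set(flags)
        self.components = list(components)

original = Item("President", {"TITLE_CASE"})
normalized = Item("president", {"ALL_LOWER_CASE"}, components=[original])

def derived_from_title_case(item):
    # Walk the component links looking for a TITLE_CASE ancestor.
    return any("TITLE_CASE" in c.flags or derived_from_title_case(c)
               for c in item.components)

print(derived_from_title_case(normalized))  # True
```

The normalized token itself carries only ALL_LOWER_CASE; its title-case origin is recovered purely from the link back to "President".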
Semantic tags identify (typically) semantic interpretations of sections of the content. This can include anything from entities (like {person}, {place}, etc.) to full sentence interpretation (as in {person-fact-request}, {restrictive-covenant-term}, {language-fluency-statement}, etc.) or possibly more.
Unlike flags (see above), the Language Processing Toolkit does not pre-define any semantic tags. Instead, semantic tags are determined based on the requirements of the text to be processed.
Specifically:
A key philosophy of this toolkit is that ambiguity is embraced rather than dreaded. To this end, the system will generate all possible semantic tags, including many and various ambiguous alternatives.
All lexical items have a confidence value, which describes the confidence of the interpretation. This is key for semantic tags, where the confidence value can initially come from external sources (e.g. the likelihood of an entity occurring randomly) and then build up based on context and how the entity participates in larger patterns.
In addition, patterns can be generated by statistical techniques and then entered into the system. Systems which generate patterns in this way are encouraged to include a confidence value, which is then combined with the confidence of the supporting parts to generate a confidence value for every interpretation.
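One way to combine a pattern's own confidence with the confidences of its supporting parts is a simple product. The combination rule here is an assumption for illustration; the toolkit may use a different formula:

```python
# Hypothetical combination: pattern confidence multiplied by the
# confidence of each supporting part. Any match weakens the overall
# confidence in proportion to the weakest evidence.
pattern_confidence = 0.9
part_confidences = [0.8, 0.95]

combined = pattern_confidence
for c in part_confidences:
    combined *= c

print(round(combined, 3))  # 0.684
```

A product has the convenient property that any single low-confidence part pulls the whole interpretation down, which matches the intuition that a pattern is only as strong as its evidence.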
Finally, it is the intention that confidence can be further strengthened with external confidence models. This allows semantic tags to include, or be linked to, contextual clues which, when found in nearby text, help provide the needed context.
The output of the processing engine will be an interpretation graph with confidence values. It is expected that the application will:
(we hope to make this considerably easier with some helpful post-processing stages - TBD)