By default, the Connector Framework allows connectors to handle incremental crawling using the snapshot NoSQL database. This snapshot contains an entry for each item discovered by the last crawl, with an id, a subset of the metadata, a signature, and a crawl id. On the next incremental crawl, the action to take for each item is determined using the following criteria:
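As an illustration only (a sketch, not the framework's actual implementation), a snapshot comparison of this kind could drive the per-item action as follows, assuming a hypothetical snapshot entry keyed by id with a signature:

```java
import java.util.HashMap;
import java.util.Map;

/** Hedged sketch: deciding the incremental action per item by comparing the
 *  current crawl against the previous snapshot. The field names and Action
 *  values are illustrative, not the framework's actual API. */
enum Action { ADD, UPDATE, NO_CHANGE, DELETE }

class SnapshotComparison {

    /** previousSnapshot: id -> signature recorded by the last crawl
     *  currentItems:     id -> signature computed by the current crawl */
    static Map<String, Action> compare(Map<String, String> previousSnapshot,
                                       Map<String, String> currentItems) {
        Map<String, Action> actions = new HashMap<>();
        for (Map.Entry<String, String> item : currentItems.entrySet()) {
            String previousSignature = previousSnapshot.get(item.getKey());
            if (previousSignature == null) {
                actions.put(item.getKey(), Action.ADD);        // not seen by the last crawl
            } else if (!previousSignature.equals(item.getValue())) {
                actions.put(item.getKey(), Action.UPDATE);     // signature changed
            } else {
                actions.put(item.getKey(), Action.NO_CHANGE);  // unchanged
            }
        }
        for (String previousId : previousSnapshot.keySet()) {
            if (!currentItems.containsKey(previousId)) {
                actions.put(previousId, Action.DELETE);        // no longer discovered
            }
        }
        return actions;
    }
}
```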
The Connector Framework is able to create the hierarchical structure of a seed based on how items are discovered. This feature depends on the specific type of connector in use; to find out whether a connector supports hierarchy generation, check its documentation.
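Purely as a sketch, assuming a hypothetical model in which each discovered item records the id of the item that discovered it (its parent), the hierarchy of a document could be reconstructed by walking the parent chain:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/** Hedged sketch: rebuilding a document's hierarchy from parent references
 *  recorded at discovery time. The data model is hypothetical. */
class HierarchySketch {

    /** parentOf: item id -> id of the item that discovered it */
    static List<String> ancestorsOf(String id, Map<String, String> parentOf) {
        List<String> path = new ArrayList<>();
        for (String current = parentOf.get(id); current != null; current = parentOf.get(current)) {
            path.add(0, current); // prepend so the seed root ends up first
        }
        return path;
    }
}
```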
Connectors can fetch the content of a document and set the content stream on the job so that it can be processed at a later stage. Each connector allows content fetching only on specific types of items; check the documentation to see which ones are allowed.
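The following is a minimal sketch, using hypothetical stand-in types rather than the framework's real job class, of what a fetch stage conceptually does: open a stream for the item and attach it to the job so later stages can read it:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;

/** Hedged sketch: a fetch step that opens the item's content stream and
 *  attaches it to the job. CrawlJob is a hypothetical stand-in, not the
 *  framework's real Job class. */
class CrawlJob {
    private InputStream contentStream;
    void setContentStream(InputStream stream) { this.contentStream = stream; }
    InputStream getContentStream() { return contentStream; }
}

class FetchStage {
    void fetch(CrawlJob job, String itemUrl) throws IOException {
        // Open the stream but do not consume it; later stages (text extraction
        // or a Workflow stage) decide how to read it.
        InputStream stream = new URL(itemUrl).openStream();
        job.setContentStream(stream);
    }
}
```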
If text extraction is enabled, the content stream opened during the fetch stage is sent to Apache Tika to extract the text content of the document. Take into account that text extraction consumes the content stream, making it unavailable for other components to work with.
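As a sketch of what that stage does conceptually (not the framework's internal code), extracting text with Apache Tika is what consumes the stream:

```java
import java.io.IOException;
import java.io.InputStream;
import org.apache.tika.Tika;
import org.apache.tika.exception.TikaException;

/** Hedged sketch: extracting text with Apache Tika. Reading the stream here
 *  is what makes it unavailable to later components. */
class TextExtractionSketch {
    static String extractText(InputStream contentStream) throws IOException, TikaException {
        Tika tika = new Tika();
        try (InputStream in = contentStream) {
            // parseToString reads (consumes) the stream and returns the plain text
            return tika.parseToString(in);
        }
    }
}
```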
Connectors that allow content fetching can be configured so that certain document types are not processed in the text extraction stage, leaving the data stream open so it can be processed in a Workflow stage. Non-text documents can be identified using a comma-separated list of extensions or a file containing a list of regex patterns to match the documents (one regex pattern per line).
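A hedged sketch of how such a configuration could be evaluated, assuming hypothetical settings (a comma-separated extension list and a regex pattern file path):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

/** Hedged sketch: deciding whether a document should skip text extraction,
 *  either by a comma-separated extension list or by a regex pattern file
 *  (one pattern per line). The configuration shape is illustrative. */
class NonTextMatcher {
    private final List<String> extensions;   // e.g. "zip, dwg, psd"
    private final List<Pattern> patterns;    // one compiled regex per line of the file

    NonTextMatcher(String commaSeparatedExtensions, Path regexFile) throws IOException {
        extensions = Arrays.stream(commaSeparatedExtensions.split(","))
                .map(String::trim).map(String::toLowerCase).collect(Collectors.toList());
        patterns = Files.readAllLines(regexFile).stream()
                .filter(line -> !line.isBlank())
                .map(Pattern::compile).collect(Collectors.toList());
    }

    boolean isNonText(String documentName) {
        String lower = documentName.toLowerCase();
        boolean byExtension = extensions.stream().anyMatch(ext -> lower.endsWith("." + ext));
        boolean byPattern = patterns.stream().anyMatch(p -> p.matcher(documentName).matches());
        return byExtension || byPattern;
    }
}
```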
To support document-level security, the Connector Framework provides support for access control lists (ACLs). At a minimum, an ACL entry contains the following information:
Each connector can add other attributes to the ACL entries depending on the repository's needs.
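Purely for illustration, and not the framework's actual schema, an ACL entry could be modeled like this (all field names are assumptions):

```java
/** Hedged sketch: a document-level ACL entry. The fields shown are
 *  assumptions for illustration; each connector defines its own attributes. */
class AclEntry {
    String name;    // identity the entry applies to, e.g. a user or group name (assumed)
    String type;    // "user" or "group" (assumed)
    String access;  // "allow" or "deny" (assumed)

    AclEntry(String name, String type, String access) {
        this.name = name;
        this.type = type;
        this.access = access;
    }
}
```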
For connectors that support document-level security, identity crawling is available. This type of crawl fetches entities (users, groups, and memberships), passes those entities to a Workflow so that custom stages can be added, and finally stores them in the identity cache NoSQL database.
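A minimal sketch of that pipeline, using hypothetical interfaces instead of the framework's real components:

```java
import java.util.List;
import java.util.function.UnaryOperator;

/** Hedged sketch of the identity crawl pipeline: fetch entities, run them
 *  through Workflow stages, then store them in the identity cache. The
 *  interfaces are hypothetical stand-ins for the framework's components. */
interface IdentityEntity { String id(); }

interface IdentityCache { void store(IdentityEntity entity); }

class IdentityCrawlSketch {
    static void crawl(List<IdentityEntity> fetchedEntities,
                      List<UnaryOperator<IdentityEntity>> workflowStages,
                      IdentityCache identityCache) {
        for (IdentityEntity entity : fetchedEntities) {
            for (UnaryOperator<IdentityEntity> stage : workflowStages) {
                entity = stage.apply(entity);   // custom Workflow stages
            }
            identityCache.store(entity);        // persisted in the identity cache
        }
    }
}
```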
Identity crawling does not support incremental crawls, which means that every crawl creates a new entry in the identity cache database. Old entries are deleted after a given number of crawls.
It is important to note that the memberships extracted by identity crawls may not be flattened, meaning they may only show direct relationships between users and groups. To expand those relationships, and to use multiple seeds for group expansion, use the Group Expansion Connector.
As mentioned in the previous section, group expansion is done using the Group Expansion Connector. This connector retrieves all entities stored in the identity cache for all seeds and creates an expanded version of each of those entities. Note that those expanded entities are not stored in the identity cache; the user can add a publisher to the Workflow to store the expanded identities. For more information on how this works, please check the Group Expansion Connector documentation.
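To make the flattening concrete, here is a hedged sketch (not the connector's actual code) that expands a member's direct memberships into the full set of direct and indirect groups by walking the membership graph built from all seeds:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

/** Hedged sketch: expanding direct memberships into the full (flattened) set
 *  of direct and indirect groups. The membership map would be built from the
 *  entities of all seeds stored in the identity cache. */
class GroupExpansionSketch {

    /** directGroupsOf: member id (user or group) -> group ids it belongs to directly */
    static Set<String> expand(String memberId, Map<String, Set<String>> directGroupsOf) {
        Set<String> expanded = new LinkedHashSet<>();
        Deque<String> pending = new ArrayDeque<>(directGroupsOf.getOrDefault(memberId, Set.of()));
        while (!pending.isEmpty()) {
            String group = pending.pop();
            if (expanded.add(group)) {
                // follow group-in-group memberships to pick up indirect groups
                pending.addAll(directGroupsOf.getOrDefault(group, Set.of()));
            }
        }
        return expanded;
    }
}
```

A user whose only direct group is itself nested inside another group ends up with both groups in the expanded set.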
As noted earlier, there is no Group Expansion endpoint as there was in Aspire 2, 3, and 4. In Aspire 5, the intended approach is to have an index with the expanded groups, which can be queried directly through the Elasticsearch UI to obtain the direct and indirect groups of any given user.
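For example, such a query could look like the following sketch, using the Elasticsearch low-level Java client; the index name (expanded-identities) and field name (user) are assumptions, not the connector's actual naming:

```java
import java.io.IOException;
import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

/** Hedged sketch: querying the index that holds the expanded identities to
 *  get the direct and indirect groups of a user. Index and field names are
 *  assumptions for illustration. */
class ExpandedGroupsQuery {
    static String groupsOf(String userId) throws IOException {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("GET", "/expanded-identities/_search");
            request.setJsonEntity("{ \"query\": { \"term\": { \"user\": \"" + userId + "\" } } }");
            Response response = client.performRequest(request);
            // The hits contain the expanded (direct + indirect) groups for the user
            return EntityUtils.toString(response.getEntity());
        }
    }
}
```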
How this works in Aspire 5:
Here is a more concrete example using AzureAD + SharePoint Online:
You have a SharePoint site called SiteA, which has a site-local group (it exists only in SharePoint Online within SiteA) called “SiteA Owners”. As a member, it has an Azure AD group called “Engineering”.
In Azure AD you might have this:
The right order to execute this in Aspire 5 would be:
1. Execute the Azure AD identity crawler
This will generate these entries in the identity cache:
2. Execute the SP Online SiteA identity crawl
This will generate this entry in the identity cache:
3. Now execute the Group Expander, including the data from both seeds (Azure AD + SP Online)
This will generate three jobs to be indexed in the index of your preference:
The Connector Framework is able to retry any documents that fail during a crawl. These failures include any exceptions raised during a Workflow stage or at index time. If enabled, the connector will retry all errored documents, or only those documents whose exception matches one of the configured regex patterns. For failed document processing, the user can configure the following:
This feature consists of two phases: pre-reprocess and post-reprocess.
The pre-reprocess phase is only executed at the beginning of each incremental crawl. The failed documents are retried once in this phase, the in-crawl retry count is reset to 0, and the crawl retry count is increased by 1.
The post-reprocess phase is executed after each crawl (full and incremental). The failed documents are retried up to the configured maximum number of in-crawl retries.
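The following sketch summarizes the bookkeeping described above; the counters and limits are illustrative names, not the framework's configuration keys:

```java
/** Hedged sketch of the retry bookkeeping for failed documents. Field and
 *  limit names are illustrative, not the framework's configuration keys. */
class FailedDocument {
    final String id;
    int inCrawlRetries = 0;   // retries during the current crawl
    int crawlRetries = 0;     // crawls in which this document has been retried
    FailedDocument(String id) { this.id = id; }
}

class ReprocessingSketch {
    final int maxInCrawlRetries;
    final int maxCrawlRetries;

    ReprocessingSketch(int maxInCrawlRetries, int maxCrawlRetries) {
        this.maxInCrawlRetries = maxInCrawlRetries;
        this.maxCrawlRetries = maxCrawlRetries;
    }

    /** Pre-reprocess: runs at the start of each incremental crawl. */
    void preReprocess(FailedDocument doc) {
        if (doc.crawlRetries < maxCrawlRetries) {
            retry(doc);                 // retried once in this phase
            doc.inCrawlRetries = 0;     // in-crawl counter reset
            doc.crawlRetries++;         // crawl counter incremented
        }
    }

    /** Post-reprocess: runs after every crawl, full or incremental. */
    void postReprocess(FailedDocument doc) {
        while (doc.inCrawlRetries < maxInCrawlRetries && !retry(doc)) {
            doc.inCrawlRetries++;
        }
    }

    private boolean retry(FailedDocument doc) {
        // placeholder for re-sending the document through the Workflow / indexer
        return false;
    }
}
```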