FAQs


Specific

Is incremental indexing supported by the connector?


Yes. At the moment, this is done by accessing the SharePoint Online changes database directly, so after a full scan the connector only fetches objects that changed in a given date range (between the last scan and the current scan).
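As a rough illustration of that idea (the helper names below, such as get_change_log_entries and fetch_object, are placeholders standing in for the connector's internals, not its actual API), an incremental pass only asks for entries whose change time falls between the two scans:

    from datetime import datetime, timezone

    def incremental_scan(last_scan_time, get_change_log_entries, fetch_object, send_to_pipeline):
        """Fetch and process only the objects that changed since the last scan."""
        current_scan_time = datetime.now(timezone.utc)

        # Only entries inside the window (last_scan_time, current_scan_time] are requested,
        # so unchanged objects are never fetched again.
        for change in get_change_log_entries(since=last_scan_time, until=current_scan_time):
            obj = fetch_object(change.object_id)
            send_to_pipeline(change.change_type, obj)   # e.g. "add", "update" or "delete"

        return current_scan_time  # stored as the start of the next incremental window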

SharePoint Online doesn't keep track of item updates/deletes for BCS external list items in its change log. To check for changes on BCS external lists, all external list items are crawled and checked against the information in the current snapshot file. If any of those items has changed, it is sent to the pipeline to be updated or deleted.
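A minimal sketch of that per-item check, assuming the snapshot is simply a mapping from item id to a content hash (the connector's real snapshot format may differ):

    import hashlib
    import json

    def check_external_list(items, snapshot):
        """Compare every crawled BCS external list item against the previous snapshot.

        items    -- iterable of dicts, each with at least an "id" key
        snapshot -- dict of item id -> content hash from the previous crawl
        Returns the (action, item) pairs to send to the pipeline and the new snapshot.
        """
        actions, new_snapshot = [], {}

        for item in items:  # every external list item has to be crawled and compared
            digest = hashlib.sha256(json.dumps(item, sort_keys=True).encode()).hexdigest()
            new_snapshot[item["id"]] = digest
            if item["id"] not in snapshot:
                actions.append(("add", item))
            elif snapshot[item["id"]] != digest:
                actions.append(("update", item))      # content differs from the snapshot

        for missing_id in snapshot.keys() - new_snapshot.keys():
            actions.append(("delete", missing_id))    # present last crawl, gone now

        return actions, new_snapshot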

For a discussion on crawling, see Full & Incremental Crawls.

General 

Why does an incremental crawl last as long as a full crawl?

Some connectors perform incremental crawls based on snapshot files, which are meant to match the exact set of documents that the connector has indexed to the search engine. On an incremental crawl, the connector fully crawls the content source the same way as a full crawl, but it only indexes the new, modified, or deleted documents found during that crawl.
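In outline (every callable below is a placeholder, not the connector's actual interface), the crawl still has to visit every document, which is why the elapsed time is comparable to a full crawl, but only the differences reach the search engine:

    def incremental_crawl(crawl_all_documents, load_snapshot, save_snapshot, index, delete):
        """Traverse everything, but index only what changed since the last crawl."""
        previous = load_snapshot()      # doc id -> signature recorded by the last crawl
        current = {}

        # The whole repository is walked exactly as in a full crawl...
        for doc_id, signature, document in crawl_all_documents():
            current[doc_id] = signature
            # ...but only new or modified documents are sent to the search engine.
            if previous.get(doc_id) != signature:
                index(document)

        # Documents that existed in the last snapshot but were not seen this time
        # are removed from the index.
        for doc_id in previous.keys() - current.keys():
            delete(doc_id)

        save_snapshot(current)          # baseline for the next incremental crawl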

For a discussion on crawling, see Full & Incremental Crawls.

Save your content source before creating or editing another one

Failing to save a content source before creating or editing another content source can result in an error.

ERROR [aspire]: Exception received attempting to get execute component command com.searchtechnologies.aspire.services.AspireException: Unable to find content source

Save the initial content source before creating or working on another.