FAQs


Specific

Is incremental indexing supported by the connector?

Yes. At the moment, this is done by accessing the SharePoint 2013 change log database directly, so after a full scan the connector only fetches objects that changed in a given date range (between the last scan and the current scan).
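As a rough sketch of the idea only (the function and field names below are hypothetical, not the connector's actual internals):

// Illustrative sketch: select only the objects modified between the
// last scan and the current scan. Names are hypothetical.
function getChangedObjects(allObjects, lastScanTime, currentScanTime) {
    return allObjects.filter(function (obj) {
        return obj.modified > lastScanTime && obj.modified <= currentScanTime;
    });
}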

SharePoint 2013 doesn't keep track of item updates/deletes for BCS external list items in its change log. To check for changes on BCS external lists, the connector crawls all external list items and compares them against the information in the current snapshot file. Any item that has changed is sent to the pipeline to be updated or deleted, as sketched below.
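A minimal sketch of that snapshot comparison, assuming the snapshot maps item IDs to a change signature such as a last-modified stamp (names are illustrative, not the connector's actual code):

// Illustrative sketch of snapshot-based change detection for BCS
// external list items; field names and structure are hypothetical.
function diffAgainstSnapshot(crawledItems, snapshot) {
    var changes = [];
    var seen = {};
    crawledItems.forEach(function (item) {
        seen[item.id] = true;
        if (!(item.id in snapshot)) {
            changes.push({ action: "add", item: item });    // new item
        } else if (snapshot[item.id] !== item.signature) {
            changes.push({ action: "update", item: item }); // changed item
        }
    });
    Object.keys(snapshot).forEach(function (id) {
        if (!seen[id]) {
            changes.push({ action: "delete", id: id });     // removed item
        }
    });
    return changes; // only these are sent to the pipeline
}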

For a discussion on crawling, see Full & Incremental Crawls.

Why do I need to set up permissions at Web Application level?

Those permissions ("Full Read") are required so the connector can fetch additional information from the site collections; that information is needed for incremental indexing.

If setting permissions at the Web Application level is not suitable for your environment, consider granting site collection administrator rights to the crawl account on each site collection you want to crawl. This works without Web Application permissions, but it must be set on every site collection.

General 

Why does an incremental crawl last as long as a full crawl?

Some connectors perform incremental crawls based on snapshot entries, which record exactly which documents the connector has indexed into the search engine. During an incremental crawl, the connector traverses the repository in the same way as a full crawl, but it only indexes the documents that were modified, added, or deleted since the previous crawl.

For a discussion on crawling, see Full & Incremental Crawls.


Save your content source before creating or editing another one

Failing to save a content source before creating or editing another content source can result in an error.

ERROR [aspire]: Exception received attempting to get execute component command com.searchtechnologies.aspire.services.AspireException: Unable to find content source

Save the initial content source before creating or working on another.


My connector stays in the "Running" status and is not doing anything

After a crawl has finished, the connector status may not be updated correctly.  

To check and fix this, do the following:

1. In RoboMongo, go to your connector database (for example, aspire-nameOfYourConnector).

2. Open the "Status" collection and run the following query, which returns the most recent status entry:

db.getCollection('status').find({}).limit(1).sort({$natural:-1})


3. Edit the returned entry and set its status to "S" (Completed).
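If you prefer to apply the fix from the shell instead of editing the document by hand, an update along these lines should work. This is a sketch that assumes the field is named "status"; substitute the _id returned by the query above and verify the field name against the actual document first.

// Hedged example: set the most recent status entry to "S" (Completed).
// Replace <id-from-query-above> with the real _id; "status" is assumed.
db.getCollection('status').update(
    { _id: ObjectId("<id-from-query-above>") },
    { $set: { status: "S" } }
)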


Note: For the full list of "Status" values, see MongoDB Collection Status.


My connector is not providing group expansion results

Make sure your connector has a manual scheduler configured for Group Expansion.


1. Go to the Aspire debug console and look for the corresponding scheduler (in the fourth table, Aspire Application Scheduler).

2. If you are unsure which scheduler is for Group Expansion, you can check the Schedule Detail.

    • You can identify it with the value: cacheGroups

3. To run the Group Expansion process, click Run.


Troubleshooting