Some connectors perform incremental crawls based on snapshot entries, which are meant to match the exact set of documents the connector has already indexed into the search engine. On an incremental crawl, the connector traverses the repository exactly as it would for a full crawl, but it indexes only the documents that are new, modified, or deleted since the previous crawl.
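The snapshot comparison can be sketched as follows. This is a minimal illustration, not Aspire's actual snapshot schema: the document IDs and signature values are hypothetical, and real snapshots may store hashes, timestamps, or ACLs.

```python
# Minimal sketch of snapshot-based incremental crawling.
# The snapshot format {doc_id: signature} is hypothetical.

def diff_snapshots(previous, current):
    """Compare two {doc_id: signature} snapshots and classify documents.

    Only new, modified, and deleted documents need to be (re)indexed;
    unchanged documents are skipped, which is what makes the crawl incremental.
    """
    added = [doc for doc in current if doc not in previous]
    deleted = [doc for doc in previous if doc not in current]
    modified = [
        doc for doc in current
        if doc in previous and current[doc] != previous[doc]
    ]
    return added, modified, deleted

# Example: signatures could be last-modified timestamps or content hashes.
previous = {"a.txt": "2024-01-01", "b.txt": "2024-01-01"}
current = {"a.txt": "2024-02-01", "c.txt": "2024-02-01"}
added, modified, deleted = diff_snapshots(previous, current)
# added == ["c.txt"], modified == ["a.txt"], deleted == ["b.txt"]
```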
For a discussion on crawling, see Full & Incremental Crawls.
Failing to save a content source before creating or editing another content source can result in an error.
ERROR [aspire]: Exception received attempting to get execute component command com.accenture.aspire.services.AspireException: Unable to find content source
Save the initial content source before creating or working on another.
After a crawl has finished, the connector status may not be updated correctly.
To confirm this, do the following:
1. In Robo 3T (formerly Robomongo), go to your connector's database (e.g., aspire-nameOfYourConnector).
2. Open the "Status" collection and perform the following query:
db.getCollection('status').find({}).limit(1).sort({$natural:-1})
3. Edit the entry and set the status to "S" (Completed).
Note: To see the full options of "Status" values, see MongoDB Collection Status.
Make sure your connector has a manual scheduler configured for Group Expansion.
1. Go to the Aspire debug console, and look for the respective scheduler (in the fourth table: Aspire Application Scheduler).
2. If you are unsure which scheduler is for Group Expansion, you can check the Schedule Detail.
3. To run the Group Expansion process, click Run.
Sometimes the username and password fields alone are not enough to authenticate to a site. Some sites require additional custom fields, or even the "submit" button value, to authenticate you successfully, so you may have to add them as custom fields in the Aspider configuration.
You can also open the browser's inspect mode to break down the authentication request and make sure you are not missing any fields.
You do not need to worry about hidden fields inside the form; Aspider automatically includes them in the request, so no extra configuration is needed for them.
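To see what the full authentication request must contain, it can help to reproduce the form submission outside the browser. The sketch below uses a hypothetical login form (Aspider does this work for you; this is only to illustrate why hidden fields and the submit button value matter): it parses the form, collects the hidden fields automatically, and merges in the visible credentials.

```python
from html.parser import HTMLParser
from urllib.parse import urlencode

# Hypothetical login form; real sites often carry hidden CSRF tokens like this.
LOGIN_PAGE = """
<form action="/login" method="post">
  <input type="hidden" name="csrf_token" value="abc123">
  <input type="text" name="username">
  <input type="password" name="password">
  <input type="submit" name="submit" value="Sign in">
</form>
"""

class FormFieldCollector(HTMLParser):
    """Collect every <input> name/value pair, including hidden fields."""
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        if tag == "input":
            attrs = dict(attrs)
            if "name" in attrs:
                self.fields[attrs["name"]] = attrs.get("value", "")

collector = FormFieldCollector()
collector.feed(LOGIN_PAGE)

# Fill in the credentials; the hidden csrf_token and the submit button
# value are already present from the parse above.
collector.fields.update({"username": "alice", "password": "secret"})
body = urlencode(collector.fields)
# body now carries csrf_token, username, password, and submit together.
```

If the request you capture in the browser's inspect mode contains fields that this kind of parse does not surface (for example, values injected by JavaScript), those are the ones you will need to add as custom fields.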
Watch out for "logout" pages, which usually instruct the browser to clear its cookies. If a logout page asks Aspider to clear its cookies, it will do so and will not attempt to log in again.
Suggestion: Add an exclusion pattern for the "log out" pages.
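An exclusion pattern along these lines keeps the crawler away from logout URLs. The exact pattern syntax depends on your connector configuration; the regex and URLs below are only illustrative.

```python
import re

# Illustrative exclusion pattern; adapt it to your site's real logout URLs.
EXCLUDE = re.compile(r".*/(logout|signout|log-out)\b.*", re.IGNORECASE)

urls = [
    "https://example.com/portal/home",
    "https://example.com/portal/logout?session=42",
    "https://example.com/accounts/signout",
]
crawlable = [u for u in urls if not EXCLUDE.match(u)]
# Only the /portal/home URL survives the filter.
```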
Unfortunately, not at the moment. Aspider Cookie Based Authentication is built to send only one request, but we are already considering improvements to it.
If your login form relies on JavaScript to send the authentication request, the form element probably won't have an "action" attribute, which Aspider uses as the target of the POST request. In that case, you would not be able to authenticate.
As of Java 8, Aspider supports and has been tested to work on the following protocols:
Note: SSLv2 and SSLv3 are not supported by Aspider.