This tutorial walks through the steps necessary to crawl SharePoint 2013 using the SharePoint 2013 connector.
A prerequisite for crawling SharePoint 2013 is to have a Windows Active Directory account. The domain, username and password for this account will be required below.
The recommended name for this account is "aspire_crawl_account". See the Prerequisites section for more details.
"aspire_crawl_account" will need to have sufficient access rights to read all of the documents in SharePoint 2013 that you wish to process. See User Account Requirements for details on what rights will be required for the account in SharePoint.
To set the rights for your account at Web Application level, do the following:
To set the rights for your "aspire_crawl_account" on site collections, do the following:
Launch Aspire (if it's not already running). See:
To specify exactly what shared folder to crawl, we will need to create a new "Content Source".
To create a new content source:
From the Content Source, click Add Source.
Click SharePoint 2013 Connector.
In the "General" tab in the Content Source Configuration window, specify basic information for the content source:
After selecting a schedule type, specify the details, if applicable:
You can add more schedules by clicking the Add New option, and rearrange the order of the schedules.
If you want to disable the content source, clear the "Enable" check box. This is useful if the folder will be under maintenance and no crawls are wanted during that period of time.
Real Time and Cache Groups crawls will be available depending on the connector.
In the "Connector" tab, specify the connection information to crawl SharePoint 2013.
Index Containers: index sites, lists and folders. If unchecked, only list items and attachments will be indexed.
Crawl Attachments: crawl list item attachments. (e.g. Documents attached to an Event or a Task).
Scan Excluded Items: if a container is excluded by a pattern, scan it anyway.
Include/Exclude patterns: Enter regex patterns to include or exclude items.
Patterns added for including/excluding items are matched against the item's display URL. Make sure your pattern matches that field.
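To make the matching behavior concrete, here is a minimal sketch of how regex include/exclude filtering against a display URL typically works. The pattern strings, the URLs, and the rule that excludes take precedence over includes are illustrative assumptions, not a specification of the connector's internal logic.

```python
import re

# Hypothetical patterns for illustration only -- not connector defaults.
INCLUDE_PATTERNS = [r".*/Shared Documents/.*"]
EXCLUDE_PATTERNS = [r".*\.tmp$"]

def is_included(display_url, includes, excludes):
    """Return True if the display URL passes include/exclude filtering.

    Assumptions in this sketch: an empty include list means "include
    everything", and exclude patterns take precedence over includes.
    """
    if any(re.match(p, display_url) for p in excludes):
        return False
    if not includes:
        return True
    return any(re.match(p, display_url) for p in includes)

# A document inside "Shared Documents" passes; a .tmp file is excluded.
print(is_included("https://sharepoint.domain.com/Shared Documents/report.docx",
                  INCLUDE_PATTERNS, EXCLUDE_PATTERNS))  # True
print(is_included("https://sharepoint.domain.com/Shared Documents/draft.tmp",
                  INCLUDE_PATTERNS, EXCLUDE_PATTERNS))  # False
```

Note that `re.match` anchors at the start of the string, so a pattern intended to match anywhere in the URL needs a leading `.*`.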
It should not be the URL to a form or document, but the actual URL to the SharePoint object. For example, instead of https://sharepoint.domain.com/Pages/home.aspx it should be https://sharepoint.domain.com. In this version of the Aspire SharePoint 2013 Connector, the URL must be one of the following:
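The example above can be sketched as a small helper that reduces a page URL to its site root. This is a hypothetical illustration, not part of the connector; it only handles the `/Pages/....aspx` case shown above.

```python
from urllib.parse import urlparse

def site_root(url):
    """Illustrative helper: reduce a page URL to its site root.

    Simplified assumption: a URL ending in '.aspx' is a page, and if it
    lives in a 'Pages' library folder, that folder is stripped as well.
    """
    parsed = urlparse(url)
    path = parsed.path
    if path.lower().endswith(".aspx"):
        parts = [p for p in path.split("/") if p]
        if len(parts) >= 2 and parts[-2].lower() == "pages":
            parts = parts[:-2]  # drop 'Pages/home.aspx'
        else:
            parts = parts[:-1]  # drop just the page
        path = "/" + "/".join(parts) if parts else ""
    return f"{parsed.scheme}://{parsed.netloc}{path}"

print(site_root("https://sharepoint.domain.com/Pages/home.aspx"))
# https://sharepoint.domain.com
```

A URL that is already a site root passes through unchanged.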
For additional information on the connector's specific properties, see SharePoint 2013 Configuration.
In the "Workflow" tab, specify the workflow steps for the jobs that come out of the crawl. Drag and drop rules to determine which steps an item should follow after being crawled.
These rules could be where to publish the document or transformations needed on the data before sending it to a search engine. See Workflow for more information.
After completing these steps, click Save and then Done, and you'll be sent back to the Home Page.
Now that the content source is set up, the crawl can be initiated.
The status will show RUNNING while the crawl is going, and CRAWLED when it is finished.
If there are errors, you will get a clickable "Error" flag that will take you to a detailed error message page.
If you only want to process content updates from SharePoint 2013 (documents that are added, modified, or removed), click the "Incremental" button instead of the "Full" button. The SharePoint 2013 connector will automatically identify only the changes that have occurred since the last crawl.
If this is the first time the connector has crawled, the action of the "Incremental" button depends on the exact method of change discovery. It may perform the same action as a "Full" crawl, crawling everything, or it may not crawl anything. Thereafter, the "Incremental" button will only crawl updates.
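The first-run behavior described above can be sketched as a simple decision function. The names and the `snapshot_exists` flag are assumptions for illustration; the connector's actual change-discovery state (e.g. stored change tokens) is internal to Aspire.

```python
def run_crawl(mode, snapshot_exists, falls_back_to_full=True):
    """Illustrative decision logic for Full vs. Incremental crawls.

    'snapshot_exists' stands in for whatever change-discovery state the
    connector keeps from a previous crawl; 'falls_back_to_full' models
    the fact that first-run behavior depends on the discovery method.
    """
    if mode == "full":
        return "crawl everything"
    # Incremental mode:
    if not snapshot_exists:
        # First run: either behaves like a full crawl, or crawls nothing.
        return "crawl everything" if falls_back_to_full else "crawl nothing"
    return "crawl only changes since the last crawl"

print(run_crawl("incremental", snapshot_exists=False))  # crawl everything
print(run_crawl("incremental", snapshot_exists=True))   # crawl only changes since the last crawl
```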
Statistics are reset for every crawl.
Group expansion configuration is done on the "Group Expansion" section of the Connector tab.