Launch Aspire (if it's not already running). See:
To specify exactly which web site to crawl, we will need to create a new "Content Source".
To create a new content source:
In the "General" tab in the Content Source Configuration window, specify basic information for the content source:
After selecting a schedule type, specify the details, if applicable:
In the "Connector" tab, specify the connection information to crawl the Web Site.
You can use Java regular expressions to specifically include or exclude patterns to crawl. These are optional.
To add a new pattern:
If you enter crawl patterns to accept or reject, the URL will be compared to the pattern and crawled or not crawled, as specified. For example, to exclude javascript files, you can set a reject crawl pattern of: "\.js$" (The defaults for crawling patterns are "none"; you can enter one pattern, multiple patterns, or no patterns.)
To remove a pattern, click on the X icon next to the Pattern field.
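The reject example above can be checked quickly outside of Aspire. The sketch below uses Java's standard regex classes to show how a URL compares against the `\.js$` reject pattern; whether Aspire applies patterns with `find()` or `matches()` semantics is not stated here, so this sketch assumes `find()` (substring matching), which is why the `$` anchor in the pattern matters. The URLs are hypothetical.

```java
import java.util.regex.Pattern;

public class CrawlPatternDemo {
    public static void main(String[] args) {
        // Reject crawl pattern from the example above: exclude javascript files.
        // In Java source, "\\." is the regex escape for a literal dot.
        Pattern reject = Pattern.compile("\\.js$");

        // find() looks for the pattern anywhere in the URL; the $ anchor
        // restricts it to URLs that END in ".js".
        System.out.println(reject.matcher("http://example.com/script.js").find());  // true  -> rejected
        System.out.println(reject.matcher("http://example.com/page.html").find());  // false -> crawled
    }
}
```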
You can use Java regular expressions to specifically include or exclude patterns to index. These are optional.
To add a new pattern:
If you enter index patterns to accept or reject, the URL will be compared to the pattern and indexed or not indexed, as specified. For example, the crawler may need to crawl a "robots.txt" file in order to read the rules on how to crawl a particular site, but you won't want to index that rules file. To exclude it, you would enter ".*robots\.txt.*" as a reject index pattern. (The defaults for indexing patterns are "none"; you can enter one pattern, multiple patterns, or no patterns.)
To remove a pattern, click on the X icon next to the Pattern field.
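As with crawl patterns, a reject index pattern can be sanity-checked with Java's regex classes. The sketch below tests a reject pattern of `.*robots\.txt.*` (note the escaped dot, so it matches a literal `.` rather than any character) against two hypothetical URLs, here using full-string `matches()` semantics, which is what the leading and trailing `.*` suggest:

```java
import java.util.regex.Pattern;

public class IndexPatternDemo {
    public static void main(String[] args) {
        // Reject index pattern: skip robots.txt files at index time,
        // even though the crawler still fetches them to read crawl rules.
        Pattern reject = Pattern.compile(".*robots\\.txt.*");

        System.out.println(reject.matcher("http://example.com/robots.txt").matches()); // true  -> not indexed
        System.out.println(reject.matcher("http://example.com/index.html").matches()); // false -> indexed
    }
}
```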
In the "Workflow" tab, specify the workflow steps for the jobs that come out of the crawl. Drag and drop rules to determine which steps an item should follow after being crawled. These rules could specify where to publish the document, or which transformations to apply to the data before sending it to a search engine. See Workflow for more information.
After completing these steps, click Save and then Done, and you'll be returned to the Home Page.
Step 2d. Optional Authentication
If you need to set up any of the supported authentication mechanisms, visit:
Now that the content source is set up, the crawl can be initiated.
The status will show RUNNING while the crawl is going, and CRAWLED when it is finished.
If there are errors, you will get a clickable "Error" flag that will take you to a detailed error message page.
If you only want to process content updates from Heritrix (documents that are added, modified, or removed), click the "Incremental" button instead of the "Full" button. The Heritrix connector will automatically identify only the changes that have occurred since the last crawl.
If this is the first time the connector has crawled, the action of the "Incremental" button depends on the exact method of change discovery. It may perform the same action as a "Full" crawl (crawling everything), or it may not crawl anything. Thereafter, the "Incremental" button will only crawl updates.
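The general idea behind incremental change discovery can be sketched as comparing a snapshot of the previous crawl against the current one. The code below is illustrative only, not the Heritrix connector's actual implementation: it assumes a hypothetical snapshot of URL-to-content-hash pairs and reports URLs that were added, modified, or removed, which is the same add/modify/remove classification the incremental crawl produces.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class IncrementalDemo {
    // Illustrative change detection: compare the previous crawl snapshot
    // (URL -> content hash) against the current one. NOT the connector's
    // real mechanism, just the general concept of incremental discovery.
    static List<String> changedUrls(Map<String, String> previous, Map<String, String> current) {
        List<String> changes = new ArrayList<>();
        for (Map.Entry<String, String> e : current.entrySet()) {
            String oldHash = previous.get(e.getKey());
            if (oldHash == null || !oldHash.equals(e.getValue())) {
                changes.add(e.getKey()); // added or modified since last crawl
            }
        }
        for (String url : previous.keySet()) {
            if (!current.containsKey(url)) {
                changes.add(url); // removed since last crawl
            }
        }
        return changes;
    }

    public static void main(String[] args) {
        Map<String, String> prev = new HashMap<>();
        prev.put("http://example.com/a.html", "hash1");
        prev.put("http://example.com/b.html", "hash2");

        Map<String, String> curr = new HashMap<>();
        curr.put("http://example.com/a.html", "hash1");     // unchanged: skipped
        curr.put("http://example.com/b.html", "hash2-new"); // modified: reprocessed
        curr.put("http://example.com/c.html", "hash3");     // added: processed

        // Prints the two changed URLs (b.html and c.html); a.html is skipped.
        System.out.println(changedUrls(prev, curr));
    }
}
```

On a first run the "previous" snapshot is empty, so everything counts as added, which mirrors why a first "Incremental" crawl may behave like a "Full" crawl.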