Launch Aspire (if it's not already running). For details on using the Aspire Content Source Management page, refer to the Admin UI documentation.
To specify exactly which data stream to crawl, we need to create a new Content Source.
In the General tab in the Content Source Configuration window, specify basic information for the content source:
After selecting Scheduled, specify the details, if applicable:
In the Workflow tab, specify the workflow steps for the jobs that come out of the crawl. Drag and drop rules to determine which steps an item should follow after being crawled. These rules can include where to publish the document or which transformations to apply to the data before it is sent to a search engine. See Workflow for more information.
After completing this step, click Save and then Done. You will be returned to the Home page.
Now that the content source is set up, the crawl can be initiated.
During the crawl, you can do the following:
The status will show RUNNING while the crawl is in progress, and CRAWLED when it has finished. (*see note below)
If errors occur, a clickable "Error" flag appears that takes you to a detailed error message page.
Important: Due to the nature of Kinesis Data Streams, the connector does not actually stop crawling; it will keep running as long as no errors are encountered and the connector is not stopped or paused. This is normal behavior and allows the connector to continuously pick up any incoming new data. The only time it stops on its own is if all the shards that were picked up at the start of the crawl end up being closed by a reshard operation.
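The stopping condition above can be illustrated with a small sketch. This is not the Aspire connector's actual implementation; all class and function names here are hypothetical, and the shards are plain in-memory stand-ins rather than real Kinesis API calls. The key idea it models: a shard that has been closed by a reshard is still drained of its remaining records, and the crawl loop only exits once every shard is both closed and fully drained (or the crawl is explicitly stopped).

```python
# Hypothetical sketch of the connector's crawl loop; not Aspire's real code.
# A shard "closed" by a reshard operation accepts no new records, but its
# remaining records are still delivered before it is considered exhausted.

class Shard:
    def __init__(self, shard_id, records, closed=False):
        self.shard_id = shard_id
        self.records = list(records)  # records not yet delivered
        self.closed = closed          # sealed by a reshard operation

    def get_records(self):
        """Return and drain any pending records (the batch may be empty)."""
        batch, self.records = self.records, []
        return batch


def crawl(shards, stop_requested=lambda: False):
    """Poll shards until every one is closed AND drained, or a stop is
    requested -- mirroring the continuous-crawl behavior described above."""
    seen = []
    while not stop_requested():
        # A shard is still "active" if it has pending records or is open.
        active = [s for s in shards if s.records or not s.closed]
        if not active:
            break  # all shards closed by resharding and fully drained
        for shard in active:
            seen.extend(shard.get_records())
    return seen


# Usage: both shards were closed by a reshard, so the crawl drains them
# and then terminates on its own.
a = Shard("shardId-000", ["r1", "r2"], closed=True)
b = Shard("shardId-001", ["r3"], closed=True)
print(crawl([a, b]))
```

With even one shard left open, `crawl` would loop indefinitely waiting for new data, which is why stopping or pausing the connector is normally the only way to end the crawl.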