Launch Aspire (if it's not already running). For details on using the Aspire Content Source Management page, refer to the Admin UI.
To specify exactly which Kafka topic to crawl, we need to create a new Content Source.
To create a new content source:
In the General tab in the Content Source Configuration window, specify basic information for the content source:
After selecting Scheduled, specify the schedule details, if applicable.
Kafka Servers: The server(s) to connect to, in <host>:<port> format. This can be a comma-separated list of servers, e.g. first.server:9092,second.server:9092.
Topic: The message stream to subscribe to.
Starting Offset for Full Crawls: Which messages to start fetching from when a full crawl starts.
Note: If using the At Sequence Number, After Sequence Number, or At Timestamp options, any partitions that are not explicitly specified default to fetching from the earliest message. (A consumer sketch illustrating these fields follows this list.)
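The following is a minimal sketch, not the connector's implementation, of how these fields map onto a plain Kafka consumer: the Kafka Servers value corresponds to bootstrap.servers, the Topic value is what the consumer subscribes to, and auto.offset.reset=earliest mirrors the default starting position described in the note above. The group id, topic name, and server addresses are placeholder values.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class StartingOffsetDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        // "Kafka Servers": a comma-separated <host>:<port> list.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "first.server:9092,second.server:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group"); // placeholder group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Partitions without an explicit starting position begin at the
        // earliest available message, matching the note above.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic")); // the "Topic" field
            consumer.poll(Duration.ofSeconds(5)).forEach(record ->
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value()));
        }
    }
}
```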
In the Workflow tab, specify the workflow steps for the jobs that come out of the crawl. Drag and drop rules to determine which steps an item should follow after being crawled. These rules can include where to publish the document, or transformations needed on the data before sending it to a search engine (a conceptual sketch follows). See Workflow for more information.
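As a purely conceptual illustration of what a transformation step does before a document is published (Aspire's real workflow steps are configured in the UI; the field names here are invented for the example):

```java
import java.util.HashMap;
import java.util.Map;

public class TransformStep {

    // Hypothetical transformation rule: trim a title field and tag the source.
    // Field names ("title", "sourceSystem") are invented for illustration.
    static Map<String, Object> transform(Map<String, Object> doc) {
        Map<String, Object> out = new HashMap<>(doc);
        Object title = out.getOrDefault("title", "");
        out.put("title", title.toString().trim());
        out.put("sourceSystem", "kafka");
        return out;
    }

    public static void main(String[] args) {
        System.out.println(transform(Map.of("title", "  Hello World  ")));
    }
}
```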
After completing this step, click Save then Done and you'll be sent back to the Home page.
Now that the content source is set up, the crawl can be initiated.
During the crawl, you can do the following:
The status will show RUNNING while the crawl is in progress, and CRAWLED when it is finished. (*see note below)
If there are errors, you will get a clickable "Error" flag that will take you to a detailed error message page.
Incremental: Obtain messages continuously, stopping only when the content source is manually stopped or paused.
Note: For the Kafka connector, the behavior of the crawl differs between Incremental and Full crawls.
Because Kafka is a continuous message stream, the connector does not actually stop crawling and will keep running as long as no errors are encountered and the connector is not stopped or paused. This is normal behavior and allows the connector to continuously pick up any incoming new data (see the polling sketch below). The only time it will stop on its own is if all of the partitions that were picked up at the start of the crawl become unavailable.
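A hedged sketch of what this incremental behavior looks like at the consumer level, assuming a simple stop flag standing in for the Admin UI's stop/pause action (server, group id, and topic name are placeholders):

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.atomic.AtomicBoolean;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class IncrementalCrawlSketch {
    // Stand-in for the UI's stop/pause action; some other thread
    // would set this to true to end the crawl.
    static final AtomicBoolean stopped = new AtomicBoolean(false);

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "first.server:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group"); // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic"));
            // An incremental crawl never finishes on its own: keep polling so
            // newly arriving messages are picked up until explicitly stopped.
            while (!stopped.get()) {
                consumer.poll(Duration.ofSeconds(1)).forEach(record ->
                        System.out.println("received: " + record.value()));
            }
        }
    }
}
```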