Step 3: Add a new Amazon S3 Content Source
To specify the Amazon S3 location you want to crawl, you will need to create a new "Content Source". To create a new content source:
- From the Aspire 2 Home page, click the "Add Source" button.
- Click on "Amazon S3 Connector".
In the "General" tab in the Add New Content Source window, specify basic information for the content source:
- Enter a content source name in the "Name" field.
This can be any descriptive name you choose for the source. It will be displayed on the content source page, in error messages, and so on.
- Click on the "Active?" checkbox to add a checkmark.
Unchecking the "Active?" option allows you to configure content sources but not have them enabled. This is useful if the folder will be under maintenance and no crawls are wanted during that period of time.
- Click on the "Schedule" drop-down list and select one of the following: Manually, Periodically, Daily, or Weekly.
Aspire can automatically schedule content sources to be crawled on a set schedule, such as once a day, several times a week, or periodically (every N minutes or hours). For the purposes of this tutorial, you may want to select Manually and then set up a regular crawling schedule later.
- After selecting a Schedule type, specify the details, if applicable:
- Manually: No additional options.
- Periodically: Specify the "Run every:" options by entering the number of "hours" and "minutes."
- Daily: Specify the "Start time:" by clicking on the hours and minutes drop-down lists and selecting options.
- Weekly: Specify the "Start time:" by clicking on the hours and minutes drop-down lists and selecting options, then clicking on the day checkboxes to specify days of the week to run the crawl.
- Advanced: Enter a custom CRON expression (e.g. 0 0 0 ? * *, which runs every day at midnight); see the sketch below for a few illustrative expressions.
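The exact CRON syntax Aspire expects is not spelled out here, but the example above follows the Quartz-style six-field layout (seconds, minutes, hours, day-of-month, month, day-of-week). A few illustrative expressions, assuming that format:

```python
# Illustrative Quartz-style CRON expressions (assumption: six fields,
# in the order seconds minutes hours day-of-month month day-of-week).
DAILY_AT_MIDNIGHT = "0 0 0 ? * *"        # every day at 00:00:00
EVERY_30_MINUTES  = "0 0/30 * ? * *"     # on the hour and half hour
WEEKDAYS_AT_6AM   = "0 0 6 ? * MON-FRI"  # 06:00 Monday through Friday
```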
In the "Connector" tab, specify the connection information to crawl the Amazon S3 location.
- Enter the "URL" you want to crawl
"/" (all the buckets)"/bucket/" (an specific bucket)"/bucket/folder/" (an specific folder)
- Enter the credentials. It needs sufficient access to crawl the S3 bucket and folder(s) that you specified.
Access KeySecret Key. It will be automatically encrypted by Aspire.
- Check the other options as needed:
- Index folders?: Index subfolders as items. If unchecked, only files will be indexed.
- Scan subfolders?: Scan through the child nodes of subfolders.
- Include/Exclude patterns: Enter regex patterns to include or exclude files/folders based on URL matches, as illustrated in the sketch below.
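Before starting a crawl, it can be useful to confirm that the URL path, credentials, and patterns select the objects you expect. The sketch below is not Aspire's own code; it uses the AWS SDK for Python (boto3) with placeholder bucket, prefix, keys, and patterns, and Aspire's actual pattern-matching semantics may differ slightly:

```python
import re
import boto3

# Placeholders: substitute your own bucket, folder prefix, keys, and patterns.
BUCKET = "my-bucket"                 # "/my-bucket/" in the connector URL
PREFIX = "my-folder/"                # "/my-bucket/my-folder/" in the connector URL
INCLUDE = re.compile(r".*\.pdf$")    # example include pattern
EXCLUDE = re.compile(r".*/tmp/.*")   # example exclude pattern

# The same Access Key / Secret Key pair entered in the Connector tab.
s3 = boto3.client(
    "s3",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# List everything under the prefix, similar in spirit to "Scan subfolders?".
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        # Apply include/exclude patterns roughly as you expect the crawl to behave.
        if INCLUDE.match(key) and not EXCLUDE.match(key):
            print(key)
```

If this listing fails or returns nothing, check the credentials' permissions and the URL path before troubleshooting the connector itself.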
In the "Workflow" tab, specify the workflow steps for the jobs that come out of the crawl. Drag and drop rules to determine which steps should an item follow after being crawled. This rules could be where to publish the document or transformations needed on the data before sending it to a search engine. See Workflow for more information.
- For the purpose of this tutorial, drag and drop the Publish To File rule found under the Publishers tab to the onPublish Workflow tree.
- Specify a Name and Description for the Publisher.
- Click Add.
After completing these steps, click the Save button and you will be returned to the Home Page.