Step 3: Install and Configure the RDB Connector via Snapshots Content Source into Aspire
Add new source
To specify which RDBMS to crawl, you will need to create a new "Content Source".
To create a new content source:
- From the Aspire 2 Home page, click on the "Add Source" button.
- Click on RDB Connector via Snapshots.
General Configuration Tab
In the "General" tab in the Add New Content Source window, specify basic information for the content source:
- Enter a content source name in the "Name" field.
This can be any name you find useful for identifying the source. It will be displayed on the content source page, in error messages, etc.
- Click on the "Active?" checkbox to add a checkmark.
Unchecking the "Active?" option allows you to configure content sources without enabling them. This is useful if the source will be under maintenance and no crawls are wanted during that period of time.
- Click on the "Schedule" drop-down list and select one of the following: Manually, Periodically, Daily, Weekly, or Advanced.
Aspire can automatically schedule content sources to be crawled on a set schedule, such as once a day, several times a week, or periodically (every N minutes or hours). For the purposes of this tutorial, you may want to select Manually and then set up a regular crawling schedule later.
- After selecting a Schedule type, specify the details, if applicable:
- Manually: No additional options.
- Periodically: Specify the "Run every:" options by entering the number of "hours" and "minutes."
- Daily: Specify the "Start time:" by clicking on the hours and minutes drop-down lists and selecting options.
- Weekly: Specify the "Start time:" by clicking on the hours and minutes drop-down lists and selecting options, then clicking on the day checkboxes to specify days of the week to run the crawl.
- Advanced: Enter a custom CRON expression (e.g. 0 0 0 ? * *).
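The Advanced option's CRON expression appears to be Quartz-style (seconds first, with "?" meaning "no specific value"); verify this against your Aspire version's documentation. As an illustrative sketch, the fields of the example above can be labeled like this:

```python
# Label the fields of a Quartz-style CRON expression such as "0 0 0 ? * *".
# The Quartz-style, six-field format is an assumption here; check your
# Aspire version's documentation before relying on it.
FIELDS = ["seconds", "minutes", "hours", "day-of-month", "month", "day-of-week"]

def describe_cron(expr):
    """Map each field of a six-field CRON expression to its meaning."""
    parts = expr.split()
    if len(parts) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} fields, got {len(parts)}")
    return dict(zip(FIELDS, parts))

print(describe_cron("0 0 0 ? * *"))
# "0 0 0 ? * *" fires at midnight every day ("?" = no specific day-of-month).
```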
Step 3b: Specify RDBMS Properties & Connection Details
In the "Connector" tab, specify the connection information needed to crawl the RDBMS.
SQL Configuration
The connector uses a number of SQL statements to define what is extracted from the database when a crawl is run. In retrieve everything mode, a single statement is used to extract data. In retrieve data per batch mode, a number of statements are used. When data is extracted from the database, that data is put into Aspire by column name. If you want it to appear under another name, use the SQL AS operator in your SELECT statements.
For the purposes of this tutorial, you'll need to understand the schema of your database and have access to a third-party SQL client that allows you to run SQL statements.
- In the "Retrieve everything" field, enter the full crawl SQL statement (refer to Full Crawl SQL for details).
- Leave the "Use slices?" checkbox unchecked for the purposes of this tutorial.
Check this if you want to divide the Full crawl SQL into multiple slices.
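Slicing splits the full crawl SQL into several independent statements that can be processed in parallel, typically by adding a modulus condition on the ID column. The exact SQL that Aspire generates may differ; the sketch below uses sqlite3 and a hypothetical main_data table purely to illustrate the idea:

```python
import sqlite3

# Illustrative only: build N "slices" of a full-crawl query by adding a
# modulus condition on the id column, then check the slices together
# cover every row. Aspire's generated slice SQL may differ in detail.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE main_data (id INTEGER PRIMARY KEY, col1 TEXT)")
conn.executemany("INSERT INTO main_data VALUES (?, ?)",
                 [(i, f"row-{i}") for i in range(10)])

num_slices = 4
slices = [
    f"SELECT id, col1 FROM main_data WHERE id % {num_slices} = {i}"
    for i in range(num_slices)
]

seen = set()
for sql in slices:
    seen.update(row[0] for row in conn.execute(sql))

assert seen == set(range(10))  # every row lands in exactly one slice
```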
Retrieve Everything SQL
The retrieve everything SQL statement is executed once, at the start of a full crawl. It should extract all the data you wish to submit to Aspire and can select from one or more tables. In its simplest form, it may look something like:
SELECT
id,
col1,
col2,
col3,
col4
FROM
main_data
This will result in an Aspire job for each row returned, each comprising a document that holds fields named id, col1, col2, col3, and col4.
The connector also needs to be told more about the query result and how it should behave while crawling.
- Set the ID Column to the column holding the ID obtained from the query; based on the query above, it will be id.
- Check ID Column is a string if the id from the main_data table is a textual value instead of a numerical one.
- Check Index tables? if you want to index the table itself while crawling.
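Conceptually, each returned row becomes one Aspire job whose document fields are keyed by column name, with AS renaming applied where used. A small sqlite3 sketch of that mapping, using a hypothetical main_data table (illustrative only; Aspire does this internally over JDBC):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE main_data (id INTEGER PRIMARY KEY, col1 TEXT, col2 TEXT)")
conn.execute("INSERT INTO main_data VALUES (1, 'alpha', 'beta')")

# "col1 AS title" renames the field in the resulting document, as the
# AS operator does in the connector's SELECT statements.
cur = conn.execute("SELECT id, col1 AS title, col2 FROM main_data")
columns = [d[0] for d in cur.description]

# One document (dict of field name -> value) per row, keyed by column name.
documents = [dict(zip(columns, row)) for row in cur]
print(documents[0])   # {'id': 1, 'title': 'alpha', 'col2': 'beta'}
```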
Database Connection
- Under "Advanced Connector Properties", check "Advanced Configuration".
- Check "Specify Connector Defaults".
RDBMS Specific Properties
- In the "JDBC URL" field in the Properties section, enter the JDBC URL for the database to be crawled (refer to RDBMS URLs for details).
- Specify the username and password of the crawl account you created earlier.
The account needs sufficient access to read the tables you wish to crawl. Note: The password will be automatically encrypted by Aspire.
- In the "JDBC Driver Jar:" field, enter the name of the JAR file containing the driver for your database.
- In the "JDBC Driver Class" field, optionally enter the Java class name for the driver.
RDBMS URLs
An RDBMS "URL" is needed to tell the connector application what database to crawl. The exact form of this URL is dictated by the JDBC driver and therefore the database vendor, but will be of the form
jdbc:<vendor>://<server>:<port>/<database>
For example
jdbc:mysql://192.168.40.27/wikidb
See your database vendor's documentation for more information on JDBC URLs.
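The pieces of a JDBC URL in the common shape above can be pulled apart with a simple pattern. This is only an illustrative parser for the jdbc:&lt;vendor&gt;://&lt;server&gt;[:&lt;port&gt;]/&lt;database&gt; form; real drivers accept many vendor-specific variations (Oracle's thin syntax, for example) that it will not match:

```python
import re

# Matches only the common jdbc:<vendor>://<server>[:<port>]/<database> shape;
# vendor-specific URL forms will not match and raise ValueError below.
JDBC_URL = re.compile(
    r"^jdbc:(?P<vendor>[^:]+)://(?P<server>[^:/]+)(?::(?P<port>\d+))?/(?P<database>.+)$"
)

def parse_jdbc_url(url):
    m = JDBC_URL.match(url)
    if not m:
        raise ValueError(f"unrecognized JDBC URL: {url}")
    return m.groupdict()

print(parse_jdbc_url("jdbc:mysql://192.168.40.27/wikidb"))
# {'vendor': 'mysql', 'server': '192.168.40.27', 'port': None, 'database': 'wikidb'}
```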
Workflow Configuration Tab
In the "Workflow" tab, specify the workflow steps for the jobs that come out of the crawl. Drag and drop rules to determine which steps an item should follow after being crawled. These rules can specify where to publish the document or what transformations are needed on the data before sending it to a search engine. See Workflow for more information.
- For the purpose of this tutorial, drag and drop the Publish To File rule found under the Publishers tab to the onPublish Workflow tree.
- Specify a Name and Description for the Publisher.
- Click Add.
After completing these steps, click on the Save button and you'll be sent back to the Home Page.
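As a rough mental model of the Publish To File step, each document that reaches the onPublish workflow is appended to an output file. The sketch below assumes a JSON Lines layout purely for illustration; Aspire's actual Publish To File output format may differ:

```python
import json
import os
import tempfile

# Illustrative only: append each published document as one JSON line.
# The real Publish To File rule's output format may differ from this.
def publish_to_file(documents, path):
    with open(path, "a", encoding="utf-8") as out:
        for doc in documents:
            out.write(json.dumps(doc) + "\n")

# Two documents shaped like those produced by the retrieve-everything query.
docs = [{"id": 1, "col1": "alpha"}, {"id": 2, "col1": "beta"}]
path = os.path.join(tempfile.mkdtemp(), "published.jsonl")
publish_to_file(docs, path)

with open(path, encoding="utf-8") as f:
    print(sum(1 for _ in f))   # 2
```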