The RDB via Snapshot Connector can be configured using the Aspire Admin UI. It requires the following entities to be created: a Credential, a Connection, and a Seed. Below are examples of how to create each. For the Connector, please check this page.
An RDBMS "URL" is needed to tell the connector application what database to crawl. The exact form of this URL is dictated by the JDBC driver and therefore the database vendor, but will be of the form
jdbc:<vendor>://<server>:<port>/{DATABASE}
For example: jdbc:mysql://192.168.40.27/{DATABASE} - the marker {DATABASE} will be replaced with the value of the Database name field (see the Create Seed section on this page)
See your database vendor's documentation for more information on JDBC URLs.
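For illustration only, URL templates for a few common vendors are sketched below; the host names and ports are examples, and the exact prefix and separator syntax is defined by each vendor's driver, so treat these as a starting point rather than authoritative forms:
jdbc:postgresql://dbserver.example.com:5432/{DATABASE}
jdbc:sqlserver://dbserver.example.com:1433;databaseName={DATABASE}
jdbc:oracle:thin:@//dbserver.example.com:1521/{DATABASE}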
The description of all "connection" fields can be found here.
The description of all "seed" fields can be found here.
The connector uses a number of SQL statements to define what is extracted from the database when a crawl is run. In full mode, a single statement is used to extract data. In incremental mode, a number of statements are used. When data is extracted from the database, that data is put into Aspire by column name. If you want it to appear under another name, use the SQL ''as'' operator in your select statements.
For the purposes of this tutorial, you'll need to understand the schema of your database and have access to a third-party SQL client that will allow you to run SQL statements.
See here for further details on configuring SQL for crawls.
The full crawl SQL statement is executed once when the "Full" button is pressed. It should extract all the data you wish to be submitted to Aspire and can extract from one or more tables. In its simplest form, it may look something like:
SELECT
id,
col1,
col2,
col3,
col4
FROM
main_data
This will result in an Aspire job for each row returned, each comprising a document which holds fields named ''id'', ''col1'', ''col2'', ''col3'' and ''col4''.
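As a sketch of the ''as'' operator mentioned above, the statement below renames columns so the resulting documents carry ''title'' and ''content'' fields instead of the raw column names; the table and column names are illustrative only:
SELECT
id,
col1 AS title,
col2 AS content
FROM
main_data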
If you wish, when you perform a full crawl, the connector will split the data into "slices", allowing these slices to be processed in parallel and thus decreasing the time taken to perform the crawl. If you check this option, you'll be able to specify the number of slices to use. The SQL you specified for the full crawl will then be modified by the connector to include a WHERE clause performing the "slice".
For example, if your SQL was select * from table and you chose 10 slices, then 10 SQL statements would be executed at the server. The SQL executed would be select * from table where id mod 10 = n (where n is a value between 0 and 9).
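For instance, assuming the simple full crawl statement above and 4 slices, the connector would issue statements along these lines (the exact modulo syntax depends on your database vendor):
SELECT * FROM main_data WHERE id MOD 4 = 0
SELECT * FROM main_data WHERE id MOD 4 = 1
SELECT * FROM main_data WHERE id MOD 4 = 2
SELECT * FROM main_data WHERE id MOD 4 = 3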
Enter the column that holds the ID of the row, and specify whether this column is a string.
Check this box if you want the ability to run a piece of SQL once the crawl has completed, and enter that SQL in the Post Crawl SQL entry.
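A minimal sketch of a post-crawl statement, assuming a hypothetical crawl_audit bookkeeping table that you maintain yourself (not something the connector creates):
UPDATE crawl_audit SET last_full_crawl = CURRENT_TIMESTAMP WHERE connector = 'rdb-snapshot'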
Check this box if you want to configure incremental crawls, and then select from the various options.
Checking this option allows incremental crawls to use SQL that is bounded by a condition. When entering SQL, you may use the variables in a WHERE clause to limit the data collected. The upper bound will be calculated at the start of the crawl, and the lower bound will be the upper bound from the previous crawl. Two types of bounding are available: Timestamp uses the current system time, whilst SQL allows you to define SQL to return the bounds when the crawl starts.
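As a sketch only, a bounded incremental statement might constrain a modification timestamp between the two bounds. The {LOWER_BOUND} and {UPPER_BOUND} markers below are hypothetical placeholders standing in for whatever variable syntax the connector actually provides; the table and column names are likewise illustrative:
SELECT id, col1, col2
FROM main_data
WHERE modified_date > {LOWER_BOUND} AND modified_date <= {UPPER_BOUND}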
SQL statements to run before an incremental crawl. This SQL can be used to mark documents for update, save timestamps, clear update tables, and perform other actions as needed to prepare for an incremental crawl. This field can be left blank if you never run an incremental crawl.
SQL statements to run for an incremental crawl. This SQL should provide a list of all adds and deletes to the documents in the index. Some field names have special meaning, such as title, content, url, and id. Note the special column "action" should report I (for insert), U (for update), or D (for delete).
SQL statements to run after each record is processed. This SQL can be used to unmark/delete each document from the table after it is complete.
SQL statements to run after each record if processing fails. If this SQL field is left blank, the SQL entered in the "Post incremental crawl SQL" field will run instead.
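To make the flow concrete, here is a hedged sketch of how these fields might fit together. It assumes a hypothetical main_data_updates queue table, populated by your own database triggers, holding an id, a seq sequence number, an action column containing I, U, or D, and an in_progress flag; none of these names come from the connector itself:
-- Pre incremental crawl SQL: mark the pending updates for this run
UPDATE main_data_updates SET in_progress = 1;
-- Incremental crawl SQL: return the marked rows, joining back for content
-- (a LEFT JOIN so deleted rows, which no longer exist in main_data, still appear)
SELECT u.id, u.seq, u.action, m.col1 AS title, m.col2 AS content
FROM main_data_updates u
LEFT JOIN main_data m ON m.id = u.id
WHERE u.in_progress = 1;
-- Post incremental crawl SQL: remove the rows that were processed
DELETE FROM main_data_updates WHERE in_progress = 1;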
Enter the name of the column in the returned data which holds the action of the update (i.e. Insert, Update or Delete). This must match the name returned by the SQL. If the column is aliased using the SQL "AS" construct, you should provide the alias name here.
Enter the name of the column in the returned data which holds the sequence number of the update. This must match the name returned by the SQL. If the column is aliased using the SQL "AS" construct, you should provide the alias name here.
If required, ACLs can be added to the documents collected by the connector. Choose Column to specify the name of a column in the main data holding the ACLs, or SQL to enter a separate piece of SQL to collect the ACLs.
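As an illustration of the SQL option, a per-document ACL lookup might look like the statement below. The document_acls table is hypothetical, and the {ID} marker is an assumed placeholder (modelled on the {DATABASE} marker shown earlier) for whatever substitution syntax the connector uses to inject the current document's ID:
SELECT acl FROM document_acls WHERE doc_id = {ID}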