The RDB via Tables Connector can be configured using the Aspire Admin UI. It requires the following entities to be created:

  • Credential
  • Connection
  • Connector
  • Seed

Below are examples of how to create the Credential, Connection, and Seed. For the Connector, please check this page.


Create Credential


  1. On the Aspire Admin UI, go to the credentials tab.
  2. All existing credentials will be listed. Click on the new button.
  3. Enter the new credential description.
  4. Select RDB via Tables from the Type list.
  5. Enter the User and Password. Note: the password will be automatically encrypted by Aspire.

The description of all "credential" fields can be found here.


Create Connection

  1. On the Aspire Admin UI, go to the connections tab.
  2. All existing connections will be listed. Click on the new button.
  3. Enter the new connection description.
  4. Select RDB via Tables from the Type list. In the "JDBC Url" field in the Properties section, enter the JDBC URL of the database to be crawled.

    RDBMS URLs

    An RDBMS "URL" is needed to tell the connector application which database to crawl. The exact form of this URL is dictated by the JDBC driver, and therefore the database vendor, but it will be of the form

    jdbc:<vendor>://<server>:<port>/{DATABASE}

    For example: jdbc:mysql://192.168.40.27/{DATABASE}. The marker {DATABASE} will be replaced with the value of the Database name field (see the Create Seed section on this page).

    See your database vendor's documentation for more information on JDBC URLs.
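
    A few common vendor patterns follow the same {DATABASE} convention; the hosts and ports below are illustrative only:

    jdbc:mysql://dbserver:3306/{DATABASE}
    jdbc:postgresql://dbserver:5432/{DATABASE}
    jdbc:sqlserver://dbserver:1433;databaseName={DATABASE}
    jdbc:oracle:thin:@dbserver:1521/{DATABASE}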
  5. In the "JDBC Driver Jar" field, enter the name of the JAR file containing the driver for your database.
  6. In the "JDBC Driver Class" field, enter the Java class name for the driver (optional). If your JDBC driver JAR does not contain all the necessary classes, you may add additional JAR files to the classpath. To do this, select "Specify classpath" and enter the classpath to the JARs containing the classes. The classpath may include JAR files, directories, or individual classes; see the sketch below.

Complete the remaining fields as required. The description of all "connection" fields can be found here.
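
For example, a classpath might look like the following; the driver file name and directories are hypothetical, and the path separator (: or ;) depends on your operating system:

    /opt/aspire/drivers/mysql-connector-j-8.0.33.jar:/opt/aspire/drivers/extra/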


Create Connector

For the creation of the Connector object using the Admin UI, check this page.


Create Seed

  1. On the Aspire Admin UI, go to the seeds tab.
  2. All existing seeds will be listed. Click on the new button.
  3. Enter the new seed description.
  4. Select RDB via Tables from the Type list.
  5. Set the Database name field: the name of the database. It will replace the {DATABASE} marker used in the jdbcUrl field of the connection object.

Complete the remaining fields as required. The description of all "seed" fields can be found here.

SQL Configuration

The connector uses a number of SQL statements to define what is extracted from the database when a crawl is run. In full mode, a single statement is used to extract data. In incremental mode, a number of statements are used. When data is extracted from the database, that data is put into Aspire by column name. If you want a value to appear under another name, use the SQL "AS" operator in your SELECT statements, as sketched below.
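
For instance, if a hypothetical main_data table stores its title in a doc_title column, the value can be renamed on its way into Aspire:

    SELECT
      doc_title AS title
    FROM
      main_data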

For the purposes of this tutorial, you'll need to understand the schema of your database and have access to a third-party SQL client that will allow you to run SQL statements.

See here for further details on configuring SQL for crawls.

Full Crawl SQL

The full crawl SQL statement is executed once when the "Full" button is pressed. It should extract all the data you wish to be submitted to Aspire and can extract from one or more tables. In its simplest form, it may look something like:

    SELECT
    id,
    col1,
    col2,
    col3,
    col4
    FROM
    main_data

This will result in an Aspire Job for each row returned, each comprising a document which holds fields named "id", "col1", "col2", "col3" and "col4".
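
Because the statement can extract from more than one table, a join works equally well. A sketch, assuming hypothetical main_data and authors tables related by an author_id column:

    SELECT
      d.id,
      d.col1,
      d.col2,
      a.name AS author
    FROM
      main_data d
      JOIN authors a ON a.id = d.author_id

Each returned row still becomes a single Aspire Job, here with an extra "author" field.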

Use Slices for Full SQL

If you wish, the connector can split the data into "slices" when you perform a full crawl, allowing these slices to be processed in parallel and thus decreasing the time taken to perform the crawl. If you check this option, you'll be able to specify the number of slices to use. The SQL you specified for the full crawl will then be modified by the connector to include a WHERE clause performing the "slice".

For example, if your SQL was select * from table and you chose 10 slices, then 10 SQL statements would be executed at the server. The SQL executed would be select * from table where id mod 10 = n (where n is a value between 0 and 9).

ID

Enter the column that holds the ID of the row, and specify whether this column is a string.

Enable Post Crawl SQL

Check this box if you want the ability to run a piece of SQL once the crawl has completed, and enter that SQL in the Post Crawl SQL entry.
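
For example, post-crawl SQL might record when the crawl finished; the crawl_status table here is hypothetical:

    UPDATE crawl_status
    SET last_full_crawl = CURRENT_TIMESTAMP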

Configure Incremental Crawl

Check this box if you want to configure incremental crawls, and then select from the various options.

Incremental crawl bounding

Checking this option allows incremental crawls to use SQL that is bounded by a condition. When entering SQL, you may use the bound variables in a WHERE clause to limit the data collected. The upper bound will be calculated at the start of the crawl, and the lower bound will be the upper bound from the previous crawl. Two types of bounding are available: Timestamp uses the current system time, whilst SQL allows you to define SQL to return the bounds when the crawl starts.
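
As a sketch, a bounded statement limits rows to the window between the two bounds. The modified_date column is assumed, and the placeholder syntax for the bound variables is defined by the connector, so the names below are illustrative only:

    SELECT
      id, col1, col2
    FROM
      main_data
    WHERE
      modified_date > {LOWER_BOUND} AND modified_date <= {UPPER_BOUND}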

Pre incremental Crawl SQL

SQL statements to run before an incremental crawl. This SQL can be used to mark documents for update, save timestamps, clear update tables, and perform other actions as needed to prepare for an incremental crawl. This field can be left blank if you never run an incremental crawl.
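
For example, pre-crawl SQL might populate a pending-updates table from rows changed since the last crawl; the updates_pending and crawl_status tables here are hypothetical:

    INSERT INTO updates_pending (id, action)
    SELECT id, 'U'
    FROM main_data
    WHERE modified_date > (SELECT last_crawl FROM crawl_status)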

Incremental Crawl SQL

SQL statements to run for an incremental crawl. This SQL should provide a list of all adds and deletes to the documents in the index. Some field names have special meaning, such as title, content, url, and id. Note that the special column "action" should report I (for insert), U (for update), or D (for delete).
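
A sketch of such a statement, reusing the hypothetical updates_pending table from above; the action column comes from that table, while the document fields come from main_data (a LEFT JOIN keeps deleted rows, which no longer exist in main_data, in the result):

    SELECT
      u.id,
      d.col1,
      d.col2,
      u.action
    FROM
      updates_pending u
      LEFT JOIN main_data d ON d.id = u.id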

Post incremental Crawl SQL

SQL statements to run after each record is processed. This SQL can be used to unmark or delete each document from the table after it is complete.
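
For instance, each successfully processed row could be removed from the hypothetical updates_pending table. The placeholder used to reference the current record's ID is defined by the connector, so {id} below is illustrative:

    DELETE FROM updates_pending WHERE id = {id}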

Post incremental Crawl SQL (failures)

SQL statements to run after each record if processing fails. If this SQL field is left blank, the SQL entered in the "Post incremental crawl SQL" field will run instead.
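
A failure handler might instead flag the row for retry on a later crawl; the status column, like the {id} placeholder above, is illustrative:

    UPDATE updates_pending SET status = 'FAILED' WHERE id = {id}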

Action column

Enter the name of the column in the returned data which holds the action of the update (i.e. Insert, Update or Delete). This must match the name returned by the SQL. If the column is aliased using the SQL "AS" construct, you should provide the alias name here.

Sequence column

Enter the name of the column in the returned data which holds the sequence number of the update. This must match the name returned by the SQL. If the column is aliased using the SQL "AS" construct, you should provide the alias name here.

ACL fetching

If required, ACLs can be added to the documents collected by the connector. Choose Column to specify the name of a column in the main data holding the ACLs, or SQL to enter a separate piece of SQL to collect the ACLs.
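
If the SQL option is chosen, the separate statement returns the ACL entries for a document. A sketch, assuming a hypothetical permissions table keyed by document ID (the {id} placeholder, as above, is illustrative):

    SELECT
      group_name,
      access_level
    FROM
      permissions
    WHERE
      doc_id = {id}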
