This tutorial walks through the steps necessary to crawl a Documentum repository using the Aspire 2.0 Documentum connector.

Step 1: Set Documentum Access Rights

In order to crawl the content of the Documentum repository, the Aspire Documentum Connector requires a superuser account to connect with. To make it easy to track Aspire's access to the repository, we recommend creating a dedicated account, for example aspire_crawl_account.

To set the rights for your aspire_crawl_account, do the following:

  1. Log into the Documentum Webtop as an Administrator.
  2. Click on Administration.
  3. Click on User Management.
  4. Set the role of the aspire_crawl_account to either administrator or superuser, so that it has access to all Documentum content. You can verify the result with the DQL query shown below.
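
To confirm that the account has the required privilege level, you can run a DQL query, for example from Documentum Administrator or the IDQL utility. This is a minimal check and assumes you named the account aspire_crawl_account:

    SELECT user_name, user_privileges
    FROM dm_user
    WHERE user_name = 'aspire_crawl_account'

A user_privileges value of 16 indicates superuser privileges.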

You will need this login information later in these procedures, when entering properties for your Documentum Connector.

The Aspire Documentum Connector also requires two files from the Documentum server, dfc.properties and dfc.keystore, to correctly establish a session between Aspire and the Documentum repository. Copy these two files to your Aspire server; you will need them in the following steps.
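
For reference, the relevant entries in dfc.properties typically look like the following (the host name, port and path below are placeholders for your environment):

    dfc.docbroker.host[0]=docbroker.example.com
    dfc.docbroker.port[0]=1489
    dfc.security.keystore.file=/opt/aspire/config/dfc.keystore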

Step 2: Launch Aspire and open the Content Source Management Page



Aspire Content Source Management Page

Launch Aspire (if it's not already running).

Browse to: http://localhost:50505. For details on using the Aspire Content Source Management page, please refer to UI Introduction.


Step 3: Add a new Documentum Content Source



Add new source

To specify exactly which Documentum repository to crawl, we will need to create a new "Content Source".

To create a new content source:

  1. From the Aspire 2 Home page, click on the "Add Source" button.
  2. Click on "Documentum Connector".

Step 3a: Specify Basic Information




General Configuration Tab

In the "General" tab in the Add New Content Source window, specify basic information for the content source:

  1. Enter a content source name in the "Name" field.

    This can be any name that you find useful for identifying the source. It will be displayed on the content source page, in error messages, etc.

  2. Click on the "Active?" checkbox to add a checkmark.

    Unchecking the "Active?" option allows you to configure content sources without enabling them. This is useful if the repository will be under maintenance and no crawls are wanted during that period of time.

  3. Click on the "Schedule" drop-down list and select one of the following: Manually, Periodically, Daily, or Weekly.

    Aspire can automatically schedule content sources to be crawled on a set schedule, such as once a day, several times a week, or periodically (every N minutes or hours). For the purposes of this tutorial, you may want to select Manually and then set up a regular crawling schedule later.

  4. After selecting a Schedule type, specify the details, if applicable:
    1. Manually: No additional options.
    2. Periodically: Specify the "Run every:" options by entering the number of "hours" and "minutes."
    3. Daily: Specify the "Start time:" by clicking on the hours and minutes drop-down lists and selecting options.
    4. Weekly: Specify the "Start time:" by clicking on the hours and minutes drop-down lists and selecting options, then clicking on the day checkboxes to specify days of the week to run the crawl.
    5. Advanced: Enter a custom cron expression (e.g. 0 0 0 ? * *); see the note after this list.
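
The cron expressions appear to follow the Quartz-style six-field format: seconds, minutes, hours, day-of-month, month, and day-of-week, where "?" means "no specific value". A couple of illustrative expressions, assuming Quartz syntax:

    0 0 0 ? * *       run every day at midnight
    0 30 2 ? * MON    run every Monday at 2:30 AM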

Step 3b: Specify the Connector Information



Connector Configuration Tab

In the "Connector" tab, specify the connection information to crawl the Documentum repository.

  1. Enter the dctm URL you want to crawl (format: dctm://<docbroker-server>:<docbroker-port>/<docbase>/<cabinet>/<folder>). The URL requires at least a docbase; the cabinet and folder are optional and narrow the scope of the crawl (see the examples after this list).
  2. Enter the username (aspire_crawl_account).
  3. Enter the user's password.
  4. Enter the location of the dfc.properties file. Make sure the dfc.properties file correctly points to the dfc.keystore in the property: dfc.security.keystore.file.
  5. Enter the Webtop URL. The object id will be appended to this URL to request the object directly from Documentum, e.g. http://<servername>:<port>/webtop/objectId=
  6. Enter the maximum file size. Any file larger than this size will be ignored by the connector; leave it as unlimited to include all files.
  7. Check the other options as needed:
    1. Index folders?: Index subfolders as items. If unchecked, only files will be indexed.
    2. Scan subfolders?: Scan the child nodes of subfolders.
    3. Scan system cabinets?: Check this to scan Documentum's hidden and private system cabinets.
    4. Include/Exclude patterns: Enter regular expression patterns to include or exclude files and folders based on URL matches (see the examples below).
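
For example, assuming a docbroker at docbroker.example.com on port 1489, a docbase named mydocbase and a cabinet named Sales (all names here are illustrative), each of the following URLs is valid, from the widest to the narrowest scope:

    dctm://docbroker.example.com:1489/mydocbase
    dctm://docbroker.example.com:1489/mydocbase/Sales
    dctm://docbroker.example.com:1489/mydocbase/Sales/Reports

The include/exclude patterns are regular expressions matched against item URLs. For instance, an exclude pattern of .*\.tmp$ would skip temporary files, while an include pattern of .*/Reports/.* would limit the crawl to items under a Reports folder.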

Optional: Group Expansion

Documentum group expansion occurs at search time. It is needed when you want to perform security trimming (early binding) on a search engine. If you don't need search authorization, you don't need to configure this.

If secure search results are needed, the search engine is responsible for deciding which documents should be returned to a user, based on the ACLs of each indexed document. To correctly determine whether a user has access to a document, the search engine must know which groups the user is a member of. To do this, it can query the Documentum group expansion component through the Group Expansion Manager, using either LDAP or simple HTTP requests.

For the specifics of how group expansion is done for Documentum, see the Documentum Group Expansion component.

To configure group expansion, follow these instructions on the same Connector configuration page:

  1. Expand the Advanced Connector Properties.
  2. Enter the Documentum server docbroker host name.
  3. Enter the Documentum server docbroker port number.
  4. Enter the Documentum docbase to extract security information from.
  5. Enter the location of the dfc.properties file. Make sure the dfc.properties file correctly points to the dfc.keystore in the property: dfc.security.keystore.file.
  6. Enter the username (aspire_crawl_account).
  7. Enter the user's password.
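
Because the group expansion settings reuse the same docbroker, docbase and credentials as the connector, it can be useful to verify them outside Aspire first. The following short Java program is a minimal sketch of such a check using the Documentum Foundation Classes (DFC); it assumes the DFC jars and your dfc.properties are on the classpath, and the docbase name and password shown are placeholders:

    import com.documentum.com.DfClientX;
    import com.documentum.fc.client.IDfClient;
    import com.documentum.fc.client.IDfSession;
    import com.documentum.fc.client.IDfSessionManager;
    import com.documentum.fc.common.DfException;
    import com.documentum.fc.common.IDfLoginInfo;

    public class DctmConnectionCheck {
        public static void main(String[] args) throws DfException {
            String docbase = "mydocbase";                 // placeholder docbase name

            // getLocalClient() picks up dfc.properties (and, through it,
            // dfc.keystore) from the classpath
            DfClientX clientX = new DfClientX();
            IDfClient client = clientX.getLocalClient();
            IDfSessionManager sessionManager = client.newSessionManager();

            IDfLoginInfo login = clientX.getLoginInfo();
            login.setUser("aspire_crawl_account");        // the crawl account from Step 1
            login.setPassword("secret");                  // placeholder password
            sessionManager.setIdentity(docbase, login);

            IDfSession session = sessionManager.getSession(docbase);
            try {
                System.out.println("Connected to " + docbase + ", server version: "
                        + session.getServerVersion());
            } finally {
                sessionManager.release(session);
            }
        }
    }

If this program connects and prints the server version, the same values should work for both the connector and the group expansion configuration.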

Step 3c: Specify Workflow Information



Workflow Configuration Tab

In the "Workflow" tab, specify the workflow steps for the jobs that come out of the crawl. Drag and drop rules to determine which steps an item should follow after being crawled. These rules can specify where to publish the document, or transformations to apply to the data before sending it to a search engine. See Workflow for more information.

  1. For the purposes of this tutorial, drag and drop the Publish To File rule, found under the Publishers tab, to the onPublish Workflow tree.
    1. Specify a Name and Description for the Publisher.
    2. Click Add.

After completing these steps, click on the Save button and you'll be sent back to the Home Page.

Step 4: Initiate the Full Crawl



Start Crawl

Now that the content source is set up, the crawl can be initiated.

  1. Click on the crawl type option to set it to "Full" (it is set to "Incremental" by default, and the first crawl works like a full crawl regardless. After the first crawl, set it to "Incremental" to pick up only the changes made in the repository).
  2. Click on the Start button.


During the Crawl



Crawl Statistics

During the crawl, you can do the following:

  • Click on the "Refresh" button on the Content Sources page to view the latest status of the crawl.

    The status will show RUNNING while the crawl is in progress, and CRAWLED when it has finished.

  • Click on "Complete" to view the number of documents crawled so far, the number of documents submitted, and the number of documents with errors.

If there are errors, you will get a clickable "Error" flag that will take you to a detailed error message page.







