The IBM Connections connector crawls content from an IBM Connections server.

Features


The IBM Connections connector crawls content from any of the IBM Connections applications (Activities, Blogs, Bookmarks, Communities, Files, Forums, Profiles, and Wikis).

Note that the IBM Connections connector is not part of the Aspire Enterprise bundle; however, it may be purchased separately.

Some of the features of the IBM Connections connector include:

  • Performs incremental crawls, so that only new, updated, or deleted documents are indexed
  • Extracts metadata
  • Fetches access control lists (ACLs) for document-level security (this feature requires configuring the LDAP credentials in the "Extract ACL" section of the connector UI)
  • Is search engine independent
  • Runs from any machine with access to the given IBM Connections site
  • Filters the crawled documents by file name using regex patterns (see the sketch after this list)
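
To picture the file-name filtering, here is a minimal sketch in Python. The patterns are hypothetical examples, not connector defaults, and the actual matching rules are whatever you configure in the connector UI:

    import re

    # Hypothetical patterns: include Office documents and PDFs, and
    # exclude temporary files such as "~$draft.docx".
    INCLUDE = [re.compile(r".*\.(docx?|xlsx?|pdf)$", re.IGNORECASE)]
    EXCLUDE = [re.compile(r"^~\$.*")]

    def should_index(file_name):
        """A file is crawled if it matches an include pattern and no exclude pattern."""
        return (any(p.match(file_name) for p in INCLUDE)
                and not any(p.match(file_name) for p in EXCLUDE))

    print(should_index("report.PDF"))    # True
    print(should_index("~$draft.docx"))  # False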

Content Retrieved


The IBM Connections connector retrieves several types of documents; the inclusions and exclusions for each type are listed below.

Include

Operation Mode

The connector uses the Seedlist service provider interface (SPI) provided with IBM Connections to integrate a search engine with IBM Connections content, communicating over HTTP or HTTPS. The connector acquires content by doing the following (a code sketch follows the list):

  • Send a GET request to the seedlist feed of each application whose data you want to crawl:
    http://<servername>/activities/seedlist/myserver 
    http://<servername>/blogs/seedlist/myserver
    http://<servername>/dogear/seedlist/myserver
    http://<servername>/communities/seedlist/myserver
    http://<servername>/files/seedlist/myserver 
    http://<servername>/forums/seedlist/myserver
    http://<servername>/profiles/seedlist/myserver
    http://<servername>/wikis/seedlist/myserver
    http://<servername>/dm/seedlist/myserver

  • Process the returned feed. Find the rel="next" link and send a GET request to the web address specified by its href attribute.
  • Repeat the previous two steps until the response includes a <wplc:timestamp> element in its body.
  • Store the value of the <wplc:timestamp> element; you must pass that value as a parameter when you perform a subsequent crawl of the data.
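
The loop above can be sketched in Python. This is illustrative only: the real connector also handles authentication schemes, error recovery, and document processing, and the server name, credentials, "requests" library usage, and wplc namespace URI below are assumptions, not values confirmed by this page:

    import requests
    import xml.etree.ElementTree as ET

    ATOM = "{http://www.w3.org/2005/Atom}"
    WPLC = "{http://www.ibm.com/wplc/atom/1.0}"  # assumed namespace URI for wplc:*

    def crawl_seedlist(url, auth, process_entry):
        """Follow rel="next" links until the feed contains <wplc:timestamp>."""
        while url:
            response = requests.get(url, auth=auth)
            response.raise_for_status()
            feed = ET.fromstring(response.content)

            # Process each entry in this page of the feed (indexing happens here).
            for entry in feed.findall(ATOM + "entry"):
                process_entry(entry)

            # A <wplc:timestamp> element marks the end of the seedlist. Return its
            # value so it can be passed as a parameter on the next crawl.
            ts = feed.find(".//" + WPLC + "timestamp")
            if ts is not None:
                return ts.text

            # Otherwise, follow the rel="next" link to the next page of results.
            url = next((link.get("href") for link in feed.findall(ATOM + "link")
                        if link.get("rel") == "next"), None)

    # Example: crawl the Files seedlist (server name and credentials are hypothetical).
    timestamp = crawl_seedlist(
        "http://connections.example.com/files/seedlist/myserver",
        auth=("crawl_user", "password"),
        process_entry=lambda entry: print(entry.findtext(ATOM + "title")),
    )

Storing the returned timestamp and supplying it as a parameter on the next run is what makes subsequent crawls incremental: the seedlist then returns only content that changed after that point.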
