The IBM Connections Scanner component performs full and incremental scans over an IBM Connections site. The timestamp attribute of the seedlists (Activities, Blogs, Files, Forums, etc.) is saved so that incremental crawls can be performed when necessary. Updated content is then submitted to the configured pipeline in AspireObjects attached to jobs. As well as the URL of the changed item, the AspireObject also contains metadata extracted from the repository. Updated content is split into three types: add, update, and delete. Each type of content is published on a different event so that it may be handled by a different Aspire pipeline.
The scanner reacts to an incoming job. This job may instruct the scanner to start, stop, pause, or resume. Typically the start job contains all the information required to perform the crawl. However, the scanner can also be configured with default values via the application.xml file. When pausing or stopping, the scanner waits until all the jobs it has published have completed before completing itself.
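For illustration, a minimal control job telling the scanner to stop might look like the sketch below. This is a hedged example: the `action` attribute values are documented in the scanner control table on this page, but the content source name used here is hypothetical.

```xml
<!-- Illustrative only: a control job asking the scanner to stop the crawl of
     the content source "FeedOne_Connector" (the name is hypothetical). -->
<doc action="stop" normalizedCSName="FeedOne_Connector" />
```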
IBM Connections Scanner (Aspire 2) | |
---|---|
Factory Name | com.searchtechnologies.aspire:aspire-ibmconnections-connector |
subType | default |
Inputs | AspireObject from a content source submitter holding all the information required for a crawl |
Outputs | Jobs from the crawl |
This section lists all configuration parameters available to configure the IBM Connections Scanner component.
Property | Type | Default | Description |
---|---|---|---|
IBMServer | string | none | The URL of the IBM Connections server to crawl (the protocol must be specified). |
IBMUser | string | none | The username to connect with. |
IBMPassword | string | none | The password of the username to connect with. |
Page Size | integer | 100 | The number of entries per page to return in the crawl. |
useLTPA | boolean | false | true if the connector should use an LTPA token for authentication. |
IBMLoginUrl | string | none | The IBM Connections login page (contains the LTPA token). |
extractACL | boolean | false | true if the connector requires the users' GUIDs from the LDAP server. |
crawlAllApps | boolean | false | If true, all of the default endpoints (applications: Activities, Blogs, Bookmarks, Communities, Files, Forums, Profiles, and Wikis) are crawled. If false, the user must select which applications to crawl. |
withLimitedAccess | boolean | false | true if the connector has limited access to the network or internet. |
withPatterns | boolean | false | If true, the user defines the accessible servers using patterns; if false, the user defines the accessible servers by name or IP. |
addRequestProperty | List<String> | | Specifies the header and the value of the request property or properties. |
geTPSize | integer | 5 | The number of threads in the thread pool used to download users and groups from the IBM server. |
shouldBackoff | boolean | false | If true, the connector backs off and retries the connection when the server returns the specified error. |
backoffErrorPattern | regex | none | The regex matched against error messages to trigger a back-off. |
backoffMinutes | integer | 15 | The time, in minutes, to wait when a back-off error is encountered. |
backoffRetries | integer | 3 | The number of retries with back-off when an error is encountered. |
dateFormat | String | | The format used to parse the LastModifiedDate and Publish Date. |
expandACL | boolean | false | If true, the containers' ACLs (Communities, Forums, Activities) will be expanded during the crawl. |
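As noted earlier, default values for these properties can be supplied via the application.xml file. The sketch below shows how a few of them might be set; it is an assumption, not confirmed syntax, that each property name in the table maps directly to a child element of the component, and the server URL and credentials shown are placeholders.

```xml
<!-- Sketch only: element names are assumed from the property table above,
     and all values are illustrative placeholders. -->
<component name="IBMScanner" subType="default" factoryName="aspire-ibmconnections-connector">
  <IBMServer>https://connections.example.com</IBMServer>  <!-- protocol is required -->
  <IBMUser>crawlAdmin</IBMUser>
  <IBMPassword>secret</IBMPassword>
  <useLTPA>true</useLTPA>
  <IBMLoginUrl>https://connections.example.com/login</IBMLoginUrl>  <!-- hypothetical login page -->
</component>
```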
This component publishes to the onAdd, onUpdate, and onDelete events, so a branch must be configured for each of these three events.
Element | Type | Description |
---|---|---|
branches/branch/@event | string | The event to configure - onAdd, onDelete or onUpdate. |
branches/branch/@pipelineManager | string | The name of the pipeline manager to publish to. Can be relative. |
branches/branch/@pipeline | string | The name of the pipeline to publish to. If missing, publishes to the default pipeline for the pipeline manager. |
```xml
<component name="IBMScanner" subType="default" factoryName="aspire-ibmconnections-connector">
  <debug>${debug}</debug>
  <snapshotDir>${IBMsnapshotDir}</snapshotDir>
  <updaterComponent>../JobStatusUpdater</updaterComponent>
  <branches>
    <branch event="onAdd" pipelineManager="../ProcessPipelineManager" pipeline="add-update-pipeline"
            allowRemote="true" batching="true" batchSize="50" batchTimeout="60000" simultaneousBatches="2" />
    <branch event="onUpdate" pipelineManager="../ProcessPipelineManager" pipeline="add-update-pipeline"
            allowRemote="true" batching="true" batchSize="50" batchTimeout="60000" simultaneousBatches="2" />
    <branch event="onDelete" pipelineManager="../ProcessPipelineManager" pipeline="post-to-search-engine-pipeline"
            allowRemote="true" batching="true" batchSize="50" batchTimeout="60000" simultaneousBatches="2" />
  </branches>
</component>
```
The following table describes the list of attributes that the AspireObject of the incoming scanner job requires to correctly execute and control the flow of a scan process.
Element | Type | Options | Description |
---|---|---|---|
@action | string | start, stop, pause, resume, abort | Control command to tell the scanner which operation to perform. Use start option to launch a new crawl. |
@actionProperties | string | full, incremental | When a start @action is received, it will tell the scanner to either run a full or an incremental crawl. |
@normalizedCSName | string | | Unique identifier name for the content source that will be crawled. |
displayName | string | | Display or friendly name for the content source that will be crawled. |
Header Example
```xml
<doc action="start" actionProperties="full" actionType="manual" crawlId="0" dbId="0"
     jobNumber="0" normalizedCSName="FeedOne_Connector" scheduleId="0"
     scheduler="##AspireSystemScheduler##" sourceName="ContentSourceName">
  ...
  <displayName>testSource</displayName>
  ...
</doc>
```
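Per the @actionProperties options above, requesting an incremental crawl instead of a full one only changes that attribute. A sketch, reusing the illustrative values from the header example:

```xml
<doc action="start" actionProperties="incremental" actionType="manual" crawlId="0" dbId="0"
     jobNumber="0" normalizedCSName="FeedOne_Connector" scheduleId="0"
     scheduler="##AspireSystemScheduler##" sourceName="ContentSourceName">
  ...
  <displayName>testSource</displayName>
  ...
</doc>
```

The scanner then uses the saved seedlist timestamps to fetch only content changed since the previous crawl.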