The Feed One Connector creates a single job with its associated metadata and sends it to the workflow for processing.

Once started, the connector cannot be stopped, paused or resumed. The start job typically contains all the information required to perform the scan. Only one job is sent, and once sent it cannot be paused or stopped; the connector waits until all the jobs it published have completed before updating its statistics and status.

Feed One Connector Application Bundle (Aspire 2)
AppBundle Name | Feed One Connector
Maven Coordinates | com.searchtechnologies.aspire:app-feedone-connector
Versions | 2.2
Type Flags | scheduled
Inputs | AspireObject from a content source submitter holding all the information required for a crawl.
Outputs | A job with an AspireObject containing the metadata fields configured.

Configuration

This section lists all configuration parameters available to install the Feed One Connector Application Bundle and to execute crawls using the connector.

General Application Configuration

Property | Type | Default | Description
snapshotDir | string | ${aspire.home}/snapshots | The directory where snapshot files are stored.
disableTextExtract | boolean | false | By default, connectors use Apache Tika to extract text from downloaded documents. If you wish to apply special text processing to the downloaded document in the workflow, you should disable text extraction. The downloaded document is then available as a content stream.
workflowReloadPeriod | int | 15m | The period after which the business rules are reloaded. The value defaults to milliseconds, but can be suffixed with ms, s, m, h or d to indicate the required units.
workflowErrorTolerant | boolean | false | When set, an exception in a workflow rule only affects the execution of the rule in which it occurs. Subsequent rules will be executed and the job will complete the workflow successfully. If not set, exceptions in workflow rules will be re-thrown and the job will be moved to the error workflow.
debug | boolean | false | Controls whether debugging is enabled for the application. Debug messages will be written to the log files.


Configuration Example

To install the application bundle, add the following configuration to the <autoStart> section of the Aspire settings.xml:

  <application config="com.searchtechnologies.aspire:app-feedone-connector">
    <properties>
      <property name="enableAuditing">true</property>
      <property name="waitForSubJobs">600000</property>
      <property name="workflowErrorTolerant">false</property>
      <property name="fullRecovery">incremental</property>
      <property name="batchSize">50</property>
      <property name="maxThreads">10</property>
      <property name="jobQueue">30</property>
      <property name="batchTimeout">60000</property>
      <property name="debug">false</property>
      <property name="emitEndJob">false</property>
      <property name="disableTextExtract">false</property>
      <property name="workflowReloadPeriod">15s</property>
      <property name="generalConfiguration">false</property>
      <property name="emitStartJob">false</property>
      <property name="incrementalRecovery">incremental</property>
    </properties>
  </application>

Note: Any optional property can be removed from the configuration to use the default value described in the table above.
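For example, a minimal installation entry that relies on the default value of every optional property could look like the sketch below (only the debug property is kept here for illustration; it could also be removed):

  <application config="com.searchtechnologies.aspire:app-feedone-connector">
    <properties>
      <property name="debug">false</property>
    </properties>
  </application>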

Source Configuration

Scanner Control Configuration

The following table describes the attributes that the AspireObject of the incoming scanner job requires in order to correctly execute and control the flow of a scan process.

Element | Type | Options | Description
@action | string | start, stop, pause, resume, abort | Control command to tell the scanner which operation to perform. Use the start option to launch a new crawl.
@actionProperties | string | full, incremental | When a start @action is received, it tells the scanner whether to run a full or an incremental crawl.
@normalizedCSName | string | | Unique identifier name for the content source that will be crawled.
displayName | string | | Display or friendly name for the content source that will be crawled.

Header Example

  <doc action="start" actionProperties="full" actionType="manual" crawlId="0" dbId="0" jobNumber="0" normalizedCSName="FeedOne_Connector"
   scheduleId="0" scheduler="##AspireSystemScheduler##" sourceName="ContentSourceName">
    ...
    <displayName>testSource</displayName>
    ...
  </doc>
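An incremental crawl is requested with the same header, changing only the actionProperties attribute. A sketch reusing the values from the example above:

  <doc action="start" actionProperties="incremental" actionType="manual" crawlId="0" dbId="0" jobNumber="0" normalizedCSName="FeedOne_Connector"
   scheduleId="0" scheduler="##AspireSystemScheduler##" sourceName="ContentSourceName">
    ...
    <displayName>testSource</displayName>
    ...
  </doc>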

All configuration properties described in this section are relative to /doc/connectorSource of the AspireObject of the incoming Job.

Element | Type | Default | Description
jobMetadata/fieldMeta/name | string | none | Optional. Field name for metadata property. This will be added to the single job sent to be processed.
jobMetadata/fieldMeta/value | string | none | Optional. Field value for metadata property. This will be added to the single job sent to be processed.

Scanner Configuration Example

<doc action="start" actionProperties="full" actionType="manual" normalizedCSName="FeedOne_Connector" sourceName="FeedOne_Connector">
  <connectorSource>
    <jobMetadata>
      <fieldMeta>
        <name>testName</name>
        <value>testValue</value>
      </fieldMeta>
    </jobMetadata>
  </connectorSource>
  <displayName>FeedOne Connector</displayName>
</doc>

Note: To launch a crawl, the job should be sent (processed/enqueued) to the "/FeedOneConnector/Main" pipeline.

Output

<doc>
  <testName>testValue</testName>
  <connectorSource>
    <displayName>FeedOne Connector</displayName>
  </connectorSource>
  <action>add</action>
</doc>
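If more than one fieldMeta entry is configured, each name/value pair would be expected to appear as its own top-level element of the output document, in the same way that testName/testValue does above. A sketch, assuming the connector accepts multiple entries and using the hypothetical field names department and region:

<doc action="start" actionProperties="full" actionType="manual" normalizedCSName="FeedOne_Connector" sourceName="FeedOne_Connector">
  <connectorSource>
    <jobMetadata>
      <fieldMeta>
        <name>department</name>
        <value>engineering</value>
      </fieldMeta>
      <fieldMeta>
        <name>region</name>
        <value>emea</value>
      </fieldMeta>
    </jobMetadata>
  </connectorSource>
  <displayName>FeedOne Connector</displayName>
</doc>

Under that assumption, the corresponding output document would contain:

<doc>
  <department>engineering</department>
  <region>emea</region>
  <connectorSource>
    <displayName>FeedOne Connector</displayName>
  </connectorSource>
  <action>add</action>
</doc>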