
The S3 Connector performs full and incremental scans over an Amazon Simple Storage Service (S3) bucket or folder, extracting security, metadata, and content from each object scanned. The connector allows you to select whether to index folders in the results and, if you wish, to scan subfolder content. Each scanned object is tagged with one of three possible actions (add, update, or delete) and can be routed to any Aspire pipeline as desired.

The connector, once started, can be stopped, paused, or resumed via the Scanner Configuration Job. Typically the start job will contain all the information required to perform the scan. When pausing or stopping, the connector will wait until all the jobs it has published have completed before updating the statistics and status of the connector.

Feature only available with Aspire Premium

AppBundle Name: S3 Connector
Maven Coordinates: com.searchtechnologies.aspire:app-s3-connector
Versions: 2.2.2
Type Flags: scheduled
Inputs: AspireObject from a content source submitter holding all the information required for a crawl
Outputs: Jobs from the crawl

Configuration

This section lists all configuration parameters available to install the Amazon S3 Application Bundle and to execute crawls using the connector.

General Application Configuration

snapshotDir (string, default ${aspire.home}/snapshots): The directory in which snapshot files are stored.
disableTextExtract (boolean, default false): By default, connectors use Apache Tika to extract text from downloaded documents. If you wish to apply special text processing to the downloaded document in the workflow, you should disable text extraction. The downloaded document is then available as a content stream.
workflowReloadPeriod (int, default 15m): The period after which to reload the business rules. The value defaults to milliseconds, but can be suffixed with ms, s, m, h or d to indicate the required units.
workflowErrorTolerant (boolean, default false): When set, exceptions in workflow rules will only affect the execution of the rule in which the exception occurs. Subsequent rules will be executed and the job will complete the workflow successfully. If not set, exceptions in workflow rules will be re-thrown and the job will be moved to the error workflow.
debug (boolean, default false): Controls whether debugging is enabled for the application. Debug messages will be written to the log files.


Amazon S3 Specific Configuration

S3AccessKey (string): The Amazon S3 Access Key.
S3SecretKey (string): The Amazon S3 Secret Key.


Configuration Example

<application config="com.searchtechnologies.aspire:app-s3-connector">
  <properties>
    <property name="generalConfiguration">true</property>
    <property name="S3AccessKey">MyAccessKey</property>
    <property name="S3SecretKey">MySecretKey</property>
    <property name="disableFetchUrl">false</property>
    <property name="snapshotDir">${dist.data.dir}/${app.name}/snapshots</property>
    <property name="disableTextExtract">false</property>
    <property name="workflowReloadPeriod">15s</property>
    <property name="workflowErrorTolerant">false</property>
    <property name="debug">true</property>
  </properties>
</application>

Source Configuration

Scanner Control Configuration

The following table describes the list of attributes that the AspireObject of the incoming scanner job requires to correctly execute and control the flow of a scan process.

@action (string; options: start, stop, pause, resume, abort): Control command to tell the scanner which operation to perform. Use the start option to launch a new crawl.
@actionProperties (string; options: full, incremental): When a start @action is received, tells the scanner to run either a full or an incremental crawl.
@normalizedCSName (string): Unique identifier name for the content source that will be crawled.
displayName (string): Display or friendly name for the content source that will be crawled.

Header Example

  <doc action="start" actionProperties="full" actionType="manual" crawlId="0" dbId="0" jobNumber="0" normalizedCSName="FeedOne_Connector"
   scheduleId="0" scheduler="##AspireSystemScheduler##" sourceName="ContentSourceName">
    ...
    <displayName>testSource</displayName>
    ...
  </doc>
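
The same header structure is used for the other control commands. For example, a sketch of a job that stops a running crawl (the attribute values shown here are illustrative and mirror the Header Example above; per the table, actionProperties applies only to start):

  <doc action="stop" actionType="manual" crawlId="0" dbId="0" jobNumber="0" normalizedCSName="FeedOne_Connector"
   scheduleId="0" scheduler="##AspireSystemScheduler##" sourceName="ContentSourceName"/>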


All configuration properties described in this section are relative to /doc/connectorSource of the AspireObject of the incoming job.

url (string): The URL to scan.
accessKey (string): The access key of the Amazon S3 account.
secretKey (string): The secret key of the Amazon S3 account.
indexFolders (string): true if folders (as well as files) should be indexed.
scanSubFolders (string): true if subfolders of the given URL should be scanned.
fileNamePatterns/include/@pattern (regex, default none): Optional. A regular expression pattern to evaluate file URLs against; if the file name matches the pattern, the file is included by the scanner. Multiple include nodes can be added.
fileNamePatterns/exclude/@pattern (regex, default none): Optional. A regular expression pattern to evaluate file URLs against; if the file name matches the pattern, the file is excluded by the scanner. Multiple exclude nodes can be added.
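
A sketch of a populated fileNamePatterns element combining include and exclude nodes (the patterns themselves are illustrative, not part of the connector's defaults):

  <fileNamePatterns>
    <include pattern=".*\.pdf$"/>
    <include pattern=".*\.docx?$"/>
    <exclude pattern=".*/tmp/.*"/>
  </fileNamePatterns>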


Scanner Configuration Example

<doc action="start" actionProperties="full" actionType="manual" crawlId="0" dbId="1" jobNumber="0" normalizedCSName="amazonS3"
  scheduleId="1" scheduler="AspireScheduler" sourceName="amazonS3">
  <connectorSource>
    <url>/my-first-s3-bucket-1-0000000001/</url>
    <accessKey>myAccessKey</accessKey>
    <secretKey>mySecretKey</secretKey>
    <indexFolders>true</indexFolders>
    <scanSubFolders>true</scanSubFolders>
    <fileNamePatterns/>
  </connectorSource>
  <displayName>amazonS3</displayName>
</doc>

Output

<doc>
  <url>/my-first-s3-bucket-1-0000000001/</url>
  <id>/my-first-s3-bucket-1-0000000001/</id>
  <fetchUrl>/my-first-s3-bucket-1-0000000001/</fetchUrl>
  <repItemType>aspire/bucket</repItemType>
  <docType>container</docType>
  <snapshotUrl>/my-first-s3-bucket-1-0000000001/</snapshotUrl>
  <displayUrl/>
  <owner>andresau</owner>
  <lastModified>2013-11-06T03:51:55Z</lastModified>
  <acls>
    <acl access="allow" domain="my-first-s3-bucket-1-0000000001" entity="group" fullname="my-first-s3-bucket-1-0000000001\35b775a3073908cd529e174c2c59bd502c5eb986c5406029c2ced70b4e0ea4a7" name="35b775a3073908cd529e174c2c59bd502c5eb986c5406029c2ced70b4e0ea4a7" scope="global"/>
    <acl access="allow" domain="my-first-s3-bucket-1-0000000001" entity="group" fullname="my-first-s3-bucket-1-0000000001\41f7bbb1645b2b2a1d2134266f99695fc44e4735ca3725b457e373adcf31d9f0" name="41f7bbb1645b2b2a1d2134266f99695fc44e4735ca3725b457e373adcf31d9f0" scope="global"/>
  </acls>
  <sourceName>amazonS3</sourceName>
  <sourceType>s3</sourceType>
  <connectorSource>
    <url>/my-first-s3-bucket-1-0000000001/</url>
    <accessKey>AKIAIQRRCLVVIKV4ZYHQ</accessKey>
    <secretKey>encrypted:DB77B9869844B3651094CB293E842BD25E8F942CA94A09FE9D078A7AE762FB05F00A0929E7EDA0592CF73A47891DF3C3</secretKey>
    <indexFolders>true</indexFolders>
    <scanSubFolders>true</scanSubFolders>
    <fileNamePatterns/>
    <displayName>amazonS3</displayName>
  </connectorSource>
  <action>add</action>
  <hierarchy>
    <item id="2DEC5E0DACC737196DDA0C7ADA787EAF" level="2" name="my-first-s3-bucket-1-0000000001" type="aspire/bucket" url="/my-first-s3-bucket-1-0000000001/">
      <ancestors>
        <ancestor id="6666CD76F96956469E7BE39D750CC7D9" level="1" parent="true" type="aspire/server" url="/"/>
      </ancestors>
    </item>
    <itemType>container</itemType>
  </hierarchy>
  <content>my-first-s3-bucket-1-0000000001</content>
</doc>


