...

Instead, it uses a SAX handler to read data from the input stream and publishes XML records to the sub-job pipeline as they are found. This makes it very fast, with very low memory requirements.
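
For illustration, suppose the rootNode setting described under Configuration below is set to /results/hits (the example path used in that table). The record element names in this sketch (<hit>, <id>, <title>) are hypothetical, not taken from this page:

    <results>
      <hits>
        <hit>
          <id>1</id>
          <title>First record</title>
        </hit>
        <hit>
          <id>2</id>
          <title>Second record</title>
        </hit>
      </hits>
    </results>

Each <hit> element is published to the sub-job pipeline as its own XML record as soon as it is encountered in the stream, so the complete file never has to be held in memory.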

 

XML Sub Job Extractor

Factory Name:  com.searchtechnologies.aspire:aspire-xml-files
subType:       xmlSubJobExtractor
Inputs:        object['contentStream'] containing a data stream, or
               object['contentBytes'] containing the XML to process.
               NOTE: A previous job (typically FetchURL) must have opened the input stream.
Outputs:       An AspireObject for each sub-job, containing the XML of the individual XML record,
               published to the configured sub-job pipeline manager.

...

Configuration

Each element is listed below with its type, default value, and description.

branches (default: None)
    The configuration of the pipeline to publish to. See below.

maxSubJobs (integer, default: 0 = all)
    The maximum number of sub-jobs to generate. If there are more possible jobs in the input XML file, they will be ignored.

characterEncoding (String, default: UTF-8)
    The character encoding of the XML file to be read, if not UTF-8.

rootNode (String, default: None)
    The root node which contains the sub-jobs to publish. If not specified, the root node of the entire XML tree is considered to be the root node.

    This value should be in path format, for example /results/hits. This will publish as sub-jobs all of the child elements which occur within the <results>/<hits> tag.

    Note: This is not an XPath, just a path which represents a named node within the XML hierarchy. It should start with a /, which will be added if missing.

cleanse (boolean, default: true)
    Set to true if you want to clean the XML content of non-readable characters (e.g. ASCII code 15).

honorDTD (boolean, default: false)
    Set to true if you want the XML's DTD to be fetched.

batchJobs (2.1 Release) (boolean, default: false)
    Set to true if you want the extractor to create a batch for the input job and add child jobs to that batch.

maxBatchBytes (2.1 Release) (long, default: unlimited)
    When using batching, limits the data in a batch based on the XML representation of the AspireObject published with the job. Once the amount of data added to the batch exceeds the value given, the batch will be closed and a new batch created. The limit may be specified in the form 1, 1b, 1k, 1kb, 1m, 1mb, 1g, 1gb.

maxBatchBytesJSON (2.1 Release) (boolean, default: false)
    When limiting the batch size, use the JSON representation of the AspireObject published with the job when calculating the batch size.
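
A minimal configuration sketch is shown below. The factory name, subType, and configuration elements are taken from the tables above, with the factoryName written exactly as listed there. The component name, the branch event name, and the pipeline manager reference are illustrative assumptions, not values from this page; adjust them to match your pipelines (the branches configuration itself is described below).

    <component name="XMLSubJobExtractor" subType="xmlSubJobExtractor"
               factoryName="com.searchtechnologies.aspire:aspire-xml-files">
      <!-- Publish the children of <results>/<hits> as sub-jobs -->
      <rootNode>/results/hits</rootNode>
      <!-- 0 (the default) publishes all sub-jobs found in the input -->
      <maxSubJobs>0</maxSubJobs>
      <characterEncoding>UTF-8</characterEncoding>
      <cleanse>true</cleanse>
      <honorDTD>false</honorDTD>
      <branches>
        <!-- The event name and pipeline manager path below are assumptions for this sketch -->
        <branch event="onSubJob" pipelineManager="/myApp/subJobPipelineManager"/>
      </branches>
    </component>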

...