
The Confluence connector crawls content from any Confluence content repository, retrieving spaces, pages, blogs, attachments, and comments.

The connector uses the Confluence REST API to crawl Confluence content, and both Confluence On-Premise and Cloud installations are supported.
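To make the API usage concrete, here is a minimal sketch that pages through the space list using the Server/Data Center flavor of the REST API (the /rest/api/space endpoint). The base URL and credentials are hypothetical placeholders, not connector settings.

```python
# Minimal sketch: page through /rest/api/space on a Server/Data Center
# instance. BASE_URL and AUTH are hypothetical placeholders.
import requests

BASE_URL = "https://confluence.example.com"
AUTH = ("crawl-account", "secret")

def list_spaces(limit=25):
    """Yield every space, following the API's start/limit paging."""
    start = 0
    while True:
        resp = requests.get(f"{BASE_URL}/rest/api/space",
                            params={"start": start, "limit": limit},
                            auth=AUTH, timeout=30)
        resp.raise_for_status()
        page = resp.json()
        yield from page["results"]
        if page["size"] < limit:   # a short page means we reached the end
            return
        start += limit

for space in list_spaces():
    print(space["key"], space["name"])
```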

Introduction


Some of the features of the Confluence connector include:

  • Performs incremental crawling (so that only new or updated documents are indexed)
  • Fetches access control lists (ACLs) for document-level security (see the sketch after this list)
  • Is search engine independent
  • Runs from any machine with access to the given Confluence URLs
  • Supports HTTP and HTTPS
  • Supports early binding mechanisms
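For the ACL feature above, a hedged sketch of the underlying call: Confluence exposes per-content restrictions through /rest/api/content/{id}/restriction/byOperation. The response field names below follow the Server flavor of the API and should be treated as assumptions for other versions.

```python
# Hedged sketch: fetch the users/groups allowed to read one piece of content.
# Response field names follow the Server REST API and may differ on Cloud.
import requests

BASE_URL = "https://confluence.example.com"   # hypothetical placeholders
AUTH = ("crawl-account", "secret")

def read_acl(content_id):
    resp = requests.get(
        f"{BASE_URL}/rest/api/content/{content_id}/restriction/byOperation",
        auth=AUTH, timeout=30)
    resp.raise_for_status()
    read = resp.json().get("read", {}).get("restrictions", {})
    users = [u.get("username") for u in read.get("user", {}).get("results", [])]
    groups = [g.get("name") for g in read.get("group", {}).get("results", [])]
    return users, groups   # empty lists mean no explicit read restriction

print(read_acl("12345"))   # "12345" is an invented content id
```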

For a complete tutorial on Confluence, see here.

Summary of Confluence organization

This is the hierarchy of spaces, pages, blogs, attachments, and comments in Confluence (a traversal sketch follows the list):

  • Dashboard: The first page you see when you log in to Confluence; it provides quick access to the top-level features of Confluence.
    • Spaces: Spaces are containers that group content related to a specific theme or topic. Spaces contain pages and blogs.
      • Pages: Like a web page or a page in a book, pages are places where you write content related to a specific theme or topic. Pages can contain attachments and comments.
        • Attachments: Documents (images, files, videos, etc.) that are embedded in a page or blog and contain information relevant to the topic the page or blog covers.
        • Comments: Remarks users leave on a page or blog to share information with other users.
      • Blogs: Discrete entries ("posts") typically displayed in reverse chronological order. Confluence blogs can contain attachments and comments.
        • Attachments
        • Comments
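The sketch below walks that hierarchy with the Server REST API: spaces via /rest/api/space, each space's pages and blog posts via /rest/api/space/{key}/content/{type}, and each item's attachments and comments via the /child/... endpoints. Paging is omitted for brevity, and the base URL and credentials are placeholders.

```python
# Sketch: traverse space -> pages/blogposts -> attachments/comments.
# Only the first page of each listing is fetched, for brevity.
import requests

BASE_URL = "https://confluence.example.com"   # hypothetical placeholders
AUTH = ("crawl-account", "secret")

def get_json(path, **params):
    resp = requests.get(f"{BASE_URL}{path}", params=params, auth=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()

for space in get_json("/rest/api/space", limit=25)["results"]:
    for ctype in ("page", "blogpost"):        # the two content types in a space
        items = get_json(f"/rest/api/space/{space['key']}/content/{ctype}",
                         limit=25)["results"]
        for item in items:
            atts = get_json(f"/rest/api/content/{item['id']}/child/attachment")
            cmts = get_json(f"/rest/api/content/{item['id']}/child/comment")
            print(space["key"], ctype, item["title"],
                  f"{len(atts['results'])} attachments,",
                  f"{len(cmts['results'])} comments")
```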

Environment and Access Requirements


Repository Support

The Aspire Confluence connector was created and tested against Confluence version 7.19.2.

Before installing the Confluence connector, make sure that:

  • Confluence is up and running.
  • The Confluence REST API is available.
  • You have all the certificates needed to log into the site if your Confluence instance uses a secure connection (HTTPS).
  • You have a Confluence client login with sufficient permissions to crawl the documents to be indexed (at least Admin-level permissions).
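A quick way to confirm several of the points above at once is to call an authenticated endpoint over HTTPS. The sketch below uses /rest/api/user/current, assuming it is available on the target instance, and shows where a private CA bundle plugs in for the certificate requirement.

```python
# Pre-flight sketch: verify the REST API answers over HTTPS and that the
# crawl account can authenticate. Endpoint and paths are assumptions.
import requests

def preflight(base_url, auth, ca_bundle=True):
    # Pass ca_bundle="/path/to/ca-bundle.pem" when the instance uses a
    # certificate that is not in the system trust store.
    resp = requests.get(f"{base_url}/rest/api/user/current",
                        auth=auth, verify=ca_bundle, timeout=30)
    resp.raise_for_status()
    print("Authenticated as:", resp.json().get("displayName"))

preflight("https://confluence.example.com", ("crawl-account", "secret"))
```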

Account Privileges

To access Confluence, a user account with sufficient privileges must be supplied. It is recommended that this account be a site administrator.

Environment Requirements

There are no special environment requirements.

Framework and Connector Features


Framework Features

Name                             | Supported
Content Crawling                 | yes
Identity Crawling                | yes
Snapshot-based Incrementals      | yes
Non-snapshot-based Incrementals  | no
Document Hierarchy               | yes

Connector Features

The connector can operate in two modes: full and incremental.

Important: The data submitted to Aspire by this connector depends entirely on the SQL that is configured. It is therefore quite possible to submit all of the data in an incremental crawl, or only some of the data in a full crawl.

Full Mode

In full mode, the connector executes a single SQL database statement and submits each row returned for processing in Aspire.
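As an illustration only, the sketch below mimics full mode with an invented pages table and an in-memory database; the real statement is whatever the job is configured with.

```python
# Hypothetical full-mode sketch: one configured SELECT, every returned row
# handed off for processing. Table and column names are invented.
import sqlite3

FULL_SQL = "SELECT id, title, body FROM pages"

def submit_to_aspire(row):
    """Stand-in for the connector submitting the row to the Aspire pipeline."""
    print("submitting:", row)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (id INTEGER, title TEXT, body TEXT)")
conn.execute("INSERT INTO pages VALUES (1, 'Home', 'Welcome')")

for row in conn.execute(FULL_SQL):
    submit_to_aspire(row)
```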

Incremental Mode

In incremental mode, there are three stages of processing: preprocessing, crawling, and post-processing.

1 - Pre-processing

(Optional) This stage runs a SQL statement against the database to mark the rows to crawl (i.e., those that have changed since the previous run).

2 - Crawling

This stage (similar to full mode) executes a single SQL database statement and submits each row returned for processing in Aspire. Typically, the result set is a subset of the full data that may be filtered using information updated in the (optional) pre-processing stage.

3 - Post-processing

(Optional) Each row of data submitted to Aspire can execute a SQL statement to update its status in the database. This may be to reset a flag set in the (optional) pre-processing stage, thereby denoting the item as processed. Different SQL can be executed for rows that were successfully processed versus ones that were not.
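Putting the three stages together, here is a hedged end-to-end sketch with invented table, flag, and column names; the real pre-, crawl-, and post-statements are whatever the job is configured with.

```python
# Hypothetical incremental sketch: mark changed rows, crawl the marked subset,
# then record success/failure per row. All names are invented for illustration.
import sqlite3

PRE_SQL   = "UPDATE pages SET crawl_flag = 1 WHERE modified > :last_run"
CRAWL_SQL = "SELECT id, title, body FROM pages WHERE crawl_flag = 1"
POST_OK   = "UPDATE pages SET crawl_flag = 0 WHERE id = :id"   # processed
POST_ERR  = "UPDATE pages SET crawl_flag = 2 WHERE id = :id"   # retry later

def process(row):
    """Stand-in for Aspire processing of one row."""
    print("processing:", row)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (id INTEGER, title TEXT, body TEXT,"
             " modified TEXT, crawl_flag INTEGER DEFAULT 0)")
conn.execute("INSERT INTO pages VALUES (1, 'Home', 'Welcome', '2024-06-01', 0)")

conn.execute(PRE_SQL, {"last_run": "2024-01-01"})        # 1 - pre-processing
for row in conn.execute(CRAWL_SQL).fetchall():           # 2 - crawling
    try:
        process(row)
        conn.execute(POST_OK, {"id": row[0]})            # 3 - post-processing
    except Exception:
        conn.execute(POST_ERR, {"id": row[0]})
conn.commit()
```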

Click here to find out about the various crawling options.

Content Crawled


The content retrieved by the connector is defined entirely using SQL statements, so you can select all or subsets of columns from one or more tables. Initially, the data is inserted into Aspire using the returned column names, but this may be changed by further Aspire processing.

The connector is able to crawl the following objects:

Name      | Type | Relevant Metadata | Content Fetch & Extraction | Description
database  | row  | table fields      | NA                         | Fields requested by SQL

Limitations


No limitations defined
