
The biggest change in Aspire 3.1 is the way the connectors work: they now use an external database (MongoDB) to hold all of the crawl information, such as document URLs, statuses, statistics, snapshots (for incremental crawls), and logs. The idea behind this change is to make the connectors distributed by architectural design.
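Conceptually, every document the connector discovers becomes a record in MongoDB that any worker node can pick up. The sketch below, written with the MongoDB Java driver, shows what such a crawl-queue entry might look like; the database, collection, and field names here are illustrative assumptions, not the framework's actual schema.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class CrawlStateExample {
    public static void main(String[] args) {
        // Connect to the MongoDB instance that backs the connector framework.
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> queue =
                    client.getDatabase("aspire").getCollection("crawlQueue");

            // One entry per discovered document: its URL, crawl status, and
            // bookkeeping fields. Collection and field names are illustrative,
            // not Aspire's actual schema.
            Document entry = new Document("url", "http://repo.example.com/docs/42")
                    .append("status", "AVAILABLE")   // e.g. AVAILABLE -> IN_PROGRESS -> COMPLETED
                    .append("action", "add")         // add, update, or delete (incremental)
                    .append("attempts", 0)
                    .append("lastModified", System.currentTimeMillis());
            queue.insertOne(entry);
        }
    }
}
```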

All connectors now run under the same principles and share the same logic, so each connector acts more like a Repository Access Provider than a complex, multi-threaded crawling application. This keeps each connector as simple as possible: the complexity of distributed crawling and multi-threading lives in the Connector Framework.

What's next?


Responsibilities that connector developers have to implement (sketched in the example after this list):

  • Scan the repository document containers to discover new documents to process
  • Populate document metadata
  • Fetch document content
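To make these three responsibilities concrete, here is a minimal sketch of a file-share connector. The RepositoryItem class and the scan/populate/fetch method names are hypothetical stand-ins rather than the framework's actual API (see Write your own Connector From Scratch for the real interfaces); the point is that the connector only discovers, describes, and streams documents, and leaves everything else to the framework.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

public class FileShareConnector {

    /** Hypothetical item handle: a URL plus metadata. */
    public static class RepositoryItem {
        public final String url;
        public final boolean isContainer;
        public final Map<String, String> metadata = new HashMap<>();
        public RepositoryItem(String url, boolean isContainer) {
            this.url = url;
            this.isContainer = isContainer;
        }
    }

    // 1. Scan: discover the children of a container. The framework, not the
    //    connector, decides when and where each child is processed.
    public void scan(RepositoryItem container, Consumer<RepositoryItem> collector) {
        File dir = new File(java.net.URI.create(container.url));
        File[] children = dir.listFiles();
        if (children == null) return;   // not a directory, or unreadable
        for (File f : children) {
            collector.accept(new RepositoryItem(f.toURI().toString(), f.isDirectory()));
        }
    }

    // 2. Populate: attach repository metadata to a single item.
    public void populate(RepositoryItem item) {
        File f = new File(java.net.URI.create(item.url));
        item.metadata.put("name", f.getName());
        item.metadata.put("size", Long.toString(f.length()));
        item.metadata.put("lastModified", Long.toString(f.lastModified()));
    }

    // 3. Fetch: open a stream over the item's content for the downstream
    //    workflow and indexing stages.
    public InputStream fetch(RepositoryItem item) throws IOException {
        return new FileInputStream(new File(java.net.URI.create(item.url)));
    }
}
```

Notice that the connector holds no threading, queueing, or state-tracking logic; it is a pure repository access layer.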

If you want to start creating your connector right away, go to Write your own Connector From Scratch.

Responsibilities of the Connector Framework (you don't have to worry about these):

  • Process documents in multiple threads
  • Distribute the crawl processing across nodes
  • Store and fetch documents from the database
  • Maintain a snapshot for incremental crawling (adding, updating, or deleting documents; see the conceptual sketch after this list)
  • Handle statistics
  • Start, pause, stop, and resume the crawl
  • Send the documents to the respective workflows for processing and search engine indexing
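As an illustration of the incremental-crawling bullet above, the framework can compare the snapshot from the previous crawl against the current scan to classify each document as an add, update, or delete. The sketch below shows the general idea only; it is not Aspire's actual snapshot implementation, and the URL/signature values are made up.

```java
import java.util.HashMap;
import java.util.Map;

public class SnapshotDiff {
    public static void main(String[] args) {
        // Previous snapshot: url -> content signature from the last crawl.
        Map<String, String> previous = new HashMap<>();
        previous.put("http://repo/a", "sig-1");
        previous.put("http://repo/b", "sig-2");

        // Current scan of the repository.
        Map<String, String> current = new HashMap<>();
        current.put("http://repo/a", "sig-1");   // unchanged
        current.put("http://repo/b", "sig-9");   // modified
        current.put("http://repo/c", "sig-3");   // new

        for (Map.Entry<String, String> e : current.entrySet()) {
            String old = previous.remove(e.getKey());
            if (old == null) {
                System.out.println("add:    " + e.getKey());
            } else if (!old.equals(e.getValue())) {
                System.out.println("update: " + e.getKey());
            }
        }
        // Anything left in the previous snapshot no longer exists.
        for (String url : previous.keySet()) {
            System.out.println("delete: " + url);
        }
    }
}
```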


The following diagram illustrates how the Connector Framework interacts with the connector implementation to run a crawl:

[Diagram: Connector Framework / connector implementation interaction during a crawl]

If you want to learn more about the Connector Framework, check out NoSQL Connector Framework Overview.

