The biggest change in Aspire 3.0 is in the way the connectors work. They now use an external database (MongoDB) to hold all of the crawling information, such as document URLs, status, statistics, snapshots (for incremental crawls), and logs. This allows the connectors to work in a distributed fashion from the architectural design up.
All of the connectors now run under the same principles and use the same logic, so each connector is more like a Repository Access Provider. We keep them as simple as possible, rather than building each one as a complex, multi-threaded crawling application. The complexity of distributed crawling and multi-threading lives in the Connector Framework.
What's next?
Responsibilities that connector developers implement:
- Scan the repository document containers to discover new documents to process
- Populate document metadata
- Fetch document content
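The three responsibilities above can be sketched as a small interface. The names below (`RepositoryAccessProvider`, `scan`, `populateMetadata`, `fetchContent`) are illustrative only, not the actual Aspire API; the point is that the connector only knows how to talk to its repository, while threading, persistence, and snapshots are handled by the Connector Framework.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical shape of a Repository Access Provider (illustrative,
// not the real Aspire API).
interface RepositoryAccessProvider {
    List<String> scan(String containerUrl);            // discover documents
    Map<String, String> populateMetadata(String url);  // fill in metadata
    byte[] fetchContent(String url);                   // get raw content
}

// A toy provider backed by an in-memory map, standing in for a real
// repository such as a file share or a CMS.
class InMemoryProvider implements RepositoryAccessProvider {
    private final Map<String, byte[]> repo = new HashMap<>();

    InMemoryProvider() {
        repo.put("/docs/a.txt", "hello".getBytes());
        repo.put("/docs/b.txt", "world".getBytes());
    }

    public List<String> scan(String containerUrl) {
        List<String> found = new ArrayList<>();
        for (String url : repo.keySet())
            if (url.startsWith(containerUrl)) found.add(url);
        Collections.sort(found); // deterministic discovery order
        return found;
    }

    public Map<String, String> populateMetadata(String url) {
        Map<String, String> md = new HashMap<>();
        md.put("url", url);
        md.put("size", String.valueOf(repo.get(url).length));
        return md;
    }

    public byte[] fetchContent(String url) {
        return repo.get(url);
    }
}
```

Because the interface carries no crawl-state or threading concerns, a provider like this stays small even when the underlying repository is complex.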
If you want to create your connector right away, go to Write Your Own Connector from Scratch.
Responsibilities of the Connector Framework (you don't have to worry about this):
- Multi-threaded processing
- Distribute the crawl processing
- Store and fetch documents from the database
- Maintain a snapshot for incremental crawling (adding, updating or deleting documents)
- Handle statistics
- Start, Pause, Stop, Resume the crawl
- Send the documents to the respective workflows for processing and search engine indexing
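The division of labor can be sketched as a minimal crawl driver. This is a toy illustration of the framework side only, not the real Aspire code: the driver owns the thread pool and calls back into connector-supplied functions for the repository-specific steps.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Function;

// Toy sketch of a framework-side crawl driver (illustrative only):
// multi-threading lives here, while the connector supplies the
// scan and process callbacks.
class CrawlDriver {
    static List<String> crawl(String rootContainer,
                              Function<String, List<String>> scan,
                              Function<String, String> process,
                              int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            // 1. Ask the connector to scan the container for documents.
            List<Future<String>> futures = new ArrayList<>();
            for (String doc : scan.apply(rootContainer))
                // 2. Hand each document to a worker thread.
                futures.add(pool.submit(() -> process.apply(doc)));
            // 3. Collect results in discovery order.
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures)
                results.add(f.get());
            return results;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```

In the real framework this loop would also persist state to MongoDB, compare against the incremental snapshot, and forward documents to the workflow; the sketch keeps only the threading split to show where the connector's job ends and the framework's begins.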
The following diagram illustrates how the Connector Framework interacts with the connector implementation in order to run a crawl:
If you want to learn more about the Connector Framework, check out NoSQL Connector Framework Overview.