Launch Aspire (if it's not already running).
For details on using the Aspire Content Source Management page, refer to Admin UI.
To specify exactly what to crawl, we will need to create a new "Content Source".
To create a new content source:
In the "General" tab in the Content Source Configuration window, specify basic information for the content source:
After selecting a schedule type, specify the details, if applicable (a schedule example follows the notes below):
Info: You can add more schedules by clicking the Add New option, and rearrange the order of the schedules.
Info: If you want to disable the content source, just clear the "Enable" checkbox. This is useful if the server will be under maintenance and no crawls are wanted during that period of time.
Note: Real Time and Cache Groups crawls are available depending on the connector.
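For example, if your version of Aspire exposes an advanced schedule type that accepts a Quartz-style cron expression (this option and its exact name vary by version, so treat the following as an illustrative sketch), a nightly crawl could be expressed as:

```
# Hypothetical Quartz cron expression; assumes your Aspire version offers
# a cron-based schedule option.
# Fields: second minute hour day-of-month month day-of-week
0 0 2 * * ?    # start the crawl every day at 2:00 AM
```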
In the "Connector" tab, specify the connection information to crawl the Gremlin.
With this crawl type option, it is important not to use the "Limit rows" and "Performance sampling" options; if you need this functionality, add it to the query itself, e.g. 'SELECT c._etag FROM c ORDER BY c._rid OFFSET 1 LIMIT 3'. If you query for specific values, you must include the 'id' field contained in each vertex. So if the field to query is "firstName", your query should look like this: 'SELECT table.id, table.firstName FROM table'.
IMPORTANT: If you don't include the id field, the connector will fail with an AspireException indicating that the id field is needed. Alternatively, you can use the symbol '*' instead of the 'id' field, in which case the query should look like this: SELECT * FROM tableName.
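For reference, the accepted and rejected query shapes described above can be consolidated as follows (table and field names are placeholders):

```
-- Paging belongs inside the query, not in "Limit rows"/"Performance sampling"
SELECT c._etag FROM c ORDER BY c._rid OFFSET 1 LIMIT 3

-- Selecting specific fields: the 'id' field must be included
SELECT table.id, table.firstName FROM table

-- Alternatively, '*' satisfies the id requirement
SELECT * FROM tableName

-- Rejected: no 'id' field and no '*'; the connector raises an AspireException
SELECT table.firstName FROM table
```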
In the "Workflow" tab, specify the workflow steps for the jobs that come out of the crawl. Drag and drop rules to determine which steps should an item follow after being crawled. This rules could be where to publish the document or transformations needed on the data before sending it to a search engine. See Workflow for more information.
After completing these steps, click Save and then Done, and you will be sent back to the Home Page.
The status will show RUNNING while the crawl is in progress, and CRAWLED when it has finished.
If there are errors, you will get a clickable "Error" flag that will take you to a detailed error message page.