The Azure Data Lake Connector crawls content from an Azure Data Lake Store account, starting either at the root or at specified paths.
An Azure Data Lake makes it easy for developers, data scientists, and analysts to store data of any size, shape, and speed, for all types of processing and analytics across platforms. It removes the complexities of storing data while making it faster to get up and running with batch, streaming, and interactive analytics. Azure Data Lake works with existing IT investments and integrates seamlessly with operational stores and data warehouses so you can extend current data applications.
For more information about the Azure Data Lake Store, see the official Microsoft Overview of Azure Data Lake Store documentation.
The Azure Data Lake Connector supports crawling the following repositories:
Repository | Version | Connector Version |
---|---|---|
Azure Data Lake Storage | Gen1 | 5.1 |
Before installing the Azure Data Lake connector, make sure that the following requirements are met.
User Account Requirements
To access the Azure Data Lake, an Application Account with sufficient privileges must be supplied. The following fields must be configured to set up a new Data Lake connection:
Fully Qualified Domain Name (FQDN): e.g. [yourdomain].azuredatalakestore.net. No HTTP prefix is required.
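Because the connector expects a bare FQDN with no HTTP prefix, it can help to normalize the value before configuring the connection. The sketch below is illustrative only and not part of the connector; the helper name and validation rules are assumptions based on the format described above.

```python
from urllib.parse import urlparse

ADLS_SUFFIX = ".azuredatalakestore.net"

def normalize_adls_fqdn(value: str) -> str:
    """Normalize user input to a bare ADLS Gen1 FQDN.

    Strips an accidental http(s):// prefix and trailing slash, then
    verifies the expected [yourdomain].azuredatalakestore.net shape.
    (Hypothetical helper, for illustration only.)
    """
    value = value.strip().rstrip("/")
    if "://" in value:
        # Drop the scheme: the connector expects no HTTP prefix.
        value = urlparse(value).netloc
    if not value.endswith(ADLS_SUFFIX) or value == ADLS_SUFFIX:
        raise ValueError(f"expected [yourdomain]{ADLS_SUFFIX}, got {value!r}")
    return value

print(normalize_adls_fqdn("https://contoso.azuredatalakestore.net/"))
# → contoso.azuredatalakestore.net
```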
The Azure Data Lake connector has the following features:

Name | Supported |
---|---|
Content Crawling | Yes |
Identity Crawling | Yes |
Snapshot-based Incrementals | Yes |
Non-snapshot-based Incrementals | No |
Document Hierarchy | Yes |
The Azure Data Lake connector is able to crawl the following objects:
Name | Type | Relevant Metadata | Content Fetch and Extraction | Description |
---|---|---|---|---|
Folders | container | N/A | No | The directories of the files. Each directory is scanned to retrieve further subfolders or documents |
Files | document | Yes | Yes | Files stored in folders/subfolders |
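The container/document split above can be sketched as a recursive scan: each folder yields its subfolders (to scan further) and its files (to fetch). The following sketch runs against a local directory as a stand-in for the Data Lake filesystem; the function and labels are illustrative, not the connector's API.

```python
import os
import tempfile

def scan(path: str):
    """Recursively yield ('container', dir) and ('document', file) entries,
    mirroring how each directory is scanned for subfolders and documents."""
    for entry in sorted(os.listdir(path)):
        full = os.path.join(path, entry)
        if os.path.isdir(full):
            yield ("container", full)
            yield from scan(full)  # descend into the subfolder
        else:
            yield ("document", full)

# Demo against a temporary local tree standing in for the Data Lake store
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "reports"))
open(os.path.join(root, "reports", "q1.csv"), "w").close()
for kind, p in scan(root):
    print(kind, os.path.relpath(p, root))
```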