The Azure Data Lake Connector crawls content from an Azure Data Lake Storage Gen2 account, covering either all file systems or a specified file system and paths.



Introduction


An Azure Data Lake makes it easy for developers, data scientists, and analysts to store data of any size, shape, and speed, and to run all types of processing and analytics across platforms. It removes the complexities of storing data while making it faster to get up and running with batch, streaming, and interactive analytics. Azure Data Lake Storage Gen2 is a comprehensive, scalable, and efficient data lake solution designed for big data analytics, and it provides a hierarchical file system. It brings the capabilities of Azure Data Lake Storage Gen1 together with Azure Blob storage, and it integrates with existing operational stores and data warehouses so you can extend current data applications.

For more information about Azure Data Lake Storage Gen2, see Microsoft's official Overview of Azure Data Lake Storage Gen2 documentation.

Environment and Access Requirements


Repository Support

The Azure Data Lake Connector supports crawling the following repositories:

Repository                   | Version | Connector Version
-----------------------------|---------|------------------
Azure Data Lake Storage Gen2 | All     | 5.01

Environment Requirements

Before installing the Azure Data Lake connector, make sure that:

  • You have created the necessary Service-to-Service Application account with pertinent access to your Data Lake.
  • The Azure Data Lake is up and running.
  • You have Admin rights to allow Read and Execute permissions on the folders to crawl.
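Azure Data Lake Storage Gen2 uses POSIX-style ACLs: to read a file, the security principal needs Execute (x) on every ancestor folder and Read (r) on the file itself. The sketch below illustrates that evaluation with hypothetical paths and permission strings; it is not connector code.

```python
# Illustration of POSIX-style ACL evaluation as used by ADLS Gen2:
# reading a file requires execute (x) on every ancestor folder and
# read (r) on the file itself. Paths and ACLs here are hypothetical.

def can_read_file(acls: dict, path: str) -> bool:
    """acls maps each path to its permission string, e.g. 'r-x'."""
    parts = path.strip("/").split("/")
    # Every ancestor directory needs the execute bit to be traversable.
    for i in range(len(parts) - 1):
        folder = "/" + "/".join(parts[: i + 1])
        if "x" not in acls.get(folder, ""):
            return False
    # The file itself needs the read bit.
    return "r" in acls.get("/" + "/".join(parts), "")

acls = {
    "/data": "r-x",
    "/data/raw": "--x",          # traversable but not listable
    "/data/raw/file.csv": "r--",
}
print(can_read_file(acls, "/data/raw/file.csv"))  # True
print(can_read_file(acls, "/data/secret.csv"))    # False (no ACL entry)
```

This is why the requirement above calls for both Read and Execute: Execute alone lets the crawler traverse a folder, but Read is needed to fetch the files inside it.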


User Account Requirements

To access the Azure Data Lake, an Application Account with sufficient privileges must be supplied. The following fields must be configured in order to set up a new Data Lake connection:

  • Application ID
  • Application Key (Client Secret)
  • Tenant ID

Get an Application Account

Follow these steps to obtain the required credentials:

  1. See Microsoft's Use portal to create an Azure Active Directory application and service principal that can access resources documentation for the steps to create an Application ID, its key (Client Secret), and Tenant ID. Make sure to write down your Application Key at the time of creation; it will not be shown again after you exit the portal. Important: Make sure to grant the necessary Reader access to your application; see Microsoft's Assign an Azure role documentation. This connector uses OAuth 2.0 authorization via a token endpoint supplied by Azure; see Microsoft's Get the OAuth 2.0 token endpoint (only for Java-based applications).
  2. Grant at least Read and Execute access to the files and folders to crawl; see Microsoft's Manage Access Control documentation. To recursively apply the same parent-folder permissions to sub-folders, see the "Apply an ACL recursively" section. If the application does not have access to a specific folder, Aspire will log a warning during the crawl process.
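The steps above yield a Tenant ID, an Application (client) ID, and a Client Secret, which the connector exchanges for an OAuth 2.0 bearer token at the Azure AD token endpoint using the client-credentials grant. The sketch below builds that request with the standard library only; the tenant, client, and secret values are placeholders, and an actual call requires valid credentials and network access.

```python
# Sketch of the OAuth 2.0 client-credentials token request performed
# against the Azure AD token endpoint. Tenant/client/secret values are
# placeholders; a real exchange needs valid credentials and network access.
import urllib.parse
import urllib.request

def build_token_request(tenant_id: str, client_id: str, client_secret: str):
    """Return the token endpoint URL and the URL-encoded form body."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # Scope for Azure Storage (Data Lake Storage Gen2) access.
        "scope": "https://storage.azure.com/.default",
    }).encode()
    return url, body

url, body = build_token_request("my-tenant-id", "my-app-id", "my-secret")
# A real request would then be sent with:
#   urllib.request.urlopen(urllib.request.Request(url, data=body))
print(url)
```

The returned bearer token is then presented on each Data Lake request until it expires, at which point the same exchange is repeated.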


Framework and Connector Features


Framework Features

Name                            | Supported
--------------------------------|----------------------------------------
Content Crawling                | Yes
Identity Crawling               | Yes (use the Azure Identity Connector)
Snapshot-based Incrementals     | Yes
Non-snapshot-based Incrementals | No
Document Hierarchy              | Yes

Connector Features

The Azure Data Lake connector has the following features:

  • Performs incremental crawling (so that only new/updated documents are indexed)
  • Fetches Object ACLs (Access Control Lists) for Azure document-level security
  • Runs from any machine with access to the given Data Lake source
  • Service-to-Service Authentication via OAuth 2.0 token
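Snapshot-based incremental crawling can be pictured as diffing the previous crawl's snapshot against the current listing, so that only added, updated, and deleted documents are re-processed. The path-to-ETag maps below are illustrative, not the connector's internal snapshot format.

```python
# Simplified view of snapshot-based incremental crawling: compare the
# previous snapshot (path -> ETag) with the current listing and emit
# only the changes. The snapshot format here is illustrative.

def diff_snapshots(previous: dict, current: dict):
    added   = [p for p in current if p not in previous]
    updated = [p for p in current if p in previous and current[p] != previous[p]]
    deleted = [p for p in previous if p not in current]
    return added, updated, deleted

previous = {"/fs/a.csv": "etag1", "/fs/b.csv": "etag2", "/fs/c.csv": "etag3"}
current  = {"/fs/a.csv": "etag1", "/fs/b.csv": "etag9", "/fs/d.csv": "etag4"}

added, updated, deleted = diff_snapshots(previous, current)
print(added)    # ['/fs/d.csv']
print(updated)  # ['/fs/b.csv']
print(deleted)  # ['/fs/c.csv']
```

Unchanged documents (`/fs/a.csv` above) are skipped entirely, which is what makes incremental crawls much cheaper than full re-crawls.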


Content Crawled


The Azure Data Lake connector can crawl the following objects:

Name        | Type      | Relevant Metadata | Content Fetch and Extraction | Description
------------|-----------|-------------------|------------------------------|------------------------------------------
File System | container |                   | N/A                          | Contains folders and files
Folders     | container |                   | N/A                          | The directories of the files; each directory is scanned to retrieve more subfolders or documents
Files       | document  |                   | Yes                          | Files stored in folders/subfolders
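The hierarchy in the table above (file system → folders → files) implies a recursive traversal: each container is scanned for subfolders and documents. The sketch below walks an in-memory tree that stands in for real Data Lake listings; it is an assumed structure, not the connector's API.

```python
# Minimal sketch of the crawl hierarchy: a file system contains folders,
# and each folder is scanned for subfolders and files. The nested-dict
# tree stands in for real Data Lake directory listings.

def crawl(node: dict, path: str = "") -> list:
    """Depth-first walk; dict values are folders, None values are files."""
    items = []
    for name, child in node.items():
        full = f"{path}/{name}"
        if isinstance(child, dict):          # folder: recurse into it
            items.append(("folder", full))
            items.extend(crawl(child, full))
        else:                                # file: fetch/extract content
            items.append(("file", full))
    return items

filesystem = {"raw": {"2024": {"a.csv": None}}, "readme.txt": None}
for kind, path in crawl(filesystem):
    print(kind, path)
```

In the real connector, each "folder" visit is a directory listing call and each "file" visit triggers content fetch and extraction, as indicated in the table.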