Use Case: Querying Data From Azure Blob Storage

Problem

Querying large volumes of data from an external location, such as Azure Blob Storage, is time-consuming when file sizes are large, and a manual approach risks inaccurate results and data loss.

Solution

Using the Azure Synapse SQL Snap Pack, you can automate the bulk loading and querying of your Blob Storage data. This solution is efficient, because the Snaps generate the load queries for you, and cost-effective, because data processing is billed on a pay-as-you-go model. Learn more about Azure Synapse Analytics pricing.

Understanding the Solution

Prerequisites:

A valid Azure Synapse SQL account and read access to the source Azure Blob Storage container.

Step 1: Configure the Azure Synapse SQL Account with the connection details for your Synapse database and the Blob Storage location to load from.
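
Once the account validates, you can optionally confirm from your SQL client that it reaches the intended database and that the target table used in Step 2 exists. A minimal sanity check (the table name comes from Step 2; sys.tables is the standard Synapse system catalog):

    -- Confirm the target table for the bulk load exists in this database.
    -- BulkLoad_TC13_VK is the target table configured in Step 2.
    SELECT name, create_date
    FROM sys.tables
    WHERE name = 'BulkLoad_TC13_VK';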

Step 2: Configure the Azure Synapse SQL - Bulk Load Snap:

a. Specify BulkLoad_TC13_VK as the target table into which the data from Blob Storage loads.

b. The File Name Pattern /*.csv loads only files with the .csv extension.

c. The Copy Argument WITH MAXERRORS=1000 tolerates up to 1000 record-level errors before the load fails, so the Snap execution continues past individual bad records.

d. On validating the Pipeline, you can view the following query in the output:
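
The generated statement follows the Azure Synapse COPY syntax. A representative form is shown below; the storage URL and credential are placeholders, and the actual values come from the account configured in Step 1:

    -- COPY statement generated by the Bulk Load Snap (representative form).
    -- <storage-account>, <container>, and <sas-token> are placeholders.
    COPY INTO BulkLoad_TC13_VK
    FROM 'https://<storage-account>.blob.core.windows.net/<container>/*.csv'
    WITH (
        FILE_TYPE = 'CSV',
        CREDENTIAL = (IDENTITY = 'Shared Access Signature', SECRET = '<sas-token>'),
        MAXERRORS = 1000
    );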

Step 3: Configure the JSON Formatter Snap to format the output as JSON.

Step 4: Configure the File Writer Snap to write the file to SLDB. After validating the Pipeline, you can download the file from SnapLogic Manager to your local drive.

Step 5: Execute the Pipeline.
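
After the Pipeline executes, you can verify the load directly in Synapse, for example by checking how many rows reached the target table:

    -- Count the rows loaded into the target table by the bulk load.
    SELECT COUNT(*) AS loaded_rows
    FROM BulkLoad_TC13_VK;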

Downloads

  1. Download and import the Pipeline into SnapLogic.

  2. Configure Snap accounts as applicable.

  3. Provide Pipeline parameters as applicable.

File: Use_Case_AzureSynapseSQL_Bulk_Load_from_Azure_Blob.slp

Modified: Oct 18, 2022 by Kalpana Malladi