Overview
This Snap enables you to perform bulk update or insert (upsert) operations into a BigQuery table from existing tables or any input data stream.
The upsert operation updates existing rows if the specified value exists in the target table and inserts a new row if the specified value does not exist in the target table.
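As a minimal illustration of these upsert semantics (not the Snap's implementation), the Python sketch below updates rows whose key value already exists in the target and inserts rows whose key value does not:

```python
# Minimal sketch of upsert semantics (illustrative only, not the Snap's implementation).
# The target table is modeled as a dict keyed by the key-column value.

def upsert(target: dict, incoming_rows: list, key: str) -> dict:
    """Update existing rows when the key value exists; insert new rows otherwise."""
    stats = {"updated": 0, "inserted": 0}
    for row in incoming_rows:
        if row[key] in target:
            target[row[key]].update(row)   # key exists: update the existing row
            stats["updated"] += 1
        else:
            target[row[key]] = dict(row)   # key does not exist: insert a new row
            stats["inserted"] += 1
    return stats

# Example: "Id" acts as the key column.
table = {"001": {"Id": "001", "Name": "Acme", "Total": 100}}
print(upsert(table, [{"Id": "001", "Total": 250},
                     {"Id": "002", "Name": "Globex", "Total": 75}], "Id"))
# -> {'updated': 1, 'inserted': 1}
```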
...
Snap Type
The BigQuery Bulk Upsert (Streaming) Snap is a Write-type Snap that performs a bulk upsert operation.
Prerequisites
Write access for the Google BigQuery Account is required.
Support for Ultra Pipelines
Works in Ultra Pipelines.
Limitations and Known Issues
None.
Snap Views
Type | Format | Number of Views | Examples of Upstream and Downstream Snaps | Description |
---|---|---|---|---|
Input | Document | 1 | Mapper, Copy | This Snap has exactly one document input view. Input can come from any Snap that can pass a document to the output view, such as Structure or JSON Generator. Pipeline parameters can also be passed for the project ID, dataset ID, table ID, and so on. |
Output | Document | 1 | | The output is in document view format. The data from the incoming document that is loaded to the destination table is the output from this Snap. It provides the load statistics after the operation is completed. The output view contains information about the bulk load details in the temporary table to better understand the flow, which also helps with error handling. The output view also lists the number of rows that were updated, modified, or inserted in the target table. |
Error | | | | Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter when running the pipeline by choosing one of the following options from the When errors occur list under the Views tab: Stop Pipeline Execution, Discard Error Data and Continue, or Route Error Data to Error View. Learn more about Error handling in Pipelines. |
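The exact fields in the output document are not listed on this page; purely as a hypothetical illustration, an output document might look something like the following (all field names below are invented for illustration only):

```python
# Hypothetical shape of an output document; field names are illustrative, not the Snap's actual schema.
sample_output = {
    "tempTable": "project-123.dataset-12345.tmp_upsert_batch",  # temporary bulk-load table
    "recordsLoaded": 1000,   # rows bulk-loaded into the temporary table
    "rowsUpdated": 5,        # rows in the target table that matched a key and were updated
    "rowsInserted": 995,     # rows that did not match a key and were inserted
}
```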
Snap Settings
Field Name | Field Type | Description |
---|---|---|
Label Default Value: BigQuery Bulk Upsert (Streaming) Example: GBQ Load Employee Tables | String | Specify the name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your pipeline. |
Project ID Default Value: N/A | String/Expression/Suggestion | Specify the project ID in which the dataset resides. |
Dataset ID Default Value: N/A Example: dataset-12345 | String/Expression/Suggestion | Specify the dataset ID of the destination. |
Table ID Default Value: N/A | String/Expression/Suggestion | Specify the table ID of the table you are creating. |
Batch size Default Value: 1000 | String | The number of records batched per request. If the input has 10,000 records and the batch size is set to 100, the total number of requests is 100 (see the batching sketch after this table). |
Batch timeout (milliseconds) Default Value: 2000 | String | Time in milliseconds after which the batch is processed, even if it contains fewer records than the specified batch size. Set the batch timeout with care: when this limit is reached, the batch is flushed whether or not all the records in the batch were loaded. |
Batch retry count Default Value: 0 | String | The number of times the server should try to load a failed batch. |
Batch retry delay (milliseconds) Default Value: 500 | String | The time delay between each retry. |
Snap Execution Default Value: Validate & Execute Example: Validate & Execute | Dropdown list | Select one of the three modes in which the Snap executes: Validate & Execute, Execute only, or Disabled. |
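As a rough illustration of how these batching settings interact, the Python sketch below models records being flushed either when a batch reaches Batch size or when Batch timeout elapses, with a failed flush retried Batch retry count times and Batch retry delay between attempts. This is a simplified mental model, not the Snap's implementation.

```python
import time

# Simplified model of the batching behavior described above (not the Snap's implementation).
BATCH_SIZE = 1000          # "Batch size"
BATCH_TIMEOUT_MS = 2000    # "Batch timeout (milliseconds)"
RETRY_COUNT = 0            # "Batch retry count"
RETRY_DELAY_MS = 500       # "Batch retry delay (milliseconds)"

def flush(batch, load_fn):
    """Send one batch, retrying up to RETRY_COUNT times on failure."""
    for attempt in range(RETRY_COUNT + 1):
        try:
            load_fn(batch)
            return
        except Exception:
            if attempt == RETRY_COUNT:
                raise                          # after the last retry, surface the error
            time.sleep(RETRY_DELAY_MS / 1000)

def stream(records, load_fn):
    """Group records into batches, flushing on size or timeout."""
    batch, started = [], time.monotonic()
    for record in records:
        batch.append(record)
        timed_out = (time.monotonic() - started) * 1000 >= BATCH_TIMEOUT_MS
        if len(batch) >= BATCH_SIZE or timed_out:
            flush(batch, load_fn)              # flushed even if smaller than BATCH_SIZE
            batch, started = [], time.monotonic()
    if batch:
        flush(batch, load_fn)                  # final partial batch

# With 10,000 input records and a batch size of 100, this produces 100 requests (10,000 / 100).
```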
Troubleshooting
Error | Reason | Resolution |
---|---|---|
Account validation failed. | The Pipeline ended before the batch could complete execution due to a connection error. | Verify that the Refresh token field is configured to handle the inputs properly. If you are not sure when the input data is available, configure this field as zero to keep the connection always open. |
Key column name is required. | No key column(s) specified for checking for existing entries. | Enter one or more key column names. |
Key column name is not present in target table. | Incorrect key column(s) specified for checking for existing entries. | Select one or more key column names from the suggestion box. |
All columns in target table are key columns. | The merge fails because all columns in the target table are key columns. | Select one or more (but not all) key column names from the suggestion box. |
Examples
Prerequisite: Write access for the Google BigQuery Account is required.
Upsert customer data from Salesforce to a Google BigQuery table
This example demonstrates how to update or insert (upsert) records in a Google BigQuery table.
...
First, we configure the Salesforce Read Snap with the required details to read customer account data from Salesforce.
In this example, we selected Output Fields for Total, Id, and Name.
...
Upon validation, the Snap prepares the output to pass to the BigQuery Bulk Upsert Snap.
Next, we configure the BigQuery Bulk Upsert Snap to use unique identifiers to update the existing records.
To upsert data based on the Id and Name key columns, we enter Id and Name in the Key column fields.
...
Upon execution, this Snap updates or inserts new records into the Google BigQuery table.
The output shows that 5 records were updated successfully.
...
In this example, we updated the Total for each record (based on the unique identifiers Id and Name selected under Key columns).
The data is updated in the Google BigQuery table, as shown in the BigQuery console.
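For readers who want to see a comparable operation outside SnapLogic, the hedged sketch below issues an equivalent upsert directly against BigQuery with the google-cloud-bigquery Python client, assuming the incoming Salesforce rows have already been loaded into a staging table. The project, dataset, table, and staging-table names are placeholders, and this is not what the Snap executes internally.

```python
from google.cloud import bigquery

# Placeholder identifiers; replace with your own project, dataset, and table names.
TARGET = "my-project.dataset-12345.customer_accounts"
STAGING = "my-project.dataset-12345.customer_accounts_staging"

# MERGE updates rows whose (Id, Name) already exist and inserts the rest,
# mirroring the Key columns used in this example.
merge_sql = f"""
MERGE `{TARGET}` AS target
USING `{STAGING}` AS source
ON target.Id = source.Id AND target.Name = source.Name
WHEN MATCHED THEN
  UPDATE SET target.Total = source.Total
WHEN NOT MATCHED THEN
  INSERT (Id, Name, Total) VALUES (source.Id, source.Name, source.Total)
"""

client = bigquery.Client()
client.query(merge_sql).result()   # runs the upsert and waits for completion
```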
...