

...

Field Name

Field Type

Description

Label

Default Value: BigQuery Bulk Upsert (Streaming)
Example: GBQ Load Employee Tables

String

Specify the name for the Snap. You can modify this to be more specific, especially if you have more than one of the same Snap in your pipeline.

Project ID

Default Value: N/A
Example: test-project-12345

String/Expression/Suggestion

Specify the project ID in which the dataset resides.

Dataset ID

Default Value: N/A

Example: dataset-12345

String/Expression/Suggestion

Specify the dataset ID of the destination.

Table ID

Default Value: N/A

Example: table-12345

String/Expression/Suggestion

Specify the ID of the target table.
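Together, the Project ID, Dataset ID, and Table ID identify the destination table. In BigQuery's standard notation these combine into a fully qualified reference of the form project.dataset.table; a minimal sketch (the helper name is hypothetical):

```python
def qualified_table_id(project_id: str, dataset_id: str, table_id: str) -> str:
    # BigQuery's standard fully qualified table reference: project.dataset.table
    return f"{project_id}.{dataset_id}.{table_id}"
```

Using the example values above, this yields test-project-12345.dataset-12345.table-12345.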

Batch size

Default Value: 1000

String

The number of records batched per request. If the input has 10,000 records and the batch size is set to 100, the total number of requests is 100.
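The batching arithmetic above can be sketched in Python (illustrative only; the Snap performs this batching internally, and `batch_count` and `chunk` are hypothetical helpers):

```python
import math

def batch_count(total_records: int, batch_size: int) -> int:
    # Number of streaming requests needed to load all records.
    return math.ceil(total_records / batch_size)

def chunk(records, batch_size):
    # Yield consecutive batches of at most batch_size records each.
    for start in range(0, len(records), batch_size):
        yield records[start:start + batch_size]
```

For 10,000 input records and a batch size of 100, `batch_count(10000, 100)` is 100, matching the request count described above.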

Batch timeout (milliseconds)

Default Value: 2000

String

The time, in milliseconds, after which the batch is processed, even if it contains fewer records than the specified batch size.

Set the batch timeout with care: when the timeout elapses, the batch is flushed whether or not all of its records have been loaded.
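A minimal sketch of this flush-on-size-or-timeout behavior (a hypothetical `BatchBuffer`, not the Snap's actual implementation; `flush_fn` stands in for the streaming-insert call):

```python
import time

class BatchBuffer:
    """Sketch of flush-on-size-or-timeout batching (illustrative only)."""

    def __init__(self, batch_size=1000, batch_timeout_ms=2000, flush_fn=print):
        self.batch_size = batch_size
        self.batch_timeout = batch_timeout_ms / 1000.0
        self.flush_fn = flush_fn          # stand-in for the streaming-insert call
        self.buffer = []
        self.last_flush = time.monotonic()

    def add(self, record):
        self.buffer.append(record)
        full = len(self.buffer) >= self.batch_size
        timed_out = time.monotonic() - self.last_flush >= self.batch_timeout
        if full or timed_out:
            self.flush()

    def flush(self):
        # A timed-out batch is flushed even if it holds fewer than batch_size records.
        if self.buffer:
            self.flush_fn(list(self.buffer))
        self.buffer = []
        self.last_flush = time.monotonic()
```

Note that a flush triggered by the timeout can send a partial batch, which is why the timeout value must be set with care.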

Batch retry count

Default Value: 0

String

The number of times to retry loading a failed batch.

Batch retry delay (milliseconds)

Default Value: 500

String

The time delay, in milliseconds, between retries of a failed batch.
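Taken together, Batch retry count and Batch retry delay describe a simple fixed-delay retry loop, sketched below (the `load_with_retries` helper is hypothetical, and `load_batch` stands in for the streaming-insert call):

```python
import time

def load_with_retries(load_batch, batch, retry_count=0, retry_delay_ms=500):
    # One initial attempt plus retry_count retries, with a fixed delay between attempts.
    attempts = retry_count + 1
    for attempt in range(attempts):
        try:
            return load_batch(batch)
        except Exception:
            if attempt == attempts - 1:
                raise                      # retries exhausted; surface the error
            time.sleep(retry_delay_ms / 1000.0)
```

With the default retry count of 0, a failed batch is not retried at all.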

Snap Execution

Default Value: Validate & Execute
Example: Execute only

Dropdown list

Select one of the three modes in which the Snap executes. Available options are:

  • Validate & Execute: Performs limited execution of the Snap, and generates a data preview during Pipeline validation. Subsequently, performs full execution of the Snap (unlimited records) during Pipeline runtime.

  • Execute only: Performs full execution of the Snap during Pipeline execution without generating preview data.

  • Disabled: Disables the Snap and all Snaps that are downstream from it.

Troubleshooting

Error

Reason

Resolution

All table columns have already been selected in the Snap configuration.

Every column in the target table is already selected, so there are no remaining columns to suggest.

Make sure that at least one column is left unselected to enable suggestions.

Account validation failed.

The Pipeline ended before the batch could complete execution due to a connection error.

Verify that the Refresh token field is configured to handle the inputs properly. If you are not sure when the input data is available, configure this field as zero to keep the connection always open.


Key column name is required.

No key column(s) specified for checking for existing entries.

Please enter one or more key column names.

Key column name is not present in target table.

Incorrect key column(s) specified for checking for existing entries.

Please select one or more key column names from the suggestion box.

All columns in target table are key columns.

The merge will fail as all columns in the target table are key columns.

Please select one or more (but not all) key column names from the suggestion box.
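The key-column rules above (at least one key column, but not all columns) follow from MERGE-style upsert semantics, sketched here with a hypothetical in-memory `upsert` helper: rows that match on the key columns have their remaining columns updated, and unmatched rows are inserted. If every column is a key column, no non-key columns remain to update, which is why the merge fails.

```python
def upsert(table, incoming, key_columns):
    # Merge incoming rows into table, matching on key_columns:
    # matched rows get their non-key columns updated, unmatched rows are inserted.
    if not key_columns:
        raise ValueError("Key column name is required.")
    index = {tuple(row[k] for k in key_columns): row for row in table}
    for row in incoming:
        key = tuple(row[k] for k in key_columns)
        if key in index:
            index[key].update(row)   # update the existing entry in place
        else:
            table.append(row)        # insert a new entry
    return table
```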

Examples

Prerequisite: Write access for the Google BigQuery Account is required.

Upsert customer data from Salesforce to a Google BigQuery table

This example demonstrates how to update or insert (upsert) records in a Google BigQuery table.

...

First, we configure the Salesforce Read Snap with the required details to read customer account data from Salesforce.

In this example, we selected Output Fields for Id, Type, and Name.

...

Upon validation, the Snap prepares the output to pass to the BigQuery Bulk Upsert Snap.

...

Next, we configure the BigQuery Bulk Upsert Snap to use unique identifiers to update the existing records.

To upsert data based on the Id and Name key columns, we enter Id and Name in the Key column fields.

...

Upon execution, this Snap updates or inserts new records into the target Google BigQuery table.

In this example, we’re updating the Type for each record (based on the unique identifiers Id and Name selected under Key columns).

The output shows that 50 records were updated successfully.

...

Download this Pipeline. 

Downloads

  1. Download and import the Pipeline into SnapLogic.

  2. Configure Snap accounts, as applicable.

  3. Provide Pipeline parameters, as applicable.


Snap Pack History


...

Related Content

...