...

  • ELT Pipelines created prior to the 4.24 GA release that use one or more of the ELT Insert Select, ELT Merge Into, ELT Load, or ELT Execute Snaps may fail to show the expected preview data because of a common change made across the Snap Pack in the current release (4.26 GA). In such a scenario, replace the Snap in your Pipeline with the same Snap from the Asset Palette and configure the Snap's Settings again.

  • The Snap’s preview data (during validation) contains a value with a higher precision than the actual floating-point value (float data type) stored in the Delta table. For example, 24.123404659344 instead of 24.1234. However, the Snap reflects the exact values during Pipeline executions.

  • In any of the supported target databases, this Snap does not correctly identify or render column references that begin with an _ (underscore) in SQL queries that use the following constructs and contexts (the Snap works as expected in all other scenarios):

    • WHERE clause

    • WHEN clause

    • ON condition (ELT Join, ELT Merge Into Snaps)

    • HAVING clause

    • QUALIFY clause

    • Insert expressions (column names and values in ELT Insert Select, ELT Load, and ELT Merge Into Snaps)

    • Update expressions list (column names and values in ELT Merge Into Snap)

    • Secondary AND condition

    • Inside SQL query editor (ELT Select and ELT Execute Snaps)

  • As a workaround while using these SQL query constructs, you can:

    • Precede this Snap with an ELT Transform Snap to re-map the '_' column references to suitable column names (that do not begin with an underscore) and reference the new column names in the next Snap, as needed; see the sketch below.
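
    For illustration only, the following SQL sketch shows the effect of that remapping. The table name orders and the column _price are hypothetical; in practice, the aliasing is configured in the ELT Transform Snap's mappings rather than written by hand.

      -- Fails: the Snap cannot resolve a column reference that begins with _ in the WHERE clause
      SELECT * FROM orders WHERE _price > 100;

      -- Workaround: alias _price to a regular name first (the ELT Transform Snap performs this
      -- remapping), then reference the new name in the downstream condition
      SELECT * FROM (SELECT order_id, _price AS price FROM orders) t WHERE price > 100;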

  • In the case of the Databricks Lakehouse Platform, where CSV files do not have a header (column names), a simple query such as SELECT * FROM CSV.`/mnt/csv1.csv` returns default column names such as _c0, _c1, and _c2, which this Snap cannot interpret.

    • To avoid this scenario, you can:

      • Write the data in the CSV file to a DLP table beforehand, as in: CREATE TABLE csvdatatable (a1 int, b1 int,…) USING CSV `/mnt/csv1.csv`, where a1, b1, and so on are the new column names.

      • Then, read the data from this new table (with column names a1, b1, and so on) using a simple SELECT statement, as sketched below.
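
      A minimal sketch of this two-step workaround, assuming a headerless CSV file at /mnt/csv1.csv and illustrative column names a1 and b1; the exact USING CSV options (for example, OPTIONS (path ...) versus LOCATION) may vary with your DLP environment.

        -- Step 1: create a DLP table with explicit column names over the headerless CSV file
        CREATE TABLE csvdatatable (a1 INT, b1 INT)
        USING CSV
        OPTIONS (path '/mnt/csv1.csv', header 'false');

        -- Step 2: read through the new table so the Snap sees the named columns a1 and b1
        SELECT a1, b1 FROM csvdatatable;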

...