Here are the main features:
Scala/Java APIs for DML commands - You can now modify data in Delta Lake tables using programmatic APIs for Delete, Update and Merge. These APIs mirror the syntax and semantics of their corresponding SQL commands and are great for many workloads, e.g., Slowly Changing Dimension (SCD) operations, merging change data for replication, and upserts from streaming queries. See the documentation for more details.
Scala/Java APIs for querying commit history - You can now query a table’s commit history to see what operations modified the table. This enables you to audit data changes, run time travel queries on specific versions, and debug and recover data from accidental deletions. See the documentation for more details.
Scala/Java APIs for vacuuming old files - Delta Lake uses MVCC to enable snapshot isolation and time travel. However, keeping all versions of a table forever can be prohibitively expensive. Stale snapshots (as well as other uncommitted files from aborted transactions) can be garbage collected by vacuuming the table. See the documentation for more details.
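The three features above can be sketched together in Scala. This is a minimal, illustration-only sketch: the table path, the `spark` session, and the sample conditions are assumptions, not part of the release notes, and it presumes the Delta Lake library is on the classpath.

```scala
import io.delta.tables.DeltaTable
import org.apache.spark.sql.functions._

// Assumed: an existing SparkSession `spark` and a Delta table at this
// hypothetical path — both are placeholders for illustration.
val table = DeltaTable.forPath(spark, "/tmp/delta/events")

// DML via the Scala API: delete and update with a Column condition.
table.delete(col("status") === "obsolete")
table.update(col("status") === "stale", Map("status" -> lit("refreshed")))

// Commit history: inspect the most recent operations on the table.
table.history(10).select("version", "timestamp", "operation").show()

// Vacuum: garbage-collect files older than the retention period, in hours
// (168 hours = 7 days, the default retention threshold).
table.vacuum(168)
```

Each of these calls mirrors the corresponding SQL command (`DELETE`, `UPDATE`, `DESCRIBE HISTORY`, `VACUUM`), so the same semantics apply in both interfaces.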
--
You received this message because you are subscribed to the Google Groups "Delta Lake Users and Developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email to delta-users...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/delta-users/CA%2BAHuKmAhUar%3D7GZ9bUwJKmh%3Diu67%3DTVzH%2BhiwTpC0v33A_MQQ%40mail.gmail.com.
This is indeed one of the most useful features. Thank you, everyone, for making this available.
Thanks,
Pratap
I have the below code for a merge update. When I query the DataFrame after the merge, I still see duplicate records based on the "id" column. What is wrong here?
val data: Map[String, String] = resultdf.columns
  .map(mcol => s"leads.${mcol}" -> s"updates.${mcol}")
  .toMap

deltaTable
  .as("leads")
  .merge(
    resultdf.as("updates"),
    "leads.id = updates.id"
  )
  .whenMatched("leads.id = updates.id")
  .updateExpr(data)
  //.updateAll()
  .whenNotMatched()
  .insertAll()
  .execute()

Can anyone help? Neither updateExpr nor updateAll works; instead I am seeing duplicate records when I run the code below.

val leadsDF = spark.read.format("delta").parquet("/Users/HariKodali/tip/stage0/marketo/delta/leads")
print(" total count ", leadsDF.count())
leadsDF.groupBy("id").count().filter($"count" > 1).show()

Thanks,
Hari Kodali
BIGDATA Solutions Architect
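One possible explanation for the duplicates, offered as a sketch rather than a confirmed diagnosis: in Spark, calling `.parquet(path)` on a `DataFrameReader` switches the source to Parquet, overriding the earlier `.format("delta")`. A raw Parquet read scans every data file under the path, including files from older table versions that a Delta-aware read would skip, which can surface records the merge already replaced. Reading through the Delta source (with `.load`) honors the transaction log; the path below is copied from the post above.

```scala
// Needed outside spark-shell for the $"..." column syntax; assumes an
// existing SparkSession named `spark`.
import spark.implicits._

// Delta-aware read: .format("delta").load respects the transaction log,
// so only files belonging to the current table version are scanned.
val leadsDF = spark.read.format("delta")
  .load("/Users/HariKodali/tip/stage0/marketo/delta/leads")

// With the log honored, a post-merge duplicate check on "id" should come
// back empty if the merge condition matched as intended.
leadsDF.groupBy("id").count().filter($"count" > 1).show()
```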