Is there a way to make column values case-insensitive? We have many Delta tables with string columns used as unique keys (the PK in a traditional relational DB), and we don't want a new row to be inserted just because the key value differs only in case.
It would be a lot of code change to apply upper/lower functions to every column-value comparison, so I'm looking for an alternative.
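For context, this is the kind of change I'd have to repeat in every merge/comparison (just a sketch; the table, column, and DataFrame names are made up):

```python
from delta.tables import DeltaTable

# Hypothetical target table "customers" keyed on string column "customer_key";
# "updates_df" stands in for the incoming batch of rows.
target = DeltaTable.forName(spark, "customers")

(target.alias("t")
    .merge(
        updates_df.alias("s"),
        # Lower-case both sides so keys differing only in case match --
        # this per-comparison change is what I want to avoid everywhere.
        "lower(t.customer_key) = lower(s.customer_key)")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```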
I see that a CHECK constraint on a Delta table column can enforce a consistent case, but it's too late for that; I already have mixed-case data in the tables.
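This is what I had in mind (hypothetical names again); as I understand it, Delta validates existing rows when the constraint is added, so it fails on my mixed-case data:

```python
# Enforce a single case going forward -- but adding the constraint
# errors out if any existing rows already violate it.
spark.sql("""
    ALTER TABLE customers
    ADD CONSTRAINT key_is_upper CHECK (customer_key = upper(customer_key))
""")
```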
Is there anything similar to SQL Server's collation feature?
spark.conf.set('spark.sql.caseSensitive', False) does not work as expected: string comparisons between mixed-case values still show two different strings.
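A minimal repro of what I mean (column name is arbitrary):

```python
spark.conf.set('spark.sql.caseSensitive', False)

df = spark.createDataFrame([("ABC",), ("abc",)], ["key"])
# Still prints 2 -- the setting appears to affect identifier (column name)
# resolution, not comparison of the data values themselves.
print(df.select("key").distinct().count())
```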
I also tried spark.conf.set('spark.databricks.analyzer.batchResolveRelations', False), in vain.
I have tried Databricks Runtime 7.3 LTS and 9.1 LTS on Azure.