3.0.0-alpha2 - UTF8String is not a valid external type for schema of string

Damián Franetovich

May 18, 2020, 1:38:41 PM
to DataStax Spark Connector for Apache Cassandra
Hi, 

I'm hitting this error with 3.0.0-alpha2 and Spark 3.0.0-preview2. Below is a minimal example to run in the spark-shell, along with the stack trace. Can you give me a clue about this? It looks like a bug, but I don't know whether it's in the Cassandra connector or in Spark; I also tried the JDBC and Mongo connectors and both run fine.

Thanks!

import com.datastax.spark.connector.cql.CassandraConnectorConf
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.cassandra._
// To create test data on cassandra
// CREATE KEYSPACE keytest WITH REPLICATION = { 'class': 'SimpleStrategy', 'replication_factor': 1 };
// CREATE TABLE keytest.tabletest (id UUID PRIMARY KEY, lastname text, firstname text );
// INSERT INTO keytest.tabletest(id, lastname, firstname) VALUES (uuid(), 'lastname', 'firstname');
// bin/spark-shell --packages "com.datastax.oss:java-driver-core:4.5.1,com.datastax.spark:spark-cassandra-connector_2.12:3.0.0-alpha2" 
val s: SparkSession = spark
val cluster = "cass-cluster"
val datacenter = "datacenter1"
s.setCassandraConf(cluster, CassandraConnectorConf.ConnectionHostParam.option("localhost"))
s.setCassandraConf(cluster, CassandraConnectorConf.LocalDCParam.option(datacenter))
val reader = s.read
.cassandraFormat("tabletest", "keytest", cluster = cluster)
reader.load().take(1)

Driver stacktrace:
  at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2023)
  at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:1972)
  at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:1971)
  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
  at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
  at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1971)
  at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:950)
  at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:950)
  at scala.Option.foreach(Option.scala:407)
  at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:950)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2203)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2152)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2141)
  at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
  at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:752)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2093)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2114)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2133)
  at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:467)
  at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:420)
  at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:47)
  at org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:3625)
  at org.apache.spark.sql.Dataset.$anonfun$head$1(Dataset.scala:2695)
  at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3616)
  at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:100)
  at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
  at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:87)
  at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
  at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3614)
  at org.apache.spark.sql.Dataset.head(Dataset.scala:2695)
  at org.apache.spark.sql.Dataset.take(Dataset.scala:2902)
  ... 57 elided
Caused by: java.lang.RuntimeException: Error while encoding: java.lang.RuntimeException: org.apache.spark.unsafe.types.UTF8String is not a valid external type for schema of string
staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 0, id), StringType), true, false) AS id#7
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 1, firstname), StringType), true, false) AS firstname#8
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 2, lastname), StringType), true, false) AS lastname#9
  at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$Serializer.apply(ExpressionEncoder.scala:215)
  at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$Serializer.apply(ExpressionEncoder.scala:197)
  at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
  at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
  at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
  at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:729)
  at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:340)
  at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
  at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
  at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
  at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
  at org.apache.spark.scheduler.Task.run(Task.scala:127)
  at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:444)
  at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:447)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: org.apache.spark.unsafe.types.UTF8String is not a valid external type for schema of string
  at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.StaticInvoke_0$(Unknown Source)
  at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
  at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$Serializer.apply(ExpressionEncoder.scala:211)
  ... 19 more 

Russell Spitzer

May 18, 2020, 1:41:20 PM
to DataStax Spark Connector for Apache Cassandra
That is a known bug; it's already fixed, but the fix isn't in a released version yet.


Damián Franetovich

May 18, 2020, 1:51:58 PM
to spark-conn...@lists.datastax.com
Thank you, Russell!

Can you tell me where I can find the ticket for this bug? I searched the project JIRA but didn't find anything related to it.
Damián

Russell Spitzer

May 18, 2020, 1:58:20 PM
to DataStax Spark Connector for Apache Cassandra
It's currently patched on the DSV2 branch I'm working on; it's part of the Spark 3 compatibility work that wasn't required with much earlier Spark 3 builds. The values allowed in Rows returned by V1 DataSources changed, which broke the connector against the Spark 3 preview builds.
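(For readers hitting this later: a rough standalone sketch of the encoder check that produces this exact message, independent of the connector's internal code path, assuming a Spark 3.0 spark-shell.)

import org.apache.spark.sql.Row
import org.apache.spark.sql.catalyst.encoders.RowEncoder
import org.apache.spark.sql.types.{StringType, StructField, StructType}
import org.apache.spark.unsafe.types.UTF8String

val schema = StructType(Seq(StructField("name", StringType)))
// In Spark 3.0 the row serializer validates external types before converting them.
val toInternal = RowEncoder(schema).createSerializer()
toInternal(Row("x"))                        // ok: java.lang.String is the expected external type
toInternal(Row(UTF8String.fromString("x"))) // throws: UTF8String is not a valid external type for schema of string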

Russell Spitzer

May 18, 2020, 2:01:15 PM
to DataStax Spark Connector for Apache Cassandra
Please check 589 or 590 (both WIP). Basically, the issue is that for string types the type we were previously using is no longer usable and needs to be switched. The required changes are in CassandraSqlRow.
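(A minimal sketch of the direction of the fix, not the actual patch — the real changes live in CassandraSqlRow as noted above, and the helper name below is hypothetical: string values handed back to Spark in external Rows must be java.lang.String rather than UTF8String.)

import org.apache.spark.unsafe.types.UTF8String

// Hypothetical helper illustrating the conversion: unwrap UTF8String before the
// value is placed into an external Row that Spark will re-encode.
def toExternalString(value: AnyRef): AnyRef = value match {
  case utf8: UTF8String => utf8.toString
  case other            => other
}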

Damián Franetovich

May 18, 2020, 2:07:16 PM
to spark-conn...@lists.datastax.com
Great, I see the changes. I'll build a local version and check whether the error is gone. Thank you again for your quick response!

이상부

Jun 25, 2020, 11:59:38 AM
to DataStax Spark Connector for Apache Cassandra, dami...@gmail.com
Please tell me which branch this issue was fixed in; I am testing Spark 3.0 SQL.

On Tuesday, May 19, 2020 at 3:07:16 AM UTC+9, dami...@gmail.com wrote:

Russell Spitzer

Jun 25, 2020, 12:40:14 PM
to DataStax Spark Connector for Apache Cassandra
There isn't a released artifact yet, but the fix is in b3.0 and master.

이상부

Jun 25, 2020, 1:54:28 PM
to DataStax Spark Connector for Apache Cassandra, russell...@gmail.com
I get the following error when running it:


Caused by: org.apache.spark.sql.AnalysisException: org.apache.spark.sql.cassandra is not a valid Spark SQL Data Source.;
        at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:414)
        at org.apache.spark.sql.execution.datasources.FindDataSourceTable.$anonfun$readDataSourceTable$1(DataSourceStrategy.scala:256)
        at org.sparkproject.guava.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4792)
        at org.sparkproject.guava.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3599)
        at org.sparkproject.guava.cache.LocalCache$Segment.loadSync(LocalCache.java:2379)
        at org.sparkproject.guava.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2342)
        at org.sparkproject.guava.cache.LocalCache$Segment.get(LocalCache.java:2257)
        at org.sparkproject.guava.cache.LocalCache.get(LocalCache.java:4000)
        at org.sparkproject.guava.cache.LocalCache$LocalManualCache.get(LocalCache.java:4789)
        at org.apache.spark.sql.catalyst.catalog.SessionCatalog.getCachedPlan(SessionCatalog.scala:143)
        at org.apache.spark.sql.execution.datasources.FindDataSourceTable.org$apache$spark$sql$execution$datasources$FindDataSourceTable$$readDataSourceTable(DataSourceStrategy.scala:243)
        at org.apache.spark.sql.execution.datasources.FindDataSourceTable$$anonfun$apply$2.applyOrElse(DataSourceStrategy.scala:269)
        at org.apache.spark.sql.execution.datasources.FindDataSourceTable$$anonfun$apply$2.applyOrElse(DataSourceStrategy.scala:260)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDown$2(AnalysisHelper.scala:108)
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:72)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDown$1(AnalysisHelper.scala:108)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDown(AnalysisHelper.scala:106)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDown$(AnalysisHelper.scala:104)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDown$4(AnalysisHelper.scala:113)
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:399)
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:237)
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:397)
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:350)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDown$1(AnalysisHelper.scala:113)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDown(AnalysisHelper.scala:106)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDown$(AnalysisHelper.scala:104)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDown$4(AnalysisHelper.scala:113)
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:399)
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:237)
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:397)
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:350)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDown$1(AnalysisHelper.scala:113)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDown(AnalysisHelper.scala:106)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDown$(AnalysisHelper.scala:104)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDown$4(AnalysisHelper.scala:113)
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:399)
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:237)
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:397)
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:350)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDown$1(AnalysisHelper.scala:113)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDown(AnalysisHelper.scala:106)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDown$(AnalysisHelper.scala:104)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDown$4(AnalysisHelper.scala:113)
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:399)
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:237)
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:397)
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:350)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDown$1(AnalysisHelper.scala:113)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDown(AnalysisHelper.scala:106)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDown$(AnalysisHelper.scala:104)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperators(AnalysisHelper.scala:73)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperators$(AnalysisHelper.scala:72)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:29)
        at org.apache.spark.sql.execution.datasources.FindDataSourceTable.apply(DataSourceStrategy.scala:260)
        at org.apache.spark.sql.execution.datasources.FindDataSourceTable.apply(DataSourceStrategy.scala:239)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$2(RuleExecutor.scala:149)
        at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
        at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
        at scala.collection.immutable.List.foldLeft(List.scala:89)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:146)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1$adapted(RuleExecutor.scala:138)
        at scala.collection.immutable.List.foreach(List.scala:392)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:138)
        at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:176)
        at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:170)
        at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:130)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$executeAndTrack$1(RuleExecutor.scala:116)
        at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:88)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:116)
        at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:154)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:201)
        at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:153)
        at org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:68)
        at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
        at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:133)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:763)
        at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:133)
        at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:68)
        at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:66)
        at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:58)
        at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:763)
        at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
        at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:606)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:763)
        at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:601)
        at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:650)
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:280)
        ... 16 more


On Friday, June 26, 2020 at 1:40:14 AM UTC+9, russell...@gmail.com wrote:

Russell Spitzer

Jun 25, 2020, 1:58:28 PM
to 이상부, DataStax Spark Connector for Apache Cassandra
The new version of the connector doesn't support V1-style CREATE TABLE commands. Please see the documentation on how to set up the catalog.
https://github.com/datastax/spark-cassandra-connector/blob/master/doc/1_connecting.md#connecting-to-cassandra

The error you are seeing is a misleading message from Spark: it did find the data source, but its version was incorrect for the command being used.
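(For anyone landing here from a search, a minimal spark-shell sketch of the catalog-style setup described on the linked page — the catalog name "cass" is arbitrary, and the keyspace/table are the ones from the example at the top of this thread.)

// Register a Cassandra catalog (class name as shown in the linked documentation).
spark.conf.set("spark.sql.catalog.cass", "com.datastax.spark.connector.datasource.CassandraCatalog")
spark.conf.set("spark.cassandra.connection.host", "localhost")

// Tables are then addressed as <catalog>.<keyspace>.<table> in Spark SQL:
spark.sql("SELECT * FROM cass.keytest.tabletest").show()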

이상부

Jun 25, 2020, 3:37:20 PM
to Russell Spitzer, DataStax Spark Connector for Apache Cassandra
Thank you. That solved the problem!

On Friday, June 26, 2020 at 2:58 AM, Russell Spitzer <russell...@gmail.com> wrote:



Russell Spitzer

Jun 25, 2020, 3:45:07 PM
to 이상부, DataStax Spark Connector for Apache Cassandra
Please let us know if you find any bugs we can fix, and enjoy the new catalog functionality!

이상부

Jun 26, 2020, 12:49:36 AM
to DataStax Spark Connector for Apache Cassandra, russell...@gmail.com, DataStax Spark Connector for Apache Cassandra, 이상부
And one more thing: materialized views are not bound in the catalog. A new feature would be needed for that.

On Friday, June 26, 2020 at 4:45:07 AM UTC+9, russell...@gmail.com wrote:

Russell Spitzer

Jun 26, 2020, 9:42:43 AM
to 이상부, DataStax Spark Connector for Apache Cassandra
Ah yes, please file a JIRA; it should be relatively easy to add.