MySQL metadata connector returning Java Exception Error


Luan

Oct 10, 2022, 2:16:31 PM
to Druid User
I've been trying to connect a MySQL database to Druid so I can ingest from it.
The common properties are configured correctly, as far as I can tell.
I'm new to this, so any pointers would be great.
When I enable "mysql-metadata-storage", the Druid UI returns this error:
Unknown exception / org.apache.druid.java.util.common.IOE: No known server / java.lang.RuntimeException

Sometimes it's a 500 or 400 error; each time I restart Druid, the error changes.

Also, the router log gives me this error:

ERROR [CoordinatorRuleManager-Exec--0] org.apache.druid.server.router.CoordinatorRuleManager - Exception while polling for rules

Also, another issue: when I try to configure Druid Basic Security so I can connect it to Superset, Druid's UI won't even load; I keep getting "connection refused". I would like some help with that as well.
PS: "druid-basic-security" is not in the extension list below because I removed it just so I could reach the web console, but it was there.


Here is my common.properties file:

# Extensions specified in the load list will be loaded by Druid
# We are using local fs for deep storage - not recommended for production - use S3, HDFS, or NFS instead
# We are using local derby for the metadata store - not recommended for production - use MySQL or Postgres instead

# If you specify `druid.extensions.loadList=[]`, Druid won't load any extension from file system.
# If you don't specify `druid.extensions.loadList`, Druid will load all the extensions under root extension directory.
# More info: https://druid.apache.org/docs/latest/operations/including-extensions.html
druid.extensions.loadList=["druid-hdfs-storage", "druid-kafka-indexing-service", "druid-datasketches", "mysql-metadata-storage"]

# If you have a different version of Hadoop, place your Hadoop client jar files in your hadoop-dependencies directory
# and uncomment the line below to point to your directory.
#druid.extensions.hadoopDependenciesDir=/my/dir/hadoop-dependencies


#
# Hostname
#
druid.host=localhost

#
# Logging
#

# Log all runtime properties on startup. Disable to avoid logging properties on startup:
druid.startup.logging.logProperties=true

#
# Zookeeper
#

druid.zk.service.host=localhost
druid.zk.paths.base=/druid

# Druid basic security
#druid.auth.authenticatorChain=["MyBasicMetadataAuthenticator"]

#druid.auth.authenticator.MyBasicMetadataAuthenticator.type=basic
#druid.auth.authenticator.MyBasicMetadataAuthenticator.initialAdminPassword=senha1
#druid.auth.authenticator.MyBasicMetadataAuthenticator.initialInternalClientPassword=senha2
#druid.auth.authenticator.MyBasicMetadataAuthenticator.credentialsValidator.type=metadata
#druid.auth.authenticator.MyBasicMetadataAuthenticator.skipOnFailure=false
#druid.auth.authenticator.MyBasicMetadataAuthenticator.authorizerName=MyBasicMetadataAuthorizer

# Escalator

#druid.escalator.type=basic
#druid.escalator.internalClientUsername=druid_system
#druid.escalator.internalClientPassword=senha2
#druid.escalator.authorizerName=MyBasicMetadataAuthorizer
#druid.auth.authorizers=["MyBasicMetadataAuthorizer"]
#druid.auth.authorizer.MyBasicMetadataAuthorizer.type=basic

#
# Metadata storage
#

# For Derby server on your Druid Coordinator (only viable in a cluster with a single Coordinator, no fail-over):
druid.metadata.storage.type=derby
druid.metadata.storage.connector.connectURI=jdbc:derby://localhost:1527/var/druid/metadata.db;create=true
druid.metadata.storage.connector.host=localhost
druid.metadata.storage.connector.port=1527

# For MySQL (make sure to include the MySQL JDBC driver on the classpath):
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://(myIpHere):3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=diurd
druid.metadata.storage.connector.host=(myIpHere)

# For PostgreSQL:
#druid.metadata.storage.type=postgresql
#druid.metadata.storage.connector.connectURI=jdbc:postgresql://db.example.com:5432/druid
#druid.metadata.storage.connector.user=...
#druid.metadata.storage.connector.password=...

#
# Deep storage
#

# For local disk (only viable in a cluster if this is a network mount):
druid.storage.type=local
druid.storage.storageDirectory=var/druid/segments

# For HDFS:
#druid.storage.type=hdfs
#druid.storage.storageDirectory=/druid/segments

# For S3:
#druid.storage.type=s3
#druid.storage.bucket=your-bucket
#druid.storage.baseKey=druid/segments
#druid.s3.accessKey=...
#druid.s3.secretKey=...

#
# Indexing service logs
#

# For local disk (only viable in a cluster if this is a network mount):
druid.indexer.logs.type=file
druid.indexer.logs.directory=var/druid/indexing-logs

# For HDFS:
#druid.indexer.logs.type=hdfs
#druid.indexer.logs.directory=/druid/indexing-logs

# For S3:
#druid.indexer.logs.type=s3
#druid.indexer.logs.s3Bucket=your-bucket
#druid.indexer.logs.s3Prefix=druid/indexing-logs

#
# Service discovery
#

druid.selectors.indexing.serviceName=druid/overlord
druid.selectors.coordinator.serviceName=druid/coordinator

#
# Monitoring
#

druid.monitoring.monitors=["org.apache.druid.java.util.metrics.JvmMonitor"]
druid.emitter=noop
druid.emitter.logging.logLevel=info

# Storage type of double columns
# omitting this will lead to indexing doubles as floats at the storage layer

druid.indexing.doubleStorage=double

#
# Security
#
druid.server.hiddenProperties=["druid.s3.accessKey","druid.s3.secretKey","druid.metadata.storage.connector.password"]

#
# SQL
#
druid.sql.enable=true

# Planning SQL query when there is aggregate distinct in the statement
druid.sql.planner.useGroupingSetForExactDistinct=true

#
# Lookups
#
druid.lookup.enableLookupSyncOnStartup=false

#
# Expression processing config
#
druid.expressions.useStrictBooleans=true

#
# Http client
#
druid.global.http.eagerInitialization=false

I tried downgrading the Druid version, but it was no use.
What am I missing? Thanks in advance; I really need to fix this.

(Attachment: Druid Error.png)

Jun Wan

Oct 10, 2022, 2:47:54 PM
to druid...@googlegroups.com
Do you want to ingest from MySQL or use MySQL as the metadata storage?

Luan

Oct 10, 2022, 3:06:53 PM
to Druid User
I replied in the wrong place, sorry.
I wanted to ingest from MySQL without having to export the databases or use local disk. Is it possible to query MySQL tables from inside Druid?

Luan

Oct 11, 2022, 7:48:15 AM
to Druid User
Issue solved.
The mysql-metadata-storage extension was breaking Druid. I still don't know exactly what was causing it; maybe some version incompatibility.
Anyway, after I removed it, Druid started working again, with basic security authentication working as well. Thanks.

Guillaume Lhermenier

Oct 11, 2022, 8:16:18 AM
to druid...@googlegroups.com
Hi,
IMO it broke Druid because you configured the metadata storage as both Derby and MySQL.

The metadata storage (config & extension) is only there to store Druid's own metadata, not to ingest from that database.
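If the goal is to pull rows out of MySQL, that is done with the SQL input source in a batch ingestion spec, not with the druid.metadata.storage.* properties (as far as I know the mysql-metadata-storage extension still has to be loaded so the MySQL JDBC connector is available). A rough sketch of the ioConfig part, with placeholder connection details and table name, so double-check it against the docs:

"ioConfig": {
  "type": "index_parallel",
  "inputSource": {
    "type": "sql",
    "database": {
      "type": "mysql",
      "connectorConfig": {
        "connectURI": "jdbc:mysql://(myIpHere):3306/mydb",
        "user": "druid",
        "password": "diurd"
      }
    },
    "sqls": ["SELECT * FROM my_table"]
  }
}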

Luan

Oct 11, 2022, 9:21:17 AM
to Druid User
When I posted my common.properties, I was testing with both databases, but at first the Derby connection settings were commented out. Thank you for the reply.