I've been trying to connect a MySQL database to Druid so that I can ingest data from it.
As far as I can tell, the common properties are configured correctly.
I'm new to Druid, so any pointers would be great.
When I enable "mysql-metadata-storage", the Druid UI returns this error:
Unknown exception / org.apache.druid.java.util.common.IOE: No known server / java.lang.RuntimeException.
Sometimes it's a 500 or 400 error instead; each time I restart Druid, the error changes.
The router log also gives me this error:
ERROR [CoordinatorRuleManager-Exec--0] org.apache.druid.server.router.CoordinatorRuleManager - Exception while polling for rules
There is another issue as well. When I try to configure Druid Basic Security so that I can connect it to Superset, Druid's UI won't even load; I keep getting "connection refused". I'd appreciate help with that too.
PS: "druid-basic-security" is not in the extension load list below because I removed it just so I could reach the web console again, but it was there before.
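For reference, before I removed it, the load list and the security settings looked roughly like this (the security properties are the same ones that are commented out further down in the file; passwords omitted here):

druid.extensions.loadList=["druid-hdfs-storage", "druid-kafka-indexing-service", "druid-datasketches", "mysql-metadata-storage", "druid-basic-security"]
druid.auth.authenticatorChain=["MyBasicMetadataAuthenticator"]
druid.auth.authenticator.MyBasicMetadataAuthenticator.type=basic
druid.auth.authenticator.MyBasicMetadataAuthenticator.credentialsValidator.type=metadata
druid.auth.authenticator.MyBasicMetadataAuthenticator.skipOnFailure=false
druid.auth.authenticator.MyBasicMetadataAuthenticator.authorizerName=MyBasicMetadataAuthorizer
druid.escalator.type=basic
druid.escalator.internalClientUsername=druid_system
druid.escalator.authorizerName=MyBasicMetadataAuthorizer
druid.auth.authorizers=["MyBasicMetadataAuthorizer"]
druid.auth.authorizer.MyBasicMetadataAuthorizer.type=basic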
Here is my common.runtime.properties file:
# Extensions specified in the load list will be loaded by Druid
# We are using local fs for deep storage - not recommended for production - use S3, HDFS, or NFS instead
# We are using local derby for the metadata store - not recommended for production - use MySQL or Postgres instead
# If you specify `druid.extensions.loadList=[]`, Druid won't load any extension from file system.
# If you don't specify `druid.extensions.loadList`, Druid will load all the extensions under root extension directory.
# More info:
# https://druid.apache.org/docs/latest/operations/including-extensions.html
druid.extensions.loadList=["druid-hdfs-storage", "druid-kafka-indexing-service", "druid-datasketches", "mysql-metadata-storage"]
# If you have a different version of Hadoop, place your Hadoop client jar files in your hadoop-dependencies directory
# and uncomment the line below to point to your directory.
#druid.extensions.hadoopDependenciesDir=/my/dir/hadoop-dependencies
#
# Hostname
#
druid.host=localhost
#
# Logging
#
# Log all runtime properties on startup. Disable to avoid logging properties on startup:
druid.startup.logging.logProperties=true
#
# Zookeeper
#
druid.zk.service.host=localhost
druid.zk.paths.base=/druid
# Druid basic security
#druid.auth.authenticatorChain=["MyBasicMetadataAuthenticator"]
#druid.auth.authenticator.MyBasicMetadataAuthenticator.type=basic
#druid.auth.authenticator.MyBasicMetadataAuthenticator.initialAdminPassword=senha1
#druid.auth.authenticator.MyBasicMetadataAuthenticator.initialInternalClientPassword=senha2
#druid.auth.authenticator.MyBasicMetadataAuthenticator.credentialsValidator.type=metadata
#druid.auth.authenticator.MyBasicMetadataAuthenticator.skipOnFailure=false
#druid.auth.authenticator.MyBasicMetadataAuthenticator.authorizerName=MyBasicMetadataAuthorizer
# Escalator
#druid.escalator.type=basic
#druid.escalator.internalClientUsername=druid_system
#druid.escalator.internalClientPassword=senha2
#druid.escalator.authorizerName=MyBasicMetadataAuthorizer
#druid.auth.authorizers=["MyBasicMetadataAuthorizer"]
#druid.auth.authorizer.MyBasicMetadataAuthorizer.type=basic
# Metadata storage
#
# For Derby server on your Druid Coordinator (only viable in a cluster with a single Coordinator, no fail-over):
druid.metadata.storage.type=derby
druid.metadata.storage.connector.connectURI=jdbc:derby://localhost:1527/var/druid/metadata.db;create=true
druid.metadata.storage.connector.host=localhost
druid.metadata.storage.connector.port=1527
# For MySQL (make sure to include the MySQL JDBC driver on the classpath):
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://(myIpHere):3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=diurd
druid.metadata.storage.connector.host=(myIpHere)
# For PostgreSQL:
#druid.metadata.storage.type=postgresql
#druid.metadata.storage.connector.connectURI=jdbc:postgresql://db.example.com:5432/druid
#druid.metadata.storage.connector.user=...
#druid.metadata.storage.connector.password=...
#
# Deep storage
# For local disk (only viable in a cluster if this is a network mount):
druid.storage.type=local
druid.storage.storageDirectory=var/druid/segments
# For HDFS:
#druid.storage.type=hdfs
#druid.storage.storageDirectory=/druid/segments
# For S3:
#druid.storage.type=s3
#druid.storage.bucket=your-bucket
#druid.storage.baseKey=druid/segments
#druid.s3.accessKey=...
#druid.s3.secretKey=...
#
# Indexing service logs
#
# For local disk (only viable in a cluster if this is a network mount):
druid.indexer.logs.type=file
druid.indexer.logs.directory=var/druid/indexing-logs
# For HDFS:
#druid.indexer.logs.type=hdfs
#druid.indexer.logs.directory=/druid/indexing-logs
# For S3:
#druid.indexer.logs.type=s3
#druid.indexer.logs.s3Bucket=your-bucket
#druid.indexer.logs.s3Prefix=druid/indexing-logs
#
# Service discovery
#
druid.selectors.indexing.serviceName=druid/overlord
druid.selectors.coordinator.serviceName=druid/coordinator
#
# Monitoring
#
druid.monitoring.monitors=["org.apache.druid.java.util.metrics.JvmMonitor"]
druid.emitter=noop
druid.emitter.logging.logLevel=info
# Storage type of double columns
# omitting this will lead to doubles being indexed as floats at the storage layer
druid.indexing.doubleStorage=double
#
# Security
#
druid.server.hiddenProperties=["druid.s3.accessKey","druid.s3.secretKey","druid.metadata.storage.connector.password"]
#
# SQL
#
druid.sql.enable=true
# Planning SQL query when there is aggregate distinct in the statement
druid.sql.planner.useGroupingSetForExactDistinct=true
#
# Lookups
#
druid.lookup.enableLookupSyncOnStartup=false
#
# Expression processing config
#
druid.expressions.useStrictBooleans=true
#
# Http client
#
druid.global.http.eagerInitialization=false
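One thing I am not sure about: as pasted, the file has both the Derby section and the MySQL section active at the same time. From the mysql-metadata-storage extension docs, my understanding is that only one metadata storage type should be configured, so I assume the Derby lines should be commented out and only something like this kept (just a sketch; "(myIpHere)" stands in for my real IP, same as above):

# For MySQL (make sure to include the MySQL JDBC driver on the classpath):
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://(myIpHere):3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=diurd

Is that right, or is it fine to leave the Derby lines in place?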
I also tried downgrading Druid to an earlier version, but it made no difference.
What am I missing? Thanks in advance; I really need to get this working.