Errors Deploying Confluent Operator To Kubernetes With Helm

Benson Liu

Mar 25, 2020, 8:16:01 PM
to Confluent Platform
Hi, I am currently trying to deploy Confluent Operator onto my local Kubernetes instance (on a Mac, using Docker Desktop's built-in Kubernetes).
I hit a blocker right away trying to install the Confluent Operator (or any of the components, for that matter):

⇒  helm install \
     operator \
     ./confluent-operator \
     -f ./providers/private.yaml \
     --namespace operator \
     --set operator.enabled=true

Error: unable to build kubernetes objects from release manifest: [unable to recognize "": no matches for kind "KafkaCluster" in version "cluster.confluent.com/v1alpha1", unable to recognize "": no matches for kind "PhysicalStatefulCluster" in version "operator.confluent.cloud/v1", unable to recognize "": no matches for kind "ZookeeperCluster" in version "cluster.confluent.com/v1alpha1"]
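
A quick check confirms those CustomResourceDefinitions are simply not registered yet (assuming kubectl points at the same cluster):

# Before the Operator is installed, neither command should list any Confluent kinds
kubectl get crds | grep -i confluent
kubectl api-resources | grep -i kafkacluster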

I did not really modify the files all that much:

./confluent-operator/values.yaml
# Default values for confluent-operator.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
serviceAccounts:
  ## Confluent Operator requires cluster-level admin access
  ##
  operator:
    name: cc-operator

global:
  provider:
    ## Support values: aws | gcp | azure | private
    ##
    name: "private"
    ## Name of region
    ##
    region: ""
    kubernetes:
      ## Configure if k8s internal domain name is different than svc.cluster.local
      ##
      clusterDomain: ""
      deployment:
        ## If kubernetes is deployed in multi zone mode then specify availability-zones as appropriate
        ## If kubernetes is deployed in single availability zone then specify appropriate values
        ## For the private cloud, use kubernetes node labels as appropriate
        zones: []
    storage:
      ##
      provisioner: "hostpath"
      ## Parameters for storage-class. Please go through
      ## For multi zone deployment, use zones otherwise use zone parameter as appropriate
      parameters: {
        path: /Users/benson.liu/Volumes/operator
      }
      reclaimPolicy: Delete
      mountOptions: []
      volumeBindingMode: ""
      allowedTopologies: []
      ## Add annotations in storage-class resources
      ##
      annotations:
    registry:
      ## Docker registry endpoint where Confluent Images are available.
      ##
      fqdn:
      credential:
        ## Enable if authentication is required to pull images in kubernetes cluster.
        ##
        required: false
        ## Docker login username
        ##
        username: ""
        ## Docker login password
        ##
        password: ""
  ## All containers to run as non root with specific UID/GUID
  ## For OpenShift enable randomUID to true. If randomUID is false, then you must use right UID/GUID and configure confluent-scc.yaml accordingly.
  ## The scripts can be found at scripts/openshift/confluent-scc.yaml
  pod:
    securityContext:
      fsGroup: 1001
      runAsUser: 1001
      runAsGroup: ""
      runAsNonRoot: true
      supplementalGroups: []
      seLinuxOptions: {}
    ## Only enable for OpenShift Platform if random UID is required to run container process.
    ## Follow Readme.md in scripts/openshift folder for more information.
    ## For Debian based CP images, enable will run the container process as root UID.
    randomUID: false
  ##
  ## This is configured for inter-broker-configurations username/password.
  ## All component and clients use these credentials to communicate with Kafka.
  ##
  sasl:
    plain:
      username: test
      password: test123
  ##
  ## Init Container configurations
  ##
  initContainer:
    image:
      repository: confluentinc/cp-init-container-operator
      tag: 5.4.1.0

operator:
  enabled: true

zookeeper:
  enabled: true

kafka:
  enabled: true

schemaregistry:
  enabled: true

controlcenter:
  enabled: false

replicator:
  enabled: false

connect:
  enabled: false

ksql:
  enabled: true

externaldns:
  enabled: false

Amit Gupta

Mar 27, 2020, 3:56:50 PM
to Confluent Platform
Hi Benson,

Hope you're doing well.

In general, you should not modify any of the files in the Operator bundle that you download and extract. This is true of Helm charts generally: it is best not to modify the files within the chart itself; instead, keep your customizations in a separate values file that overrides the default values defined in the chart's own values.yaml.

In particular, you should not modify ./confluent-operator/values.yaml. Instead, take one of the example files like ./providers/private.yaml, make a copy of it, and edit that. Then run the helm install ... command you mentioned above.

Could you please (a) revert the modifications made to the ./confluent-operator/values.yaml file, and (b) share the contents of your modified or copied ./providers/private.yaml?

The problem you're experiencing is likely due to setting <component>.enabled: true for multiple components, which makes Helm try to install everything simultaneously. That won't work because, for example, Kafka is deployed to Kubernetes when the Helm chart creates a Kafka custom resource, but your Kubernetes cluster isn't aware of the Kafka CustomResourceDefinition until the Operator has been deployed (this may be improved in the future). If you avoid modifying the values file in the way you've shown, the Helm command you're running will install only the Operator itself. If you then continue following the documented instructions, you install the remaining components in order: ZooKeeper, Kafka, and so on, as in the sketch below.
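
A minimal sketch of that sequence, based on the command you posted (release names are arbitrary):

# 1. Install the Operator first, so the CustomResourceDefinitions get registered
helm install operator ./confluent-operator -f ./providers/private.yaml \
  --namespace operator --set operator.enabled=true

# 2. Then install each component separately, in dependency order
helm install zookeeper ./confluent-operator -f ./providers/private.yaml \
  --namespace operator --set zookeeper.enabled=true
helm install kafka ./confluent-operator -f ./providers/private.yaml \
  --namespace operator --set kafka.enabled=true
helm install schemaregistry ./confluent-operator -f ./providers/private.yaml \
  --namespace operator --set schemaregistry.enabled=true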

Best,
Amit

Benson Liu

Mar 27, 2020, 5:16:29 PM
to Confluent Platform
You are right. I should not be modifying './confluent-operator/values.yaml'.
After following your instructions, I was able to start the pods. Here is what I currently have in './providers/private.yaml':

## Overriding values for Chart's values.yaml
## Example values to run Confluent Operator in Private Cloud
global:
 provider:
   name: private
   ## if any name which indicates regions
   ##
   region: anyregion
   kubernetes:
      deployment:
        ## If kubernetes is deployed in multi zone mode then specify availability-zones as appropriate
        ## If kubernetes is deployed in single availability zone then specify appropriate values
        ## For the private cloud, use kubernetes node labels as appropriate
        zones:
         - myzones
   ##  more information can be found here
   storage:
     ## Use Retain if you want to persist data after CP cluster has been uninstalled
     reclaimPolicy: Delete
     provisioner: hostpath
     parameters: {}
   ##
   ## Docker registry endpoint where Confluent Images are available.
   ##
   registry:
     fqdn: docker.io
     credential:
       required: false
 sasl:
   plain:
     username: test
     password: test123
## Zookeeper cluster
##
zookeeper:
 name: zookeeper
 replicas: 1
 resources:
   requests:
     cpu: 200m
     memory: 512Mi

## Kafka Cluster
##
kafka:
 name: kafka
 replicas: 1
 resources:
   requests:
     cpu: 200m
     memory: 1Gi
 loadBalancer:
   enabled: false
   domain: ""
 tls:
   enabled: false
   fullchain: |-
   privkey: |-
   cacerts: |-
 metricReporter:
   enabled: false

## Connect Cluster
##
connect:
 name: connectors
 replicas: 0
 tls:
   enabled: false
   ## "" for none, "tls" for mutual auth
   authentication:
     type: ""
   fullchain: |-
   privkey: |-
   cacerts: |-
 loadBalancer:
   enabled: false
   domain: ""
 dependencies:
   kafka:
     bootstrapEndpoint: kafka:9071
     brokerCount: 1
   schemaRegistry:
     enabled: true
     url: http://schemaregistry:8081
## Replicator Connect Cluster
##
replicator:
 name: replicator
 replicas: 0
 tls:
   enabled: false
   authentication:
     type: ""
   fullchain: |-
   privkey: |-
   cacerts: |-
 loadBalancer:
   enabled: false
   domain: ""
 dependencies:
   kafka:
     brokerCount: 1
     bootstrapEndpoint: kafka:9071
##
## Schema Registry
##
schemaregistry:
 name: schemaregistry
 tls:
   enabled: false
   authentication:
     type: ""
   fullchain: |-
   privkey: |-
   cacerts: |-
 loadBalancer:
   enabled: false
   domain: ""
 dependencies:
   kafka:
     brokerCount: 1
     bootstrapEndpoint: kafka:9071

##
## KSQL
##
ksql:
 name: ksql
 replicas: 1
 tls:
   enabled: false
   authentication:
     type: ""
   fullchain: |-
   privkey: |-
   cacerts: |-
 loadBalancer:
   enabled: false
   domain: ""
 dependencies:
   kafka:
     brokerCount: 1
     bootstrapEndpoint: kafka:9071
     brokerEndpoints: kafka-0.kafka:9071,kafka-1.kafka:9071,kafka-2.kafka:9071
   schemaRegistry:
     enabled: false
     tls:
       enabled: false
       authentication:
         type: ""
     url: http://schemaregistry:8081

## Control Center (C3) Resource configuration
##
controlcenter:
 name: controlcenter
 license: ""
 ##
 ## C3 dependencies
 ##
 dependencies:
   c3KafkaCluster:
     brokerCount: 1
     bootstrapEndpoint: kafka:9071
     zookeeper:
       endpoint: zookeeper:2181
   connectCluster:
     enabled: true
     url: http://connectors:8083
   ksql:
     enabled: true
     url: http://ksql:9088
   schemaRegistry:
     enabled: true
     url: http://schemaregistry:8081
 ##
 ## C3 External Access
 ##
 loadBalancer:
   enabled: false
   domain: ""
 ##
 ## TLS configuration
 ##
 tls:
   enabled: false
   authentication:
     type: ""
   fullchain: |-
   privkey: |-
   cacerts: |-
 ##
 ## C3 authentication
 ##
 auth:
   basic:
     enabled: true
     ##
     ## map with key as user and value as password and role
     property:
       admin: Developer1,Administrators
       disallowed: no_access


It also looked like the helm chart created the services:
⇒  kubectl get services --all-namespaces
NAMESPACE     NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                        AGE
default       kubernetes                  ClusterIP   10.96.0.1       <none>        443/TCP                                        8d
docker        compose-api                 ClusterIP   10.103.2.242    <none>        443/TCP                                        8d
kube-system   kube-dns                    ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP                         8d
operator      ksql                        ClusterIP   None            <none>        8088/TCP,9088/TCP,7203/TCP,7777/TCP            16m
operator      ksql-0-internal             ClusterIP   10.100.201.91   <none>        8088/TCP,9088/TCP,7203/TCP,7777/TCP            16m
operator      schemaregistry              ClusterIP   None            <none>        8081/TCP,9081/TCP,7203/TCP,7777/TCP            16m
operator      schemaregistry-0-internal   ClusterIP   10.110.7.232    <none>        8081/TCP,9081/TCP,7203/TCP,7777/TCP            16m
operator      schemaregistry-1-internal   ClusterIP   10.111.80.121   <none>        8081/TCP,9081/TCP,7203/TCP,7777/TCP            16m
operator      zookeeper                   ClusterIP   None            <none>        3888/TCP,2888/TCP,2181/TCP,7203/TCP,7777/TCP   17m
operator      zookeeper-0-internal        ClusterIP   10.103.136.55   <none>        3888/TCP,2888/TCP,2181/TCP,7203/TCP,7777/TCP   17m
It looks like we would have to manually create an ingress for people to access these services. Is this right?
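
For purely local access, port-forwarding may be enough in the meantime (a minimal sketch using the service names from the output above):

# Forward Schema Registry and KSQL to localhost; kill the processes to stop
kubectl -n operator port-forward svc/schemaregistry 8081:8081 &
kubectl -n operator port-forward svc/ksql 8088:8088 &
curl http://localhost:8081/subjects    # quick check that Schema Registry responds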

On a side note, my colleague came up with an alternative approach using Docker Compose, for anyone who wants to run these services locally for development.
---
version: '2'
services:
  zookeeper:
    image: "confluentinc/cp-zookeeper:latest"
    ports:
      - '2181:32181'
    environment:
      ZOOKEEPER_CLIENT_PORT: 32181
      ZOOKEEPER_TICK_TIME: 2000

  kafka:
    image: "confluentinc/cp-kafka:latest"
    ports:
      - '9092:9092'
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:32181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:39092,PLAINTEXT_HOST://localhost:9092
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 100

  schema-registry:
    image: "confluentinc/cp-schema-registry:latest"
    ports:
      - '8081:8081'
    depends_on:
      - zookeeper
      - kafka
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: zookeeper:32181

  ksql-server:
    image: "confluentinc/cp-ksql-server:latest"
    ports:
      - '8088:8088'
    depends_on:
      - kafka
      - schema-registry
    environment:
      KSQL_BOOTSTRAP_SERVERS: kafka:39092
      KSQL_LISTENERS: http://0.0.0.0:8088
      KSQL_KSQL_SCHEMA_REGISTRY_URL: http://schema-registry:8081
You only need to run 'docker-compose up' to get all of these containers running locally for development.
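
Once the containers are up, a smoke test along these lines can verify the broker (a sketch; the topic name is arbitrary, and --broker-list is the producer flag on the Kafka versions these images shipped at the time):

docker-compose up -d
# Create a topic on the internal listener, then produce and consume one message
docker-compose exec kafka kafka-topics --bootstrap-server kafka:39092 \
  --create --topic smoke-test --partitions 1 --replication-factor 1
docker-compose exec kafka bash -c \
  'echo hello | kafka-console-producer --broker-list kafka:39092 --topic smoke-test'
docker-compose exec kafka kafka-console-consumer --bootstrap-server kafka:39092 \
  --topic smoke-test --from-beginning --max-messages 1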

Amit Gupta

Mar 27, 2020, 5:26:04 PM
to confluent...@googlegroups.com
Hi Benson,

I'm not sure whether Docker Desktop for Mac supports LoadBalancer services. If it does, you can use those, and the Confluent Operator Helm charts can help you create them (see here). I know Minikube supports this; I'm not sure about Docker Desktop.
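
One way to check empirically is with a throwaway nginx deployment (a quick sketch; all names here are arbitrary):

kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --type=LoadBalancer --port=80
kubectl get svc lb-test    # EXTERNAL-IP stays <pending> if LoadBalancer isn't supported
kubectl delete svc,deployment lb-test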

Best,
Amit


--
Amit Kumar Gupta
Group Product Manager, Confluent

Benson Liu

Mar 27, 2020, 6:53:33 PM
to Confluent Platform
I got my colleague to try out this config on Minikube, and I also ran it on Docker Desktop:
## Overriding values for Chart's values.yaml
## Example values to run Confluent Operator in Private Cloud
global:
  provider:
    name: private
    ## if any name which indicates regions
    ##
    region: anyregion
    kubernetes:
      deployment:
        ## If kubernetes is deployed in multi zone mode then specify availability-zones as appropriate
        ## If kubernetes is deployed in single availability zone then specify appropriate values
        ## For the private cloud, use kubernetes node labels as appropriate
        zones:
          - myzones
    ## more information can be found here
    storage:
      ## Use Retain if you want to persist data after CP cluster has been uninstalled
      reclaimPolicy: Delete
      provisioner: docker.io/hostpath

## Kafka Cluster
##
kafka:
  loadBalancer:
    enabled: true
    type: internal
    domain: localhost
  tls:
    enabled: false
    fullchain: |-
    privkey: |-
    cacerts: |-
  metricReporter:
    enabled: false

## Connect Cluster
##
connect:
  name: connectors
  replicas: 0
  tls:
    enabled: false
    ## "" for none, "tls" for mutual auth
    authentication:
      type: ""
    fullchain: |-
    privkey: |-
    cacerts: |-
  loadBalancer:
    enabled: true
    type: internal
    domain: localhost
  dependencies:
    kafka:
      bootstrapEndpoint: kafka:9071
      brokerCount: 1
    schemaRegistry:
      enabled: true

## Replicator Connect Cluster
##
replicator:
  name: replicator
  replicas: 0
  tls:
    enabled: false
    authentication:
      type: ""
    fullchain: |-
    privkey: |-
    cacerts: |-
  loadBalancer:
    enabled: true
    type: internal
    domain: localhost
  dependencies:
    kafka:
      brokerCount: 1
      bootstrapEndpoint: kafka:9071

##
## Schema Registry
##
schemaregistry:
  name: schemaregistry
  tls:
    enabled: false
    authentication:
      type: ""
    fullchain: |-
    privkey: |-
    cacerts: |-
  loadBalancer:
    enabled: true
    type: internal
    domain: localhost
  dependencies:
    kafka:
      brokerCount: 1
      bootstrapEndpoint: kafka:9071

##
## KSQL
##
ksql:
  name: ksql
  replicas: 1
  tls:
    enabled: false
    authentication:
      type: ""
    fullchain: |-
    privkey: |-
    cacerts: |-
  loadBalancer:
    enabled: true
    type: internal
    domain: localhost

This seems to have gotten us pretty far along.
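
A quick way to confirm everything came up (assuming the operator namespace used above):

kubectl -n operator get pods    # all pods should eventually be Running and Ready
kubectl -n operator get svc     # LoadBalancer services should show an external IP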