How to restart vttablet's slave mysql


etha...@retigrid.com

Jan 15, 2018, 9:31:51 PM
to vitess

Hello, I am new to Vitess.

I want to restart vttablet's slave MySQL, because it has lagged too far behind the master MySQL and Slave_SQL_Running is No (from SHOW SLAVE STATUS\G).

At first, my plan was to restart all the vttablets after backing up the master MySQL, but I couldn't take a backup of the master MySQL (https://github.com/youtube/vitess/issues/2903).

How do I restart vttablet's slave MySQL and resync it with the master?

Please let me know.


etha...@retigrid.com

Jan 15, 2018, 9:34:32 PM
to vitess
I followed the steps below.

Is there any better way?

Harshit Gangal

Jan 15, 2018, 9:58:28 PM
to vit...@googlegroups.com
I used the steps below about 8 months ago; ideally this should still work if nothing much has changed in this area.

If you can take downtime:
  1. Stop the master.
  2. Start the master as rdonly.
  3. Check that the tablet type is no longer master; if it is, try changing the tablet type.
  4. Take a backup using the Backup command.
  5. Change the tablet type back to master, or run InitShardMaster.
  6. Start a new tablet; it should restore from the backup and attach to the master.
Try this on a dummy topology before doing it in PROD.
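A rough sketch of those steps as vtctl commands, using the placeholder names that appear later in this thread (cell "test", keyspace "test_keyspace", shard "0", tablet "test-0000000100"); this is an illustration to adapt, not a verified runbook:

    # 1-2. Stop the master tablet and bring it back up with tablet type rdonly
    #      (done via the tablet's startup configuration, not vtctl).
    # 3. Verify it no longer registers as master; if it does, try changing the type.
    ./kvtctl.sh ListAllTablets test
    ./kvtctl.sh ChangeSlaveType test-0000000100 rdonly
    # 4. Take a backup from that tablet.
    ./kvtctl.sh Backup test-0000000100
    # 5. Promote it back to master for the shard.
    ./kvtctl.sh InitShardMaster -force test_keyspace/0 test-0000000100
    # 6. Bring up the new replica tablet; on startup it should restore from
    #    the backup and start replicating from the master.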

Thanks,
Harshit


etha...@retigrid.com

Jan 15, 2018, 10:20:11 PM
to vitess
Thanks for your advice, but I am a newbie to Vitess.
How do I restart the master as rdonly and start a new tablet?

Assuming the cell is "test", the keyspace is "test_keyspace", and the master vttablet is "test-0000000100":
  1. Stop the master / 2. Start the master as rdonly: I saw the API at http://vitess.io/reference/vtctl/#tablets , but I don't know how to do this.
  3. Check that the tablet type is not master; if it is, change the tablet type: kvtctl.sh ListAllTablets test
  4. Take a backup using the Backup command: kvtctl.sh Backup test-0000000100
  5. Change the tablet type to master or run InitShardMaster: kvtctl.sh InitShardMaster -force test_keyspace/0 test-0000000100
  6. Start a new tablet; it should restore from the backup and attach to the master: I don't know how to do this either, but I think I could do it with vttablet-pod-template.yaml (located in github.com/youtube/vitess/examples/kubernetes).

I would appreciate it if you could help me.

Harshit Gangal

Jan 15, 2018, 10:38:50 PM
to vit...@googlegroups.com
In the vttablet-up script there is an argument called tablet_type; you can set it to rdonly.

I'm not familiar with the Kubernetes setup, but you can check there.


etha...@retigrid.com

Jan 15, 2018, 10:51:52 PM
to vitess
But as far as I know, if a pod is restarted, its data is gone, because vttablet's data is stored inside the container.
So I think I must change the tablet type of the master dynamically while vttablet is running.

Sugu Sougoumarane

Jan 16, 2018, 12:41:46 AM
to vitess
If I remember correctly, vttablet should be in one container and mysqlctld+mysql should be in a different container. So, you should be able to restart vttablet without bringing down mysql.


etha...@retigrid.com

Jan 16, 2018, 8:43:04 AM
to vitess
You are right. But how do I restart one of the containers in a pod?
I saw some instructions for this, but I don't know how to do it in Vitess.

I also saw this issue:
   https://github.com/youtube/vitess/issues/2903

From that issue, I think it is impossible to restart the vttablet master container in Vitess.

Sugu Sougoumarane

Jan 16, 2018, 1:41:13 PM
to vitess, Anthony Yeh, Derek Perkins
Let's see if Anthony or Derek have suggestions.


Anthony Yeh

Jan 16, 2018, 2:47:53 PM
to Sugu Sougoumarane, vitess, Derek Perkins
It looks like you're using the config from vitess/examples/kubernetes. That config uses ephemeral, local storage for the data volume, so it's only appropriate if you can maintain (1) a recent, valid backup at all times, and (2) enough replicas to always have one running and caught up (ideally enforced through semi-sync) even if some go down.

It sounds like you've gotten into a situation where you have neither (1) nor (2), which is quite precarious. Earlier you mentioned the default "test_keyspace" names. Is this just an experimental Vitess cluster and you want to know how to recover from this for future reference? Or do you actually care about the data that's in there now?

If this was just a test, I recommend starting over and basing your Kubernetes config on our other example in the vitess/helm directory. That one optionally supports PersistentVolume (via StatefulSet), which is a safer way to run when you have a small number of replicas. For example, if you had gotten into this situation when using PersistentVolume (PV), you could delete and recreate the master Pod as a replica but with the same PersistentVolumeClaim (PVC), and then take a backup from that tablet.
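(To make the PVC point concrete: with a StatefulSet, deleting a Pod does not delete its PersistentVolumeClaim, and the recreated Pod reattaches the same claim. A minimal sketch, with a placeholder Pod name:)

    # Delete only the Pod; the StatefulSet controller recreates it and
    # reattaches the same PVC, so the data volume survives.
    kubectl delete pod <vttablet-pod-name>
    kubectl get pvc      # the claim for that Pod is still Bound
    kubectl get pods -w  # watch the replacement Pod come back up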

If you actually care about the data that's in this stuck cluster, I suggest taking a dump at the MySQL level and using it to repopulate Vitess after starting over -- making sure to always have a recent backup, and either running more replicas or using PersistentVolume. One way to take the dump would be to `kubectl exec` into the mysql container in the master vttablet Pod, and use the shell to take a SQL dump and upload it somewhere remote.
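(A minimal sketch of that dump approach; the Pod name, MySQL user, and socket path are assumptions based on the usual Vitess data layout, so verify them against your setup:)

    # Open a shell in the mysql container of the master vttablet Pod.
    kubectl exec -it <master-vttablet-pod> -c mysql -- bash
    # Inside the container: take a consistent dump over the local socket.
    mysqldump -u vt_dba --socket=/vt/vtdataroot/vt_0000000100/mysql.sock \
        --all-databases --single-transaction > /tmp/dump.sql
    # Back outside the Pod: copy the dump out and upload it somewhere remote.
    kubectl cp <master-vttablet-pod>:/tmp/dump.sql ./dump.sql -c mysql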

Another option might be to try to "rescue" the existing replica and take a backup from it, then delete the Pod and let it restore. Whether this works would depend on exactly what's broken about it at the MySQL replication level. The first thing I would look for is any entries in the vttablet logs for that Pod that indicate why `START SLAVE` is failing.

etha...@retigrid.com

Jan 16, 2018, 8:09:45 PM
to vitess

I was trying to evaluate whether Vitess is suitable for our service, and the current situation occurred during that test.
I plan to use PersistentVolume in my Kubernetes setup, following your advice.
Thanks for the brilliant and detailed advice.

Sugu Sougoumarane

Jan 16, 2018, 9:17:14 PM
to vitess
We have a slack channel. Let us know if you want an invite.


etha...@retigrid.com

Jan 17, 2018, 3:51:39 AM
to vitess
I'd like you to invite me. 
Thanks. 

etha...@retigrid.com

Jan 18, 2018, 9:02:30 PM
to vitess


I tried starting over with a Kubernetes setup based on the vitess/helm directory, but I failed to designate one of the tablets as master.

The contents below are long, but please read through them.

(1) I installed the chart. My values.yaml looks like this:

topology:
  cells:
    - name: "global"
      etcd:
        replicas: 3
      orchestrator:
        # NOTE: Our config currently only supports 1 orchestrator replica.
        replicas: 1
    - name: "zone1"
      etcd:
        replicas: 3
      vtctld:
        replicas: 1
      vtgate:
        replicas: 3
      keyspaces:
        - name: "metering_keyspace"
          shards:
            - name: "0" 
              tablets:
                - type: "replica"
                  uidBase: 100 
                  vttablet:
                    replicas: 2
                - type: "rdonly"
                  uidBase: 103 
                  vttablet:
                    replicas: 1
        - name: "info_keyspace"
          shards:
            - name: "0" 
              tablets:
                - type: "replica"
                  uidBase: 1 
                  vttablet:
                    replicas: 2

..... The rest is the same as the original, except that the vttablet pods use StatefulSet and PersistentVolume.

(2) When I designated one of the tablets as the initial master, I got error messages.

(2-1) ListAllTablets before InitShardMaster
    joseongeon-ui-MacBook-Air:kubernetes wizdear$ ./kvtctl.sh ListAllTablets zone1
    Starting port forwarding to vtctld...
    zone1-0908016100 metering_keyspace 0 rdonly zone1-metering-keyspace-0-rdonly-0.vttablet:15002 zone1-metering-keyspace-0-rdonly-0.vttablet:3306 []
    zone1-1178259800 info_keyspace 0 replica zone1-info-keyspace-0-replica-0.vttablet:15002 zone1-info-keyspace-0-replica-0.vttablet:3306 []
    zone1-1178259801 info_keyspace 0 replica zone1-info-keyspace-0-replica-1.vttablet:15002 zone1-info-keyspace-0-replica-1.vttablet:3306 []
    zone1-1274740500 metering_keyspace 0 replica zone1-metering-keyspace-0-replica-0.vttablet:15002 zone1-metering-keyspace-0-replica-0.vttablet:3306 []
    zone1-1274740501 metering_keyspace 0 replica zone1-metering-keyspace-0-replica-1.vttablet:15002 zone1-metering-keyspace-0-replica-1.vttablet:3306 []

(2-2) InitShardMaster metering_keyspace
    joseongeon-ui-MacBook-Air:kubernetes wizdear$ ./kvtctl.sh InitShardMaster --force metering_keyspace/0 zone1-1274740500
    Starting port forwarding to vtctld...
    W0119 10:23:29.435298   90409 main.go:58] W0119 01:23:27.025078 reparent.go:181] master-elect tablet zone1-1274740500 is not the shard master, proceeding anyway as -force was used
    W0119 10:23:29.436165   90409 main.go:58] W0119 01:23:27.123124 reparent.go:187] master-elect tablet zone1-1274740500 is not a master in the shard, proceeding anyway as -force was used
    E0119 10:23:29.787771   90409 main.go:61] Remote error: rpc error: code = Unavailable desc = grpc: the connection is unavailable

(2-3) InitShardMaster info_keyspace
    joseongeon-ui-MacBook-Air:kubernetes wizdear$ ./kvtctl.sh InitShardMaster --force info_keyspace/0 zone1-1178259800
    Starting port forwarding to vtctld...
    W0119 10:24:06.748730   90968 main.go:58] W0119 01:24:04.428200 reparent.go:181] master-elect tablet zone1-1178259800 is not the shard master, proceeding anyway as -force was used
    W0119 10:24:06.750092   90968 main.go:58] W0119 01:24:04.428526 reparent.go:187] master-elect tablet zone1-1178259800 is not a master in the shard, proceeding anyway as -force was used
    W0119 10:24:08.031179   90968 main.go:58] W0119 01:24:05.725140 reparent.go:275] master failed to PopulateReparentJournal, canceling slaves
    E0119 10:24:08.083721   90968 main.go:61] Remote error: rpc error: code = Unknown desc = failed to PopulateReparentJournal on master: rpc error: code = Unavailable desc = grpc: the connection is unavailable

(2-4) ListAllTablets after InitShardMaster
    joseongeon-ui-MacBook-Air:kubernetes wizdear$ ./kvtctl.sh ListAllTablets zone1
    Starting port forwarding to vtctld...
    zone1-0908016100 metering_keyspace 0 rdonly zone1-metering-keyspace-0-rdonly-0.vttablet:15002 zone1-metering-keyspace-0-rdonly-0.vttablet:3306 []
    zone1-1178259800 info_keyspace 0 master zone1-info-keyspace-0-replica-0.vttablet:15002 zone1-info-keyspace-0-replica-0.vttablet:3306 []
    zone1-1178259801 info_keyspace 0 replica zone1-info-keyspace-0-replica-1.vttablet:15002 zone1-info-keyspace-0-replica-1.vttablet:3306 []
    zone1-1274740500 metering_keyspace 0 replica zone1-metering-keyspace-0-replica-0.vttablet:15002 zone1-metering-keyspace-0-replica-0.vttablet:3306 []
    zone1-1274740501 metering_keyspace 0 replica zone1-metering-keyspace-0-replica-1.vttablet:15002 zone1-metering-keyspace-0-replica-1.vttablet:3306 []


(3) I checked the vtctld log:

I0119 01:23:25.655005       1 locks.go:356] Locking shard metering_keyspace/0 for action InitShardMaster(zone1-1274740500)
W0119 01:23:27.122479       1 reparent.go:181] master-elect tablet zone1-1274740500 is not the shard master, proceeding anyway as -force was used
W0119 01:23:27.123190       1 reparent.go:187] master-elect tablet zone1-1274740500 is not a master in the shard, proceeding anyway as -force was used
I0119 01:23:27.123864       1 reparent.go:213] resetting replication on tablet zone1-1274740500
I0119 01:23:27.124263       1 reparent.go:213] resetting replication on tablet zone1-1274740501
I0119 01:23:27.124475       1 reparent.go:213] resetting replication on tablet zone1-0908016100
W0119 01:23:27.422699       1 clientconn.go:934] grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: operation was canceled"; Reconnecting to {zone1-metering-keyspace-0-replica-1.vttablet:16002 0  <nil>}
W0119 01:23:27.422774       1 clientconn.go:1028] grpc: addrConn.transportMonitor exits due to: context canceled
W0119 01:23:27.422947       1 clientconn.go:934] grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: operation was canceled"; Reconnecting to {zone1-metering-keyspace-0-replica-0.vttablet:16002 0  <nil>}
W0119 01:23:27.422960       1 clientconn.go:1028] grpc: addrConn.transportMonitor exits due to: context canceled
I0119 01:23:27.423059       1 reparent.go:226] initializing master on zone1-1274740500
W0119 01:23:27.423374       1 clientconn.go:934] grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: operation was canceled"; Reconnecting to {zone1-metering-keyspace-0-rdonly-0.vttablet:16002 0  <nil>}
W0119 01:23:27.423387       1 clientconn.go:1028] grpc: addrConn.transportMonitor exits due to: context canceled
W0119 01:23:27.423873       1 clientconn.go:934] grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: lookup zone1-metering-keyspace-0-replica-0.vttablet on 10.43.240.10:53: dial udp 10.43.240.10:53: operation was canceled"; Reconnecting to {zone1-metering-keyspace-0-replica-0.vttablet:16002 0  <nil>}
I0119 01:23:27.423942       1 locks.go:392] Unlocking shard metering_keyspace/0 for action InitShardMaster(zone1-1274740500) with error rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0119 01:23:27.423971       1 lock.go:266] results of /vt/keyspaces/metering_keyspace/0/1: {
  "Action": "InitShardMaster(zone1-1274740500)",
  "HostName": "vtctld-2545618834-nsnjt",
  "UserName": "vitess",
  "Time": "2018-01-19T01:23:25Z",
  "Status": "Error: rpc error: code = Unavailable desc = grpc: the connection is unavailable"
}
W0119 01:23:27.424215       1 clientconn.go:696] Failed to dial zone1-metering-keyspace-0-replica-0.vttablet:16002: context canceled; please retry.
I0119 01:24:03.122796       1 locks.go:356] Locking shard info_keyspace/0 for action InitShardMaster(zone1-1178259800)
W0119 01:24:04.428259       1 reparent.go:181] master-elect tablet zone1-1178259800 is not the shard master, proceeding anyway as -force was used
W0119 01:24:04.428543       1 reparent.go:187] master-elect tablet zone1-1178259800 is not a master in the shard, proceeding anyway as -force was used
I0119 01:24:04.429223       1 reparent.go:213] resetting replication on tablet zone1-1178259801
I0119 01:24:04.429652       1 reparent.go:213] resetting replication on tablet zone1-1178259800
W0119 01:24:04.739655       1 clientconn.go:934] grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: operation was canceled"; Reconnecting to {zone1-info-keyspace-0-replica-0.vttablet:16002 0  <nil>}
W0119 01:24:04.739721       1 clientconn.go:1028] grpc: addrConn.transportMonitor exits due to: context canceled
I0119 01:24:04.822447       1 reparent.go:226] initializing master on zone1-1178259800
W0119 01:24:04.823393       1 clientconn.go:934] grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: operation was canceled"; Reconnecting to {zone1-info-keyspace-0-replica-1.vttablet:16002 0  <nil>}
W0119 01:24:04.823545       1 clientconn.go:1028] grpc: addrConn.transportMonitor exits due to: context canceled
I0119 01:24:05.722920       1 reparent.go:261] initializing slave zone1-1178259801
W0119 01:24:05.723436       1 clientconn.go:934] grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: operation was canceled"; Reconnecting to {zone1-info-keyspace-0-replica-0.vttablet:16002 0  <nil>}
W0119 01:24:05.723667       1 clientconn.go:1028] grpc: addrConn.transportMonitor exits due to: context canceled
I0119 01:24:05.723914       1 reparent.go:254] populating reparent journal on new master zone1-1178259800
W0119 01:24:05.724987       1 clientconn.go:934] grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: lookup zone1-info-keyspace-0-replica-0.vttablet on 10.43.240.10:53: dial udp 10.43.240.10:53: operation was canceled"; Reconnecting to {zone1-info-keyspace-0-replica-0.vttablet:16002 0  <nil>}
W0119 01:24:05.725188       1 reparent.go:275] master failed to PopulateReparentJournal, canceling slaves
I0119 01:24:05.725666       1 locks.go:392] Unlocking shard info_keyspace/0 for action InitShardMaster(zone1-1178259800) with error failed to PopulateReparentJournal on master: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0119 01:24:05.725827       1 lock.go:266] results of /vt/keyspaces/info_keyspace/0/2: {
  "Action": "InitShardMaster(zone1-1178259800)",
  "HostName": "vtctld-2545618834-nsnjt",
  "UserName": "vitess",
  "Time": "2018-01-19T01:24:03Z",
  "Status": "Error: failed to PopulateReparentJournal on master: rpc error: code = Unavailable desc = grpc: the connection is unavailable"
}
W0119 01:24:05.726191       1 clientconn.go:696] Failed to dial zone1-info-keyspace-0-replica-0.vttablet:16002: context canceled; please retry.
W0119 01:24:05.726491       1 clientconn.go:934] grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: operation was canceled"; Reconnecting to {zone1-info-keyspace-0-replica-1.vttablet:16002 0  <nil>}
W0119 01:24:05.726650       1 clientconn.go:696] Failed to dial zone1-info-keyspace-0-replica-1.vttablet:16002: grpc: the connection is closing; please retry.
zone1-0908016100 metering_keyspace 0 rdonly zone1-metering-keyspace-0-rdonly-0.vttablet:15002 zone1-metering-keyspace-0-rdonly-0.vttablet:3306 []
zone1-1178259800 info_keyspace 0 master zone1-info-keyspace-0-replica-0.vttablet:15002 zone1-info-keyspace-0-replica-0.vttablet:3306 []
zone1-1178259801 info_keyspace 0 replica zone1-info-keyspace-0-replica-1.vttablet:15002 zone1-info-keyspace-0-replica-1.vttablet:3306 []
zone1-1274740500 metering_keyspace 0 replica zone1-metering-keyspace-0-replica-0.vttablet:15002 zone1-metering-keyspace-0-replica-0.vttablet:3306 []
zone1-1274740501 metering_keyspace 0 replica zone1-metering-keyspace-0-replica-1.vttablet:15002 zone1-metering-keyspace-0-replica-1.vttablet:3306 []
I0119 01:25:24.098118       1 locks.go:356] Locking shard metering_keyspace/0 for action InitShardMaster(zone1-1274740500)
W0119 01:25:25.523832       1 reparent.go:181] master-elect tablet zone1-1274740500 is not the shard master, proceeding anyway as -force was used
W0119 01:25:25.523867       1 reparent.go:187] master-elect tablet zone1-1274740500 is not a master in the shard, proceeding anyway as -force was used
I0119 01:25:25.523959       1 reparent.go:213] resetting replication on tablet zone1-1274740501
I0119 01:25:25.524060       1 reparent.go:213] resetting replication on tablet zone1-1274740500
I0119 01:25:25.524101       1 reparent.go:213] resetting replication on tablet zone1-0908016100
W0119 01:25:25.780136       1 clientconn.go:934] grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: operation was canceled"; Reconnecting to {zone1-metering-keyspace-0-rdonly-0.vttablet:16002 0  <nil>}
W0119 01:25:25.780460       1 clientconn.go:1028] grpc: addrConn.transportMonitor exits due to: context canceled
W0119 01:25:25.849399       1 clientconn.go:934] grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: operation was canceled"; Reconnecting to {zone1-metering-keyspace-0-replica-1.vttablet:16002 0  <nil>}
W0119 01:25:25.922455       1 clientconn.go:1028] grpc: addrConn.transportMonitor exits due to: context canceled
I0119 01:25:26.001097       1 reparent.go:226] initializing master on zone1-1274740500
W0119 01:25:26.001943       1 clientconn.go:934] grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: operation was canceled"; Reconnecting to {zone1-metering-keyspace-0-replica-0.vttablet:16002 0  <nil>}
W0119 01:25:26.002100       1 clientconn.go:1028] grpc: addrConn.transportMonitor exits due to: context canceled
W0119 01:25:26.002774       1 clientconn.go:934] grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: lookup zone1-metering-keyspace-0-replica-0.vttablet on 10.43.240.10:53: dial udp 10.43.240.10:53: operation was canceled"; Reconnecting to {zone1-metering-keyspace-0-replica-0.vttablet:16002 0  <nil>}
I0119 01:25:26.003239       1 locks.go:392] Unlocking shard metering_keyspace/0 for action InitShardMaster(zone1-1274740500) with error rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0119 01:25:26.003413       1 lock.go:266] results of /vt/keyspaces/metering_keyspace/0/3: {
  "Action": "InitShardMaster(zone1-1274740500)",
  "HostName": "vtctld-2545618834-nsnjt",
  "UserName": "vitess",
  "Time": "2018-01-19T01:25:24Z",
  "Status": "Error: rpc error: code = Unavailable desc = grpc: the connection is unavailable"
}
W0119 01:25:26.003728       1 clientconn.go:696] Failed to dial zone1-metering-keyspace-0-replica-0.vttablet:16002: context canceled; please retry.
zone1-0908016100 metering_keyspace 0 rdonly zone1-me

Please give me advice.


Derek Perkins

Jan 19, 2018, 9:57:59 AM
to vitess
I'm not sure what's causing your errors, but there is a lot coming up to help run Vitess in Kubernetes. If you take a look at this PR - https://github.com/youtube/vitess/pull/3487 - you can try the updated chart to see if that helps your situation at all. We're also working on a Vitess Operator to help monitor and manage your k8s Vitess install.

Sugu Sougoumarane

Jan 19, 2018, 12:58:52 PM
to vitess
Usually, the gRPC errors are just noise; they are not in themselves an indication of a problem. It's something we have to live with until this bug is fixed: https://github.com/grpc/grpc-go/issues/1633.

I have a feeling that something timed out in the middle. Most likely, some vttablets didn't come up in time, which caused InitShardMaster to fail, and everything after that failed as a consequence.
If so, redoing those steps manually should just work.
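For example, a sketch of the manual retry, reusing the tablet aliases from your listing above (run it once every tablet shows up in ListAllTablets and is reachable):

    ./kvtctl.sh ListAllTablets zone1
    ./kvtctl.sh InitShardMaster --force metering_keyspace/0 zone1-1274740500
    ./kvtctl.sh InitShardMaster --force info_keyspace/0 zone1-1178259800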

Your other option is to use Derek's work. He's put a lot of effort and testing into the new PR, so it may be worth trying that instead.


etha...@retigrid.com

Jan 22, 2018, 8:20:03 PM
to vitess

Thanks for the advice.
I think this PR (https://github.com/youtube/vitess/pull/3487) has been merged into the helm branch.
I will try it.
