--
You received this message because you are subscribed to the Google Groups "metallb-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to metallb-user...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/metallb-users/ffa48e9e-685c-40ea-a092-67fa7e136653n%40googlegroups.com.
--
You received this message because you are subscribed to the Google Groups "metallb-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to metallb-user...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/metallb-users/CAHAts2iXg7U%2Btb3Mh2GjPUhdgq9jwNrnZnxG4YpcXOVQFDPGwg%40mail.gmail.com.
Regarding the METALLB_ML_BIND_ADDR setting:
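For reference, in the stock MetalLB manifests METALLB_ML_BIND_ADDR is normally injected into the speaker container from the pod's own IP via the downward API. A sketch of what that excerpt looks like (taken from the upstream manifests; double-check against your deployed DaemonSet, especially if it came from a chart):

```yaml
# Excerpt (sketch) of the speaker DaemonSet container spec:
env:
  - name: METALLB_ML_BIND_ADDR
    valueFrom:
      fieldRef:
        fieldPath: status.podIP   # memberlist binds to the pod's own IP
```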
Thanks a lot for the detailed explanation!
I think I have an idea what the bug can be. If once you hit the issue,
you restart all the speaker pods (one by one), does that fix it by any
chance?
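For anyone following along, the one-by-one restart can be done like this (a sketch: it assumes the stock metallb-system namespace and the app=metallb,component=speaker labels; adjust both if your install, e.g. a Helm chart, names things differently):

```shell
# Delete the speaker pods one at a time; the DaemonSet recreates each one.
for pod in $(kubectl -n metallb-system get pods \
    -l app=metallb,component=speaker -o name); do
  kubectl -n metallb-system delete "$pod" --wait=true
  # Give the replacement a moment to rejoin memberlist before the next one.
  sleep 10
done
```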
Etienne Champetier
Operations Engineer
Skype id: etiennechampetier
Web: www.anevia.com
CONFIDENTIALITY NOTICE: The information in this e-mail message is legally privileged and contains confidential information intended only for the use of the individual or entity named above. Unauthorized review, dissemination, distribution, copying or other use of this e-mail message, including all attachments, is strictly prohibited and may be unlawful. If you have received this e-mail message in error, please notify us immediately by telephone at +33 1 41983240 or by return e-mail and destroy this message and all copies thereof, including all attachments.
Hi All,

On Wed, Sep 9, 2020 at 10:09 AM, Rodrigo Campos <rod...@kinvolk.io> wrote:
> Thanks a lot for the detailed explanation!
> I think I have an idea what the bug can be. Once you hit the issue, if
> you restart all the speaker pods (one by one), does that fix it by any
> chance?

Just restarting one speaker should be enough, I think, and you should see MemberList logs.
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ kubectl create ns metallb-system
$ helm fetch --untar bitnami/metallb
$ vim metallb/values.yaml
$ helm install metallb -n metallb-system -f metallb/values.yaml bitnami/metallb
The values.yaml is attached. Behind MetalLB is the NGINX ingress controller, version 0.34.1, exposed as a Service of type LoadBalancer that receives an IP from the MetalLB pool. The floating IP has an A record, and multiple applications have CNAMEs pointing to that A record.
Do you need more info? If you want, I can also share the installation details of the ingress, but I think it's pretty much default as well.
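(For readers without the attachment: a minimal layer-2 values.yaml for the Bitnami MetalLB chart typically looks something like the sketch below. The pool name and address range here are made up for illustration; the real attached file may differ.)

```yaml
# Hypothetical minimal values.yaml for the Bitnami MetalLB chart (layer 2).
configInline:
  address-pools:
    - name: default
      protocol: layer2
      addresses:
        - 10.11.112.70-10.11.112.80
```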
Can you clarify one thing, when you see the messages like this:
{"caller":"arp.go:102","interface":"ens192","ip":"10.11.112.74","msg":"got ARP request for service IP, sending response","responseMAC":"00:50:56:ab:0a:2c","senderIP":"10.11.112.1","senderMAC":"00:22:bd:f8:19:ff","ts":"2020-09-03T16:54:22.594421005Z"}
{"caller":"arp.go:102","interface":"ens192","ip":"10.11.112.74","msg":"got ARP request for service IP, sending response","responseMAC":"00:50:56:ab:ae:f1","senderIP":"10.11.112.1","senderMAC":"00:22:bd:f8:19:ff","ts":"2020-09-03T16:54:22.593812138Z"}
Are those from the log of a single speaker, or did you combine the logs across multiple speaker instances? Based on the interface name being the same, I'm guessing this was combined logs of 2 different speakers, but I wanted to make sure.

--
Russell Bryant
Hi Iohenkies,

On Thu, Sep 17, 2020 at 8:44 AM, Etienne Champetier <echam...@anevia.com> wrote:
> Hello Iohenkies,
>
> On Thu, Sep 17, 2020 at 7:59 AM, iohenkies - <iohe...@gmail.com> wrote:
> > The network people say this has to be configured on the switches in order to make this work in this network. Any opinions?
>
> From a quick read, this would just replace MetalLB, not help MetalLB work.
>
> As we are out of simple ideas, I think we need:
> 1) full logs since the speakers & controller started
> 2) tcpdump -i ethX -p arp -w arpnodeX.pcap on all nodes

Looking at what you provided in private:

{"caller":"main.go:202","component":"MemberList","msg":"memberlist.go:245: [DEBUG] memberlist: Failed to join 10.11.112.103: dial tcp 10.11.112.103:7946: connect: no route to host","ts":"2020-09-17T13:33:00.934375645Z"}
{"caller":"main.go:202","component":"MemberList","msg":"net.go:785: [DEBUG] memberlist: Initiating push/pull sync with: 10.11.112.101:7946","ts":"2020-09-17T13:33:00.93463846Z"}
{"caller":"main.go:202","component":"MemberList","msg":"net.go:210: [DEBUG] memberlist: Stream connection from=10.11.112.101:50654","ts":"2020-09-17T13:33:00.934717441Z"}
{"caller":"main.go:202","component":"MemberList","msg":"memberlist.go:245: [DEBUG] memberlist: Failed to join 10.11.112.102: dial tcp 10.11.112.102:7946: connect: no route to host","ts":"2020-09-17T13:33:00.935523095Z"}
{"caller":"main.go:163","error ?":null,"msg":"Memberlist join","nb joigned":1,"op":"startup","ts":"2020-09-17T13:33:00.935552288Z"}

Any firewall blocking Memberlist traffic? Have you tried removing the CPU limits on the MetalLB components?
On Thu, Sep 17, 2020 at 11:39 AM, iohenkies - <iohe...@gmail.com> wrote:
> Yes, I did do that. Restarting one or all of the speaker pods does not solve it, even for a while. As said in my last email, I've also removed the node selectors and tolerations, so the manifests and settings are 100% default. It does not solve the problems. At this time I cannot get in contact with the network guys; they don't understand how awful this problem is :(
>
> On Thu, Sep 17, 2020 at 11:07 AM, Rodrigo Campos <rod...@kinvolk.io> wrote:
> > On Wed, Sep 16, 2020 at 2:44 PM, iohenkies - <iohe...@gmail.com> wrote:
> > > Hi all. First of all: thank you all for the time. In case I did not make this clear yet, I am very grateful for your time.
> >
> > Thank you too! :)
> >
> > Were you able to try what we mentioned here:
> > https://groups.google.com/g/metallb-users/c/HAO0k7cCbDk/m/DCoVXzmqAwAJ
> > and see if, when you hit the problem and do that, it is solved for a while? That would help us a lot to understand the root cause of the issue and fix it :)
Argh, I didn't want to send it only in private. I'll paste the mail below. I'll have to check whether there is some blocking going on and maybe ask the firewall people :|. Which ports should be open, at a minimum?
Hello Iohenkies,

On Thu, Sep 17, 2020 at 7:59 AM, iohenkies - <iohe...@gmail.com> wrote:
> The network people say this has to be configured on the switches in order to make this work in this network. Any opinions?

From a quick read, this would just replace MetalLB, not help MetalLB work.

As we are out of simple ideas, I think we need:
1) full logs since the speakers & controller started
2) tcpdump -i ethX -p arp -w arpnodeX.pcap on all nodes
{"caller":"main.go:202","component":"MemberList","msg":"memberlist.go:245: [DEBUG] memberlist: Failed to join 10.11.112.103: dial tcp 10.11.112.103:7946: connect: no route to host","ts":"2020-09-17T13:33:00.934375645Z"}
{"caller":"main.go:202","component":"MemberList","msg":"net.go:785: [DEBUG] memberlist: Initiating push/pull sync with: 10.11.112.101:7946","ts":"2020-09-17T13:33:00.93463846Z"}
{"caller":"main.go:202","component":"MemberList","msg":"net.go:210: [DEBUG] memberlist: Stream connection from=10.11.112.101:50654","ts":"2020-09-17T13:33:00.934717441Z"}
{"caller":"main.go:202","component":"MemberList","msg":"memberlist.go:245: [DEBUG] memberlist: Failed to join 10.11.112.102: dial tcp 10.11.112.102:7946: connect: no route to host","ts":"2020-09-17T13:33:00.935523095Z"}
{"caller":"main.go:163","error ?":null,"msg":"Memberlist join","nb joigned":1,"op":"startup","ts":"2020-09-17T13:33:00.935552288Z"}