Re: HA, Keepalived, ipvsadm, HTTP


Gerarda Zmuda

Jul 12, 2024, 11:23:20 AM
to quinedragi

The ipvsadm documentation shows that the -A flag means "Add a virtual service" and the -s flag selects the scheduling method. We will first try rr, which means Round Robin; other options include wrr (Weighted Round Robin), lc (Least-Connection), lblc (Locality-Based Least-Connection) and more.
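As a sketch (the VIP 192.168.2.110:80 is a placeholder, not from the original post), adding a virtual HTTP service with round-robin scheduling might look like:

```shell
# Hypothetical VIP; -A adds a virtual service, -t gives the TCP address:port,
# -s selects the scheduler (rr = Round Robin)
ipvsadm -A -t 192.168.2.110:80 -s rr
```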


We have the containers' IPs, so we are going to add them to the virtual service using ipvsadm, with the -a flag to add a server to the virtual service we specified using -t, and -m to use masquerading (network address translation, or NAT).
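Continuing the sketch (the VIP and the container IPs 10.0.0.11 and 10.0.0.12 are placeholders):

```shell
# -a adds a real server to the virtual service given by -t;
# -r names the real server, -m selects masquerading (NAT)
ipvsadm -a -t 192.168.2.110:80 -r 10.0.0.11:80 -m
ipvsadm -a -t 192.168.2.110:80 -r 10.0.0.12:80 -m
```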

Set up a two-node LVS cluster with Apache as the virtualised service, with both nodes operating as both directors and real servers.
Set up healthchecking of services (httpd).
Set up lvs-syncing of connections (ipvsadm sync daemon).
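For the healthchecking step, a minimal keepalived virtual_server block could look like the following sketch (all addresses are placeholder assumptions):

```
virtual_server 192.168.2.110 80 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    protocol TCP

    real_server 192.168.1.11 80 {
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
        }
    }
}
```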

Healthchecking of the web servers is done both at the HTTP level (with check and option httpchk) and using an auxiliary agent check (with agent-check). The latter makes it easy to put a server into maintenance or to orchestrate a progressive rollout. On each backend, you need a process listening on port 5555 and reporting the status of the service (UP, DOWN, MAINT). A simple socat process can do the trick:
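One possible incarnation, sketched here since the original snippet was not included (the state-file path is an assumption):

```shell
# Report the contents of a state file to anyone connecting on port 5555;
# write UP, DOWN or MAINT into /var/run/agent-state to change the status.
socat TCP-LISTEN:5555,reuseaddr,fork SYSTEM:'cat /var/run/agent-state'
```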

The project homepage has some applicable information, in particular on the wiki. There is also a HOWTO (-HOWTO/HOWTO/index.html) which proved useful while writing this article. But above all, consult the manpages for ipvsadm and keepalived and the references therein; they are up to date and precise.

It is possible to run an ipvsadm sync daemon which synchronizes connection state to a standby/slave ipvsadm/LVS instance, so that on failover "most" connections remain intact. This is out of the scope of this document; it is mentioned here so that you are aware of it and do not confuse it with the keepalived daemon (see below).
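For reference, starting the sync daemons is a single ipvsadm call on each node (the interface name eth0 is an assumption):

```shell
# On the active director:
ipvsadm --start-daemon master --mcast-interface eth0
# On the standby director:
ipvsadm --start-daemon backup --mcast-interface eth0
```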

An LVS/ipvsadm loadbalancer can run standalone, i.e. without further "management" software on top. This is helpful for setup and testing. However, for production it lacks the functionality to health-check the loadbalancing targets (e.g. database servers) and adjust the loadbalancer tables accordingly. For this, a separate user-space daemon is required, and this is the functionality provided by keepalived.

Contrary to earlier Debian distros, there is currently no requirement to configure any special service for loading kernel modules and such. In older Debian versions (like Squeeze) the /etc/default/ipvsadm,keepalived files needed some tweaking to get kernel module loading to work (automatic loading seemed to fail). This is no longer true; if you are working on an old (historical!) Debian version, you may have to investigate here.

I have an HTTP service with nodePort 30080, and I configured the keepalived VIP as 192.168.20.200/24 brd 192.168.20.255 dev eth0 label eth0:vip. When the node gets the VIP and I run ipvsadm -ln | grep 30080, the address 192.168.20.180 is not in the IPVS forwarding table, but the eth0 address 192.168.20.51/24 is. If I change the VIP to 192.168.20.180/25 (note: not the same as /24), it works correctly. Why? Please help me, thanks a lot.

Yes, the column is named "%CPU", i.e. the CPU spent for one process relative to all processes. As for the load average, it is based on the length (number of processes, excluding the current one) of the queue of all processes in the running state. As we know, LVS does not interact with any processes except ipvsadm. So the normal mode is for the LVS box just to forward packets without spending any CPU cycles on processes. This is the reason we want to see a load average of 0.00.

I didn't really understand the arguments for or against zeroing counters, so I'm not a big help here, but if others agree we can certainly add this feature. It would be ipvsadm -Z, as an analogy to iptables. BTW, we are proud of having 64-bit counters in the kernel :)

The filenames that Salvatore uses for his databases are derived from the ipvsadm (hex) information in /proc/net/ip_vs. Thus one of my rrd files is lvs.C0A8026E.0017.C0A8010C.0017.rrd, representing VIP:port = 192.168.2.110:23, RIP:port = 192.168.1.12:23. You don't have to look at these files (they're binary rrd database files), and naming them this way was easier than outputting the IP in dotted quad with perl. Salvatore supplies utilities (which he grabbed off the internet) to convert the IP:ports between dotted quad and hex.
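The conversion itself is simple; here is a minimal shell sketch (the function names are ours, not from Salvatore's utilities):

```shell
# Convert a /proc/net/ip_vs hex IP (e.g. C0A8026E) to dotted quad,
# and a hex port (e.g. 0017) to decimal.
hex2ip()   { printf '%d.%d.%d.%d' "0x${1:0:2}" "0x${1:2:2}" "0x${1:4:2}" "0x${1:6:2}"; }
hex2port() { printf '%d' "0x$1"; }

hex2ip C0A8026E; echo    # → 192.168.2.110
hex2port 0017; echo      # → 23
```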

Surf to http://my_url/ganglia. You should see a page with graphs of activity for your nodes. If you want the current information you have to Shift-reload, unlike with lvs-rrd, where the screen will automatically refresh every 5 mins or so. Presumably you can fiddle the ganglia code to accomplish this too (but I don't know where yet).

Here is an LVS serving telnet. The clients connect through to the realservers where they run their applications. Although the number of connections is balanced, the load on each realserver can be quite different. Here's the ipvsadm output taken at the end of the time period shown.

Though, essentially, the "source" is the original net-snmp-lvs-module-0.0.4.tar.gz found in the HOWTO somewhere. The src rpm packages it with the ipvsadm sources plus a few patches, and automates the patch, build and install procedure.

I will not go into the details of how to configure your kernel for ipvsadm etc., since that is already covered well enough on the web; instead I will focus on the challenges and subtleties of achieving load balancing based only on the realservers themselves. I expect you, the reader, to have a minimal knowledge of the terms and usage of ipvsadm and keepalived.

Here is my ldirectord config for Openfire. The ipvsadm side is pretty basic and should be easy to build from this. Note the xmpp check is a Perl script that logs into Openfire; you could substitute a tcp check if that is enough.

LVS persistence directs all (tcp/ip) connection requests from a client to one particular realserver. Each new (tcp/ip) connection request from the client resets a timeout (the time is set by the -p option of ipvsadm). LVS persistence has been part of LVS for quite a while (the first implementation, by Pete Kese, was called pcc) and was added to handle ssl connections, squids and multiport connections like ftp (squids now have their own scheduler).
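A persistent service is declared with -p; as a sketch (the VIP and the timeout value are assumptions):

```shell
# Persistent HTTPS virtual service: a returning client keeps being sent
# to the same realserver until 300s after its last connection
ipvsadm -A -t 192.168.2.110:443 -s rr -p 300
```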

LVS persistence is rarely needed and has some pitfalls (as explained below). It's useful when state must be maintained on the realserver, e.g. for https key exchanges, where the session keys are held on the realserver and the client must always reconnect to that realserver to maintain the session.

When implementing LVS persistence, there are problems in recognising a client as the same client returning for another connection. While the application can recognise a returning client by state information, e.g. cookies (which we don't encourage, see below for better suggestions), at layer 4, where LVS operates, only the IPs and port numbers are available. If it's left to the application to recognise the client (e.g. by a cookie), it may be too late: the client may be on the wrong realserver and the ssl connection is refused. For LVS persistence, the client is recognised by its IP (CIP) or, in recent versions of ip_vs, by CIP:dst_port (i.e. by the CIP and the port being forwarded by the LVS). If only the CIP is used to schedule persistence, then the entries in the output of ipvsadm will be of the form VIP:0 (i.e. with port=0); otherwise the output of ipvsadm will be of the form VIP:port.

When all ports (VIP:0) are scheduled to be persistent, then requests by a client for services on different ports (e.g. to VIP:telnet, to VIP:http) will go to the same realserver. This is useful when the client needs access to multiple ports to complete a session. Useful multi-port connections are

The ports won't necessarily be paired in the way you want, e.g. in the (admittedly unlikely) event that you have an ftp and e-commerce setup on the same LVS, both ftp and e-commerce requests will go to the same realserver. What you'd like is for the e-commerce (80,443) requests to be scheduled independently of the ftp (20,21) requests. In this way your ftp requests will go to one realserver while your requests to the e-commerce site will go to a different realserver. It's simpler administratively to have the different services (ftp, http/https) on different LVSs.

ipvsadm may have some other limit due to signedness issues and the like, but in the kernel the timeout is stored as an unsigned int representing seconds. So any value between 0 and (2^32)-1 seconds (about 136 years) is valid, which is potentially a rather long time.

In a normal (non-persistent) LVS, if you connect to VIP:telnet with rr scheduling, you will connect to each realserver in turn. This is because the director schedules each tcp/ip connection as a separate item. When you log out of your telnet session and telnet to the VIP again, the director sees a new tcp/ip connection and schedules it round-robin style, i.e. to the next realserver in the ipvsadm table.

If two services are scheduled as persistent (here telnet, http), they are scheduled independently. Here I have only 1 client (so it isn't a good test) and I connect twice by telnet and then twice by http. Scheduling is within the blocks set up by the `ipvsadm -A` command (here starting at "TCP ..."). Here there are two blocks, scheduled separately.

If you set up both a non-persistent service (for testing, say telnet) and persistence on the same VIP, then all services will be persistent except telnet, which will be scheduled independently of the persistent services. In this case connections to VIP:telnet would be scheduled by rr (or whatever) and you would connect to all realservers in rotation, while connections to VIP:http will go to the same realserver.

The director will make persistent all ports except those that are explicitly set as non-persistent. These two sets of ipvsadm commands do not overwrite each other; persistent and non-persistent connections can be made at the same time.
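As a sketch (the VIP and timeout are placeholders), the mixed setup described above could be:

```shell
# All ports on the VIP persistent for 360s...
ipvsadm -A -t 192.168.2.110:0 -s rr -p 360
# ...except telnet, declared separately and scheduled non-persistently
ipvsadm -A -t 192.168.2.110:23 -s rr
```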
