Web2py's included connection pooling vs. external pooling software


Lisandro

Oct 28, 2014, 1:09:41 PM
to web...@googlegroups.com
I'm using web2py 2.9.5-stable+timestamp.2014.03.15.21.24.06 on a Linode [1] server (a VPS with 4 cores, 4 GB of RAM, and SSD storage).
The server runs Ubuntu Server 12.04, and there I have multiple instances of web2py running with lighttpd and Python flup [2].

Those multiple instances of web2py correspond to several websites, all running an exact copy of the same web2py app, each one with its own database. The database server is PostgreSQL.
All the instances of web2py connect to their database using the pool_size parameter [3] with a value of 20.
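
For reference, each site's db.py connects roughly like this (a sketch; the DSN values are placeholders, not the real credentials):

    # db.py -- one connection string per site, each with its own database
    db = DAL('postgres://appuser:apppass@localhost/site_db', pool_size=20)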

On the other hand, I have set the max_connections parameter in postgresql.conf to a value of 50.

Yesterday I added a new website (a new instance of web2py with the same app as the others), and I started to see intermittent HTTP 500 errors, with this error in the web2py log:

<type 'exceptions.RuntimeError'> Failure to connect, tried 5 times:
Traceback (most recent call last):
  File "/var/www/ejemplo/gluon/dal.py", line 7845, in __init__
    self._adapter = ADAPTERS[self._dbname](**kwargs)
  File "/var/www/ejemplo/gluon/dal.py", line 688, in __call__
    obj = super(AdapterMeta, cls).__call__(*args, **kwargs)
  File "/var/www/ejemplo/gluon/dal.py", line 2870, in __init__
    if do_connect: self.reconnect()
  File "/var/www/ejemplo/gluon/dal.py", line 669, in reconnect
    self.connection = f()
  File "/var/www/ejemplo/gluon/dal.py", line 2868, in connector
    return self.driver.connect(msg,**driver_args)
  File "/usr/lib/python2.7/dist-packages/psycopg2/__init__.py", line 179, in connect
    connection_factory=connection_factory, async=async)
OperationalError: FATAL: remaining connection slots are reserved for non-replication superuser connections

I analyzed "pg_stat_activity" in postgresql, and I always see around 5 idle connections per web2py instance. First doubt there: shouldn't this number be closer to the "pool_size" parameter specified in db.py?

Anyway, today I changed the "max_connections" parameter in postgresql.conf to a value of 75, and the errors apparently disappeared, but I don't fully understand the situation. In addition, I can see that postgres takes up to 1.5 GB of RAM, and I don't know if that's OK. So I'm wondering if I should try setting pool_size to 0 (that is, disabling web2py's connection pooling) and enabling some external pooling software like pgBouncer [4].

Any tip or comment will be much appreciated. Thanks in advance.

Niphlod

Oct 28, 2014, 3:55:42 PM
to web...@googlegroups.com
Theoretically, a db = DAL(...., pool_size=5) will create AT MOST 5 connections to that db. You have 20, so any app instance will create AT MOST 20 connections to the db. If your postgres accepts AT MOST 50 connections, you'll hit the top at two and a half apps. As for the RAM consumed by postgres, that's a setting too. Of course, if you have 4 GB of RAM, 1.5 assigned to postgres seems normal (if not too conservative). Most db engines (postgresql included) benefit in every operation from having more RAM available.
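
Spelled out, that math is simply (assuming one process per app):

    pool_size = 20        # AT MOST this many connections per app instance
    max_connections = 50  # limit in postgresql.conf
    print(max_connections / float(pool_size))  # 2.5 -> "two and a half apps"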

Lisandro

Oct 28, 2014, 5:30:02 PM
to web...@googlegroups.com
Thanks, now I understand. The error was probably caused by a wrong configuration: I had almost 10 websites connecting with pool_size=20, while the postgresql server was limited to 50 max_connections. I've changed the values and now it's working better.

In addition, I've been reading a lot about tuning postgresql (being a newbie at server administration), and I ended up with pgtune [1]. It showed me that I needed to make some adjustments to postgresql.conf considering the resources of my VPS.

Thanks again!

Lisandro

Nov 28, 2014, 9:31:02 AM
to web...@googlegroups.com
I'm coming back to this thread because today I ran into the same problem: postgresql reaching the max_connections limit and, therefore, some of my websites throwing intermittent HTTP 500 errors (because web2py couldn't connect to the database).

As a reminder: we are talking about a VPS with multiple instances of web2py running, all of them serving the same web2py app, each one connecting to a different postgresql database (though the database structure is the same across all the databases). Each web2py instance is served by a lighttpd virtual host through fastcgi. Each virtual host (that is, each web2py instance) receives a different volume of traffic (which is obvious: they are different websites with different audiences).

The original problem (the one that made me post this question in the first place) was that the postgresql database server was reaching the "max_connections" limit and, as a consequence, some of the websites were throwing intermittent HTTP 500 errors (web2py couldn't connect to the database).

Then Niphlod pointed me to the "pool_size" parameter of the DAL constructor. Thanks again!
I've been reading the web2py documentation about pooling [1] and I noticed that it says: "When the next http request arrives, web2py tries to recycle a connection from the pool and use that for the new transaction. If there are no available connections in the pool, a new connection is established".
So, if I didn't get it wrong, I deduce that with web2py's pooling mechanism I can't overcome the "max_connections" postgresql limit. That is because, no matter the size of the pool, if the pool is full and the website is receiving a lot of requests, new connections will be created, and eventually the database server will reach the "max_connections" limit.

So, I read about pgBouncer [2], paying special attention to the configuration parameter "max_client_conn" [3]. I've also found these two posts [4] [5] explaining that this parameter can be set to **any** number, independently of the "max_connections" postgresql configuration.

Therefore, is it OK to say that web2py's pooling mechanism **won't** let you overcome the postgresql max_connections limit but, on the other hand, you **will** be able to overcome that limit using pgbouncer? Thanks in advance.






Niphlod

Nov 28, 2014, 12:25:52 PM
to web...@googlegroups.com


On Friday, November 28, 2014 3:31:02 PM UTC+1, Lisandro wrote:
> So, if I didn't get it wrong, I deduce that with web2py's pooling mechanism I can't overcome the "max_connections" postgresql limit. That is because, no matter the size of the pool, if the pool is full and the website is receiving a lot of requests, new connections will be created, and eventually the database server will reach the "max_connections" limit.

No, you got it wrong again. pool_size=5 will create AT MOST 5 connections. If a 6th is needed, users will wait for a connection to be freed.
If your postgresql accepts at most 50 connections, do the math.
Every db = DAL(..., pool_size=5) lying around will create AT MOST 5 connections, and that means you can host 10 apps.
If you need 50 apps, set pool_size=1 and let users wait, or set max_connections in postgres to a higher value.

Lisandro Rostagno

Nov 28, 2014, 2:48:07 PM
to web...@googlegroups.com
Mmm... I see. That was my understanding in the first place.
At that time I did the math: I had 10 apps, each one using a
pool_size of 3. In postgresql.conf, max_connections was set to 80.
However, this morning, with those numbers, almost every one of my
websites was throwing intermittent HTTP 500 errors, and the error
tickets were all the same: FATAL: remaining connection slots are
reserved for non-replication superuser connections.

Right now I have almost 13 websites, all of them with pool_size set
to 3, and max_connections set to 80.
However, if I check "pg_stat_activity" I can see 65 connections;
that is, 5 connections per app.

I've even tried setting pool_size to 1 for one of the apps, restarted
the database server and the webserver, but again I check
pg_stat_activity and I see 5 connections for that app. Am I missing
something too obvious?

Niphlod

Nov 30, 2014, 7:48:47 AM
to web...@googlegroups.com
Did you restart the webserver? I don't think that changing pool_size at runtime, when connections are still open, will make the number of active connections drop.

Lisandro Rostagno

Nov 30, 2014, 9:52:27 AM
to web...@googlegroups.com
Yes, indeed. I restarted the webserver and the database server.
Recently I tried setting pool_size to 1 for every app, that is, for every website. I restarted postgresql and the webserver (lighttpd), and then used this SQL statement to check the total count of connections for every database (or, what is the same, for every app, since every app has its own database):

    select datname, count(*) from pg_stat_activity group by datname order by datname;

Just as a reminder, I have around 13 apps running, that is, 13 websites and 13 databases.
With this new configuration of every app using a pool size of 1, I restarted the database server and the webserver, then ran the previous SQL statement to see the total connections per app, and I see 5 idle connections for every app, that is, for every website that has some visitors browsing it.
A couple of the websites almost never have visitors, so for those there were no idle connections. Then I went to the homepage of those websites, rechecked connections, and there I saw 5 idle connections for them too.

I have checked and re-checked the code of my app to be sure that I'm setting the "pool_size" parameter correctly.


On the other hand, I've been testing pgbouncer on localhost, reading about it, and I'll be setting it up for production. From what I've read, independently of the postgresql max connections, I can set pgbouncer to a max_client_conn of 2000 (for example) with a default_pool_size of 20. Then all the apps connect to pgbouncer, and pgbouncer multiplexes connections to postgres. However, I don't want to mix things up in this post: regardless of pgbouncer, I would like to understand why I can't get web2py's pooling mechanism to work.

I'm really grateful for your help! I'll keep trying to figure it out. Any comment or suggestion will be appreciated. Thanks!

Michele Comitini

Nov 30, 2014, 2:04:18 PM
to web...@googlegroups.com
pool_size == number of threads in a web2py process.
I suggest working around the problem by setting the number of threads to 1 in your flup server, i.e. no threading.
You should also see smoother performance across applications under higher loads.


Michele Comitini

Nov 30, 2014, 4:00:17 PM
to web...@googlegroups.com
P.S. By "no threading" I mean using processes in place of threads. The number of processes is something you must tune based on server resources; 2×n, where n is the number of cores, is a safe choice.
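
A sketch of that rule of thumb applied to the handler script (maxChildren caps flup's forked workers; the 2×n figure is just the safe default suggested above):

    #!/usr/bin/python
    # size the forking flup server from the core count
    import multiprocessing
    import gluon.main
    from flup.server.fcgi_fork import WSGIServer

    n = multiprocessing.cpu_count()        # 4 on this VPS
    WSGIServer(gluon.main.wsgibase,
               maxChildren=2 * n).run()    # cap worker processes at 2*n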

Lisandro Rostagno

Dec 1, 2014, 11:47:24 AM
to web...@googlegroups.com
Sorry about the delay. I recently installed pgbouncer. I left postgresql max_connections set to 80, and configured pgbouncer with a max_client_conn of 1000 and a default_pool_size of 20.
Now when I check pg_stat_activity I can see different amounts of idle connections per app, more in line with each app's traffic. What I mean is that I see more connections for the apps with higher volumes of traffic and fewer connections for the apps with lower volumes of traffic.
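
For reference, the relevant part of the configuration looks roughly like this (a sketch; the database name and addresses are placeholders):

    ; pgbouncer.ini
    [databases]
    site_db = host=127.0.0.1 port=5432 dbname=site_db

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    max_client_conn = 1000    ; clients accepted before queueing starts
    default_pool_size = 20    ; server connections per database/user pair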

What I still don't understand is postgresql's "max_connections" setting vs pgbouncer's "max_client_conn". From what I've read in [1] and [2], it's OK to set those variables the way I did, leaving postgresql max_connections at an appropriate value (in my case, using pgtune, 80) and using a high value for "max_client_conn" in the pgbouncer configuration.

What isn't clear to me is: what will happen when one of the apps has more than 20 active connections to pgbouncer and requests keep coming in? The ideal (for me, in this case) would be that the next requests just wait (browser saying "waiting for domain.com...").


On the other hand, regarding Michele's comment: right now I have every flup server running with "max-procs" set to 1. This is what my lighttpd virtual hosts look like:

$HTTP["host"] == "diarioprimicia.com.ar" {
    server.document-root = "/var/www/diarioprimicia"
    server.dir-listing = "disable"
    server.error-handler-404 = "/diarioprimicia.fcgi"
    server.kbytes-per-second = 256
    connection.kbytes-per-second = 128
    accesslog.filename = "/var/log/lighttpd/diarioprimicia_access.log"
    fastcgi.server = (
        ".fcgi" => ("localhost" => (
            "check-local" => "disable",
            "max-procs" => 1,
            "socket" => "/var/tmp/lighttpd/diarioprimicia.sock",
            "bin-path" => "/var/www/diarioprimicia/diarioprimicia.fcgi")
        )
    )
}


Then the file indicated by "bin-path" contains the following:

#!/usr/bin/python
import sys
import gluon.main
from flup.server.fcgi_fork import WSGIServer
application=gluon.main.wsgibase
WSGIServer(application).run()


Another "strange" thing I see (strange for me, because I don't fully understand in) is that, regardless of setting "max-procs" to 1, when I use pgrep to check for fastcgi processes I can see exactly 5 processes for every app.

I'm sorry to mix all this stuff in this post; if you think I should move it to another forum, let me know.
Thank you very much!



Niphlod

Dec 1, 2014, 3:40:56 PM
to web...@googlegroups.com
@lisandro: michele is right. The pool_size parameter calculations are accurate only if there's one process per app; web2py can't coordinate pools among different processes.
Also, max_client_conn is exactly the maximum number of connections the pgbouncer process will allow "coming in". Once over the max, the others will be queued.
What pgbouncer does is exactly what web2py does when it runs in a single process with a single DAL connection: it keeps n connections open to the backend and recycles the connections coming in. Once "full", the latest to come in has to wait for a connection to be freed.
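
A toy model of that recycle-or-wait behavior (an illustration only; a real pooler manages sockets, not strings):

    # n pre-opened "connections"; get() blocks when all are checked out
    import Queue
    pool = Queue.Queue(maxsize=5)
    for i in range(5):
        pool.put("conn%d" % i)
    conn = pool.get(block=True)   # waits here if every connection is in use
    # ... run the transaction with conn ...
    pool.put(conn)                # recycled for the next request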

Michele Comitini

Dec 1, 2014, 3:56:09 PM
to web...@googlegroups.com

What you have here is lighttpd starting 1 forking flup server. The important part is this:

    from flup.server.fcgi_fork import WSGIServer

With that you have a python interpreter for each request, up to a maximum. The default is 5 spare children; the default maximum is 50.
Under lighttpd it is also possible to use fcgi_single with "max_procs" = 5, with similar results [I'd expect a slightly bigger memory footprint].
Since you have a forking/multiprocess configuration, you need a single connection in the web2py connection pool, so pool_size=1 is what you need; anything more is just a waste of resources and postgres connections. The max number of open connections should be (max_num_wsgi_procs[flup] * pool_size[web2py DAL]) * (max_procs[lighttpd] * num_applications[lighttpd]).
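
Plugging this thread's numbers into that formula explains the 65 connections observed earlier (an illustration, not a measurement):

    # each forking flup server keeps 5 spare children by default,
    # and each child holds its own DAL connection
    flup_children = 5    # default spare children per app
    pool_size = 1        # connections held per child
    max_procs = 1        # lighttpd fastcgi processes per app
    apps = 13            # websites on the VPS
    print(flup_children * pool_size * max_procs * apps)   # 65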

About pgbouncer, IMHO you should use it only if you have n clients and m max postgresql connections, with n > m. To speed things up you can use memcache/redis and/or a more complex setup with pgpool-II and multiple postgres backends.

Lisandro Rostagno

Dec 1, 2014, 5:34:47 PM
to web...@googlegroups.com
Thank you very much Niphlod and Michele for your help! Now I have a clearer understanding of all this. I understand now why I was seeing 5 fastcgi processes regardless of the "max-procs" configuration (that was because I was using fcgi_fork, and I was actually seeing one process and its 5 children).

Michele, about using pgbouncer when having more client connections than max postgresql connections: I think that's my case (correct me if I'm wrong). All the sites I'm hosting are news sites, and occasionally some of them have sudden spikes in traffic. The last time one of the sites had a high peak in traffic, connections reached the postgresql max_connections limit and, therefore, **all** the applications started throwing intermittent HTTP 500 errors.

So I started researching what I could do to **avoid compromising some apps when one of them is getting high traffic**. Above all, I want to be able to **control "the limits" for every app**. I started by setting server.kbytes-per-second and connection.kbytes-per-second for every app. However, that didn't solve the problem of high database connection demand.

With the last info you gave me, I think I can now achieve my goal by "playing around" a little with these settings (please correct me if I'm wrong):

 - max-procs for the fastcgi of every app: I could set it to 1 for apps with lower traffic, and to a higher value for apps with higher traffic (of course, always checking stats to see what happens).

 - the pgbouncer pool_size parameter: I mean modifying this value for every database, using a larger pool for apps with higher traffic, and vice versa (see the sketch below). The cool thing here is that with pgbouncer I can check stats for clients, pools, etc. My goal is to set a limit per app so that, if a new connection is requested but the pool is full, the client keeps waiting until a connection is freed (no HTTP 500 error).
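
A sketch of how that per-site sizing could look in pgbouncer.ini (site names are placeholders; pgbouncer accepts a per-database pool_size in the [databases] section):

    [databases]
    ; bigger pool for the high-traffic site, smaller for the quiet one
    bignews   = host=127.0.0.1 dbname=bignews pool_size=30
    smallnews = host=127.0.0.1 dbname=smallnews pool_size=5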

I'll be trying these changes and I'll post the results here.
As always, any comment is really appreciated. Thank you again for the help!

Michele Comitini

Dec 2, 2014, 4:57:39 PM
to web...@googlegroups.com
Everything is OK with your plan. One note: instead of playing with max-procs in lighttpd, you can use the fastcgi server parameters. BTW, don't use flup; use this:
https://github.com/momyc/gevent-fastcgi
and arrange its num_workers parameter.
Doing that differently for each application may mean ending up with one adapter script per application.


Lisandro Rostagno

Dec 3, 2014, 9:44:15 AM
to web...@googlegroups.com
Actually, I was referring to the max-procs parameter of the fastcgi server that is set up in the lighttpd virtual host configuration; I don't know if that's what you mean. In my virtual host configuration, I set up the fastcgi server like this:

    fastcgi.server = (
        ".fcgi" => ("localhost" => (
            "check-local" => "disable",
            "max-procs" => 2,
            "socket" => "/var/tmp/lighttpd/diarioprimicia.sock",
            "bin-path" => "/var/www/diarioprimicia/diarioprimicia.fcgi")
        )
    )

Is that what you mean by fastcgi server parameters? 

Yesterday I changed from "fcgi_fork" to "fcgi" (still using flup), set "max-procs" to 2 (like the example above), and then used pgrep to check for processes; now I see 2, as expected. I also noticed better RAM usage, as you said!

I must ask: is there an obvious reason not to use flup? I have to say I'm no "server administrator guru". I just started using web2py, everything scaled quickly, and after some time, step by step, I ended up where I am now: a Linode with Ubuntu Server, running web2py in production with several sites. I already have scripts I can easily run to set up a full site: as I said, the app installed on every site is the same, so my script takes care of creating the folders and the database, cloning the repository, creating the virtual host configuration, the fcgi handler, etc.
So if I want to switch to something else now, I must have a very good reason, because I would have to rewrite the scripts and migrate all the current sites. That's why I'm asking if there is an obvious reason to stop using flup (I've read somewhere that the author has discontinued it, but I'm not sure).
I've also read good things about nginx; some say I should change from lighttpd to nginx, but everything is working so well that I'm not sure I should change anything :P


On the other hand, in relation to the database connections and the pooling mechanism, I posted a question to the pgbouncer mailing list to clarify a couple of doubts I had specifically about pgbouncer. Here is the full conversation, in case it helps anybody:


Just to tell you: I now have pgbouncer running in transaction pooling mode (web2py's pooling mechanism is disabled, that is, DAL with pool_size=0). I started lowering the pool_size of one of the database/user pairs (that is, lowering the pool_size of one of the sites), and it turned out that at one point I couldn't access the website; my browser kept "waiting for site". That is what I was looking for, so for the moment I'm going to use this configuration.
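
For reference, the only change needed on the web2py side is pointing DAL at pgbouncer (a sketch; the host, port 6432, and credentials are placeholders):

    # db.py -- connect through pgbouncer, web2py's own pooling disabled
    db = DAL('postgres://appuser:apppass@127.0.0.1:6432/site_db', pool_size=0)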

Notice that in pgbouncer the pool_size is assigned to the database/user pair, so regardless of the number of "max-procs" in the fastcgi server, if every process connects with DAL(...) to the same database/user, all the fastcgi processes will "share" the pool (I mean, it's transparent to pgbouncer).


Thanks again, and sorry about the lengthy emails. I'm working on that :P

Michele Comitini

Dec 3, 2014, 12:16:27 PM
to web...@googlegroups.com
Why not flup?
- not event based. Event-based servers are usually more responsive under load.
- seems pretty unmaintained.
- misses some tuning options.

max-procs in the lighttpd configuration:
- leave it at 1
- work with the following options in your script:

    WSGIServer(application, minSpare=1, maxSpare=5, maxChildren=50,
               maxRequests=0)

explanation from the flup code:

    """
    A preforked server model conceptually similar to Apache httpd(2). At
    any given time, ensures there are at least minSpare children ready to
    process new requests (up to a maximum of maxChildren children total).
    If the number of idle children is ever above maxSpare, the extra
    children are killed.

    If maxRequests is positive, each child will only handle that many
    requests in its lifetime before exiting.
    """


Lisandro Rostagno

Dec 4, 2014, 8:14:42 AM
to web...@googlegroups.com
2014-12-03 14:16 GMT-03:00 Michele Comitini <michele....@gmail.com>:
> why not flup?
> - not event based. Event based servers are usually more responsive under
> load.
> - seems pretty unmantained.
> - misses some tuning options.

I see. I've read a little about gevent and I understand it would be
a valuable addition to my current approach. I'll give it a try in
the next few days. Thanks for the tip.

>
> max-procs in lighttpd configuration:
> - leave it to 1
> - work with the following options in your script:
> WSGIServer(application, minSpare=1, maxSpare=5, maxChildren=50,
> maxRequests=0)
>
> explanation from flup code:
> """
> A preforked server model conceptually similar to Apache httpd(2). At
> any given time, ensures there are at least minSpare children ready to
> process new requests (up to a maximum of maxChildren children total).
> If the number of idle children is ever above maxSpare, the extra
> children are killed.
>
> If maxRequests is positive, each child will only handle that many
> requests in its lifetime before exiting.
>
> """

I've made this change to one of the apps. I'll be testing it with the
other apps in the next days. To get it running I had to import
"fcgi_fork" from flup.server instead of plain "fcgi".

I know this subject is way off topic for this group, but I'm tempted
to ask, given the quality of the info I've received here. If I set up
one app with more minSpare and maxSpare children than another, would
this mean that the app has more resources assigned than the other
app?

This is a question I've been asking myself for a while: how do I
limit the server resources used by each app (that is, according to
the "plan" every client has paid for)? I've already limited network
bandwidth for every app, and recently I was able to limit the
connections to the database for every app.

So, if I assign more spare children to an app, does that mean the
app is assigned more CPU?

Michele Comitini

Dec 6, 2014, 2:48:28 AM
to web...@googlegroups.com



> So, if I assign more spare children to an app, does that mean the
> app is assigned more CPU?


I would say no, unless you have a number of CPUs >= the total number of possible processes from all apps. That would be a really large server!
minSpare and maxSpare are two bounds you need to shape based on the physical resources at your disposal and the need to keep the app responsive. Spares stay idle but ready to answer new requests; the more you have, the better the responsiveness under variable loads. But idle processes consume memory, while running processes also consume CPU time. If you have 4 cores, you will have no more than 4 processes running at any instant. The O.S. kernel will try to manage all the processes requesting to run by assigning them to a CPU by various criteria; the fact is that each time the CPU has to switch from one process to another (context switching), it has to do a lot of work. So having too many processes in the run queue compared to the number of CPUs makes the system waste much of its time in context switching.
For assigning different priorities to each application, minSpare and maxSpare are only a small part of a complex problem. Lighttpd can help by managing the network traffic across applications, and the Linux (BSD too!) kernel has many options to enforce limits and priorities on resource usage for a single process or a group of processes.
The literature on this is overwhelming :-)

 

Lisandro Rostagno

Dec 6, 2014, 7:49:28 AM
to web...@googlegroups.com
Thank you very much Michele for your answer.
I understand that assigning different priorities to each web app is a
complex problem. I mean, it's "complex" in the sense that it involves
many different areas. But I think that with the information I have and
the configuration variables I know, I can already make a pretty good
approximation of what I want:
- set server.kbytes-per-second in every virtual host
- assign a pool_size (in the pgbouncer configuration) for the database
of every webapp
- change the values of minSpare and maxSpare for the fastcgi process
of every webapp

Anyway, my server still has very little traffic: an average of 5
requests per second across all the webapps together, so I guess I can
still scale without worrying too much.

Well, that's it; I won't bother you with any more questions :P
Thanks a lot Michele and Niphlod for all the help!
Regards, Lisandro.