PHP-FPM high-load tips


Nanu

Jun 21, 2008, 9:37:41 PM
to highload-php-en
When you are running a high-load website with PHP-FPM via FastCGI, the
following tips may be useful to you :)

1. Compile as few PHP modules as possible; the simpler, the faster;

2. Increase the PHP FastCGI child count to 100 or even more. Sometimes
200 is OK (on a server with 4 GB of memory); see the rough php-fpm.conf
sketch after this list;

3. Use a Unix socket for PHP FastCGI, and put it in /dev/shm on Linux;

4. Increase the Linux "max open files" limit, using the following
commands (must be root):
# echo 'ulimit -HSn 65536' >> /etc/profile
# echo 'ulimit -HSn 65536' >> /etc/rc.local
# source /etc/profile

5. Increase the PHP-FPM open file descriptor rlimit:
# vi /path/to/php-fpm.conf
Find "<value name="rlimit_files">1024</value>"
Change 1024 to 4096 or higher number.
Restart PHP-FPM.

6. Use a PHP code accelerator, e.g. eAccelerator or XCache, and set
its "cache_dir" to /dev/shm on Linux.

mike

Jun 22, 2008, 4:00:24 AM
to highloa...@googlegroups.com
On 6/21/08, Nanu <nan...@gmail.com> wrote:
>
> When you are running a high-load website with PHP-FPM via FastCGI, the
> following tips may be useful to you :)
>
> 1. Compile as few PHP modules as possible; the simpler, the faster;

Compiling more into the core of PHP means a larger footprint for each
instance created. Sometimes it makes sense to build the extensions that
aren't used very often as shared modules...

> 2. Increase the PHP FastCGI child count to 100 or even more. Sometimes
> 200 is OK (on a server with 4 GB of memory);

I'd say it depends on your load. PHP-FPM will soon adapt to the load
anyway, so this should be moot.
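
For what it's worth, that adaptive behaviour lives in the "pm" section
of the old XML-style php-fpm.conf. From memory (the exact key names may
differ in your version, so verify against the bundled sample config) an
"apache-like" setup looked roughly like this, spawning and retiring
children between the spare-server bounds instead of keeping a fixed
count:

<value name="pm">
  <value name="style">apache-like</value>
  <value name="apache_like">
    <value name="StartServers">20</value>
    <value name="MinSpareServers">5</value>
    <value name="MaxSpareServers">35</value>
  </value>
</value>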

> 3. Use a Unix socket for PHP FastCGI, and put it in /dev/shm on Linux;

Many people have had issues with socket-based PHP/FastCGI. In fact,
someone just had an issue on the nginx mailing list (or was it IRC?) the
other day - he changed to using TCP over localhost and his issues went
away. Over the years I have read about many issues with sockets
(possibly related to heavy MySQL traffic too, I forget).
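
If you want to try the TCP-over-localhost route, the change is small.
A minimal sketch, assuming the 0.5-era "listen_address" option name and
a standard nginx FastCGI setup (adjust the port and paths to taste):

In php-fpm.conf:
  <value name="listen_address">127.0.0.1:9000</value>

In nginx:
  location ~ \.php$ {
      include        fastcgi_params;
      fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
      # instead of fastcgi_pass unix:/dev/shm/php-fpm.sock;
      fastcgi_pass   127.0.0.1:9000;
  }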

> 6. Use a PHP code accelerator, e.g. eAccelerator or XCache, and set
> its "cache_dir" to /dev/shm on Linux.

Don't forget APC :) IMHO eAccelerator hasn't kept up with the times:
it used to be Turck MMCache, then it slowly died out and was renamed to
eAccelerator, but it didn't seem to have as much support and was buggier
than APC. I've also used XCache, though I saw no noticeable difference,
and APC has the file upload hooks (although I am not using them). I
figure APC is probably the best, as it's maintained by PHP core
developers, and if I recall correctly a bytecode cache will be built
into PHP 6 anyway, probably based on APC if so...

Nanu

Jun 22, 2008, 5:22:19 AM
to highload-php-en
A long time ago, when I used APC, I ran into a problem: if there were
two PHP scripts with the same filename (in different directories), APC
could not cache both of them correctly. I don't know whether that
problem has been fixed or not. And eAccelerator does keep up, just
very, very slowly :), as with the 0.9.5.3 release last month.

I tested XCache, APC and eAccelerator on many high-load Chinese
websites, and the performance was eAccelerator > APC > XCache. Those
websites get more than 3,000,000 page views per day, some even more
than 5,000,000.

On Jun 22, 4:00 PM, mike <mike...@gmail.com> wrote:

mike

Jun 22, 2008, 1:21:52 PM
to highloa...@googlegroups.com
Interesting. I've never benchmarked my stuff. Perhaps you should kick
this over to the APC team and make them feel bad about it. They're
core PHP devs - why is someone else's bytecode cache faster? :)

I've got probably ~3 million PHP-based requests per day spread over 3
webservers. The total number of FastCGI engines is only something like
70 (I have a different pool for each of my clients) and it seems
sufficient. I'm happy to let PHP-FPM handle the load for me so I am not
wasting resources; if I am getting close to the limits it will launch
more engines for me... but throwing a million engines at something - I
guess if you have a dedicated machine plus the RAM for it, go for it :)

Also - if you do authenticated downloads or anything that requires PHP
to open a file and send it to the browser (which keeps that engine busy
for a long time), look at offloading it (X-Accel-Redirect in nginx
[nginx is my webserver of choice now], X-Lighttpd-Sendfile [I believe]
in Lighttpd, and Apache has module(s) too) - that way PHP doesn't have
to spend its time spoon-feeding files to the end user, and that work is
offloaded to the webserver.
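
A minimal sketch of the nginx variant, with hypothetical paths
(/protected/ and /var/www/private/) just to show the shape of it: PHP
does the access check and sends only a header, and nginx serves the
file from a location that clients cannot hit directly.

In PHP, after the access check passes:
  header('X-Accel-Redirect: /protected/report.zip');
  header('Content-Type: application/zip');
  exit;

In nginx:
  location /protected/ {
      internal;                  # only reachable via X-Accel-Redirect
      alias /var/www/private/;   # /protected/report.zip -> /var/www/private/report.zip
  }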

jackbillow

Jun 26, 2008, 10:19:54 AM
to highloa...@googlegroups.com
This discussion is interesting, I like it.