Has anyone experienced mongrel_cluster not starting because of
(supposed) permission problems? After a long day of fighting with
various issues, ranging from setup_scm (thanks for the 67002 pastie,
saved my day) to setup_admin_as_root and realising that Ubuntu 7.04 should
_not_ be used yet, my application fails to start because of:
/usr/lib/ruby/1.8/fileutils.rb:243:in `mkdir': Permission denied - script/../config/../tmp/sockets (Errno::EACCES)
I added the tmp directory after the initial svn check-in, but the
mongrel cluster refuses to start. Relevant logs:
(mongrel.log)
** Daemonized, any open files are closed. Look at log/mongrel.8000.pid and log/mongrel.log for info.
** Starting Mongrel listening at 127.0.0.1:8000
** Changing group to app_domaine.
** Changing user to mongrel_domaine.
** Starting Rails with production environment...
** Daemonized, any open files are closed. Look at log/mongrel.8001.pid and log/mongrel.log for info.
** Starting Mongrel listening at 127.0.0.1:8001
** Changing group to app_domaine.
** Changing user to mongrel_domaine.
** Starting Rails with production environment...
/usr/lib/ruby/1.8/fileutils.rb:1244:in `initialize': Permission denied - /var/www/apps/domaine/current/config/../public/blank.html (Errno::EACCES)
        from /usr/lib/ruby/1.8/fileutils.rb:1244:in `copy_file'
        from /usr/lib/ruby/1.8/fileutils.rb:1243:in `copy_file'
        from /usr/lib/ruby/1.8/fileutils.rb:1213:in `copy'
        from /usr/lib/ruby/1.8/fileutils.rb:447:in `copy_entry'
        from /usr/lib/ruby/1.8/fileutils.rb:1306:in `traverse'
        from /usr/lib/ruby/1.8/fileutils.rb:445:in `copy_entry'
        from /usr/lib/ruby/1.8/fileutils.rb:423:in `cp_r'
        from /usr/lib/ruby/1.8/fileutils.rb:1377:in `fu_each_src_dest'
        ... 26 levels...
        from /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/bin/mongrel_rails:83:in `run'
        from /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel/command.rb:211:in `run'
        from /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/bin/mongrel_rails:243
        from /usr/bin/mongrel_rails:16
(the identical trace is printed a second time for the other mongrel instance)
Apache error log:
[...]
[Wed Jun 13 00:05:56 2007] [error] proxy: BALANCER: (balancer://domaine_cluster). All workers are in error state
[Wed Jun 13 00:06:09 2007] [error] proxy: BALANCER: (balancer://domaine_cluster). All workers are in error state
[Wed Jun 13 00:14:23 2007] [error] (111)Connection refused: proxy: HTTP: attempt to connect to 127.0.0.1:8000 (127.0.0.1) failed
[Wed Jun 13 00:14:23 2007] [error] ap_proxy_connect_backend disabling worker for (127.0.0.1)
[Wed Jun 13 00:14:23 2007] [error] (111)Connection refused: proxy: HTTP: attempt to connect to 127.0.0.1:8001 (127.0.0.1) failed
[Wed Jun 13 00:14:23 2007] [error] ap_proxy_connect_backend disabling worker for (127.0.0.1)
[Wed Jun 13 00:14:53 2007] [error] proxy: BALANCER: (balancer://domaine_cluster). All workers are in error state
[Wed Jun 13 00:34:44 2007] [error] (111)Connection refused: proxy: HTTP: attempt to connect to 127.0.0.1:8001 (127.0.0.1) failed
[Wed Jun 13 00:34:44 2007] [error] ap_proxy_connect_backend disabling worker for (127.0.0.1)
[Wed Jun 13 00:34:44 2007] [error] (111)Connection refused: proxy: HTTP: attempt to connect to 127.0.0.1:8000 (127.0.0.1) failed
[Wed Jun 13 00:34:44 2007] [error] ap_proxy_connect_backend disabling worker for (127.0.0.1)
[Wed Jun 13 00:35:23 2007] [error] proxy: BALANCER: (balancer://domaine_cluster). All workers are in error state
[Wed Jun 13 00:38:19 2007] [error] (111)Connection refused: proxy: HTTP: attempt to connect to 127.0.0.1:8000 (127.0.0.1) failed
[Wed Jun 13 00:38:19 2007] [error] ap_proxy_connect_backend disabling worker for (127.0.0.1)
[Wed Jun 13 00:38:19 2007] [error] (111)Connection refused: proxy: HTTP: attempt to connect to 127.0.0.1:8001 (127.0.0.1) failed
[Wed Jun 13 00:38:19 2007] [error] ap_proxy_connect_backend disabling worker for (127.0.0.1)
Any help would be highly appreciated, thanks in advance.
/D
What version of Ubuntu are you running? You should be using Dapper
(6.06.1).
- Mike
I'm using Ubuntu 6.06 server, since I realized after a few attempts
that 7.04 did not work.
In essence, the svn tasks work fine, app directory creation and symlinking
work fine, and the process chokes on the last step (or rather appears
to succeed, since it reports no errors) but fails to start the
mongrel cluster.
The errors I'm getting all indicate permission problems, and all of
them concern files in the public directory. I've tried manually
chmod'ing the public directory and the entire application directory,
and setting different users in my deploy.rb, to no avail.
Please tell me what I can do to help, since I think deprec is a
brilliant set of tools and it should be working; I'm just not sure
what's wrong. Could it be one of my plugins (active_scaffold,
acts_as_taggable_on_steroids, css_graphs, simply_helpful,
acts_as_ferret, attachment_fu, fckeditor) not playing nice with files
in public? Then again, that would presumably have been remedied when I
manually changed permissions on the deployed copy of the application
before launching the cluster.
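For what it's worth, the failure mode in the tracebacks above is easy to reproduce in miniature: FileUtils raises Errno::EACCES as soon as the process lacks write permission on the parent directory, regardless of what chmod was run on sibling paths. A small sketch (standalone Ruby, nothing deprec-specific):

```ruby
require 'fileutils'
require 'tmpdir'

# Reproduce the failure mode: FileUtils.mkdir raises Errno::EACCES
# when the process has no write permission on the parent directory --
# the same error mongrel_rails hits on tmp/sockets and public/.
parent = Dir.mktmpdir
FileUtils.chmod(0555, parent)                 # read/execute only, no write
denied = false
begin
  FileUtils.mkdir(File.join(parent, 'sockets'))
rescue Errno::EACCES
  denied = true                               # what mongrel_rails dies on
ensure
  FileUtils.chmod(0755, parent)               # restore so cleanup works
  FileUtils.rm_rf(parent)
end
puts(denied ? 'Permission denied, as in the log' : 'mkdir succeeded (root?)')
```

So the fix has to land on the exact directories the mongrel user touches, with the exact user/group the cluster drops privileges to.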
Again, thanks
/D
The symptom, cause and fix are described here:
http://forum.slicehost.com/comments.php?DiscussionID=571&page=1#Item_1
I'll post them for your convenience:
Symptom: after using deprec to start up your application, you can't
access your rails website. A check of {app_dir}/log/mongrel.log shows
that mongrel is dying on startup trying to read {app_dir}/config/../
public/blank.html or something similar.
Cause: Your mongrel cluster is properly configured to use
app_{app_name} as its user group, but for some reason the deprec setup
task sets your app directory's subdirectories to group "users"
instead, so the mongrels lack permission to read the files. They die
during initialization.
Fix: After executing your cap deploy and watching the mongrels die,
log into your server, cd on over to the application directory, and
type the following:
sudo chown -R app_{app_name} public/
mongrel_rails cluster::restart -C /etc/mongrel_cluster/{app_name}.yml
You can probably put this in the deploy script itself. I haven't
figured out where the heck it is yet and it's 2 AM, so that and other
exciting stories another day.
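Until someone does find the right spot, a minimal sketch of the same fix as a Capistrano task in config/deploy.rb (untested; `application` and `deploy_to` are the standard Capistrano variables, and the app_#{application} group name just mirrors deprec's naming convention described above):

```ruby
# Sketch: re-apply ownership on public/ after each deploy, then
# restart the cluster -- automates the two manual commands above.
# Assumes deprec's app_{app_name} user/group naming.
task :fix_public_perms, :roles => :app do
  sudo "chown -R app_#{application} #{deploy_to}/current/public"
  sudo "mongrel_rails cluster::restart -C /etc/mongrel_cluster/#{application}.yml"
end
```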
Best,
/D
On Jun 28, 6:11 am, "patrick.in.g...@gmail.com" wrote:
I added this to recipes.rb in the deprec gem as a
set_perms_for_mongrel_dirs task; perhaps Mike can put it in for the
next release?
[...]
desc "set group ownership and permissions on dirs mongrel needs to write to"
task :set_perms_for_mongrel_dirs, :roles => :app do
  tmp_dir    = "#{deploy_to}/current/tmp"
  public_dir = "#{deploy_to}/current/public"    # this line was added to fix permissions
  shared_dir = "#{deploy_to}/shared"
  files = ["#{deploy_to}/shared/log/mongrel.log",
           "#{deploy_to}/shared/log/#{rails_env}.log"]
  sudo "chown -R #{mongrel_user} #{public_dir}" # this line was added to fix permissions
  sudo "chgrp -R #{mongrel_group} #{tmp_dir} #{shared_dir}"
  sudo "chmod -R g+w #{tmp_dir} #{shared_dir}"
  # set owner and group of the mongrels' log files (if they exist)
  files.each { |file|
    sudo "chown #{mongrel_user} #{file} || exit 0"
    sudo "chgrp #{mongrel_group} #{file} || exit 0"
  }
end
[...]
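For anyone applying this before it lands in a deprec release, the task still has to be hooked into the deploy flow so it runs before the mongrels come up. A sketch (the `after "deploy:symlink"` hook is Capistrano 2 syntax and an assumption on my part; on Capistrano 1 you'd define an after_symlink task instead):

```ruby
# In config/deploy.rb: run the permission fix right after the
# `current` symlink is updated and before the cluster is started.
after "deploy:symlink", :set_perms_for_mongrel_dirs
```

Alternatively, run it by hand with `cap set_perms_for_mongrel_dirs` after a failed deploy.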
I'm wondering why this is affecting you and not most other
Slicehost users. Is your local username different from the username on
your slice, perhaps?
Will look into this soon.
Mike
I'm actually not a Slicehost user, so I'm afraid it may affect people
randomly. Even so, I followed the instructions on the Slicehost wiki,
so it may be that their tips are flawed. What username would you
recommend, if not deploy, for example? The root user on the slice? And
yes, my local username is different from the one on the slice. Is that
a problem, you think?
Adding the code above to change permissions worked brilliantly, so you
may want to incorporate it into the relevant task to explicitly set
permissions.
Thanks,
/D