At first glance this looks like a trivial problem: the `god` gem keeps your Resque workers busy, and when you deploy you simply ask `god` to kill them with `god remove {group_name}`. It is a pretty common misconception that the workers will stop running at that same moment; some of them can live on practically forever, trying to establish their life inside your production server and continuously eating memory. One of the workarounds looked like this:
in resque.rake, create a new task:

namespace :resque do
  task :restart => :environment do
    # Ask Resque (via Redis) for the worker PIDs on this machine.
    # worker_pids returns an array per worker, so flatten before deduping.
    pids = Resque.workers.map(&:worker_pids).flatten.uniq
    if pids.any?
      system("sudo kill -QUIT #{pids.join(' ')}")
      puts "killed: #{pids.join(' ')}"
    else
      puts "resque wasn't running"
    end
  end
end
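The fragile part is that `Resque.workers` goes through Redis, so the task silently depends on the Rails initializers having configured the connection. Roughly, the `:environment` prerequisite was providing something like the snippet below; the host name here is a made-up example, not from the original setup:

require 'resque'

# Without this, Resque falls back to its default of localhost:6379,
# which on a production box may point at the wrong (or no) Redis.
Resque.redis = Redis.new(:host => 'redis.example.internal', :port => 6379)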
It worked fine until the moment rake stopped running the initializers before the task, and that caused an ugly, hard-to-find error. My idea, which I think is more beautiful in a way, is that you don't rely on Resque's libraries (or its Redis connection) at all at the moment you try to stop the workers.
1) Modify your resque.god file:

worker[:num_workers].to_i.times do |num|
  God.watch do |w|
    w.name = "resque-#{$env}-#{queues.split(',').join(':')}-#{num}"
    w.group = "resque:#{$env}"
    w.interval = 30.seconds
    w.dir = "/var/www/app/#{$env}/current"
    w.log = "/var/www/app/#{$env}/current/log/resque_workers.log"
    w.uid = 'deployer'
    w.gid = 'deployer'
    w.env = {"QUEUE" => "#{queues}", "RAILS_ENV" => $env}
    w.start = "rake resque:work PIDFILE=/tmp/resqueworkers_#{num}" # make the worker save its PID in /tmp
    .....
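Resque's `resque:work` task writes the worker's PID to the path given in `PIDFILE`, so once `god` has started the workers you can sanity-check the files. A minimal sketch of such a check (the `/tmp/resqueworkers_*` pattern comes from the config above):

# Verify each PID file points at a live process.
Dir.glob('/tmp/resqueworkers_*').each do |file|
  pid = File.read(file).to_i
  begin
    Process.kill(0, pid) # signal 0 checks existence without sending anything
    puts "#{file}: worker #{pid} is alive"
  rescue Errno::ESRCH
    puts "#{file}: stale, no process with PID #{pid}"
  end
end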
2) Now modify your rake task to:

task :restart => :environment do
  # Loop over the PID files written by the workers. Note that sudo has to
  # wrap the kill itself: `for` is a shell keyword, not a command you can sudo.
  system("for i in `ls /tmp/ | grep resqueworkers_`; do sudo kill -9 `cat /tmp/$i`; done")
end
Now add this task to your deployment process, right before `god` reloads its config file.
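For example, with Capistrano 2 the wiring could look roughly like this; the task name, paths, and hook point are assumptions, not from the original setup:

namespace :deploy do
  task :restart_resque, :roles => :app do
    # 1) kill the old workers via their PID files
    run "cd /var/www/app/#{rails_env}/current && RAILS_ENV=#{rails_env} bundle exec rake resque:restart"
    # 2) only then let god load the fresh config and spawn new workers
    run "god load /var/www/app/#{rails_env}/current/config/resque.god"
  end
end
after "deploy:update_code", "deploy:restart_resque"

Good luck.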