I was recently trying to figure out the best way to do a rolling restart of a cluster of Thin instances via Capistrano, so that code can be launched without downtime. My cluster runs behind nginx, which is just providing load balancing like so:
# Production
upstream thin_production_cluster {
  server unix:/tmp/thin.production_1.0.sock;
  server unix:/tmp/thin.production_1.1.sock;
  server unix:/tmp/thin.production_2.0.sock;
  server unix:/tmp/thin.production_2.1.sock;
  server unix:/tmp/thin.production_3.0.sock;
  server unix:/tmp/thin.production_3.1.sock;
}
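For context, that upstream gets consumed by an ordinary proxying server block. A minimal sketch along these lines (the server_name and header set are illustrative, not lifted from my actual config):

server {
  listen 80;
  server_name example.com;  # placeholder

  location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://thin_production_cluster;
  }
}

While a subcluster is down mid-restart, nginx routes requests to the remaining live sockets, which is what makes the rolling restart downtime-free.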
As you can see, I’ve broken my cluster into 3 subclusters: production_1, production_2, and production_3. This allows them to be restarted one at a time via the following Capistrano task:
desc "Restart the application server cluster" task :restart_app, :roles => :app do p "Detecting Thin clusters..." num_clusters = 0 run "#{try_sudo} ls /etc/thin | grep production.*yml | wc -l" do |ch, stream, data| if stream == :err raise "Error detecting clusters!" elsif stream == :out num_clusters = data.to_i end end p "Initiating rolling restart for #{num_clusters} clusters.." (1..num_clusters).each do |n| p "Restarting cluster ##{n}..." run "#{try_sudo} thin stop -C /etc/thin/production_#{n}.yml" run "#{try_sudo} thin start -C /etc/thin/production_#{n}.yml" sleep 5 end p "Rolling restart complete." end
The nice thing about this approach is that, so long as you stick to the naming convention for Thin config files (production_N.yml), the Capistrano task needs no changes when you add or remove subclusters. Changes do, unfortunately, still need to be made to nginx.conf.
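That duplication can be softened, though: since the upstream entries are fully determined by the Thin configs, the block can be generated instead of hand-edited. A sketch (not part of my actual deploy) that reads each config's socket and servers keys and prints the matching upstream block:

#!/usr/bin/env ruby
# Sketch: regenerate the nginx upstream block from the Thin configs,
# relying on the production_N.yml convention above.
require "yaml"

puts "upstream thin_production_cluster {"
Dir.glob("/etc/thin/production_*.yml").sort.each do |path|
  config = YAML.load_file(path)
  base   = config["socket"].sub(/\.sock$/, "")  # Thin inserts the server index before .sock
  config["servers"].to_i.times do |i|
    puts "  server unix:#{base}.#{i}.sock;"
  end
end
puts "}"

After splicing the output into nginx.conf, nginx still needs a reload (nginx -s reload) to pick up the new upstream list.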