Devise – Test user timeout in feature/integration specs using Warden

Devise is an excellent framework for strapping authentication features onto your Rails app. One of its very handy modules, :timeoutable, provides session timeout features.

Being a responsible test-driven developer, you start writing tests to ensure your application behaves correctly when the User tries to perform an action that is not allowed after their session has timed out. But how do you simulate that 30 minutes have gone by? (The default is config.timeout_in = 30.minutes.)
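For reference, the moving parts look roughly like this (User and the Devise initializer are the usual defaults; :database_authenticatable is just a stand-in for whatever other modules your app already uses):

# app/models/user.rb
class User < ActiveRecord::Base
  # :timeoutable is the module that enforces the session timeout
  devise :database_authenticatable, :timeoutable
end

# config/initializers/devise.rb
Devise.setup do |config|
  # the session is considered expired after 30 minutes of inactivity
  config.timeout_in = 30.minutes
end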

A brief search of the nets offers a few pointers to overriding the Devise User#timedout? method, but that doesn’t really help our feature spec verify that the User was redirected to the Login page upon performing a session-protected action.

Here’s one solution:

Devise is built on top of Warden, so let’s see if we can’t leverage Warden’s test helpers to simulate our timed out user:

Include Warden::Test::Helpers to put Warden into test mode:

# settings_page_spec.rb

include Warden::Test::Helpers

or

# spec/rails_helper.rb

RSpec.configure do |config|
  ...
  config.include Warden::Test::Helpers, type: :feature
end

Then use Warden.on_next_request to modify the Warden proxy on the next request, simulating the User having been timed out by Devise:

# settings_page_spec.rb

it 'does not allow updating password' do
  expect(current_path).to eq(user_settings_path(user))
  click_link 'Manage Password'
  expect(page).to have_button('Update Password')

  # Register a callback that Warden will run at the start of the next
  # request, clearing the signed-in user just as a Devise timeout would
  Warden.on_next_request do |proxy|
    proxy.set_user(nil)
  end

  click_button 'Update Password'
  expect(current_path).to eq(login_path)
end

We now have a way of simulating timed-out behavior that does not involve playing games with elapsed time.
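One housekeeping note: if you go the rails_helper.rb route, it’s worth resetting Warden between feature specs so a leftover on_next_request callback can’t bleed into the next example. A minimal sketch, using Warden’s own test-mode reset:

# spec/rails_helper.rb

RSpec.configure do |config|
  config.include Warden::Test::Helpers, type: :feature

  # clears any logged-in test users and pending on_next_request callbacks
  config.after(:each, type: :feature) do
    Warden.test_reset!
  end
end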

Redis cache – avoid timeout using ping

Are you getting those mysterious Airbrakes telling you a Timeout occurred while talking to Redis? Has it kept you awake at night worrying about what your poor users see while your (hopefully) slave instance takes over for your master in your redis-cluster?

Worry no more! We’re going to proactively ping our current Redis server connection to see if it’s up, and hopefully catch it napping before our users do. Ping is available via the redis library, but how do we get access to it from our Rails app?

Here’s how we’ll schedule a ping every 30 minutes. I’m using Rufus, but you can use the scheduling gem of your choice:

require 'rufus-scheduler'
scheduler = Rufus::Scheduler.new

# keep-alive: ping the Redis connection behind the Rails cache store
scheduler.every '30m' do
  store = ActiveSupport::Cache.lookup_store(MyApp::Application.config.cache_store)
  Rails.logger.info("Pinging Redis via cache-store ...")
  # RedisStore keeps its underlying Redis client in @data
  store.instance_variable_get(:@data).ping
end

As long as your session store and cache store both use the same cache server (but hopefully with different key namespaces such as /sessions and /cache, respectively), you can use the above method: ActiveSupport::Cache.lookup_store returns the ActiveSupport::Cache::RedisStore, whose @data instance variable holds the current Redis client connection.
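For completeness, here’s a sketch of that shared-server, separate-namespace setup assuming the redis-rails family of gems (redis-activesupport for the cache, redis-actionpack for sessions); the URLs and namespaces are placeholders, so adjust for your environment:

# config/environments/production.rb
MyApp::Application.configure do
  # cache entries live under the /cache namespace
  config.cache_store = :redis_store, 'redis://localhost:6379/0/cache'
end

# config/initializers/session_store.rb
# sessions live on the same server, under the /sessions namespace
MyApp::Application.config.session_store :redis_store,
  servers: ['redis://localhost:6379/0/sessions']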

Enjoy!

P.S. Want connection pooling? Check out this cool contribution by @findchris on GitHub: https://github.com/redis-store/redis-activesupport/issues/22

customize Nginx global directives with Capistrano

If you’re using Capistrano to maintain an Nginx-based deployment, you’ve probably searched around for a helpful gem or two.  Personally, I like this gem so far:

https://github.com/kalys/capistrano-nginx-unicorn

It offers the easiest configuration if you’re used to rvm as your Ruby version manager.  Judging by some comments, it handles rbenv well too; not so sure about chruby.

Anyway, there are several gems out there for this, but they all seem to offer configuration only for a virtual server that is added to /etc/nginx/sites-available (and then symlinked from /etc/nginx/sites-enabled).

What if you would like to customize Nginx global directives that live outside of an http or server block and are therefore not inherited by your custom server directives?

Two such useful directives for tuning the performance of Nginx are `worker_processes` and `worker_connections`.

Here’s an example task to add to your config/deploy.rb file that you can customize as needed.  In my case, my Ubuntu server installed a default /etc/nginx/nginx.conf that set `worker_processes` to 4 (too high for a single-core box) and `worker_connections` to 768 (too low for the box).

namespace :myapp do
  desc "Install Nginx"
  task :install_nginx do
    on roles(:web, :app), in: :sequence, wait: 10 do
      execute "sudo apt-get install nginx-full -y"
    end
  end

  desc "Tune Nginx config"
  task :tweak_nginx_config do
    on roles(:web, :app), in: :sequence, wait: 10 do
      execute "sudo perl -i -pe's|worker_processes 4|worker_processes 1|' /etc/nginx/nginx.conf"
      execute "sudo perl -i -pe's|worker_connections 768|worker_connections 1024|' /etc/nginx/nginx.conf"
    end
  end
end

before 'deploy', 'myapp:install_nginx'
before 'deploy', 'myapp:tweak_nginx_config'

And there you have it!  Now onto tweaking our server config via the above gem’s useful variables…

RightScale “Restore a Database” is missing a step

Especially if you are interested in restoring a database server from a primary backup:

The instructions for “Restore a Database from a Primary Backup” in this runbook for PostgreSQL  –  Database Manager for PostgreSQL 9.1 (v13.5 LTS) – Runbook  –  resulted in continual failure no matter what I tried.

After coming across this post  –  How can I manually reattach and mount my Rightscale created database EBS volumes?  –  it occurred to me to try detaching the 2 automatically mounted volumes BEFORE running “db::do_primary_restore_and_become_master”.

This proved successful, with all data restored correctly.

I cross-checked the MySQL runbooks and they had the same problem.

careful using git cherry-pick to grab a commit from a “newer” branch

I just discovered some pretty surprising git behavior:

We had two release branches; let’s call them v100 (current production) and v101 (next release candidate).  A bug came up and I squashed it on v101.

Someone then brought up that we should squash that same bug on v100 and release a patch v100.1 to production.  Fine.

To squash the bug, I used ‘git cherry-pick’ to grab the commit I made on v101 and apply it to v100.  This worked as you would expect.

Here’s the bad part: When I next attempted to push v100 to remote, I was prompted to merge changes.  When I then pulled v100 from origin, I was presented with an entire set of commits from v101 performed after v100 had already been “frozen”!!!

I believe the reason these additional commits (from v101) were pulled into v100 has to do with the way git uses the SHA not only to identify a commit, but also to find all of its preceding commits.  Here’s a more in-depth discussion:

http://stackoverflow.com/questions/1241720/git-cherry-pick-vs-merge-workflow

Be careful picking cherries out there!

updating to ActiveAdmin 0.6.0 and Devise 2.2.3

I decided to upgrade from ActiveAdmin 0.5.1 to 0.6.0 because the configuration syntax for panels, columns, etc. seemed to have been streamlined quite a bit.  Looked cool, right?

Well, this turned out to take way longer than it should have.  Here’s what you need to do:

After running ‘bundle update’ to get the latest activeadmin gem 0.6.0, I took a look at the “Upgrading” section of the README at https://github.com/gregbell/active_admin

Sure enough, this

$> rails generate active_admin:assets

turned out to be a good thing.  But it wasn’t enough.

Right away I noticed that Devise had been upgraded to 2.2.3 from 2.1.2.  I figured I should probably check out their upgrade steps.  You can find those here: https://github.com/plataformatec/devise/wiki/How-To:-Upgrade-to-Devise-2.2

My specs were still failing with

uninitialized constant Admin::DashboardController

but the AA README’s recommendation to ensure that app/admin/dashboards.rb looked like the default turned out to be a red herring.

I noticed that a fresh ‘rails generate active_admin:install’ wanted to drop a new app/admin/dashboard.rb file.  This new config file had all the jazzy new configuration syntax, so I copied my section configs over from dashboards.rb to dashboard.rb, renamed ‘section’ to ‘panel’, removed the old dashboards.rb file, and fired up my specs again.  My newly styled dashboard looked great, but there was still a problem: it seemed my root ‘/’ path was no longer pointing at a valid controller.  Say huh?
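If you’re making the same migration, the new app/admin/dashboard.rb ends up looking roughly like this (the ‘Recent Posts’ panel and the Post model are placeholders for whatever your old sections rendered):

# app/admin/dashboard.rb (ActiveAdmin 0.6.0 syntax)
ActiveAdmin.register_page "Dashboard" do
  menu priority: 1

  content do
    # what used to be a 'section' in dashboards.rb becomes a 'panel' here
    panel "Recent Posts" do
      ul do
        Post.order("created_at desc").limit(5).map do |post|
          li link_to(post.title, admin_post_path(post))
        end
      end
    end
  end
end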

I could see in ‘rake routes’ that I had two routes for ‘/’ – one from my manual route and a mystery one that looked like the commented out root_to configuration in config/initializers/active_admin.rb.  Turns out some other folks had just encountered this: https://github.com/gregbell/active_admin/issues/2049
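The fix in that issue boils down to route ordering.  Here’s a sketch of the resulting config/routes.rb, where ‘pages#home’ is just a stand-in for your actual root controller:

# config/routes.rb
MyApp::Application.routes.draw do
  # the manual root route has to come before ActiveAdmin so AA's optional
  # root_to default can't shadow it
  root to: 'pages#home'

  devise_for :users
  ActiveAdmin.routes(self)
end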

Following the advice there to move my manual root route up above ActiveAdmin in routes.rb did indeed get me back in shape.  Looking forward to AA 0.6.1…

sending HTML formatted email from a Unix command line should not be this much of a pain

but it is….

thanks to Ygor from a 2004 post here: http://www.unix.com/unix-advanced-expert-users/14177-unable-sent-mail-html-format-mailx-command.html

you can send HTML-formatted email like so via ‘sendmail’ (rather than ramming your head against a wall with ‘mailx’):