Devise is an excellent framework for strapping authentication features onto your Rails app. One of its very handy modules, Timeoutable, provides session timeout features.
Being a responsible test-driven developer, you start writing tests to ensure your application behaves correctly when the user tries to perform an action that is no longer allowed after their session has timed out. But how do you simulate that 30 minutes have gone by? (The default is config.timeout_in = 30.minutes.)
A brief search of the net offers a few pointers to overriding the Devise User#timedout? method, but that doesn’t really help a feature spec that needs to verify the user was redirected to the login page upon performing a session-protected action.
Here’s one solution:
Devise is built on top of Warden, so let’s see if we can leverage Warden’s test helpers to simulate our timed-out user. First, include Warden::Test::Helpers to put Warden into test mode:
RSpec.configure do |config|
  config.include Warden::Test::Helpers, type: :feature
end
Then modify the Rack proxy via Warden.on_next_request in order to simulate that the user has been timed out by Devise::SessionsController:
it 'does not allow updating password' do
  click_link 'Manage Password'
  expect(page).to have_button('Update Password')
  Warden.on_next_request do |proxy|
    proxy.logout(:user) # one way to force the signed-out state
  end
  click_button 'Update Password'
  expect(current_path).to eq(new_user_session_path)
end
We now have a way of simulating timed-out behavior that does not involve playing games with elapsed time.
By now you’ve heard all of the hype around Docker. No doubt you have begun forming your own opinion on whether that hype is deserved. (If you’re lacking in opinions, no doubt Reddit can help you find one.)
Conversely, I hope one thing all engineering teams can agree on is the need for continuous integration testing. If your team is not employing CI by now, there really is no excuse other than just bad engineering practices. With platforms like Jenkins available as open source and cheap hosting solutions widely available, what again was your lead engineer’s reasoning for not maintaining a current CI?
When it came time to set up yet another Jenkins-based CI for a Rails web application, I just had to see if I could dockerize the server installation to take advantage of all that Docker has to offer. I am happy to say that the results were particularly fruitful, and I can now set up a new Jenkins container via a service like Tutum (backed by DigitalOcean, AWS, etc.) in just minutes.
I’ve rolled up the dockerfile into a repo here: https://github.com/alexagranov/jenkins-ansible-docker
The dockerfile of interest is the one in the
– server based on Ubuntu 14.04
– latest Jenkins
– git-lfs (large-file support)
– Ruby 2.1.5 (via RVM)
– Firefox – for Selenium-based tests
– Xvfb – for headless Firefox
– Postfix – for email notification
I’ve also included a sample job execution script in the Readme.md that demonstrates how to juggle a few environment variables.
If your application requires Redis and/or a Database, don’t forget the Docker paradigm calls for those services to run in their own containers.
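If you are using Docker Compose (or a similar stack definition on Tutum), a hypothetical layout might look like the following; the service names, images, and credentials are illustrative placeholders, not part of the repo above:

```yaml
# docker-compose.yml (illustrative sketch): build the Jenkins image from this
# repo's Dockerfile and run Redis and Postgres as linked sibling containers.
jenkins:
  build: .
  ports:
    - "8080:8080"
  links:
    - redis
    - db
redis:
  image: redis
db:
  image: postgres
  environment:
    POSTGRES_PASSWORD: changeme
```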
Are you getting those mysterious Airbrake notifications telling you a timeout occurred trying to talk to Redis? Has it kept you awake at night, worrying about what your poor users see while your (hopefully configured) slave instance takes over for the master in your Redis cluster?
Worry no more! We’re going to proactively ping our current Redis server connection to see if it’s up, and hope to catch it napping before our users do. ping is available via the redis gem, but how do we get access to it from our Rails app?
Here’s how we’ll schedule a ping every 30 minutes. I’m using Rufus, but you can use the scheduling gem of your choice:
scheduler = Rufus::Scheduler.new
scheduler.every '30m' do
  store = ActiveSupport::Cache.lookup_store(MyApp::Application.config.cache_store)
  Rails.logger.info("Pinging Redis via cache-store ...")
  store.instance_variable_get(:@data).ping # @data holds the raw Redis client
end
As long as your session and cache stores both use the same cache server (but hopefully with different key namespaces, such as /sessions and /cache, respectively), you can use the above method: ActiveSupport::Cache.lookup_store returns the ActiveSupport::Cache::RedisStore, whose @data instance variable holds the current Redis client connection.
P.S. Want connection pooling? Check out this cool contribution by @findchris on GitHub: https://github.com/redis-store/redis-activesupport/issues/22
If you’re using Capistrano to maintain an Nginx-based deployment, you’ve probably searched around for a helpful gem or two. Personally, I like this gem so far:
It offers the easiest configuration if you’re used to RVM as your Ruby version manager. Judging by some comments it handles rbenv well too; I’m not so sure about chruby.
Anyway, there are several gems out there for this, but they all seem to offer configuration only for a virtual server that is added to /etc/nginx/sites-available (and then symlinked from /etc/nginx/sites-enabled).
What if you would like to customize Nginx global directives that live outside of an http or server block and are therefore not inherited by your custom server directives?
Two such useful directives for tuning the performance of Nginx are `worker_processes` and `worker_connections`.
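For reference, these directives live at the top level of /etc/nginx/nginx.conf (worker_connections sits inside the events block, which is also outside http); the values here are illustrative:

```nginx
# /etc/nginx/nginx.conf (fragment); values are illustrative
worker_processes 1;           # global directive: match your CPU core count

events {
    worker_connections 1024;  # max simultaneous connections per worker
}
```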
Here’s an example task to add to your config/deploy.rb file that you can customize as you need. In my case, my Ubuntu server installed a default /etc/nginx/nginx.conf file that had set worker_processes to 4 (too high, as the box has only one core) and worker_connections to 768 (too low for the box).
namespace :myapp do
  desc "Install Nginx"
  task :install_nginx do
    on roles(:web, :app), in: :sequence, wait: 10 do
      execute "sudo apt-get install nginx-full -y"
    end
  end

  desc "Tune Nginx config"
  task :tweak_nginx_config do
    on roles(:web, :app), in: :sequence, wait: 10 do
      execute "sudo perl -i -pe's|worker_processes 4|worker_processes 1|' /etc/nginx/nginx.conf"
      execute "sudo perl -i -pe's|worker_connections 768|worker_connections 1024|' /etc/nginx/nginx.conf"
    end
  end
end

before 'deploy', 'myapp:install_nginx'
before 'deploy', 'myapp:tweak_nginx_config'
And there you have it! Now onto tweaking our server config via the above gem’s useful variables…
This one is especially worth knowing if you are interested in restoring a database server from a primary backup:
The instructions for “Restore a Database from a Primary Backup” in this runbook for PostgreSQL – Database Manager for PostgreSQL 9.1 (v13.5 LTS) – Runbook – resulted in continual failure no matter what I tried.
After coming across this post – How can I manually reattach and mount my RightScale created database EBS volumes? – it occurred to me to try detaching the two automatically mounted volumes BEFORE running “db::do_primary_restore_and_become_master”.
This proved successful, with all data restored correctly.
I cross-checked the MySQL runbooks and they had the same problem.
I just discovered some pretty surprising git behavior:
We had two release branches; let’s call them v100 (current production) and v101 (next release candidate). A bug came up and I squashed it on v101.
Someone then brought up that we should squash that same bug on v100 and release a patch v100.1 to production. Fine.
To squash the bug, I used ‘git cherry-pick’ to grab the commit I made on v101 and apply it to v100. This worked as you would expect.
Here’s the bad part: when I next attempted to push v100 to the remote, I was prompted to merge changes. When I then pulled v100 from origin, I was presented with an entire set of commits from v101 performed after v100 had already been “frozen”!
I believe the reason these additional commits (from v101) were pulled into v100 has to do with the way git uses the SHA not only to identify a commit, but also to identify all of its preceding commits. Here’s a more in-depth discussion:
Be careful picking cherries out there!
I decided to upgrade from ActiveAdmin 0.5.1 to 0.6.0 because the configuration syntax for panels, columns, etc. seemed to have been streamlined quite a bit. Looked cool, right?
Well, this turned out to take way longer than it should have. Here’s what you need to do:
After running ‘bundle update’ to get the latest activeadmin gem 0.6.0, I took a look at the “Upgrading” section of the README at https://github.com/gregbell/active_admin
Sure enough, this
$> rails generate active_admin:assets
turned out to be a good thing. But it wasn’t enough.
Right away I noticed that Devise had been upgraded to 2.2.3 from 2.1.2. I figured I should probably check out their upgrade steps. You can find those here: https://github.com/plataformatec/devise/wiki/How-To:-Upgrade-to-Devise-2.2
My specs were still failing with
uninitialized constant Admin::DashboardController
but the AA README’s recommendation to ensure that app/admin/dashboards.rb looked like the default turned out to be a red herring.
I noticed that a fresh ‘rails generate active_admin:install’ wanted to drop a new app/admin/dashboard.rb file with all the jazzy new configuration syntax. So I copied my section configs from dashboards.rb over to dashboard.rb, renamed ‘section’ to ‘panel’, removed dashboards.rb, and fired up my specs again. My newly styled dashboard looked great, but there was still a problem: it seemed my root ‘/’ path was no longer pointing at a valid controller. Say huh?
I could see in ‘rake routes’ that I had two routes for ‘/’ – one from my manual route and a mystery one that looked like the commented out root_to configuration in config/initializers/active_admin.rb. Turns out some other folks had just encountered this: https://github.com/gregbell/active_admin/issues/2049
Following the advice there to move my manual root route up above ActiveAdmin in routes.rb did indeed get me back in shape. Looking forward to AA 0.6.1…
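For reference, here is roughly what the section-to-panel rename looks like in the new dashboard.rb; the panel contents and model are illustrative placeholders, not from my actual app:

```ruby
# app/admin/dashboard.rb (ActiveAdmin 0.6.0-style syntax)
ActiveAdmin.register_page "Dashboard" do
  menu priority: 1, label: proc { I18n.t("active_admin.dashboard") }

  content title: proc { I18n.t("active_admin.dashboard") } do
    # In 0.5.x this lived in dashboards.rb as `section "Recent Users" do ... end`;
    # in 0.6.0 each section becomes a `panel` inside the page's content block.
    panel "Recent Users" do
      ul do
        User.order("created_at DESC").limit(5).each do |user|
          li link_to(user.email, admin_user_path(user))
        end
      end
    end
  end
end
```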
but it is….
Thanks to Ygor, in a 2004 post here: http://www.unix.com/unix-advanced-expert-users/14177-unable-sent-mail-html-format-mailx-command.html
You can send HTML-formatted email like so via ‘sendmail’ (rather than ramming your head against a wall with ‘mailx’):
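A minimal sketch of the technique (the recipient, subject, and sendmail path are placeholder assumptions): emit MIME headers declaring a text/html content type, a blank line, then the HTML body, and pipe the whole message to sendmail. The -t flag tells sendmail to read the recipients from the To: header itself.

```shell
# Build a MIME message whose body is HTML rather than plain text
build_message() {
  printf 'To: %s\n' "user@example.com"
  printf 'Subject: %s\n' "Nightly report"
  printf 'MIME-Version: 1.0\n'
  printf 'Content-Type: text/html; charset=UTF-8\n'
  printf '\n'                                  # blank line separates headers from body
  printf '<html><body><h1>It works!</h1></body></html>\n'
}

# Uncomment to actually send (sendmail path may vary by system):
# build_message | /usr/sbin/sendmail -t
build_message
```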