In Ruby, Rack is our webserver baseline. It is an incredibly simple interface. A rack app is any object with a public #call method that takes a single argument, conventionally called env, which represents the environment of an HTTP request (params, headers, etc.) and returns a three-item array containing:

  1. An integer containing a response code
  2. The response headers
  3. An object responding to #each that emits strings to build the response body

which represents the response to the request.
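To make that contract concrete, here's a minimal sketch of a complete rack app (a hypothetical example, not from any real project) that you could drop into a config.ru:

```ruby
# A complete rack app: a lambda that responds to #call(env)
# and returns [status, headers, body].
app = lambda do |env|
  body = "Hello from #{env["PATH_INFO"]}\n"
  [200, { "Content-Type" => "text/plain" }, [body]]
end

# In config.ru you'd then hand it to Rack with: run app
# But you can also exercise it directly, with no server at all:
status, _headers, body = app.call({ "PATH_INFO" => "/hi" })
```

Because the contract is just "respond to #call", you can test an app like this in plain Ruby without booting a server.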

The cool thing about this interface is that it’s very easy to insert an intermediate object between the request and the app generating the response, “wrapping” it to modify the behaviour of request processing. These intermediate objects are called Rack middlewares. They have exactly the same interface as a rack app, with one refinement: their constructor takes the app that they wrap.
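As a sketch of that shape (a hypothetical middleware, purely for illustration), here's one that wraps an app and stamps an extra header on the response:

```ruby
# A middleware: same #call(env) interface as a rack app,
# but the constructor takes the app it wraps.
class ServerHeader
  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env)
    # Modify the response on the way out.
    headers["X-Served-By"] = "middleware-demo"
    [status, headers, body]
  end
end

# Wrap a trivial inner app to see it work:
inner = ->(env) { [200, {}, ["ok"]] }
wrapped = ServerHeader.new(inner)
```

The same pattern works for modifying the request on the way in, short-circuiting without calling the inner app at all, or rescuing exceptions around it.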

This contract between Rack and your code is designed to be easy to understand. What this means is that once you’ve got an idea for a rack middleware you can just type it out. You can quickly write one in config.ru as a Ruby class, and enable it by invoking use with the class name of the middleware you want. Once you’ve done those two things you’re good to go. This simplicity makes them very attractive for development and even production debugging.

What follows is the full source code of a couple of middleware that I’ve created and some explanation of how they’ve made my life much easier.

The potato middleware

I don’t know why I called it this. When I was working with one of my colleagues, they were like “WTF is potato”. Still, here we are. This is a debugging tool of last resort. Something that you break out when you’re beginning to lose faith that the Ruby programming language functions on a fundamental level.

class Potato
  def initialize(app)
    @app = app
  end

  def call(*args, &blk)
    @app.call(*args, &blk)
  rescue BasicObject => e
    # Rescuing BasicObject catches absolutely anything, even
    # exceptions that don't inherit from StandardError.
    require 'pry'; binding.pry
    raise # re-raise so the request still fails once you exit pry
  end
end

use Potato
run Rails.application

Sometimes you can’t work out why an exception is blowing up your app. Sometimes the Rails debugger doesn’t kick in. Sometimes you’re tearing your hair out in frustration and you need some help.

This middleware drops a pry session at the very outermost layer of your app. I’m a big fan of the pry debugger, as it lets you deeply investigate the state of your application. Putting it here at the very outermost level lets you be certain that you’re bypassing anything Rails has introduced, or anything else that might be in the way, making it harder for you to debug your application. This middleware has helped me understand what’s going on in my application a number of times.

Stackprof middleware

class ProfilerMiddleware
  def initialize(app)
    @app = app
  end

  def call(env)
    profiling = env["QUERY_STRING"].to_s.include?("stackprof")

    if profiling
      StackProf.start(
        mode: :wall,
        interval: Integer(ENV.fetch("STACK_PROF_INTERVAL", "250")),
        raw: true
      )
    end

    result = @app.call(env)

    if profiling
      StackProf.stop
      file_name = "/tmp/foo.txt"
      StackProf.results(file_name)
      # Replace the response body with the raw profile dump.
      result = [200, { "Content-Type" => "text/plain" }, [File.read(file_name)]]
    end

    result
  end
end

This one’s a little more complicated. Stackprof is one of my all time favourite profiling tools in Ruby. At DigitalOcean, we run a lot of our apps in Docker containers on Kubernetes, which means that we can’t just SSH in and introspect running processes or the file system. I needed a way to profile the execution of a request, in production, and get the results back. This is what I came up with.

When this middleware is installed, if you include stackprof anywhere in the URL query string, the response body is replaced with the result of a profiling run. You can then use this to generate flamegraphs or other useful profiling information. We don’t put this directly into customer-facing applications in production, but it’s been very useful for performance improvements in internal applications.

Unicorn worker killer

Unicorn worker killer can be configured to kill individual Unicorn workers when they’ve either served a certain number of requests, or consumed a certain amount of memory. I didn’t write this one, but it’s been useful for improving the stability of my production applications. Frequently, you’ll observe that a Unicorn worker stops responding to requests for some reason. Generally speaking, I’ve found that rebooting processes in production frequently is a way to make them more stable, and this gem is just a convenience wrapper for that. If you’re not using Unicorn, you probably don’t need this.
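As I recall from the gem’s README, wiring it up looks roughly like this in config.ru (the thresholds here are illustrative, not a recommendation; check the README for the authoritative version):

```ruby
require 'unicorn/worker_killer'

# Restart a worker after it has served between 3072 and 4096 requests.
# The randomised range stops every worker restarting at the same moment.
use Unicorn::WorkerKiller::MaxRequests, 3072, 4096

# Restart a worker when its memory use crosses roughly 192MB-256MB.
use Unicorn::WorkerKiller::Oom, (192 * (1024**2)), (256 * (1024**2))
```

Both go above your run line, just like any other middleware.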

So there you have it: a few very useful rack middlewares, two that you can write by hand and one that you can install from a gem. Thanks for reading!